
Unauthorized reproduction or distribution prohibited. Copyright © 2016, Oracle and/or its affiliates.

n s e
i ce
b l el
fe ra
a n s
n - tr
a no
h a s eฺ
m ) u id
i s ฺco ent G
s sas Stud
l e s@ this
b r ob use
( to
bles
R o
ul io
B r a
Oracle Database 12c:
Clusterware Administration
Activity Guide
D81246GC11
Edition 1.1 | July 2015 | D91717

Learn more from Oracle University at oracle.com/education/


Copyright © 2015, Oracle and/or its affiliates. All rights reserved.

Disclaimer

This document contains proprietary information and is protected by copyright and other intellectual property laws. You may copy and
print this document solely for your own use in an Oracle training course. The document may not be modified or altered in any way.
Except where your use constitutes "fair use" under copyright law, you may not use, share, download, upload, copy, print, display,
perform, reproduce, publish, license, post, transmit, or distribute this document in whole or in part without the express authorization
of Oracle.

The information contained in this document is subject to change without notice. If you find any problems in the document, please
report them in writing to: Oracle University, 500 Oracle Parkway, Redwood Shores, California 94065 USA. This document is not
warranted to be error-free.

Restricted Rights Notice

If this documentation is delivered to the United States Government or anyone using the documentation on behalf of the United
States Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS


The U.S. Government's rights to use, modify, reproduce, release, perform, display, or disclose these training materials are restricted by the terms of the applicable Oracle license agreement and/or the applicable U.S. Government contract.

Trademark Notice

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Author
Jim Womack
Technical Contributors and Reviewers
Allan Graves, Gerlinde Frenzen, Branislav Valny, Ira Singer, Herbert Bradbury, Harald Van Breederode, Joel Goodman, Sean Kim, Andy Fortunak, Al Flournoy, Markus Michalewicz, Maria Billings, Jerry Lee

This book was published using: oracletutor
Table of Contents
Course Practice Environment: Security Credentials ....................................................................................I-1
Course Practice Environment: Security Credentials.......................................................................................I-2
Practices for Lesson 1: Introduction to Clusterware ....................................................................................1-1
Practices for Lesson 1 ....................................................................................................................................1-2

Practices for Lesson 2: Oracle Clusterware Architecture ............................................................................2-1


Practices for Lesson 2: Overview ...................................................................................................................2-2
Practice 2-1: Laboratory Introduction .............................................................................................................2-3
Practices for Lesson 3: Flex Clusters ............................................................................................................3-1
Practices for Lesson 3 ....................................................................................................................................3-2
Practices for Lesson 4: Grid Infrastructure Preinstallation Tasks ..............................................................4-1
Practices for Lesson 4: Overview ...................................................................................................................4-2
Practice 4-1: Preinstallation Tasks .................................................................................................................4-3
Practices for Lesson 5: Grid Infrastructure Installation ...............................................................5-1
Practices for Lesson 5: Overview ...................................................................................................................5-2
Practice 5-1: Configuring a new Flex Cluster with Flex ASM .........................................................................5-3
Practices for Lesson 6: Managing Cluster Nodes .........................................................................6-1
Practices for Lesson 6: Overview ...................................................................................................................6-2
Practice 6-1: Add a New Hub Node to the Cluster .........................................................................................6-3
Practice 6-2: Add a New Leaf Node to the Cluster .........................................................................................6-13
Practice 6-3: Installing RAC Database Software and Creating a RAC Database ...........................................6-20

Practice 6-4: Creating a RAC Database .........................................................................................6-24
Practices for Lesson 7: Traditional Clusterware Management ....................................................................7-1
Practices for Lesson 7: Overview ...................................................................................................................7-2
Practice 7-1: Verifying, Starting, and Stopping Oracle Clusterware ...............................................................7-3
Practice 7-2: Adding and Removing Oracle Clusterware Configuration Files ................................................7-14
Practice 7-3: Performing a Backup of the OCR and OLR ..............................................................................7-17
Practice 7-4: Configuring Network Interfaces Using oifcfg .............................................................................7-20
Practice 7-5: Working with SCANs, SCAN Listeners, and GNS ....................................................................7-37
Practice 7-6: Recovering From Voting Disk Corruptions ...............................................................................7-47
Practices for Lesson 8: Policy-Based Cluster Management ........................................................................8-1
Practices for Lesson 8: Overview ...................................................................................................................8-2
Practice 8-1: Configuring and Using Policy-Based Cluster Management .......................................................8-3
Practices for Lesson 9: Upgrading and Patching Grid Infrastructure .........................................................9-1
Practices for Lesson 9 ....................................................................................................................................9-2
Practices for Lesson 10: Troubleshooting Oracle Clusterware ...................................................................10-1
Practices for Lesson 10: Overview .................................................................................................................10-2
Practice 10-1: Working with Log Files ............................................................................................................10-3
Practice 10-2: Working with OCRDUMP ........................................................................................................10-7
Practice 10-3: Working with CLUVFY ............................................................................................................10-11
Practice 10-4: Working with Cluster Health Monitor .......................................................................................10-16
Practices for Lesson 11: Making Applications Highly Available .................................................................11-1
Practices for Lesson 11: Overview .................................................................................................................11-2
Practice 11-1: Configuring Highly Available Application Resources on Flex Cluster Leaf Nodes ..................11-3
Practice 11-2: Protecting the Apache Application ..........................................................................................11-12

Copyright © 2015. Oracle and/or its affiliates. All rights reserved.

Oracle Database 12c: Clusterware Administration Table of Contents


iii
Course Practice Environment: Security Credentials
Chapter I
Course Practice Environment: Security Credentials

For operating system (Linux) usernames and passwords, see the following:
• If you are attending a classroom-based or live virtual class, ask your instructor or LVC producer for OS credential information.


• If you are using a self-study format, refer to the communication that you received from
Oracle University for this course.

For product-specific credentials used in this course, see the following table:

Product-Specific Credentials
Product/Application    Username    Password
Database (orcl)        SYS         oracle_4U
Database (orcl)        SYSTEM      oracle_4U
Grid                   SYSASM      oracle_4U
Operating system       grid        oracle
Operating system       root        oracle
Operating system       oracle      oracle
Practices for Lesson 1: Introduction to Clusterware
Chapter 1
Practices for Lesson 1

There are no practices for this lesson.


Practices for Lesson 2: Oracle Clusterware Architecture
Chapter 2
Practices for Lesson 2: Overview
Practices Overview
In this practice, you will become familiar with the laboratory environment for this course.
Practice 2-1: Laboratory Introduction
Overview
In this practice, you will become familiar with the laboratory environment for this course.

Tasks

Access to your laboratory environment will be through a graphical display running on your
classroom workstation or hosted on a remote machine. Your instructor will provide you with
instructions to access your practice environment.
The practice environment for this course is hosted on a server running Oracle Virtual Machine
(OVM). In turn, OVM hosts numerous virtual machines (VMs). Each VM is a logically separate
server that will be used to run Oracle Database 12c software, including Clusterware, Automatic
Storage Management (ASM) and Real Application Clusters (RAC).
1. Open a terminal window and become the root user. Execute the xm list command.
   You should see output similar to the example displayed below. It shows that your server is hosting six domains. Domain-0 is the OVM server. The other domains relate to five VMs which you will use to form a cluster in upcoming practices. Exit the root account when finished.
[vncuser@classroom_pc ~]$ su -
Password: *****
[root@classroom_pc ~]# xm list
Name           ID   Mem VCPUs State   Time(s)
Domain-0        0  1024     2 r-----   7872.6
host01          1  4000     1 -b----   4582.3
host02          2  3400     1 -b----   5130.9
host03          3  3400     1 -b----   4470.0
host04          4  1600     1 -b----   2566.3
host05          5  1600     1 -b----   2592.8
[root@classroom_pc ~]# exit

[vncuser@classroom_pc ~]$
Check the Mem column and take notice of the memory allocated for each VM. The first host,
host01 is allocated 4000M. Next, host02 and host03 are allocated 3400M. In addition to
Clusterware, they will also host a RAC instance. Lastly, host04 and host05 are allocated
1600M. They will become leaf nodes and will not host any database instances.
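The memory check above can also be scripted. The sketch below is illustrative and not part of the course materials: the xm_sample function simply replays the transcript's output (on the classroom server you would pipe the real xm list instead), and total_vm_mem sums the Mem column for the guest VMs, skipping the header row and Domain-0.

```shell
# Sample of `xm list`-style output, mirroring the transcript above.
# On the real OVM server you would pipe `xm list` instead of this stub.
xm_sample() {
cat <<'EOF'
Name           ID   Mem VCPUs State   Time(s)
Domain-0        0  1024     2 r-----   7872.6
host01          1  4000     1 -b----   4582.3
host02          2  3400     1 -b----   5130.9
host03          3  3400     1 -b----   4470.0
host04          4  1600     1 -b----   2566.3
host05          5  1600     1 -b----   2592.8
EOF
}

# Total memory (MB) allocated to the guest VMs: skip the header line
# (NR > 1) and the Domain-0 row, then sum the third (Mem) column.
total_vm_mem() {
  xm_sample | awk 'NR > 1 && $1 != "Domain-0" { sum += $3 } END { print sum }'
}
```

Against the sample, total_vm_mem prints 14000, that is 4000 + 3400 + 3400 + 1600 + 1600 MB.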
2. Using SSH, connect to host01 as the root OS user. Enter the root password when you
are prompted for the password.
[vncuser@classroom_pc ~] ssh root@host01
root@host01's password: *****
[root@host01 ~]#

3. Your environment has been pre-configured with a series of shared disk devices that will be
used by Oracle Clusterware and ASM. Execute the following command to identify the
shared disks.
[root@host01 ~]# ls -l /dev/asmdisk*
brw-rw---- 1 grid asmadmin 202, 81 May 7 18:37 /dev/asmdisk1p1
brw-rw---- 1 grid asmadmin 202, 91 May 7 18:37 /dev/asmdisk1p10
brw-rw---- 1 grid asmadmin 202, 92 May 7 18:37 /dev/asmdisk1p11
brw-rw---- 1 grid asmadmin 202, 93 May 7 18:37 /dev/asmdisk1p12
brw-rw---- 1 grid asmadmin 202, 82 May 7 18:37 /dev/asmdisk1p2
brw-rw---- 1 grid asmadmin 202, 83 May 7 18:37 /dev/asmdisk1p3
brw-rw---- 1 grid asmadmin 202, 85 May 7 18:37 /dev/asmdisk1p4
brw-rw---- 1 grid asmadmin 202, 86 May 7 18:37 /dev/asmdisk1p5
brw-rw---- 1 grid asmadmin 202, 87 May 7 18:37 /dev/asmdisk1p6
brw-rw---- 1 grid asmadmin 202, 88 May 7 18:37 /dev/asmdisk1p7
brw-rw---- 1 grid asmadmin 202, 89 May 7 18:37 /dev/asmdisk1p8
brw-rw---- 1 grid asmadmin 202, 90 May 7 18:37 /dev/asmdisk1p9
brw-rw---- 1 grid asmadmin 202, 97 May 7 18:37 /dev/asmdisk2p1
brw-rw---- 1 grid asmadmin 202, 107 May 7 18:37 /dev/asmdisk2p10
brw-rw---- 1 grid asmadmin 202, 108 May 7 18:37 /dev/asmdisk2p11
brw-rw---- 1 grid asmadmin 202, 98 May 7 18:37 /dev/asmdisk2p2
brw-rw---- 1 grid asmadmin 202, 99 May 7 18:37 /dev/asmdisk2p3
brw-rw---- 1 grid asmadmin 202, 101 May 7 18:37 /dev/asmdisk2p4
brw-rw---- 1 grid asmadmin 202, 102 May 7 18:37 /dev/asmdisk2p5
brw-rw---- 1 grid asmadmin 202, 103 May 7 18:37 /dev/asmdisk2p6
brw-rw---- 1 grid asmadmin 202, 104 May 7 18:37 /dev/asmdisk2p7
brw-rw---- 1 grid asmadmin 202, 105 May 7 18:37 /dev/asmdisk2p8
brw-rw---- 1 grid asmadmin 202, 106 May 7 18:37 /dev/asmdisk2p9
[root@host01 ~]#

4. Examine host02 to confirm that the shared disk devices are also visible on that node.
[root@host01 ~]# ssh host02 "ls -l /dev/asmdisk*"
brw-rw---- 1 grid asmadmin 202, 81 May 7 18:37 /dev/asmdisk1p1
brw-rw---- 1 grid asmadmin 202, 91 May 7 18:37 /dev/asmdisk1p10
brw-rw---- 1 grid asmadmin 202, 92 May 7 18:37 /dev/asmdisk1p11
brw-rw---- 1 grid asmadmin 202, 93 May 7 18:37 /dev/asmdisk1p12
brw-rw---- 1 grid asmadmin 202, 82 May 7 18:37 /dev/asmdisk1p2
brw-rw---- 1 grid asmadmin 202, 83 May 7 18:37 /dev/asmdisk1p3
brw-rw---- 1 grid asmadmin 202, 85 May 7 18:37 /dev/asmdisk1p4
brw-rw---- 1 grid asmadmin 202, 86 May 7 18:37 /dev/asmdisk1p5
brw-rw---- 1 grid asmadmin 202, 87 May 7 18:37 /dev/asmdisk1p6
brw-rw---- 1 grid asmadmin 202, 88 May 7 18:37 /dev/asmdisk1p7
brw-rw---- 1 grid asmadmin 202, 89 May 7 18:37 /dev/asmdisk1p8
brw-rw---- 1 grid asmadmin 202, 90 May 7 18:37 /dev/asmdisk1p9
brw-rw---- 1 grid asmadmin 202, 97 May 7 18:37 /dev/asmdisk2p1
brw-rw---- 1 grid asmadmin 202, 107 May 7 18:37 /dev/asmdisk2p10
brw-rw---- 1 grid asmadmin 202, 108 May 7 18:37 /dev/asmdisk2p11
brw-rw---- 1 grid asmadmin 202, 98 May 7 18:37 /dev/asmdisk2p2
brw-rw---- 1 grid asmadmin 202, 99 May 7 18:37 /dev/asmdisk2p3
brw-rw---- 1 grid asmadmin 202, 101 May 7 18:37 /dev/asmdisk2p4
brw-rw---- 1 grid asmadmin 202, 102 May 7 18:37 /dev/asmdisk2p5
brw-rw---- 1 grid asmadmin 202, 103 May 7 18:37 /dev/asmdisk2p6
brw-rw---- 1 grid asmadmin 202, 104 May 7 18:37 /dev/asmdisk2p7
brw-rw---- 1 grid asmadmin 202, 105 May 7 18:37 /dev/asmdisk2p8
brw-rw---- 1 grid asmadmin 202, 106 May 7 18:37 /dev/asmdisk2p9
[root@host01 ~]#
5. Examine host03 to confirm that the shared disk devices are also visible on that node.
[root@host01 ~]# ssh host03 "ls -l /dev/asmdisk*"
brw-rw---- 1 grid asmadmin 202, 81 May 7 18:37 /dev/asmdisk1p1
brw-rw---- 1 grid asmadmin 202, 91 May 7 18:37 /dev/asmdisk1p10
brw-rw---- 1 grid asmadmin 202, 92 May 7 18:37 /dev/asmdisk1p11
brw-rw---- 1 grid asmadmin 202, 93 May 7 18:37 /dev/asmdisk1p12
brw-rw---- 1 grid asmadmin 202, 82 May 7 18:37 /dev/asmdisk1p2
brw-rw---- 1 grid asmadmin 202, 83 May 7 18:37 /dev/asmdisk1p3
brw-rw---- 1 grid asmadmin 202, 85 May 7 18:37 /dev/asmdisk1p4
brw-rw---- 1 grid asmadmin 202, 86 May 7 18:37 /dev/asmdisk1p5
brw-rw---- 1 grid asmadmin 202, 87 May 7 18:37 /dev/asmdisk1p6
brw-rw---- 1 grid asmadmin 202, 88 May 7 18:37 /dev/asmdisk1p7
brw-rw---- 1 grid asmadmin 202, 89 May 7 18:37 /dev/asmdisk1p8
brw-rw---- 1 grid asmadmin 202, 90 May 7 18:37 /dev/asmdisk1p9
brw-rw---- 1 grid asmadmin 202, 97 May 7 18:37 /dev/asmdisk2p1
brw-rw---- 1 grid asmadmin 202, 107 May 7 18:37 /dev/asmdisk2p10
brw-rw---- 1 grid asmadmin 202, 108 May 7 18:37 /dev/asmdisk2p11
brw-rw---- 1 grid asmadmin 202, 98 May 7 18:37 /dev/asmdisk2p2
brw-rw---- 1 grid asmadmin 202, 99 May 7 18:37 /dev/asmdisk2p3
brw-rw---- 1 grid asmadmin 202, 101 May 7 18:37 /dev/asmdisk2p4
brw-rw---- 1 grid asmadmin 202, 102 May 7 18:37 /dev/asmdisk2p5
brw-rw---- 1 grid asmadmin 202, 103 May 7 18:37 /dev/asmdisk2p6
brw-rw---- 1 grid asmadmin 202, 104 May 7 18:37 /dev/asmdisk2p7
brw-rw---- 1 grid asmadmin 202, 105 May 7 18:37 /dev/asmdisk2p8
brw-rw---- 1 grid asmadmin 202, 106 May 7 18:37 /dev/asmdisk2p9
[root@host01 ~]#

6. Examine host04 and host05 as well. Notice that the devices are not available on these
nodes. You will see how these nodes can participate in the cluster in upcoming practices.
[root@host01 ~]# ssh host04 "ls -l /dev/asmdisk*"
ls: cannot access /dev/asmdisk*: No such file or directory

[root@host01 ~]# ssh host05 "ls -l /dev/asmdisk*"


ls: cannot access /dev/asmdisk*: No such file or directory
[root@host01 ~]#
So far you have examined the cluster nodes and identified the two main differences (the
amount of system memory and the availability of shared storage) between them.
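The per-node checks in steps 3 through 6 can be collapsed into a loop. The helper below is a sketch and not part of the course materials; the RSH variable is an assumption added here only so the loop can be exercised without a live cluster (in class it defaults to plain ssh).

```shell
# Remote command runner; defaults to ssh, overridable for local testing.
RSH=${RSH:-ssh}

# Print "<node>:<device count>" for each node passed as an argument.
count_asm_disks() {
  for h in "$@"; do
    n=$($RSH "$h" 'ls /dev/asmdisk* 2>/dev/null | wc -l')
    echo "$h:$n"
  done
}
```

Running count_asm_disks host01 host02 host03 host04 host05 in this environment would be expected to report 23 devices on the first three nodes and 0 on host04 and host05.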
7. The nodes have all been preconfigured with the requirements for installing Oracle Database
12c. This includes the OS user accounts required for a role separated installation. Using
su, assume the identity of the grid user.
[root@host01 ~]# su - grid
[grid@host01 ~]$
8. Examine the grid user account and take note of the OS groups that it is associated with.
[grid@host01 ~]$ id
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54327(asmdba),54328(asmoper),54329(asmadmin)
[grid@host01 ~]$
9. Exit the grid session.
[grid@host01 ~]$ exit
logout
[root@host01 ~]#
10. Using su, assume the identity of the oracle user and examine the user account to identify the OS groups associated with it.
[root@host01 ~]# su - oracle
[oracle@host01 ~]$ id
uid=54321(oracle) gid=54321(oinstall)
groups=54321(oinstall),54322(dba),54323(oper),54327(asmdba)
[oracle@host01 ~]$
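A quick way to confirm the role-separation groups is to test the id output directly. The has_groups helper below is hypothetical (it is not part of the course environment); the group names it is fed come from the transcripts above.

```shell
# Check that an id(1)-style line mentions each required OS group.
# Prints OK, or "missing:<group>" for the first group that is absent.
has_groups() {
  idline=$1; shift
  for g in "$@"; do
    case "$idline" in
      *"($g)"*) ;;                       # group present, keep checking
      *) echo "missing:$g"; return 1 ;;  # report the first absent group
    esac
  done
  echo OK
}
```

For example, has_groups "$(id grid)" oinstall asmdba asmoper asmadmin should print OK on host01.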
The root, grid and oracle OS user accounts have all been pre-configured to enable
passwordless SSH between the cluster nodes. For the grid and oracle users, this is
required to install Oracle Database 12c. For the root user, this configuration simplifies
some of the practice tasks (as you have already seen in steps 4 and 5 above).

11. Confirm the configuration of passwordless SSH for the oracle and grid OS users.
[oracle@host01 bin]$ ssh host02
[oracle@host02 ~]$ logout
Connection to host02 closed.
[oracle@host01 bin]$ ssh host03
[oracle@host03 ~]$ logout
Connection to host03 closed.
[oracle@host01 bin]$ ssh host04
[oracle@host04 ~]$ logout
Connection to host04 closed.
[oracle@host01 bin]$ ssh host05
[oracle@host05 ~]$ logout
Connection to host05 closed.
[oracle@host01 bin]$ logout

[root@host01 ~]# su - grid
[grid@host01 ~]$ ssh host02
[grid@host02 ~]$ exit
logout
Connection to host02 closed.
[grid@host01 ~]$ ssh host03
[grid@host03 ~]$ logout
Connection to host03 closed.
[grid@host01 ~]$ ssh host04
[grid@host04 ~]$ logout
Connection to host04 closed.
[grid@host01 ~]$ ssh host05
[grid@host05 ~]$ logout
Connection to host05 closed.
[grid@host01 ~]$
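The manual logins above can be automated. The sketch below is illustrative rather than a supported course script: ssh -o BatchMode=yes makes ssh fail instead of prompting, so a zero exit status confirms passwordless access. The SSH override variable is an assumption added only for testability.

```shell
# Command used to reach each node; deliberately unquoted later so the
# option word-splits. Overridable for local testing.
SSH=${SSH:-"ssh -o BatchMode=yes"}

# Print "<node>:passwordless" when a non-interactive login succeeds.
check_passwordless() {
  for h in "$@"; do
    if $SSH "$h" true 2>/dev/null; then
      echo "$h:passwordless"
    else
      echo "$h:no-passwordless"
    fi
  done
}
```

Run it once as oracle and once as grid, e.g. check_passwordless host02 host03 host04 host05.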
12. Exit your terminal session.
[grid@host01 ~]$ logout

[root@host01 ~]# logout

Connection to host01 closed.


[vncuser@classroom_pc ~] $
Note: Each of the following practice exercises will instruct you to initiate one or more
terminal sessions using different user accounts. Ensure that you always start a new
terminal session when instructed to do so, and exit all of your terminal sessions at the end
of each practice.

Practices for Lesson 3: Flex Clusters
Chapter 3
Practices for Lesson 3

There are no practices for this lesson.


Practices for Lesson 4: Grid Infrastructure Preinstallation Tasks
Chapter 4
Practices for Lesson 4: Overview
Practices Overview
In this practice, you perform various tasks that are required before installing Oracle Grid
Infrastructure.
Practice 4-1: Preinstallation Tasks
Overview
In this practice, you will perform the required preinstallation tasks for Oracle 12c Grid Infrastructure.

Tasks

You will perform various tasks that are required before installing Oracle Grid Infrastructure.
These tasks include:
• Creating base directory
• Setting shell limits
• Editing profile entries

1. SSH to host01 as the root user. The cluster will use CTSSD for time synchronization between cluster nodes, so NTPD will not be used. Check that NTPD is not running on any of the cluster nodes. Perform this step on all five of your nodes.
[vncuser@classroom_pc ~]$ ssh root@host01
Password: *****
[root@host01 ~]# service ntpd status
ntpd is stopped
[root@host01 ~]# ssh host02 service ntpd status
ntpd is stopped
[root@host01 ~]# ssh host03 service ntpd status
ntpd is stopped
[root@host01 ~]# ssh host04 service ntpd status
ntpd is stopped
[root@host01 ~]# ssh host05 service ntpd status
ntpd is stopped
[root@host01 ~]#
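The five status checks can be collapsed into a loop. This is a sketch and not part of the lab scripts: the RSH variable is an assumption that defaults to ssh but can be overridden for testing, and the "stopped" match follows the service output shown above.

```shell
# Remote command runner; defaults to ssh, overridable for local testing.
RSH=${RSH:-ssh}

# Flag any node where `service ntpd status` does not report "stopped".
check_ntpd_stopped() {
  for h in "$@"; do
    st=$($RSH "$h" service ntpd status 2>/dev/null)
    case "$st" in
      *stopped*) echo "$h:ok" ;;
      *)         echo "$h:ntpd-running" ;;
    esac
  done
}
```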
2. As the root user, make sure the local naming cache daemon (NSCD) is running on all five cluster nodes. If NSCD is not running on any of the nodes, start it. Perform these steps on all five of your nodes.
[root@host01 ~]# service nscd status
nscd (pid 1545) is running...
[root@host01 ~]# ssh host02 service nscd status
nscd (pid 1541) is running...
[root@host01 ~]# ssh host03 service nscd status
nscd (pid 1525) is running...
[root@host01 ~]# ssh host04 service nscd status
nscd (pid 12076) is running...
[root@host01 ~]# ssh host05 service nscd status
nscd (pid 10345) is running...
[root@host01 ~]#
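The check-then-start logic of this step can be sketched as follows. The message matching follows the service output shown in the transcripts; treat this as an illustration rather than a supported course script.

```shell
# Start a service only when its status check does not report it running.
ensure_running() {
  if service "$1" status 2>/dev/null | grep -q 'is running'; then
    echo "$1 already running"
  else
    service "$1" start
  fi
}
```

For example, ensure_running nscd on each node leaves an already-running daemon alone and starts a stopped one.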

3. As the root user, run the /stage/GRID/labs/less_04/limits.sh script on host01. This script replaces the profiles for the oracle and grid users, replaces /etc/profile, and replaces the /etc/security/limits.conf file with a new one containing entries for oracle and grid. It also installs the CVU rpm. Use cat to view the new /stage/GRID/labs/less_04/bash_profile and /stage/GRID/labs/less_04/profile files.

[root@host01 ~]# cat /stage/GRID/labs/less_04/bash_profile


# .bash_profile

# Get the aliases and functions


if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=/home/oracle/firefox:$PATH:$HOME/bin
export PATH

umask 022

[root@host01 ~]# cat /stage/GRID/labs/less_04/profile
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

pathmunge () {
    if ! echo $PATH | /bin/egrep -q "(^|:)$1($|:)" ; then
        if [ "$2" = "after" ] ; then
            PATH=$PATH:$1
        else
            PATH=$1:$PATH
        fi
    fi
}

# ksh workaround
if [ -z "$EUID" -a -x /usr/bin/id ]; then
EUID=`id -u`
UID=`id -ru`
fi

# Path manipulation
if [ "$EUID" = "0" ]; then
pathmunge /sbin
pathmunge /usr/sbin
pathmunge /usr/local/sbin
fi

# No core files by default
ulimit -S -c 0 > /dev/null 2>&1

if [ -x /usr/bin/id ]; then
USER="`id -un`"
LOGNAME=$USER
MAIL="/var/spool/mail/$USER"
fi

HOSTNAME=`/bin/hostname`
HISTSIZE=1000

if [ -z "$INPUTRC" -a ! -f "$HOME/.inputrc" ]; then
    INPUTRC=/etc/inputrc
fi
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE INPUTRC

for i in /etc/profile.d/*.sh ; do
    if [ -r "$i" ]; then
        . $i
    fi
done

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    umask 022
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

unset i
unset pathmunge

[root@host01 ~]# cat /stage/GRID/labs/less_04/limits.conf

# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to
# - rtprio - max realtime priority
#<domain> <type> <item> <value>

#* soft core 0
#* hard rss 10000
#@student hard nproc 20
#@faculty soft nproc 20

#@faculty hard nproc 50
#ftp hard nproc 0
#@student - maxlogins 4
# End of file
oracle soft nofile 131072
oracle hard nofile 131072

oracle soft nproc 131072


oracle hard nproc 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock 3500000
oracle hard memlock 3500000
grid soft nofile 131072
grid hard nofile 131072
grid soft nproc 131072
grid hard nproc 131072
grid soft core unlimited
grid hard core unlimited
grid soft memlock 3500000
grid hard memlock 3500000 a n s
n - tr
# Recommended stack hard limit 32MB for oracle installations
# oracle hard stack 32768
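Each uncommented entry above uses limits.conf's four-column `<domain> <type> <item> <value>` layout. A small awk check over an inline copy of a few of those entries (standing in for /etc/security/limits.conf so the sketch runs anywhere) confirms a user's nofile settings:

```shell
# Count the limits.conf lines that give user "grid" a nofile value of 131072.
# The variable replaces the real /etc/security/limits.conf for this demo.
frag='grid soft nofile 131072
grid hard nofile 131072
oracle soft nproc 131072'
count=$(echo "$frag" | awk '$1 == "grid" && $3 == "nofile" && $4 == 131072' | wc -l)
echo "$count"   # -> 2
```

Two matches (soft and hard) indicate both limit types are configured, which is what the installer's prerequisite checks look for.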
a no
h a s eฺ
[root@host01 ~]# cat /stage/GRID/labs/less_04/limits.sh
cp /stage/GRID/labs/less_04/bash_profile /home/oracle/.bash_profile
cp /stage/GRID/labs/less_04/bash_profile /home/grid/.bash_profile
cp /stage/GRID/labs/less_04/limits.conf /etc/security/limits.conf
cp /etc/profile /root/etc_profile
cp /stage/GRID/labs/less_04/profile /etc/profile
b r o u s
ssh host02 cp /stage/GRID/labs/less_04/bash_profile /home/oracle/.bash_profile
ssh host02 cp /stage/GRID/labs/less_04/bash_profile /home/grid/.bash_profile
ssh host02 cp /stage/GRID/labs/less_04/limits.conf /etc/security/limits.conf
ssh host02 cp /etc/profile /root/etc_profile
ssh host02 cp /stage/GRID/labs/less_04/profile /etc/profile

ssh host03 cp /stage/GRID/labs/less_04/bash_profile /home/oracle/.bash_profile
ssh host03 cp /stage/GRID/labs/less_04/bash_profile /home/grid/.bash_profile
ssh host03 cp /stage/GRID/labs/less_04/limits.conf /etc/security/limits.conf
ssh host03 cp /etc/profile /root/etc_profile
ssh host03 cp /stage/GRID/labs/less_04/profile /etc/profile

ssh host04 cp /stage/GRID/labs/less_04/bash_profile /home/oracle/.bash_profile
ssh host04 cp /stage/GRID/labs/less_04/bash_profile /home/grid/.bash_profile
ssh host04 cp /stage/GRID/labs/less_04/limits.conf /etc/security/limits.conf
ssh host04 cp /etc/profile /root/etc_profile
ssh host04 cp /stage/GRID/labs/less_04/profile /etc/profile

ssh host05 cp /stage/GRID/labs/less_04/bash_profile /home/oracle/.bash_profile
ssh host05 cp /stage/GRID/labs/less_04/bash_profile /home/grid/.bash_profile
ssh host05 cp /stage/GRID/labs/less_04/limits.conf /etc/security/limits.conf
ssh host05 cp /etc/profile /root/etc_profile
ssh host05 cp /stage/GRID/labs/less_04/profile /etc/profile
n s e
i ce
rpm -iv /stage/clusterware/rpm/cvuqdisk-1.0.9-1.rpm
b l el
ssh host02 rpm -iv /stage/clusterware/rpm/cvuqdisk-1.0.9-1.rpm
ssh host03 rpm -iv /stage/clusterware/rpm/cvuqdisk-1.0.9-1.rpm
ssh host04 rpm -iv /stage/clusterware/rpm/cvuqdisk-1.0.9-1.rpm
ssh host05 rpm -iv /stage/clusterware/rpm/cvuqdisk-1.0.9-1.rpm
a no
h a s eฺ
[root@host01 ~]# /stage/GRID/labs/less_04/limits.sh
Preparing packages for installation...
package cvuqdisk-1.0.9-1.x86_64 is already installed
Preparing packages for installation...
package cvuqdisk-1.0.9-1.x86_64 is already installed
Preparing packages for installation...
package cvuqdisk-1.0.9-1.x86_64 is already installed
Preparing packages for installation...
package cvuqdisk-1.0.9-1.x86_64 is already installed
Preparing packages for installation...
package cvuqdisk-1.0.9-1.x86_64 is already installed
[root@host01 ~]#
4. Make sure the Oracle Pre-Install RPM is installed on all five of your cluster nodes.
[root@host01 ~]# rpm -qa|grep -i preinstall
oracle-rdbms-server-12cR1-preinstall-1.0-8.el6.x86_64

[root@host01 ~]# ssh host02 rpm -qa|grep -i preinstall


oracle-rdbms-server-12cR1-preinstall-1.0-8.el6.x86_64

[root@host01 ~]# ssh host03 rpm -qa|grep -i preinstall


oracle-rdbms-server-12cR1-preinstall-1.0-8.el6.x86_64

[root@host01 ~]# ssh host04 rpm -qa|grep -i preinstall


oracle-rdbms-server-12cR1-preinstall-1.0-8.el6.x86_64

[root@host01 ~]# ssh host05 rpm -qa|grep -i preinstall


oracle-rdbms-server-12cR1-preinstall-1.0-8.el6.x86_64

[root@host01 ~]#
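The five per-node checks above all run the same command; a loop expresses the pattern once. In this hedged sketch, run_check substitutes an echo for the real `ssh <host> rpm -qa | grep -i preinstall` call so it can run without the lab hosts:

```shell
# Stand-in for: ssh "$1" 'rpm -qa | grep -i preinstall'
run_check() {
    echo "$1: oracle-rdbms-server-12cR1-preinstall-1.0-8.el6.x86_64"
}

# One report line per cluster node (hostnames are the lab's).
report=$(for h in host01 host02 host03 host04 host05; do run_check "$h"; done)
echo "$report"
```

On the lab systems you would replace the echo inside run_check with the actual ssh call; the loop body stays the same.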
5. Check that the proper scheduler is used for your shared disks (actual devices are xvde and
xvdf). On each cluster node, enter the following commands to ensure that the Deadline
disk I/O scheduler is configured for use:


[root@host01 ~]# cat /sys/block/xvdf/queue/scheduler
noop [deadline] cfq

[root@host01 ~]# ssh host02 cat /sys/block/xvde/queue/scheduler


noop [deadline] cfq

n s e
ce
[root@host01 ~]# ssh host02 cat /sys/block/xvdf/queue/scheduler
eli
noop [deadline] cfq
a b l
fer
[root@host01 ~]# ssh host03 cat /sys/block/xvde/queue/scheduler
a n s
noop [deadline] cfq
n - tr
a no
a s eฺ
[root@host01 ~]# ssh host03 cat /sys/block/xvdf/queue/scheduler
h
noop [deadline] cfq m ) u id
i s ฺco ent G
[root@host01 ~]#
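The scheduler file read above is a one-line sysfs interface: reading it lists the available schedulers with the active one in brackets, and writing a name to it (as root) selects a different one. This sketch parses the bracketed token from a stand-in file so it runs anywhere; on the lab nodes the real path is /sys/block/xvdf/queue/scheduler:

```shell
# Create a stand-in for /sys/block/<dev>/queue/scheduler and extract the
# active scheduler (the token wrapped in square brackets).
sched_file=$(mktemp)
echo "noop [deadline] cfq" > "$sched_file"
active=$(sed -n 's/.*\[\(.*\)\].*/\1/p' "$sched_file")
echo "$active"   # -> deadline
rm -f "$sched_file"
```

Against the real sysfs file, `echo deadline > /sys/block/xvdf/queue/scheduler` (run as root) would make the bracketed token move to deadline on the next read.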
6. Create the installation directories for both grid and oracle-owned software. Set the
ownership and permissions of the directories as shown below. Do this on all five hosts.
Use the /stage/GRID/labs/less_04/cr_dir.sh script to save time.

[root@host01 ~]# cat /stage/GRID/labs/less_04/cr_dir.sh

#!/bin/bash

mkdir -p /u01/app/12.1.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

ssh host02 mkdir -p /u01/app/12.1.0/grid


ssh host02 mkdir -p /u01/app/grid
ssh host02 mkdir -p /u01/app/oracle
ssh host02 chown -R grid:oinstall /u01
ssh host02 chown oracle:oinstall /u01/app/oracle
ssh host02 chmod -R 775 /u01/

ssh host03 mkdir -p /u01/app/12.1.0/grid
ssh host03 mkdir -p /u01/app/grid
ssh host03 mkdir -p /u01/app/oracle
ssh host03 chown -R grid:oinstall /u01
ssh host03 chown oracle:oinstall /u01/app/oracle


ssh host03 chmod -R 775 /u01/

ssh host04 mkdir -p /u01/app/12.1.0/grid


ssh host04 mkdir -p /u01/app/grid
ssh host04 mkdir -p /u01/app/oracle
ssh host04 chown -R grid:oinstall /u01
ssh host04 chown oracle:oinstall /u01/app/oracle
n s e
ssh host04 chmod -R 775 /u01/
a b l
fer
ssh host05 mkdir -p /u01/app/12.1.0/grid
ssh host05 mkdir -p /u01/app/grid
ssh host05 mkdir -p /u01/app/oracle
ssh host05 chown -R grid:oinstall /u01
ssh host05 chown oracle:oinstall /u01/app/oracle
ssh host05 chmod -R 775 /u01/
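Before running cr_dir.sh across the lab hosts, its local mkdir/chmod steps can be sanity-checked in isolation. This sketch re-creates them under a temporary root (the chown calls are omitted because they require root and the lab's grid/oinstall accounts; `stat -c` is the GNU coreutils form):

```shell
# Re-create the directory tree under a throwaway root and verify the mode.
root=$(mktemp -d)
mkdir -p "$root/u01/app/12.1.0/grid" "$root/u01/app/grid" "$root/u01/app/oracle"
chmod -R 775 "$root/u01"
mode=$(stat -c %a "$root/u01/app/grid")
echo "$mode"   # -> 775
rm -rf "$root"
```

Mode 775 (rwxrwxr-x) is what the script sets so that both the grid owner and the oinstall group can write under /u01.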
i s
[root@host01 ~]# /stage/GRID/labs/less_04/cr_dir.sh
[root@host01 ~]#

7. Close all terminal windows opened for these practices.


Practices for Lesson 5: Grid Infrastructure Installation
Chapter 5

Copyright © 2015, Oracle and/or its affiliates. All rights reserved.

Practices for Lesson 5: Grid Infrastructure Installation


Chapter 5 - Page 1
Practices for Lesson 5: Overview
Practices Overview
In these practices, you will:
• Configure a new Flex Cluster with Flex ASM.


Practice 5-1: Configuring a new Flex Cluster with Flex ASM
Overview
In this practice, you will install and configure a new Flex Cluster with Flex ASM. You will install to
three nodes: host01, host02, and host04. You will designate host01 and host02 as Hub
Nodes, and host04 will be designated as a Leaf Node.
1. From a vncuser terminal on your desktop PC, change to the root account. First, set the
time across all nodes using the command shown below. Then restart the NAMED service to
ensure viability and availability of the services for the software installation.
[vncuser@classroom_pc - ~]$ su -
Password:

[root@classroom_pc ~]# TIME="`date +%T`";for H in host01 host02
host03 host04 host05;do ssh $H date -s $TIME;done
root@host01's password:
Wed Jun 17 08:25:14 UTC 2015
root@host02's password:
Wed Jun 17 08:25:14 UTC 2015
root@host03's password:
Wed Jun 17 08:25:14 UTC 2015
root@host04's password:
Wed Jun 17 08:25:14 UTC 2015
root@host05's password:
Wed Jun 17 08:25:14 UTC 2015

[root@classroom_pc ~]# service named restart
Stopping named: .                                          [  OK  ]
Starting named:                                            [  OK  ]
[root@classroom_pc ~]# exit
Logout
[vncuser@classroom_pc - ~]$
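Note how the one-liner in step 1 reads the clock once into TIME and then replays that single value to every node, so all hosts end up set to the same second. The pattern, with echo standing in for the real `ssh $H date -s $TIME` call:

```shell
# Capture one timestamp, then reuse it for every host in the list,
# instead of calling date once per host (which would drift by seconds).
TIME=$(date +%T)
for H in host01 host02 host03 host04 host05; do
    echo "$H <- date -s $TIME"
done
```

Had `date +%T` been evaluated inside the loop instead, each host would receive a slightly later time, defeating the synchronization.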

2. Establish a terminal session connected to host01 as the grid OS user. Ensure that you
specify the -X option for ssh to configure the X environment properly for the grid
user.
[vncuser@classroom_pc ~]$ ssh -X grid@host01
grid@host01's password:
[grid@host01 ~]$
3. Start the Oracle Clusterware release 12.1 installer.
[grid@host01 ~]$ /stage/clusterware/runInstaller
Starting Oracle Universal Installer...

4. On the Select Installation Option screen, click Next to accept the default selection (Install
and Configure Oracle Grid Infrastructure for a Cluster).
5. On the Select Cluster Type screen, select Configure a Flex Cluster and click Next.
6. On the Select Product Languages screen, click Next to accept the default selection
(English).
7. Use the Grid Plug and Play Information screen to configure the following settings:
• Cluster Name: cluster01
• SCAN Name: cluster01-scan.cluster01.example.com
• SCAN Port: 1521
• GNS VIP Address: 192.0.2.155
• GNS Sub Domain: cluster01.example.com
Make sure that the “Configure GNS” and “Configure nodes Virtual IPs…” check boxes are
selected. Click on the “Create a new GNS” radio button and then click Next.
iGNS”
8. On the Cluster Node Information screen, click Add to begin the process of specifying
additional cluster nodes.
Click the Add button and add host02.example.com. Make sure to set Node Role to HUB,
and click OK. Click the Add button again and add host04.example.com. Make sure the
Node Role is set to LEAF and click OK.
Bra

9. Click the SSH Connectivity button. Enter the grid password (please refer to
D81246GC11_passwords.doc for account passwords) into the OS Password field and click
Test to confirm that the required SSH connectivity is configured across the cluster. Your lab
environment is preconfigured with the required SSH connectivity so you will next see a
dialog confirming this. Click OK to continue. Review the information in the Cluster Node
Information page and click Next. Your Cluster Node Information screen should look
like the one below:

10. On the Specify Network Interface Usage screen, ensure that network interface eth0 is
designated as the Public network and that network interface eth1 is designated as the
ASM & Private network. The eth2 interface should be designated “Do not use”. Click Next
to continue.
11. On the Create ASM Disk Group screen, click Change Discovery Path to customize the ASM
Disk Discovery Path.
12. In the Change Disk Discovery Path dialog box, set the Disk Discovery Path to
/dev/asmdisk* and click OK.
13. On the Create ASM Disk Group screen, make sure the Disk group name is DATA and
select the first 10 candidate disks in the list:
• /dev/asmdisk1p1
• /dev/asmdisk1p10
• /dev/asmdisk1p11
• /dev/asmdisk1p12
• /dev/asmdisk1p2
• /dev/asmdisk1p3
• /dev/asmdisk1p4
• /dev/asmdisk1p5
• /dev/asmdisk1p6
• /dev/asmdisk1p7
Select Normal for the Redundancy and click Next to continue.
14. On the Specify ASM Password screen, select ‘Use same passwords for these accounts’
and enter (please refer to D81246GC11_passwords.doc for account passwords) as the
password. Then click Next to continue.
15. On the Failure Isolation Support screen, click Next to accept the default setting (Do not use
IPMI).
16. On the Specify Management Options screen, accept the defaults (nothing selected) and
click Next to continue.
17. On the Privileged Operating System Groups screen, the values should default to the
following:
• Oracle ASM Administrator Group: asmadmin
• Oracle ASM DBA Group: asmdba
• Oracle ASM Operator Group: asmoper
Click Next to accept the default values.
18. On the Specify Installation Location screen, make sure the Oracle base is
/u01/app/grid and change the Software location to /u01/app/12.1.0/grid. Click
Next to continue.
19. On the Create Inventory screen, click Next to accept the default installation inventory
location of /u01/app/oraInventory.
20. On the Root script execution configuration screen, check ‘Automatically run configuration
scripts’ and select ‘Use “root” user credential’. Enter the root password as the password
and click Next to proceed. (Please refer to D81246GC11_passwords.doc for account
passwords.)
21. Wait while a series of prerequisite checks are performed.

22. Due to the constraints of this lab environment, you may see a series of warnings resulting
from the prerequisite checks. If the warnings relate to ‘Physical Memory’ and ‘Device
Checks for ASM’ (as illustrated in the following screenshot) you may safely ignore them.
Check ‘Ignore all’ and then click Next to continue.
23. In the confirmation dialog, click Yes to ignore the prerequisites flagged by the installer.
24. Examine the Summary screen. When ready, click Install to begin the installation. Oracle
Grid Infrastructure release 12.1 will now install on the cluster. The Install Product screen
follows the course of the installation.
25. Oracle Universal Installer pauses prior to executing the root configuration scripts. Click
Yes to proceed. Oracle Universal Installer now automatically executes the root
configuration scripts and you can follow the progress using the Install Product screen.
26. After configuration completes you will see the following message:
“The installation of Oracle Grid Infrastructure for a Cluster was successful”
Click Close to close Oracle Universal Installer.
27. Back in your terminal session, configure the environment using the oraenv script. Enter
+ASM1 when you are prompted for an ORACLE_SID value.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$

28. Now check the status of the cluster. Ensure that all the listed services are online on all the
cluster nodes.
[grid@host01 ~]$ crsctl check cluster -all
**************************************************************
host01:
CRS-4537: Cluster Ready Services is online


CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
host02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online n s e
i ce
**************************************************************
b l el
host04:
fer a
CRS-4537: Cluster Ready Services is online
a n s
CRS-4529: Cluster Synchronization Services is online
n - tr
CRS-4533: Event Manager is online
a no
a s eฺ
**************************************************************
h
[grid@host01 ~]$
m ) u id
29. List the Clusterware resources. Ensure that all the Clusterware resources are running as
shown in the following output. Notice the following new or altered resources in release 12.1:
− Flex ASM listeners: ora.ASMNET1LSNR_ASM.lsnr
− Leaf listeners: ora.LISTENER_LEAF.lsnr
− Flex ASM instances: ora.asm
R ob[grid@host01 ~]$ crsctl status resource -t
u lio ----------------------------------------------------------------
Bra Name Target State Server State details
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE host04 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ora.ons
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE


----------------------------------------------------------------
Cluster Resources
----------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host01 169.254.247.174 192.168.1.101,STABLE
ora.asm
1 ONLINE ONLINE host01 STABLE
2 ONLINE ONLINE host02 STABLE
3 OFFLINE OFFLINE STABLE
ora.cvu
1 ONLINE ONLINE host01 STABLE
ora.gns
1 ONLINE ONLINE host01 STABLE
ora.gns.vip
1 ONLINE ONLINE host01 STABLE


ora.host01.vip
1 ONLINE ONLINE host01 STABLE
ora.host02.vip
1 ONLINE ONLINE host02 STABLE
ora.oc4j
1 ONLINE ONLINE host01 STABLE
ora.mgmtdb
1 ONLINE ONLINE host01 Open,STABLE
ora.scan1.vip
1 ONLINE ONLINE host02 STABLE
ora.scan2.vip
1 ONLINE ONLINE host01 STABLE

ora.scan3.vip
1 ONLINE ONLINE host01 STABLE
----------------------------------------------------------------
[grid@host01 ~]$
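When scanning output like the above, anything reported OFFLINE (here, ora.LISTENER_LEAF.lsnr on host04 and the third ASM instance) deserves a look. A small awk filter makes that scan scriptable; the sample variable holds a simplified one-line-per-resource excerpt of the listing (the real -t layout wraps names onto their own lines), so the pipeline runs anywhere:

```shell
# Print the name of every resource whose target or state column reads OFFLINE.
# crs_out is a simplified excerpt of 'crsctl status resource -t' output;
# on a live cluster you would pipe the real command into awk instead.
crs_out='ora.LISTENER.lsnr ONLINE ONLINE host01 STABLE
ora.LISTENER_LEAF.lsnr OFFLINE OFFLINE host04 STABLE'
offline=$(echo "$crs_out" | awk '$2 == "OFFLINE" || $3 == "OFFLINE" { print $1 }')
echo "$offline"   # -> ora.LISTENER_LEAF.lsnr
```

The column positions ($2 for target, $3 for state) are an assumption tied to this flattened excerpt, not to the wrapped format crsctl actually prints.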
30. Close all terminal windows opened for these practices.

Practices for Lesson 6: Managing Cluster Nodes
Chapter 6

Copyright © 2015, Oracle and/or its affiliates. All rights reserved.

Practices for Lesson 6: Managing Cluster Nodes


Chapter 6 - Page 1
Practices for Lesson 6: Overview
Practices Overview
In these practices, you will:
• Add a new Hub node to your cluster
• Add a new Leaf node to your cluster
• Install Oracle RAC database software
• Create a RAC database


Copyright © 2015, Oracle and/or its affiliates. All rights reserved.

Practices for Lesson 6: Managing Cluster Nodes


Chapter 6 - Page 2
Practice 6-1: Add a New Hub Node to the Cluster
Overview
In this practice, you will use addnode.sh in graphical mode to extend your cluster by adding
host03 as the third hub node.
1. Establish a terminal session connected to host01 as the grid OS user. Ensure that you
specify the -X option for ssh to configure the X environment properly for the grid user.
[vncuser@classroom_pc ~]$ ssh -X grid@host01
grid@host01's password: *****
[grid@host01 ~]$
2. Make sure that you can connect from host01 to host03 without being prompted for
passwords. Exit back to host01 when finished.
n s e
[grid@host01 ~]$ ssh host03
i ce
[grid@host03 ~]$ exit
b l el
logout
fer a
Connection to host03 closed.
a n s
[grid@host01 ~]$ n - tr
3. Make sure that you set up your environment variables correctly for the grid user to point to
your grid installation.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$
l e s@ this
4. Check your pre-grid installation for the host03 node using the Cluster Verification Utility.
[grid@host01 ~]$ cluvfy stage -pre crsinst -n host03
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "host01"

Checking user equivalence...


User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.0.2.0"


Node connectivity passed for subnet "192.0.2.0" with node(s)
host03
TCP connectivity check passed for subnet "192.0.2.0"
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s)
host03
TCP connectivity check passed for subnet "192.168.1.0"

Node connectivity check passed


Checking multicast communication...

Checking subnet "192.168.1.0" for multicast communication with
multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with
multicast group "224.0.0.251" passed.

Check of multicast communication passed.
Total memory check failed
Check failed on nodes:
        host03
Available memory check passed
Swap space check passed
Free disk space check passed for
"host03:/usr,host03:/var,host03:/etc,host03:/sbin,host03:/tmp"
Free disk space check passed for "host03:/u01/app/12.1.0/grid"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as
Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
host03
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"


Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Kernel parameter check passed for "panic_on_oops"
Package existence check passed for "binutils"
Package existence check passed for "compat-libcap1"
Package existence check passed for "compat-libstdc++-33(x86_64)"
Package existence check passed for "libgcc(x86_64)"
Package existence check passed for "libstdc++(x86_64)"
Package existence check passed for "libstdc++-devel(x86_64)"
Package existence check passed for "sysstat"
Package existence check passed for "gcc"
Package existence check passed for "gcc-c++"
Package existence check passed for "ksh"
Package existence check passed for "make"


Package existence check passed for "glibc(x86_64)"
Package existence check passed for "glibc-devel(x86_64)"
Package existence check passed for "libaio(x86_64)"
Package existence check passed for "libaio-devel(x86_64)"
Package existence check passed for "nfs-utils"
Check for multiple users with UID value 0 passed
Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time


Protocol(NTP)...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP)
passed

Core file name pattern consistency check passed.


User "grid" is not part of "root" group. Check passed


Default user file creation mask check passed
Time zone consistency check passed

Checking integrity of name service switch configuration file


"/etc/nsswitch.conf" ...
Check for integrity of name service switch configuration file
"/etc/nsswitch.conf" passed

Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check passed for process "avahi-daemon"
Daemon not running check passed for process "avahi-daemon"
Starting check for Reverse path filter setting ...

Check for Reverse path filter setting passed
Starting check for Network interface bonding status of private
interconnect network interfaces ...

Check for Network interface bonding status of private
interconnect network interfaces passed

Starting check for /dev/shm mounted as temporary file system ...

Check for /dev/shm mounted as temporary file system passed

Starting check for /boot mount ...

Check for /boot mount passed

Starting check for zeroconf check ...

Check for zeroconf check passed

Pre-check for cluster services setup was unsuccessful on all the
nodes.

[grid@host01 ~]$
The only exceptions you should encounter are total system memory and grid membership
in the dba group, which is normal.
5. Change directory to $ORACLE_HOME/addnode and execute addnode.sh.


[grid@host01 ~]$ cd $ORACLE_HOME/addnode
[grid@host01 addnode]$ ./addnode.sh

6. On the Cluster Add Node Information screen, click Add to add the hostname for the node to
be added.
7. Enter host03.example.com for the Public Hostname and select HUB as the Node Role. n s e
i ce
el
Click OK, then click Next to continue.
8. On the Perform Prerequisite Checks page, the Physical memory test will fail as host03 has
a b l
less than 4GB of memory. Click the Ignore All check box and click Next. fer
9. Click Yes in the Oracle Grid Infrastructure dialog box. a n s
n - tr
10. Click Install on the Summary page.
a no
11. The Execute Configuration Scripts dialog box will prompt you to run the root scripts on
h a
host03. Run the orainstRoot.sh script first.
s eฺ
[root@host01 ~]# ssh host03 /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@host01 ~]#
12. Next, execute the root.sh script on host03.
[root@host01 ~]# ssh host03 /u01/app/12.1.0/grid/root.sh
root@host03's password:
Performing root user operation for Oracle 12c

The following environment variables are set as:


ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/12.1.0/grid

Enter the full pathname of the local bin directory:


[/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to
overwrite.

Creating /etc/oratab file...



Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file:
/u01/app/12.1.0/grid/crs/install/crsconfig_params
2015/05/16 11:14:28 CLSRSC-363: User ignored prerequisites
during installation

OLR initialization - successful
2015/05/16 11:14:56 CLSRSC-330: Adding Clusterware entries to
file 'oracle-ohasd.conf'
a no
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability
Services-managed resources on 'host03'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host03'
CRS-2677: Stop of 'ora.drivers.acfs' on 'host03' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed
resources on 'host03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed
resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'host03'
CRS-2672: Attempting to start 'ora.evmd' on 'host03'
CRS-2676: Start of 'ora.mdnsd' on 'host03' succeeded
CRS-2676: Start of 'ora.evmd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host03'
CRS-2676: Start of 'ora.gpnpd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host03'
CRS-2676: Start of 'ora.gipcd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host03'
CRS-2676: Start of 'ora.cssdmonitor' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host03'
CRS-2672: Attempting to start 'ora.diskmon' on 'host03'
CRS-2676: Start of 'ora.diskmon' on 'host03' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not
running on server 'host03'
CRS-2676: Start of 'ora.cssd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host03'
CRS-2672: Attempting to start 'ora.ctssd' on 'host03'
CRS-2676: Start of 'ora.ctssd' on 'host03' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host03'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host03'
CRS-2676: Start of 'ora.asm' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host03'
CRS-2676: Start of 'ora.storage' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'host03'
CRS-2676: Start of 'ora.crf' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host03'
CRS-2676: Start of 'ora.crsd' on 'host03' succeeded
CRS-6017: Processing resource auto-start for servers: host03
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on
'host01'
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on
'host03'
CRS-2672: Attempting to start 'ora.ons' on 'host03'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'host01'
succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'host01'
CRS-2677: Stop of 'ora.scan2.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'host03'
CRS-2676: Start of 'ora.scan2.vip' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on
'host03'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'host03'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host03'
CRS-2676: Start of 'ora.ons' on 'host03' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'host03'
succeeded
CRS-2676: Start of 'ora.asm' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.host03.vip' on 'host03'
CRS-2676: Start of 'ora.host03.vip' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'host03'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.proxy_advm' on 'host03'
CRS-2676: Start of 'ora.proxy_advm' on 'host03' succeeded
CRS-6016: Resource auto-start has completed for server host03
CRS-6024: Completed start of Oracle Cluster Ready Services-
managed resources
CRS-4123: Oracle High Availability Services has been started.

2015/05/16 11:20:06 CLSRSC-343: Successfully started Oracle
clusterware stack

clscfg: EXISTING configuration version 5 detected.


clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
n s e
i ce
2015/05/16 11:20:19 CLSRSC-326: Configure Oracle Grid b l el
Infrastructure for a Cluster ... Succeeded f er a
an s
13. When the root scripts have finished running on host03, return to the Execute
Configuration Scripts dialog box and click OK.
14. Click Close on the Finish screen when prompted by the OUI.
15. Set your environment and execute the crsctl stat res -t command to verify that all
local and cluster resources are running on host03 as expected.
[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
[root@host01 ~]# crsctl stat res -t
----------------------------------------------------------------
Name Target State Server State details
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE host04 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE

ONLINE ONLINE host03 STABLE


ora.ons
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
----------------------------------------------------------------
Cluster Resources
n s e
ce
----------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host03 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host01 169.254.78.72 192.168.1.101,STABLE
ora.asm
1 ONLINE ONLINE host01 STABLE
2 ONLINE ONLINE host02 STABLE
3 ONLINE ONLINE host03 STABLE
ora.cvu
1 ONLINE ONLINE host01 STABLE
ora.gns
1 ONLINE ONLINE host01 STABLE
ora.gns.vip
1 ONLINE ONLINE host01 STABLE
ora.host01.vip
1 ONLINE ONLINE host01 STABLE
ora.host02.vip
1 ONLINE ONLINE host02 STABLE
ora.host03.vip
1 ONLINE ONLINE host03 STABLE
ora.mgmtdb
1 ONLINE ONLINE host01 open,STABLE
ora.oc4j
1 ONLINE ONLINE host01 STABLE
ora.scan1.vip
1 ONLINE ONLINE host02 STABLE
ora.scan2.vip
1 ONLINE ONLINE host03 STABLE
ora.scan3.vip

1 ONLINE ONLINE host01 STABLE


----------------------------------------------------------------
[root@host01 ~]#


Practice 6-2: Add a New Leaf Node to the Cluster
Overview
In this practice, you will use addnode.sh in silent mode to extend your cluster by adding
host05 as the second leaf node.

1. Establish a terminal session connected to host01 as the grid OS user. Do not specify
the –X option for ssh.
[grid@host01 addnode]$ exit (From the grid terminal opened
in the previous practice)

[vncuser@classsroom_pc ~]$ ssh grid@host01


grid@host01's password: *****
n s e
[grid@host01 ~]$
i ce
2. Make sure that you can connect from your host01 to host05 without being prompted for
passwords. Exit back to host01 when finished.
[grid@host01 ~]$ ssh host05
[grid@host05 ~]$ exit
logout
Connection to host05 closed.
[grid@host01 ~]$
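The step above verifies one node by hand; the same user-equivalence check can be scripted for several nodes. A minimal sketch, assuming the node names used in this cluster; BatchMode makes ssh fail instead of prompting, so missing equivalence surfaces as a non-zero exit status. The commands are only echoed here (a dry run) — remove the echo to actually run them:

```shell
# Nodes whose passwordless SSH (user equivalence) should be verified.
nodes="host03 host04 host05"

# BatchMode=yes forbids password prompts, so a broken setup fails fast.
# Echoed as a dry run; remove the echo to test each node for real.
for n in ${nodes}; do
  echo ssh -o BatchMode=yes "${n}" true
done
```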
3. Execute . oraenv to set your environment. Set the SID to +ASM1.
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$
4. Change directory to $ORACLE_HOME/addnode and run addnode.sh silently to add
host05 to your cluster as a leaf node.
[grid@host01 ~]$ cd $ORACLE_HOME/addnode
[grid@host01 addnode]$ ./addnode.sh -silent
"CLUSTER_NEW_NODES={host05}" "CLUSTER_NEW_NODE_ROLES={leaf}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 7382
MB Passed
Checking swap space: must be greater than 150 MB. Actual 8623
MB Passed
[WARNING] [INS-13014] Target environment does not meet some
optional requirements.
CAUSE: Some of the optional prerequisites are not met. See
logs for details. /u01/app/oraInventory/logs/addNodeActions2015-
05-13_04-57-53AM.log
ACTION: Identify the list of failed prerequisite checks from
the log: /u01/app/oraInventory/logs/addNodeActions2015-05-13_04-
57-53AM.log. Then either from the log file or from installation
manual find the appropriate configuration to meet the
prerequisites and fix it manually.

Prepare Configuration in progress.



Prepare Configuration successful.


.................................................. 8% Done.
You can find the log of this install session at:
/u01/app/oraInventory/logs/addNodeActions2015-05-13_04-57-
53AM.log

Instantiate files in progress.


Instantiate files successful.
.................................................. 14% Done.

Copying files to node in progress.

Copying files to node successful.
.................................................. 73% Done.

Saving cluster inventory in progress.
.................................................. 80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0/grid was
successful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.


.................................................. 88% Done.

As a root user, execute the following script(s):


1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/12.1.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following


nodes:
[host05]
Execute /u01/app/12.1.0/grid/root.sh on the following nodes:

[host05]

The scripts can be executed in parallel on all the nodes.

..........

Update Inventory in progress.


.................................................. 100% Done.

Update Inventory successful.


Successfully Setup Software.

[grid@host01 addnode]$
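For comparison, the same silent invocation can add a hub node by changing the role argument. A hypothetical sketch — host06 does not exist in this environment, and because GNS manages VIPs in this cluster the CLUSTER_NEW_VIRTUAL_HOSTNAMES argument is omitted. The command is built as a string and echoed, so the snippet is safe to run anywhere:

```shell
# Dry run of a silent HUB node addition; host06 is illustrative only.
# On a real cluster, run the command from $ORACLE_HOME/addnode as grid.
new_node=host06
new_role=hub
cmd="./addnode.sh -silent \"CLUSTER_NEW_NODES={${new_node}}\" \"CLUSTER_NEW_NODE_ROLES={${new_role}}\""
echo "${cmd}"
```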
n s e
5. Execute the orainstRoot.sh and root.sh scripts on host05.
i ce
[root@host01 ~]# ssh host05 /u01/app/oraInventory/orainstRoot.sh
b l el
Changing permissions of /u01/app/oraInventory.
fer a
a n s
Adding read,write permissions for group.
n - tr
no
Removing read,write,execute permissions for world.
a
s eฺto oinstall.
Changing groupname of /u01/app/oraInventory
) h a id
m u
ฺco ent G
The execution of the script is complete.
i s
[root@host01 ~]# ssh s as S/u01/app/12.1.0/grid/root.sh
shost05 tud
Performing roots@
e t h is
user operation for Oracle 12c
b l e
( b ro o us
The s
e following t environment variables are set as:
b l
o ORACLE_OWNER= grid
io R
aul
ORACLE_HOME= /u01/app/12.1.0/grid

Br Copying dbhome to /usr/local/bin ...


Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file:
/u01/app/12.1.0/grid/crs/install/crsconfig_params
2015/05/16 16:46:06 CLSRSC-363: User ignored prerequisites
during installation

OLR initialization - successful
2015/05/16 16:46:39 CLSRSC-330: Adding Clusterware entries to
file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.


CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability
Services-managed resources on 'host05'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host05'
CRS-2677: Stop of 'ora.drivers.acfs' on 'host05' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed
resources on 'host05' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed
resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'host05'
CRS-2672: Attempting to start 'ora.evmd' on 'host05'
CRS-2676: Start of 'ora.mdnsd' on 'host05' succeeded
CRS-2676: Start of 'ora.evmd' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host05'
CRS-2676: Start of 'ora.gpnpd' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host05'
CRS-2676: Start of 'ora.gipcd' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host05'
CRS-2676: Start of 'ora.cssdmonitor' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host05'


CRS-2672: Attempting to start 'ora.diskmon' on 'host05'
CRS-2676: Start of 'ora.diskmon' on 'host05' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not
running on server 'host05'
CRS-2676: Start of 'ora.cssd' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host05'
CRS-2672: Attempting to start 'ora.ctssd' on 'host05'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host05'
succeeded
CRS-2676: Start of 'ora.ctssd' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host05'
CRS-2676: Start of 'ora.storage' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'host05'
CRS-2676: Start of 'ora.crf' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host05'
CRS-2676: Start of 'ora.crsd' on 'host05' succeeded
CRS-6017: Processing resource auto-start for servers: host05
CRS-6016: Resource auto-start has completed for server host05
CRS-6024: Completed start of Oracle Cluster Ready Services-

managed resources
CRS-4123: Oracle High Availability Services has been started.
2015/05/16 16:50:35 CLSRSC-343: Successfully started Oracle
clusterware stack

Successfully accumulated necessary OCR keys.


Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
n s e
i ce
el
2015/05/16 16:50:41 CLSRSC-325: Configure Oracle Grid
Infrastructure for a Cluster ... succeeded
a b l
fer
a n s
[root@host01 ~]#
n - tr
6. Execute crsctl stat res -t and ensure the ora.LISTENER_LEAF.lsnr resource is
located on host05. The resource status should be OFFLINE.
h a s eฺ
[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
[root@host01 ~]# crsctl stat res -t
----------------------------------------------------------------
Name Target State Server State details
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE host04 STABLE
OFFLINE OFFLINE host05 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE

ora.ons
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
----------------------------------------------------------------
Cluster Resources
----------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host03 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host01 169.254.247.174 192.168.1.101,STABLE
ora.asm
1 ONLINE ONLINE host01 STABLE
2 ONLINE ONLINE host02 STABLE
3 ONLINE ONLINE host03 STABLE
ora.cvu
1 ONLINE ONLINE host01 STABLE
ora.gns
1 ONLINE ONLINE host01 STABLE
ora.gns.vip
1 ONLINE ONLINE host01 STABLE
ora.host01.vip
1 ONLINE ONLINE host01 STABLE
ora.host02.vip
1 ONLINE ONLINE host02 STABLE
ora.host03.vip
1 ONLINE ONLINE host03 STABLE
ora.mgmtdb
1 ONLINE ONLINE host01 Open,STABLE
ora.oc4j
1 ONLINE ONLINE host01 STABLE
ora.scan1.vip
1 ONLINE ONLINE host02 STABLE
ora.scan2.vip
1 ONLINE ONLINE host03 STABLE
ora.scan3.vip
1 ONLINE ONLINE host01 STABLE

----------------------------------------------------------------

[root@host01 ~]#
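Rather than scanning the full table, the leaf listener's stanza can be pulled out with grep. A minimal sketch; the sample output is hard-coded (abbreviated from the listing above) so the snippet runs anywhere — on a cluster node you would pipe the live crsctl stat res -t output instead:

```shell
# Abbreviated sample of 'crsctl stat res -t' output, hard-coded for
# illustration; replace the printf with the live command on the cluster.
sample='ora.LISTENER_LEAF.lsnr
               OFFLINE OFFLINE      host04                   STABLE
               OFFLINE OFFLINE      host05                   STABLE'

# grep -A2 keeps the two server lines after the resource name; then
# count how many of them report OFFLINE.
stanza=$(printf '%s\n' "${sample}" | grep -A2 'ora.LISTENER_LEAF.lsnr')
printf '%s\n' "${stanza}" | grep -c OFFLINE    # → 2
```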
7. At this point you have configured a Flex Cluster with Flex ASM using all available nodes.
Next, you will install database software and create a RAC database on the cluster. In
preparation for this you will now create another ASM disk group to host the Fast Recovery
Area (FRA). Exit out of the grid terminal session and start a grid terminal session using the
–X option. Start the ASM Configuration Assistant (asmca).
n s e
i ce
[grid@host01 addnode]$ exit (From the current grid terminal)

[vncuser@classsroom_pc ~]$ ssh -X grid@host01
grid@host01's password: *****

[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01 ~]$ asmca
8. After the ASM Configuration Assistant appears, click Create.
9. In the Create Disk Group window, enter FRA as the disk group name and select the first three
candidate disks (/dev/asmdisk1p8, /dev/asmdisk1p9, and /dev/asmdisk2p1).
Make sure the Redundancy is External. Then click OK to create the disk group.
10. After the disk group creation process completes you will see a dialog window indicating that
the disk group has been created. Click OK to proceed.
11. Click Exit to quit the ASM Configuration Assistant.
12. Click Yes to confirm that you want to quit the ASM Configuration Assistant.

Practice 6-3: Installing RAC Database Software and Creating a RAC
Database
Overview
In this practice you will install Oracle Database 12c software and create an Oracle RAC
database.

1. Establish an ssh connection to host01 using the –X option as the oracle user.
[vncuser@classsroom_pc ~]$ ssh -X oracle@host01
oracle@host01's password:
Last login: Tue Apr 30 17:07:09 2015 from 192.0.2.1
[oracle@host01 ~]$
2. Change directory to /stage/database/ and start the installer.
n s e
[oracle@host01 ~]$ cd /stage/database
i ce
[oracle@host01 database]$ ./runInstaller
b l el
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 500 MB. Actual 7934 MB
Passed
Checking swap space: must be greater than 150 MB. Actual 8632 MB
Passed
Checking monitor: must be configured to display at least 256
colors. Actual 16777216 Passed

Preparing to launch Oracle Universal Installer from
/tmp/OraInstall2015-05-22_03-35-08PM. Please wait ...
b
3. On the Configure Security Updates screen, make sure that the I wish to receive security
updates via My Oracle Support check box is unselected and click Next. Click Yes to confirm
that you will not configure security updates for this installation.
4. On the Select Installation Option screen, select Install database software only and click
Next.
5. On the Grid Installation Options screen, click Next to accept the default selection (Oracle
Real Application Clusters database installation).
6. On the Select List of Nodes screen, ensure that all the cluster nodes are selected and click
SSH Connectivity. Note that Oracle recommends that you install the Oracle Database
software on all the cluster nodes, even Leaf Nodes. This simplifies things if you ever want
to convert a Leaf Node to a Hub Node and run database instances on it.
7. Enter the oracle user password (please refer to D81246GC11_passwords.doc for account
passwords) into the OS Password field and click Test to confirm that the required SSH
connectivity is configured across the cluster. Your laboratory environment is preconfigured
with the required SSH connectivity so you will next see a dialog confirming this. Click OK to
continue.
8. If the required SSH connectivity was not present you could now click Setup to perform the
required configuration. However, since the laboratory environment is already configured
correctly, click Next to continue.
9. On the Select Product Languages screen, click Next to accept the default selection
(English).
10. On the Select Database Edition screen, click Next to accept the default selection
(Enterprise Edition).
11. On the Specify Installation Location screen, click Next to accept the default installation
location. The Oracle base should be /u01/app/oracle and the Software location should
be /u01/app/oracle/product/12.1.0/dbhome_1.
12. On the Privileged Operating System Groups screen, click Next to accept the default

settings. They should all be dba except Database Operator which should be oper.
13. On the System Prerequisite Checks page, a series of prerequisite checks is performed. A
warning regarding Maximum locked memory is displayed. Click Fix and Check again.
Execute the fixup scripts on all indicated nodes. Click on the Verification Result tab, click
the Ignore All check box and then click Next.
An Oracle Database 12c Rel 1 Installation popup screen appears next. Click Yes to
continue.
14. Examine the Summary screen. When ready, click Install to start the installation.
15. Oracle Database release 12.1 software now installs on the cluster. The Install Product
screen follows the course of the installation.
16. Near the end of the installation process, you will see the Execute Configuration scripts
dialog box. Execute the orainstRoot.sh script on host05, if prompted. In your root
terminal session on host01, execute the configuration script. Press the Enter key when
you are prompted for the local bin directory location.
Run the script on host02, host03, host04 and host05 as shown below.
[root@host01 ~]# /u01/app/oracle/product/12.1.0/dbhome_1/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/dbhome_1

Enter the full pathname of the local bin directory:
[/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
You have new mail in /var/spool/mail/root

[root@host01 ~]# ssh host02


/u01/app/oracle/product/12.1.0/dbhome_1/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/dbhome_1

Enter the full pathname of the local bin directory:



[/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script. n s e
i ce
Now product-specific root actions will be performed.
b l el
fer a
[root@host01 ~]# ssh host03
a n s
/u01/app/oracle/product/12.1.0/dbhome_1/root.sh
n - tr
Performing root user operation for Oracle 12c
a no
The following environment variables are set as:
eฺ
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/dbhome_1

Enter the full pathname of the local bin directory:
[/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

[root@host01 ~]# ssh host04


/u01/app/oracle/product/12.1.0/dbhome_1/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/dbhome_1

Enter the full pathname of the local bin directory:
[/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

[root@host01 ~]# ssh host05


/u01/app/oracle/product/12.1.0/dbhome_1/root.sh
Performing root user operation for Oracle 12c

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/dbhome_1

Enter the full pathname of the local bin directory:
[/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@host01 ~]#
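The five root.sh runs above differ only in the host name, so the remaining nodes can be handled with a loop. A minimal dry-run sketch assuming this cluster's node list; the echo keeps the loop from connecting anywhere — remove it to execute the script on each node in turn as root:

```shell
# Remaining cluster nodes and the configuration script they all run.
nodes="host02 host03 host04 host05"
root_script=/u01/app/oracle/product/12.1.0/dbhome_1/root.sh

# Echoed as a dry run; remove the echo to execute over ssh.
for n in ${nodes}; do
  echo ssh "${n}" "${root_script}"
done
```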
17. After you have executed the required configuration script on all of the cluster nodes, return
to your Oracle Universal Installer session and click OK to proceed.
18. After configuration completes you will see the Finish screen. Click Close to close Oracle
Universal Installer.

Practice 6-4: Creating a RAC Database
Overview
In this practice you will create an Oracle RAC database.
1. Using the same oracle terminal session from the previous practice, change directory to
/u01/app/oracle/product/12.1.0/dbhome_1/bin/ and execute dbca.

[oracle@host01 database]$ cd
/u01/app/oracle/product/12.1.0/dbhome_1/bin
[oracle@host01 bin]$ ./dbca
2. On the Database Operation screen, click Next to accept the default selection (Create
Database).
3. On the Creation Mode screen, click Next to accept the default selection (Advanced Mode).
4. On the Database Template screen, click Next to accept the default settings (a Policy-
Managed RAC Database using the General Purpose template).
5. On the Database Identification screen, specify orcl as the Global Database Name and b l el
click Next. fer a
6. On the Database Placement screen, specify orcldb for the Server pool Name and set its a n s
cardinality to 3. Click Next to proceed. n - tr
7. On the Management Options screen, unselect Run CVU Checks Periodically, ensure that
Configure EM Database Express is selected and the EM Database Port is 5500. Click Next
to continue.
8. On the Database Credentials screen, select ‘Use the Same Administrative Passwords for
All Accounts’ and enter the administrative accounts (SYS, SYSTEM) password. Then click
Next to continue.

9. On the Storage Locations screen, make sure Storage Type is ASM and make the following
adjustments:
   − Database File Locations: +DATA
   − Fast Recovery Area: +FRA
   − Fast Recovery Area Size: 5160
   Space constraints in the laboratory environment require that you ignore the warning
   regarding the size of the Fast Recovery Area. Once your values match the values above,
   click Next to continue.
10. On the Database Options screen, select Sample Schemas and click Next. Click Yes in the
Database Configuration Assistant dialog box regarding instance placement on the local
node.
11. On the Memory tab, in the Initialization Parameters screen, set Memory Size (SGA and
PGA) to 1300 and click on the Character Sets tab.
12. On the Character Sets tab, in the Initialization Parameters screen, select Use Unicode
(AL32UTF8) and click Next.
13. On the Creation Options screen, click Next to accept the default selection (Create
Database).
14. Wait while a series of prerequisite checks are performed.
15. Examine the Summary screen. When you are ready, click Finish to start the database
creation process.
16. Follow the database creation process on the Progress Page.
17. Examine the dialog which indicates that the database creation process is completed. Take
note of the EM Database Express URL.
18. Click Close to quit the Database Configuration Assistant.
19. Back in the oracle user terminal, configure the environment using the oraenv script.
Enter orcl when you are prompted for an ORACLE_SID value.

[oracle@host01 bin]$ . oraenv


ORACLE_SID = [oracle] ? orcl
The Oracle base has been set to /u01/app/oracle
[oracle@host01 bin]$
20. Use the srvctl command to check on which cluster nodes the database instances are
running.
[oracle@host01 bin]$ srvctl status database -db orcl
Instance orcl_1 is running on node host03
Instance orcl_2 is running on node host01
Instance orcl_3 is running on node host02
[oracle@host01 bin]$
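The instance-to-node mapping reported by srvctl can also be consumed by a script. A minimal sketch, using a hard-coded copy of the sample output above (the variable names are illustrative, not part of any Oracle tool; on a live cluster you would capture the text with `srvctl status database -db orcl`):

```shell
# Sample output from `srvctl status database -db orcl`, hard-coded
# so the sketch is self-contained.
status_output="Instance orcl_1 is running on node host03
Instance orcl_2 is running on node host01
Instance orcl_3 is running on node host02"

# Count running instances and extract the node that hosts orcl_2.
running_count=$(printf '%s\n' "$status_output" | grep -c 'is running')
orcl2_node=$(printf '%s\n' "$status_output" | awk '$2 == "orcl_2" {print $NF}')

echo "running instances: $running_count"
echo "orcl_2 runs on: $orcl2_node"
```

The same pattern works for any srvctl status line, because the instance name is always the second field and the node name is always the last.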
21. Close all terminal windows opened for these practices.

Congratulations! You have successfully configured an Oracle Database 12c Flex Cluster
with Flex ASM and a RAC database.


Practices for Lesson 7:
Traditional Clusterware Management
Chapter 7

Practices for Lesson 7: Overview
Practices Overview
In these practices, you will:
• Verify, stop, and start Oracle Clusterware.

• Add and remove Oracle Clusterware configuration files and back up the Oracle Cluster
Registry and the Oracle Local Registry.
• Determine the location of the Oracle Local Registry (OLR) and perform backups of the
OCR and OLR.
• Use oifcfg to configure a second private interconnect.
• Drop and add SCANs, SCAN listeners, and the GNS VIP.
• Recover from a voting disk corruption.
Practice 7-1: Verifying, Starting, and Stopping Oracle Clusterware
Overview
In this practice, you check the status of Oracle Clusterware using both the operating system
commands and the crsctl utility. You will also start and stop Oracle Clusterware.

1. Connect to the first node of your cluster as the grid user. You can use the oraenv script
to configure your environment.
[vncuser@classroom_pc ~] ssh grid@host01
Password: *******

[grid@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[grid@host01~] $

2. Using operating system commands, verify that the Oracle Clusterware daemon processes
are running on the current node. (Hint: Most of the Oracle Clusterware daemon processes
have names that end with d.bin.)
[grid@host01~]$ pgrep -l d.bin
309 ocssd.bin
679 octssd.bin
1721 crsd.bin
17048 ohasd.bin
17363 mdnsd.bin
17380 gpnpd.bin
17388 gipcd.bin
18065 osysmond.bin
29304 gnsd.bin
32739 evmd.bin
[grid@host01 ~]$

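The daemon check in this step can be scripted instead of read by eye. A minimal sketch, run against a hard-coded sample of `pgrep -l d.bin` output (on a live node you would capture the output with the real command; the daemon list and variable names here are illustrative):

```shell
# Sample `pgrep -l d.bin` output, hard-coded so the check is reproducible
# outside the lab. On a cluster node: pgrep_output=$(pgrep -l d.bin)
pgrep_output="309 ocssd.bin
679 octssd.bin
1721 crsd.bin
17048 ohasd.bin
17363 mdnsd.bin
32739 evmd.bin"

missing=""
for daemon in ocssd.bin crsd.bin ohasd.bin evmd.bin gipcd.bin; do
  # -w matches whole words, so crsd.bin does not match inside ocssd.bin
  printf '%s\n' "$pgrep_output" | grep -qw "$daemon" || missing="$missing $daemon"
done

echo "missing daemons:${missing:- none}"
```

In this sample, gipcd.bin is deliberately absent from the process list, so the loop reports it as missing.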
3. Using the crsctl utility, verify that Oracle Clusterware is running on the current node.
[grid@host01 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@host01 ~]$

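A script can decide cluster health from this output by checking that every CRS-prefixed line reports "is online". A minimal sketch against a hard-coded sample of the output above (variable names are illustrative):

```shell
# Sample `crsctl check crs` output, hard-coded for illustration.
check_output="CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online"

# The stack is healthy only if every reported service line ends in "is online".
online_count=$(printf '%s\n' "$check_output" | grep -c 'is online$')
total_lines=$(printf '%s\n' "$check_output" | grep -c '^CRS-')

if [ "$online_count" -eq "$total_lines" ] && [ "$total_lines" -gt 0 ]; then
  crs_state="healthy"
else
  crs_state="degraded"
fi
echo "$crs_state"
```

Comparing counts rather than matching specific message numbers keeps the check robust if the set of reported services differs between releases.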
4. Verify the status of all cluster resources that are being managed by Oracle Clusterware for
all nodes.
[grid@host01 ~]$ crsctl stat res -t
-------------------------------------------------------------------
Name Target State Server State details

-------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.FRA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE host04 STABLE
OFFLINE OFFLINE host05 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.ons
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host03 STABLE

ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host01 169.254.247.174 192.
168.1.101,STABLE
ora.asm
1 ONLINE ONLINE host01 STABLE
2 ONLINE ONLINE host02 STABLE
3 ONLINE ONLINE host03 STABLE
ora.cvu
1 ONLINE ONLINE host01 STABLE
ora.gns
1 ONLINE ONLINE host01 STABLE
ora.gns.vip
1 ONLINE ONLINE host01 STABLE
ora.host01.vip
1 ONLINE ONLINE host01 STABLE
ora.host02.vip
1 ONLINE ONLINE host02 STABLE
ora.host03.vip
1 ONLINE ONLINE host03 STABLE
ora.mgmtdb
1 ONLINE ONLINE host01 Open,STABLE
ora.oc4j
1 ONLINE ONLINE host01 STABLE
ora.orcl.db
1 ONLINE ONLINE host02 Open,STABLE
2 ONLINE ONLINE host03 Open,STABLE
3 ONLINE ONLINE host01 Open,STABLE
ora.scan1.vip
1 ONLINE ONLINE host02 STABLE
ora.scan2.vip
1 ONLINE ONLINE host03 STABLE
ora.scan3.vip
1 ONLINE ONLINE host01 STABLE
-------------------------------------------------------------------

[grid@host01 ~]$

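When the full `crsctl stat res -t` listing is too long to scan by eye, awk can summarize it. A minimal sketch that counts OFFLINE state lines in a hard-coded fragment of the output above (on state lines, the second whitespace-separated column is the current state; the variable names are illustrative):

```shell
# A small fragment of `crsctl stat res -t` output, hard-coded for the sketch.
stat_output="ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE host04 STABLE
OFFLINE OFFLINE host05 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE"

# Count state lines whose current state (second column) is OFFLINE.
# Resource-name lines have only one field, so $2 is empty and never matches.
offline_count=$(printf '%s\n' "$stat_output" | awk '$2 == "OFFLINE" {n++} END {print n+0}')
echo "offline state lines: $offline_count"
```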
5. Attempt to stop Oracle Clusterware on the current node while logged in as the grid user.
What happens and why?
[grid@host01 ~]$ crsctl stop crs
CRS-4563: Insufficient user privileges.
CRS-4000: Command Stop failed, or completed with errors.

[grid@host01 ~]$

6. Switch to the root account, set the environment with oraenv and stop Oracle Clusterware
only on the current node.
[grid@host01 ~]$ su -
[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
[root@host01 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'host01'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.cvu' on 'host01'
CRS-2673: Attempting to stop 'ora.oc4j' on 'host01'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'host01'
CRS-2673: Attempting to stop 'ora.gns' on 'host01'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host01' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'host01'
CRS-2673: Attempting to stop 'ora.host01.vip' on 'host01'
CRS-2677: Stop of 'ora.FRA.dg' on 'host01' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'host02'
CRS-2677: Stop of 'ora.DATA.dg' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.host01.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.host01.vip' on 'host03'
CRS-2676: Start of 'ora.scan3.vip' on 'host02' succeeded
CRS-2676: Start of 'ora.host01.vip' on 'host03' succeeded
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'host01'
CRS-2677: Stop of 'ora.cvu' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'host03'
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'host02'
CRS-2676: Start of 'ora.cvu' on 'host03' succeeded
CRS-2677: Stop of 'ora.gns' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'host01'

CRS-2677: Stop of 'ora.gns.vip' on 'host01' succeeded


CRS-2672: Attempting to start 'ora.gns.vip' on 'host02'
CRS-2676: Start of 'ora.gns.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'host02'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'host01' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'host02' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'host01' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'host02'
CRS-2676: Start of 'ora.gns' on 'host02' succeeded
CRS-2676: Start of 'ora.oc4j' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host01'
CRS-2677: Stop of 'ora.ons' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host01'
CRS-2677: Stop of 'ora.net1.network' on 'host01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'host01' has completed
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'host01'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2673: Attempting to stop 'ora.evmd' on 'host01'
CRS-2673: Attempting to stop 'ora.storage' on 'host01'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host01'
CRS-2677: Stop of 'ora.storage' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'host01' succeeded
CRS-2677: Stop of 'ora.crf' on 'host01' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'host01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'host01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host01'
succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'host01'
CRS-2677: Stop of 'ora.gipcd' on 'host01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed

resources on 'host01' has completed


CRS-4133: Oracle High Availability Services has been stopped.
[root@host01 ~]#

7. Attempt to check the status of Oracle Clusterware now that it has been successfully
stopped.
[root@host01 ~]# crsctl check cluster
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Check failed, or completed with errors.

[root@host01 ~]#
8. Connect to the second node of your cluster and verify that Oracle Clusterware is still
running on that node. Set your environment for the second node by using the oraenv
utility.
[root@host01 ~]# ssh host02
Last login: Thu May 16 16:00:25 2015 from host01.example.com
[root@host02 ~]# . oraenv
ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/grid

[root@host02 ~]# crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[root@host02 ~]#

9. Verify that all cluster resources are running on the second and third nodes, stopped on the
first node, and that the VIP resources from the first node have migrated or failed over to the
other two nodes. When you are finished, exit from host02.
[root@host02 ~]# crsctl stat res -t
-------------------------------------------------------------------
Name Target State Server State details
-------------------------------------------------------------------
Local Resources
-------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr

ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.DATA.dg
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE

ora.FRA.dg
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE host04 STABLE
OFFLINE OFFLINE host05 STABLE
ora.net1.network
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.ons
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host03 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host02 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host02 169.254.247.177 192.
168.1.102,STABLE
ora.asm
1 ONLINE OFFLINE STABLE
2 ONLINE ONLINE host02 STABLE
3 ONLINE ONLINE host03 STABLE
ora.cvu
1 ONLINE ONLINE host03 STABLE
ora.gns
1 ONLINE ONLINE host02 STABLE
ora.gns.vip
1 ONLINE ONLINE host02 STABLE

ora.host01.vip
1 ONLINE INTERMEDIATE host03 FAILED OVER,STABLE
ora.host02.vip
1 ONLINE ONLINE host02 STABLE
ora.host03.vip
1 ONLINE ONLINE host03 STABLE
ora.mgmtdb
1 ONLINE ONLINE host02 Open,STABLE
ora.oc4j
1 ONLINE ONLINE host02 STABLE
ora.orcl.db
1 ONLINE ONLINE host02 Open,STABLE
2 ONLINE ONLINE host03 Open,STABLE
3 ONLINE OFFLINE host01 STABLE
ora.scan1.vip
1 ONLINE ONLINE host02 STABLE
ora.scan2.vip
1 ONLINE ONLINE host03 STABLE
ora.scan3.vip
1 ONLINE ONLINE host02 STABLE
-------------------------------------------------------------------

[root@host02 ~]# exit
logout
Connection to host02 closed.
[root@host01 ~]#

10. Restart Oracle Clusterware on host01 as the root user. Exit from the root account back
to grid and verify the results of the crsctl start crs command you just executed. It
may take several minutes for the entire stack to be up on all nodes.
[root@host01 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@host01 ~]# exit


logout
Connection to host01 closed.

[grid@host01 ~]$ crsctl stat res -t


-------------------------------------------------------------------
Name Target State Server State details
-------------------------------------------------------------------

Local Resources
-------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.FRA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE host04 STABLE
OFFLINE OFFLINE host05 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.ons
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host01 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host03 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host02 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host02 169.254.247.177 192.
168.1.102,STABLE
ora.asm

1 ONLINE ONLINE host01 STABLE
2 ONLINE ONLINE host02 STABLE
3 ONLINE ONLINE host03 STABLE
ora.cvu
1 ONLINE ONLINE host03 STABLE

ora.gns
1 ONLINE ONLINE host02 STABLE
ora.gns.vip
1 ONLINE ONLINE host02 STABLE
ora.host01.vip
1 ONLINE ONLINE host01 STABLE
ora.host02.vip
1 ONLINE ONLINE host02 STABLE
ora.host03.vip
1 ONLINE ONLINE host03 STABLE
ora.mgmtdb
1 ONLINE ONLINE host02 Open,STABLE
ora.orcl.db
1 ONLINE ONLINE host02 Open,STABLE
2 ONLINE ONLINE host03 Open,STABLE
3 ONLINE ONLINE host01 Open,STABLE
ora.oc4j
1 ONLINE ONLINE host02 STABLE
ora.scan1.vip
1 ONLINE ONLINE host01 STABLE
ora.scan2.vip
1 ONLINE ONLINE host03 STABLE
ora.scan3.vip
1 ONLINE ONLINE host02 STABLE
-------------------------------------------------------------------

[grid@host01 ~]$

11. Relocate the Grid Infrastructure management repository back to host01.


[grid@host01 ~]$ srvctl relocate mgmtdb -n host01

[grid@host01 ~]$ crsctl stat res ora.mgmtdb -t


-------------------------------------------------------------------
Name Target State Server State details
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.mgmtdb
1 ONLINE ONLINE host01 Open,STABLE
-------------------------------------------------------------------

[grid@host01 ~]$

12. Close all terminal windows opened for this practice.


Practice 7-2: Adding and Removing Oracle Clusterware Configuration
Files
Overview
In this practice, you determine the current location of your voting disks and Oracle Cluster
Registry (OCR) files. You will then add another OCR location and remove it.

1. As the grid user, use the crsctl utility to determine the location of the voting disks that
are currently used by your Oracle Clusterware installation.
[vncuser@classroom_pc ~] ssh grid@host01
Password: *******

[grid@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 7b50c15f90dd4f8bbf81d40d84f81f33 (/dev/asmdisk1p1) [DATA]
 2. ONLINE d084edbe14444ffdbf362ff8ec50c511 (/dev/asmdisk1p10) [DATA]
 3. ONLINE 3c7041d9c84e4ffdbfc550f315b61226 (/dev/asmdisk1p11) [DATA]
Located 3 voting disk(s).
[grid@host01 ~]$
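The voting disk listing can be post-processed to confirm that all disks are ONLINE and to identify the hosting disk group. A minimal sketch against a hard-coded copy of the output above (variable names are illustrative):

```shell
# Sample `crsctl query css votedisk` output; the File Universal Ids below
# are copied from the lab transcript.
votedisk_output="1. ONLINE 7b50c15f90dd4f8bbf81d40d84f81f33 (/dev/asmdisk1p1) [DATA]
2. ONLINE d084edbe14444ffdbf362ff8ec50c511 (/dev/asmdisk1p10) [DATA]
3. ONLINE 3c7041d9c84e4ffdbfc550f315b61226 (/dev/asmdisk1p11) [DATA]"

# Count voting disks reported ONLINE and note which disk group holds them.
online_disks=$(printf '%s\n' "$votedisk_output" | grep -cw ONLINE)
# Strip the surrounding brackets from the last field of the first line.
disk_group=$(printf '%s\n' "$votedisk_output" | awk 'NR==1 {gsub(/[][]/, "", $NF); print $NF}')

echo "online voting disks: $online_disks in $disk_group"
```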
Br 2. Use the ocrcheck utility to determine the location of the Oracle Clusterware Registry
(OCR) files.
[grid@host01 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +DATA
[grid@host01 ~]$

3. Verify that the FRA ASM disk group is currently online for all nodes using the crsctl utility.
[grid@host01 ~]$ crsctl stat res ora.FRA.dg -t
----------------------------------------------------------------
Name Target State Server State
details

----------------------------------------------------------------
Local Resources
---------------------------------------------------------------
ora.FRA.dg
ONLINE ONLINE host01 STABLE

ONLINE ONLINE host02 STABLE


ONLINE ONLINE host03 STABLE

----------------------------------------------------------------
[grid@host01 ~]$

4. Switch to the root account and add a second OCR location that is to be stored in the FRA
ASM disk group. Set the environment with oraenv and use the ocrcheck command to
verify the results.
[grid@host01 ~]$ su -
Password:

[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# ocrconfig -add +FRA

[root@host01 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +DATA
Device/File Name : +FRA

[root@host01 ~]#

5. Examine the contents of the ocr.loc configuration file to see the changes made to the file
referencing the new OCR location.
[root@host01 ~]# cat /etc/oracle/ocr.loc
#Device/file getting replaced by
+FRA/cluster01/OCRFILE/registry.255.883185695
ocrconfig_loc=+DATA/cluster01/OCRFILE/registry.255.883015549
ocrmirrorconfig_loc=+FRA/cluster01/OCRFILE/registry.255.883185695
local_only=false

[root@host01 ~]#
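Because ocr.loc is a simple key=value file, its OCR locations can be pulled out with awk. A minimal sketch using an embedded copy of the file contents shown above (reading /etc/oracle/ocr.loc directly would require a cluster node; variable names are illustrative):

```shell
# Embedded copy of the ocr.loc contents from the transcript above.
ocr_loc=$(cat <<'EOF'
#Device/file getting replaced by
+FRA/cluster01/OCRFILE/registry.255.883185695
ocrconfig_loc=+DATA/cluster01/OCRFILE/registry.255.883015549
ocrmirrorconfig_loc=+FRA/cluster01/OCRFILE/registry.255.883185695
local_only=false
EOF
)

# Keep only the *config_loc keys; everything after '=' is the OCR location.
primary=$(printf '%s\n' "$ocr_loc" | awk -F'=' '/^ocrconfig_loc=/ {print $2}')
mirror=$(printf '%s\n' "$ocr_loc" | awk -F'=' '/^ocrmirrorconfig_loc=/ {print $2}')

echo "primary OCR: $primary"
echo "mirror OCR: $mirror"
```

The leading disk group of each value (the text before the first slash) is what ocrcheck -config reports as the Device/File Name.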

6. Open a connection to your second node as the root user, set your environment and
remove the second OCR file that was added from the first node. Exit the remote connection
and verify the results when completed.
[root@host01 ~]# ssh host02

[root@host02 ~]# . oraenv


ORACLE_SID = [root] ? +ASM2
The Oracle base has been set to /u01/app/grid

[root@host02 ~]# ocrconfig -delete +FRA

[root@host02 ~]# exit


logout
Connection to host02 closed.

[root@host01 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +DATA

[root@host01 ~]#

7. Close all terminal windows opened for this practice.

Practice 7-3: Performing a Backup of the OCR and OLR
Overview
In this practice, you determine the location of the Oracle Local Registry (OLR) and perform
backups of the OCR and OLR files.

1. As the root user, use the ocrconfig utility to list the automatic backups of the Oracle
Cluster Registry (OCR) and the node or nodes on which they have been performed.
Note: You will see backups listed only if it has been more than four hours since Grid
Infrastructure was installed. Your list will most likely look slightly different from the example
below.
[vncuser@classroom_pc ~] ssh root@host01
Password: *******

[root@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[root@host01 ~]# ocrconfig -showbackup

host01 2015/05/19 08:07:28
/u01/app/12.1.0/grid/cdata/cluster01/backup00.ocr

host01 2015/05/19 04:07:26
/u01/app/12.1.0/grid/cdata/cluster01/backup01.ocr

host01 2015/05/19 00:07:25
/u01/app/12.1.0/grid/cdata/cluster01/backup02.ocr

host01 2015/05/19 00:07:25
/u01/app/12.1.0/grid/cdata/cluster01/day.ocr

host01 2015/05/19 00:07:25


/u01/app/12.1.0/grid/cdata/cluster01/week.ocr

host01 2015/05/19 05:50:54


/u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_055054.ocr

host01 2015/05/19 04:47:54


/u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_044754.ocr

host01 2015/05/18 20:08:18


/u01/app/12.1.0/grid/cdata/cluster01/backup_20150518_200818.ocr

[root@host01 ~]#

2. Perform a manual backup of the OCR.
[root@host01 ~]# ocrconfig -manualbackup

host02 2015/05/20 11:16:09


/u01/app/12.1.0/grid/cdata/cluster01/backup_20150520_111609.ocr

host01 2015/05/16 11:20:14


/u01/app/12.1.0/grid/cdata/cluster01/backup_20150516_112014.ocr

host01 2015/05/07 20:38:04


/u01/app/12.1.0/grid/cdata/cluster01/backup_20150507_203804.ocr

[root@host01 ~]#
n s e
Note: Your output may vary slightly. i ce
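Because ocrconfig -showbackup lists the most recent backup first, a script can recover the newest backup path and the node that holds it. A minimal sketch against a hard-coded two-entry sample in the same layout as the output above (variable names are illustrative):

```shell
# Sample `ocrconfig -showbackup manual` output: a host/timestamp line
# followed by the backup file path, newest entry first.
backup_output="host02 2015/05/20 11:16:09
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150520_111609.ocr

host01 2015/05/16 11:20:14
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150516_112014.ocr"

# First line names the node holding the newest backup; the first line
# starting with '/' is that backup's file path.
latest_node=$(printf '%s\n' "$backup_output" | awk 'NR==1 {print $1}')
latest_file=$(printf '%s\n' "$backup_output" | awk '/^\// {print; exit}')

echo "latest manual backup: $latest_file (on $latest_node)"
```

Note that the node matters: the backup file lives on the node shown in the listing, not necessarily on the node where you run the command.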
b l el
3. Use the ocrconfig utility to list only the manual backups of the OCR that have been
performed for your Oracle Clusterware installation.
[root@host01 ~]# ocrconfig -showbackup manual

host02 2015/05/19 08:49:32
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_084932.ocr

host01 2015/05/19 05:50:54
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_055054.ocr

host01 2015/05/19 04:47:54
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_044754.ocr
host01 2015/05/18 20:08:18
/u01/app/12.1.0/grid/cdata/cluster01/backup_20150518_200818.ocr

host01 2015/05/18 20:01:51


/u01/app/12.1.0/grid/cdata/cluster01/backup_20150518_200151.ocr

[root@host01 ~]#
Note: Your output may vary slightly.
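If you want to script around this listing (for example, to locate the newest manual backup), the output above can be parsed with standard tools. The sketch below is a minimal, hedged example: the `sample` variable is hypothetical text standing in for live `ocrconfig -showbackup manual` output, which lists the newest backup first.

```shell
# Hypothetical parsing sketch: ocrconfig -showbackup lists the newest backup
# first, so the first non-empty line's path field is the most recent file.
sample='host02     2015/05/19 08:49:32     /u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_084932.ocr

host01     2015/05/19 05:50:54     /u01/app/12.1.0/grid/cdata/cluster01/backup_20150519_055054.ocr'

# Field 4 of the first non-empty line is the backup path
latest=$(printf '%s\n' "$sample" | awk 'NF { print $4; exit }')
echo "$latest"
```

On a live cluster you would pipe the real command into the same awk expression; the field positions here assume the one-entry-per-line layout shown above.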

4. Determine the location of the Oracle Local Registry (OLR) using the ocrcheck utility.
When finished, log out as the root user.
[root@host01 ~]# ocrcheck -local
Status of Oracle Local Registry is as follows :
Version                  :          4
Total space (kbytes)     :     409568
Used space (kbytes)      :       1056
Available space (kbytes) :     408512
ID                       : 1935905403
Device/File Name         : /u01/app/12.1.0/grid/cdata/host01.olr
Device/File integrity check succeeded
Local registry integrity check succeeded
Logical corruption check succeeded

[root@host01 ~]# exit
logout

[grid@host01 ~]$
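The space figures reported by ocrcheck are plain "label : value" lines, so they are easy to extract in a monitoring script. A minimal sketch follows; the `sample` text is a hypothetical stand-in for real `ocrcheck -local` output.

```shell
# Hypothetical sketch: pull the free-space figure out of ocrcheck-style
# output so a script could alert on low OLR free space.
sample='Total space (kbytes) : 409568
Used space (kbytes) : 1056
Available space (kbytes) : 408512'

# Split on ":" and strip spaces from the value field
avail=$(printf '%s\n' "$sample" |
        awk -F: '/Available space/ { gsub(/ /, "", $2); print $2 }')
echo "available kbytes: $avail"
```

The same pattern works for the "Used space" and "Total space" lines; only the regular expression changes.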
b l el
fer a
5. Close all terminal windows opened for this practice.
Practice 7-4: Configuring Network Interfaces Using oifcfg

Overview
In this practice, you add a new interface to create an HAIP configuration for the cluster.

1. As the grid user, ensure that Oracle Clusterware is running on all of the cluster nodes.
[vncuser@classroom_pc ~]$ ssh grid@host01
Password: *******

[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ olsnodes -s
host01  Active
host02  Active
host03  Active
host04  Active
host05  Active

[grid@host01 ~]$
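When this check is automated, the `olsnodes -s` output can be scanned so the script fails if any node is not Active. The helper below is a hedged sketch: `check_all_active` is a hypothetical function name, and the `sample` text stands in for live output.

```shell
# Hypothetical helper: print any node that is not Active and return
# non-zero if one is found. Live use would be: olsnodes -s | check_all_active
check_all_active() {
  awk 'NF && $2 != "Active" { print $1; bad = 1 } END { exit bad }'
}

# Sample output standing in for a live cluster with one down node:
sample='host01 Active
host02 Active
host03 Inactive'

printf '%s\n' "$sample" | check_all_active || echo "some nodes are down"
```

The awk program exits with the value of `bad`, so the pipeline's exit status reflects cluster health and can gate the rest of a script.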
h a s eฺ
m ) u id
2. Check the list of available network interfaces and ascertain what they are being used
for in the current cluster configuration.
[grid@host01 ~]$ oifcfg iflist
eth0  192.0.2.0
eth1  192.168.1.0
eth1  169.254.0.0
eth2  192.168.2.0

[grid@host01 ~]$ oifcfg getif
eth0  192.0.2.0  global  public
eth1  192.168.1.0  global  cluster_interconnect,asm

[grid@host01 ~]$

3. The eth0 interface is currently used as the public interface and eth1 is used for the
private interconnect/ASM. We will add eth2 to the interconnect configuration. Ensure that
the new interface is configured and operational in the operating system on all of the
nodes.
[grid@host01 ~]$ /sbin/ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:21
inet addr: 192.168.2.101 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:121/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:240917 errors:0 dropped:0 overruns:0 frame:0
TX packets:30185 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000

RX bytes:37543266 (35.8 MiB) TX bytes:3499615 (3.3 MiB)

[grid@host01 ~]$ ssh host02 /sbin/ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:22
inet addr:192.168.2.102 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:122/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:232912 errors:0 dropped:0 overruns:0 frame:0
TX packets:30491 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:36034120 (34.3 MiB) TX bytes:3454982 (3.2 MiB)

[grid@host01 ~]$ ssh host03 /sbin/ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:23
inet addr:192.168.2.103 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:123/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:351430 errors:0 dropped:0 overruns:0 frame:0
TX packets:29898 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:54946101 (52.4 MiB) TX bytes:3424947 (3.2 MiB)

[grid@host01 ~]$ ssh host04 /sbin/ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:24
inet addr:192.168.2.104 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:124/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:49993 errors:0 dropped:0 overruns:0 frame:0
TX packets:7194 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:7597687 (7.2 MiB) TX bytes:821526 (802.2 KiB)

[grid@host01 ~]$ ssh host05 /sbin/ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:25
inet addr:192.168.2.105 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:125/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:323700 errors:0 dropped:0 overruns:0 frame:0
TX packets:35817 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:50238663 (47.9 MiB) TX bytes:3758667 (3.5 MiB)

[grid@host01 ~]$

4. Switch user to root, set the Grid environment, and add eth2 to the cluster configuration
as a global interface for the private interconnect and ASM.
[grid@host01 ~]$ su -
Password:

[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# oifcfg setif -global eth2/192.168.2.0:cluster_interconnect,asm

[root@host01 ~]#

5. Verify that the new interface has been added to the cluster configuration.
[root@host01 ~]# oifcfg getif
eth0 192.0.2.0 global public
eth1 192.168.1.0 global cluster_interconnect,asm
eth2 192.168.2.0 global cluster_interconnect,asm
[root@host01 ~]#
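The argument passed to `oifcfg setif` in step 4 follows the shape interface/subnet:interface_type[,interface_type]. Before running the command in a script, it can be useful to split and sanity-check that string. The sketch below is a minimal example using only shell parameter expansion; `valid_spec` is a hypothetical helper name.

```shell
# Hypothetical sanity check for an interface spec of the form used by
# oifcfg setif: <if_name>/<subnet>:<if_type>[,<if_type>]
valid_spec() {
  case "$1" in
    */*:*) return 0 ;;   # has both the "/" and ":" separators
    *)     return 1 ;;
  esac
}

spec='eth2/192.168.2.0:cluster_interconnect,asm'
if valid_spec "$spec"; then
  iface=${spec%%/*}                        # eth2
  subnet=${spec#*/}; subnet=${subnet%%:*}  # 192.168.2.0
  types=${spec#*:}                         # cluster_interconnect,asm
  echo "$iface $subnet $types"
fi
```

This only checks the separators, not that the interface exists or that the subnet is well formed; those checks would still come from `oifcfg iflist` and the OS.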

n s e
i ce
el
6. Stop Clusterware on all nodes of the cluster.
[root@host01 ~]# crsctl stop cluster -all a b l
fer
CRS-2673: Attempting to stop 'ora.crsd' on 'host04'
a n s
CRS-2673: Attempting to stop 'ora.crsd' on 'host05'
CRS-2677: Stop of 'ora.crsd' on 'host04' succeeded n - tr
a no
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'host04'
h a s eฺ
) id
CRS-2673: Attempting to stop 'ora.ctssd' on 'host04'
m u
i s ฺco ent G
CRS-2673: Attempting to stop 'ora.evmd' on 'host04'

sas Stud
CRS-2673: Attempting to stop 'ora.storage' on 'host04'
CRS-2677: Stop of 'ora.storage' on 'host04' succeeded
s
l e s@ this
CRS-2677: Stop of 'ora.crsd' on 'host05' succeeded

ob use
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'host05'
( b r
es to
CRS-2673: Attempting to stop 'ora.ctssd' on 'host05'
o bl
CRS-2673: Attempting to stop 'ora.evmd' on 'host05'

io R
CRS-2673: Attempting to stop 'ora.storage' on 'host05'

r a ul CRS-2677: Stop of 'ora.storage' on 'host05' succeeded


B CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host04'
succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host05'
succeeded
CRS-2677: Stop of 'ora.evmd' on 'host04' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host04' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host05' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host04'
CRS-2677: Stop of 'ora.cssd' on 'host04' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host05' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host05'
CRS-2677: Stop of 'ora.cssd' on 'host05' succeeded
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2673: Attempting to stop 'ora.crsd' on 'host03'
CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'host01'
CRS-2673: Attempting to stop 'ora.cvu' on 'host01'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'host01'

CRS-2673: Attempting to stop 'ora.DATA.dg' on 'host01'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'host01'
CRS-2673: Attempting to stop 'ora.crsd' on 'host02'

CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'host02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'host03'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'host03'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host03'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'host03'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'host03'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'host02'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'host02'
CRS-2673: Attempting to stop 'ora.oc4j' on 'host02'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host02'
CRS-2673: Attempting to stop 'ora.gns' on 'host02'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'host02'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'host01'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.host01.vip' on 'host01'
CRS-2677: Stop of 'ora.scan1.vip' on 'host01' succeeded
CRS-2677: Stop of 'ora.cvu' on 'host01' succeeded
CRS-2677: Stop of 'ora.host01.vip' on 'host01' succeeded
CRS-2677: Stop of 'ora.gns' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'host02'
CRS-2677: Stop of 'ora.FRA.dg' on 'host01' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host03' succeeded
CRS-2673: Attempting to stop 'ora.host03.vip' on 'host03'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'host03' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'host03'
CRS-2677: Stop of 'ora.gns.vip' on 'host02' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'host02'
CRS-2677: Stop of 'ora.scan2.vip' on 'host03' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.host02.vip' on 'host02'
CRS-2677: Stop of 'ora.host03.vip' on 'host03' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'host02' succeeded
CRS-2677: Stop of 'ora.host02.vip' on 'host02' succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'host01' succeeded
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'host01'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'host01' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'host03'
CRS-2673: Attempting to stop 'ora.ons' on 'host01'

CRS-2677: Stop of 'ora.ons' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host01'
CRS-2677: Stop of 'ora.net1.network' on 'host01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on
'host01' has completed
CRS-2675: Stop of 'ora.oc4j' on 'host02' failed

CRS-2679: Attempting to clean 'ora.oc4j' on 'host02'


CRS-2681: Clean of 'ora.oc4j' on 'host02' succeeded
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2673: Attempting to stop 'ora.evmd' on 'host01'
CRS-2673: Attempting to stop 'ora.storage' on 'host01'
CRS-2677: Stop of 'ora.storage' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host01' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'host02' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'host03' succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'host03' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'host03' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'host03'
CRS-2677: Stop of 'ora.FRA.dg' on 'host03' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host03'
CRS-2677: Stop of 'ora.asm' on 'host03' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'host03'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'host03' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host03'
CRS-2677: Stop of 'ora.ons' on 'host03' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host03'
CRS-2677: Stop of 'ora.net1.network' on 'host03' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on
'host03' has completed
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'host01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host01'
succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'host02'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'host02'
CRS-2677: Stop of 'ora.crsd' on 'host03' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'host03'
CRS-2673: Attempting to stop 'ora.evmd' on 'host03'
CRS-2673: Attempting to stop 'ora.storage' on 'host03'
CRS-2677: Stop of 'ora.storage' on 'host03' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host03'
CRS-2677: Stop of 'ora.FRA.dg' on 'host02' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
CRS-2677: Stop of 'ora.asm' on 'host02' succeeded

CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'host02'
CRS-2677: Stop of 'ora.ctssd' on 'host03' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host03' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host02'
CRS-2677: Stop of 'ora.ons' on 'host02' succeeded

CRS-2673: Attempting to stop 'ora.net1.network' on 'host02'


CRS-2677: Stop of 'ora.net1.network' on 'host02' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on
'host02' has completed
CRS-2677: Stop of 'ora.crsd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'host02'
CRS-2673: Attempting to stop 'ora.evmd' on 'host02'
CRS-2673: Attempting to stop 'ora.storage' on 'host02'
CRS-2677: Stop of 'ora.storage' on 'host02' succeeded
n s e
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
i ce
CRS-2677: Stop of 'ora.evmd' on 'host02' succeeded
b l el
CRS-2677: Stop of 'ora.asm' on 'host03' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on fer a
'host03' a n s
CRS-2677: Stop of 'ora.ctssd' on 'host02' succeeded n - tr
no
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host03'
succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host03'
CRS-2677: Stop of 'ora.cssd' on 'host03' succeeded
CRS-2677: Stop of 'ora.asm' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'host02'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host02'
succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host02'
CRS-2677: Stop of 'ora.cssd' on 'host02' succeeded
CRS-2677: Stop of 'ora.cssd' on 'host02' succeeded
o
io R
[root@host01 ~]#

r a ul
B 7. Restart Clusterware on all nodes.
[root@host01 ~]# crsctl start cluster -all
CRS-2672: Attempting to start 'ora.evmd' on 'host04'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host04'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host05'
CRS-2672: Attempting to start 'ora.evmd' on 'host05'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host02'
CRS-2672: Attempting to start 'ora.evmd' on 'host02'
CRS-2676: Start of 'ora.evmd' on 'host04' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'
CRS-2672: Attempting to start 'ora.evmd' on 'host01'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host03'
CRS-2672: Attempting to start 'ora.evmd' on 'host03'
CRS-2676: Start of 'ora.cssdmonitor' on 'host04' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host04'
CRS-2672: Attempting to start 'ora.diskmon' on 'host04'
CRS-2676: Start of 'ora.evmd' on 'host05' succeeded

CRS-2676: Start of 'ora.diskmon' on 'host04' succeeded
CRS-2676: Start of 'ora.evmd' on 'host02' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host05'
CRS-2676: Start of 'ora.evmd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.diskmon' on 'host05'

CRS-2676: Start of 'ora.cssdmonitor' on 'host03' succeeded


CRS-2672: Attempting to start 'ora.cssd' on 'host03'
CRS-2672: Attempting to start 'ora.diskmon' on 'host03'
CRS-2676: Start of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host02'
CRS-2672: Attempting to start 'ora.diskmon' on 'host02'
CRS-2676: Start of 'ora.diskmon' on 'host02' succeeded
CRS-2676: Start of 'ora.diskmon' on 'host05' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host01'
CRS-2672: Attempting to start 'ora.diskmon' on 'host01'
CRS-2676: Start of 'ora.diskmon' on 'host01' succeeded
CRS-2676: Start of 'ora.evmd' on 'host03' succeeded
CRS-2676: Start of 'ora.diskmon' on 'host03' succeeded
CRS-2676: Start of 'ora.cssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host02'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host02'
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host01'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host01'
CRS-2676: Start of 'ora.cssd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host03'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host03'
CRS-2676: Start of 'ora.ctssd' on 'host02' succeeded
CRS-2676: Start of 'ora.ctssd' on 'host01' succeeded
CRS-2676: Start of 'ora.ctssd' on 'host03' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host02'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host02'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host01'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host01'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host03'
succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host03'
CRS-2676: Start of 'ora.asm' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host03'
CRS-2676: Start of 'ora.asm' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host02'
CRS-2676: Start of 'ora.storage' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host02'

CRS-2676: Start of 'ora.crsd' on 'host02' succeeded
CRS-2676: Start of 'ora.cssd' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host05'
CRS-2672: Attempting to start 'ora.ctssd' on 'host05'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host05'

CRS-2676: Start of 'ora.storage' on 'host05' succeeded


CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host05'
succeeded
CRS-2676: Start of 'ora.cssd' on 'host04' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host04'
CRS-2672: Attempting to start 'ora.ctssd' on 'host04'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host04'
CRS-2676: Start of 'ora.storage' on 'host04' succeeded
n s e
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host04'
i ce
succeeded
b l el
CRS-2676: Start of 'ora.ctssd' on 'host05' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host05' fer a
CRS-2676: Start of 'ora.ctssd' on 'host04' succeeded a n s
CRS-2672: Attempting to start 'ora.crsd' on 'host04' n - tr
no
CRS-2676: Start of 'ora.crsd' on 'host04' succeeded
a
h a s eฺ
CRS-2676: Start of 'ora.crsd' on 'host05' succeeded
) id
CRS-2676: Start of 'ora.storage' on 'host03' succeeded
m u
ฺco ent G
CRS-2672: Attempting to start 'ora.crsd' on 'host03'
i s
CRS-2676: Start of 'ora.crsd' on 'host03' succeeded
sas Stud
CRS-2676: Start of 'ora.storage' on 'host01' succeeded
s
s@ this
CRS-2672: Attempting to start 'ora.crsd' on 'host01'
l e
CRS-2676: Start of 'ora.crsd' on 'host01' succeeded
[root@host01 ~]#
b r ob use
8. Use ifconfig to view how Clusterware has implemented HAIP. Inspect the
HAIP configuration on all three of your Hub nodes.
[root@host01 ~]# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:16:3E:00:01:01
inet addr:192.0.2.101 Bcast:192.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:101/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4595987 errors:0 dropped:0 overruns:0 frame:0
TX packets:5267904 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2574654840 (2.3 GiB) TX bytes:55555868162 (51.7 GiB)

eth0:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:01


inet addr:192.0.2.248 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:2 Link encap:Ethernet HWaddr 00:16:3E:00:01:01


inet addr:192.0.2.245 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:16:3E:00:01:11
inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:111/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1811725 errors:0 dropped:0 overruns:0 frame:0
TX packets:1690173 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000

RX bytes:1420158131 (1.3 GiB) TX bytes:1096966452 (1.0 GiB)

eth1:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:11


inet addr:169.254.83.86 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:21


inet addr:192.168.2.101 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:121/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
n s e
RX packets:720224 errors:0 dropped:0 overruns:0 frame:0
i ce
TX packets:717663 errors:0 dropped:0 overruns:0 carrier:0
b l el
collisions:0 txqueuelen:1000
fer a
RX bytes:349302089 (333.1 MiB) TX bytes:365597286 (348.6 MiB)
a n s
eth2:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:21
n - tr
no
inet addr:169.254.145.237 Bcast:169.254.255.255 Mask:255.255.128.0
a
h a s eฺ
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:5305717 errors:0 dropped:0 overruns:0 frame:0
TX packets:5305717 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:5480515150 (5.1 GiB) TX bytes:5480515150 (5.1 GiB)
[root@host01 ~]# ssh host02 ifconfig -a

eth0 Link encap:Ethernet HWaddr 00:16:3E:00:01:02
inet addr:192.0.2.102 Bcast:192.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:102/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:889540 errors:0 dropped:0 overruns:0 frame:0
TX packets:743769 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13759360548 (12.8 GiB) TX bytes:598398153 (570.6 MiB)

eth0:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:02


inet addr:192.0.2.155 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:3 Link encap:Ethernet HWaddr 00:16:3E:00:01:02


inet addr:192.0.2.244 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:7 Link encap:Ethernet HWaddr 00:16:3E:00:01:02


inet addr:192.0.2.247 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:16:3E:00:01:12


inet addr:192.168.1.102 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:112/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1730581 errors:0 dropped:0 overruns:0 frame:0
TX packets:1779120 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1689087148 (1.5 GiB) TX bytes:1232422586 (1.1 GiB)

eth1:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:12


inet addr:169.254.20.48 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:22


inet addr:192.168.2.102 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:122/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:766042 errors:0 dropped:0 overruns:0 frame:0
n s e
TX packets:716383 errors:0 dropped:0 overruns:0 carrier:0
i ce
collisions:0 txqueuelen:1000
b l el
RX bytes:497245496 (474.2 MiB) TX bytes:445682604 (425.0 MiB)
fer a
a n s
tr
eth2:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:22
n -
inet addr:169.254.175.126 Bcast:169.254.255.255 Mask:255.255.128.0
no
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
a
h a s eฺ
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:1038607 errors:0 dropped:0 overruns:0 frame:0
TX packets:1038607 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:742484033 (708.0 MiB) TX bytes:742484033 (708.0 MiB)

[root@host01 ~]# ssh host03 ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:16:3E:00:01:03
inet addr:192.0.2.103 Bcast:192.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:103/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:768703 errors:0 dropped:0 overruns:0 frame:0
TX packets:586788 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12896112928 (12.0 GiB) TX bytes:59606340 (56.8 MiB)

eth0:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:03


inet addr:192.0.2.252 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:2 Link encap:Ethernet HWaddr 00:16:3E:00:01:03


inet addr:192.0.2.246 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:16:3E:00:01:13


inet addr:192.168.1.103 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:113/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:863871 errors:0 dropped:0 overruns:0 frame:0
TX packets:905537 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:430490837 (410.5 MiB) TX bytes:720920377 (687.5 MiB)

eth1:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:13


inet addr:169.254.1.180 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:23


inet addr:192.168.2.103 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:123/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:532799 errors:0 dropped:0 overruns:0 frame:0
TX packets:569469 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:281101356 (268.0 MiB) TX bytes:347012713 (330.9 MiB)

eth2:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:23


n s e
inet addr:169.254.148.19 Bcast:169.254.255.255 Mask:255.255.128.0
i ce
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
b l el
fer a
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:131363 errors:0 dropped:0 overruns:0 frame:0
TX packets:131363 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:100433099 (95.7 MiB) TX bytes:100433099 (95.7 MiB)

[root@host01 ~]# s sas Stud


Instead of using eth1 and eth2 directly, Clusterware has configured private VIPs on
eth1:1 and eth2:1, using IP addresses on the reserved 169.254.*.* subnet to support
HAIP.
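The HAIP addresses above all come from the IPv4 link-local block 169.254.0.0/16, which Clusterware reserves for this purpose. A minimal sketch of a membership test (a simple prefix match is sufficient for a /16; `is_link_local` is a hypothetical helper name):

```shell
# Return success if an IPv4 address is in 169.254.0.0/16, the link-local
# range that Clusterware uses for HAIP addresses.
is_link_local() {
  case "$1" in
    169.254.*) return 0 ;;   # literal "169.254." prefix
    *)         return 1 ;;
  esac
}

is_link_local 169.254.83.86 && echo "HAIP-style link-local address"
is_link_local 192.168.2.101 || echo "regular private address"
```

A check like this can confirm that an address seen in `ifconfig` output is a HAIP VIP rather than a statically configured interconnect address.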

e to
9. As the root user, use the ifdown command to bring down the eth1 interface. Wait a few
moments and run ifconfig -a. What do you observe?
[root@host01 ~]# ifdown eth1

[root@host01 ~]# ifconfig -a


eth0 Link encap:Ethernet HWaddr 00:16:3E:00:01:01
inet addr:192.0.2.101 Bcast:192.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:101/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3153422 errors:0 dropped:0 overruns:0 frame:0
TX packets:3505718 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2329894884 (2.1 GiB) TX bytes:31209559373 (29.0 GiB)

eth0:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:01


inet addr:192.0.2.248 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:2 Link encap:Ethernet HWaddr 00:16:3E:00:01:01


inet addr:192.0.2.245 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
eth1 Link encap:Ethernet HWaddr 00:16:3E:00:01:11
BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:1245622 errors:0 dropped:0 overruns:0 frame:0
TX packets:1081446 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1156779722 (1.0 GiB) TX bytes:638570547 (608.9 MiB)
Unauthorized reproduction or distribution prohibitedฺ Copyright© 2016, Oracle and/or its affiliatesฺ

eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:21


inet addr:192.168.2.101 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:121/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:36232 errors:0 dropped:0 overruns:0 frame:0
TX packets:32107 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13984401 (13.3 MiB) TX bytes:13994857 (13.3 MiB)

eth2:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:21
inet addr:169.254.145.237 Bcast:169.254.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth2:2 Link encap:Ethernet HWaddr 00:16:3E:00:01:21
inet addr:169.254.83.86 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:5104167 errors:0 dropped:0 overruns:0 frame:0
TX packets:5104167 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:5264344338 (4.9 GiB) TX bytes:5264344338 (4.9 GiB)

Note that the eth1 interface has no IP address, indicating that it is down. Note that the
HAIP interconnect configured on eth1:1 has been failed over to eth2:2.
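One quick way to spot the HAIP addresses in a long ifconfig listing is to filter for the reserved 169.254 link-local range. The sketch below runs against a captured sample in a here-document rather than a live cluster node, so the interface names and addresses are illustrative, not queried from your cluster:

```shell
# Extract HAIP (169.254.x.x link-local) virtual interfaces from saved
# "ifconfig -a" output. The here-document stands in for a real capture.
cat > /tmp/ifconfig_sample.txt <<'EOF'
eth2:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:21
inet addr:169.254.145.237 Bcast:169.254.255.255 Mask:255.255.128.0
eth2:2 Link encap:Ethernet HWaddr 00:16:3E:00:01:21
inet addr:169.254.83.86 Bcast:169.254.127.255 Mask:255.255.128.0
EOF

# Remember the most recent interface header line, then report the name
# alongside any 169.254 address found on a following line.
awk '/Link encap/{iface=$1}
     /inet addr:169\.254\./{split($2, a, ":"); print iface, a[2]}' \
    /tmp/ifconfig_sample.txt
# prints:
# eth2:1 169.254.145.237
# eth2:2 169.254.83.86
```

The same filter pointed at live `ifconfig -a` output shows at a glance which interface currently carries each HAIP address before and after a failover.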

10. Use ifconfig -a to check the HAIP interfaces on host02 and host03.
[root@host01 ~]# ssh host02 ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:16:3E:00:01:02
inet addr:192.0.2.102 Bcast:192.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:102/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:864009 errors:0 dropped:0 overruns:0 frame:0
TX packets:710606 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13743233812 (12.7 GiB) TX bytes:338572572 (322.8 MiB)

eth0:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:02


inet addr:192.0.2.155 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:3 Link encap:Ethernet HWaddr 00:16:3E:00:01:02


inet addr:192.0.2.244 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:7 Link encap:Ethernet HWaddr 00:16:3E:00:01:02


inet addr:192.0.2.247 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:16:3E:00:01:12


inet addr:192.168.1.102 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:112/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:1695932 errors:0 dropped:0 overruns:0 frame:0


TX packets:1746256 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1612028227 (1.5 GiB) TX bytes:1216570792 (1.1 GiB)

eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:22


inet addr:192.168.2.102 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:122/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:661234 errors:0 dropped:0 overruns:0 frame:0
TX packets:608837 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:432865197 (412.8 MiB) TX bytes:375332650 (357.9 MiB)

eth2:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:22
inet addr:169.254.175.126 Bcast:169.254.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth2:2 Link encap:Ethernet HWaddr 00:16:3E:00:01:22
inet addr:169.254.20.48 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:1018083 errors:0 dropped:0 overruns:0 frame:0
TX packets:1018083 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:714443596 (681.3 MiB) TX bytes:714443596 (681.3 MiB)

[root@host01 ~]# ssh host03 ifconfig -a


eth0 Link encap:Ethernet HWaddr 00:16:3E:00:01:03
inet addr:192.0.2.103 Bcast:192.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:103/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:762685 errors:0 dropped:0 overruns:0 frame:0
TX packets:583389 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12832168282 (11.9 GiB) TX bytes:59330680 (56.5 MiB)

eth0:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:03


inet addr:192.0.2.252 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:2 Link encap:Ethernet HWaddr 00:16:3E:00:01:03


inet addr:192.0.2.246 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:16:3E:00:01:13


inet addr:192.168.1.103 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:113/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:840478 errors:0 dropped:0 overruns:0 frame:0
TX packets:882403 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:416600064 (397.3 MiB) TX bytes:691901335 (659.8 MiB)

eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:23


inet addr:192.168.2.103 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:123/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:483294 errors:0 dropped:0 overruns:0 frame:0
TX packets:509757 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:253240295 (241.5 MiB) TX bytes:304008574 (289.9 MiB)

eth2:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:23


inet addr:169.254.148.19 Bcast:169.254.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth2:2 Link encap:Ethernet HWaddr 00:16:3E:00:01:23
inet addr:169.254.1.180 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:123948 errors:0 dropped:0 overruns:0 frame:0
TX packets:123948 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:91915590 (87.6 MiB) TX bytes:91915590 (87.6 MiB)

[root@host01 ~]#

Note that eth1 is up on host02 and host03 but the HAIP interconnect eth1:1 on both
nodes has been failed over to eth2:2.

11. Use the ifup command to bring up the eth1 interface on host01. Use ifconfig to
check the HAIP configuration on the three Hub nodes.
[root@host01 ~]# ifup eth1

[root@host01 ~]# ifconfig -a


eth0 Link encap:Ethernet HWaddr 00:16:3E:00:01:01
inet addr:192.0.2.101 Bcast:192.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:101/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4595987 errors:0 dropped:0 overruns:0 frame:0
TX packets:5267904 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2574654840 (2.3 GiB) TX bytes:55555868162 (51.7 GiB)

eth0:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:01


inet addr:192.0.2.248 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:2 Link encap:Ethernet HWaddr 00:16:3E:00:01:01
inet addr:192.0.2.245 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:16:3E:00:01:11


inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:111/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1


RX packets:1811725 errors:0 dropped:0 overruns:0 frame:0
TX packets:1690173 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1420158131 (1.3 GiB) TX bytes:1096966452 (1.0 GiB)

eth1:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:11


inet addr:169.254.83.86 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:21
inet addr:192.168.2.101 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:121/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:720224 errors:0 dropped:0 overruns:0 frame:0
TX packets:717663 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:349302089 (333.1 MiB) TX bytes:365597286 (348.6 MiB)

eth2:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:21
inet addr:169.254.145.237 Bcast:169.254.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:5305717 errors:0 dropped:0 overruns:0 frame:0
TX packets:5305717 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:5480515150 (5.1 GiB) TX bytes:5480515150 (5.1 GiB)

[root@host01 ~]# ssh host02 ifconfig -a


eth0 Link encap:Ethernet HWaddr 00:16:3E:00:01:02
inet addr:192.0.2.102 Bcast:192.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:102/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:889540 errors:0 dropped:0 overruns:0 frame:0
TX packets:743769 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13759360548 (12.8 GiB) TX bytes:598398153 (570.6 MiB)

eth0:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:02


inet addr:192.0.2.155 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:3 Link encap:Ethernet HWaddr 00:16:3E:00:01:02


inet addr:192.0.2.244 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:7 Link encap:Ethernet HWaddr 00:16:3E:00:01:02


inet addr:192.0.2.247 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:16:3E:00:01:12


inet addr:192.168.1.102 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:112/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:1730581 errors:0 dropped:0 overruns:0 frame:0


TX packets:1779120 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1689087148 (1.5 GiB) TX bytes:1232422586 (1.1 GiB)

eth1:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:12


inet addr:169.254.20.48 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:22


inet addr:192.168.2.102 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:122/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:766042 errors:0 dropped:0 overruns:0 frame:0
TX packets:716383 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:497245496 (474.2 MiB) TX bytes:445682604 (425.0 MiB)

eth2:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:22
inet addr:169.254.175.126 Bcast:169.254.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:1038607 errors:0 dropped:0 overruns:0 frame:0
TX packets:1038607 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:742484033 (708.0 MiB) TX bytes:742484033 (708.0 MiB)

[root@host01 ~]# ssh host03 ifconfig -a


eth0 Link encap:Ethernet HWaddr 00:16:3E:00:01:03
inet addr:192.0.2.103 Bcast:192.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:103/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:768703 errors:0 dropped:0 overruns:0 frame:0
TX packets:586788 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12896112928 (12.0 GiB) TX bytes:59606340 (56.8 MiB)

eth0:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:03


inet addr:192.0.2.252 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth0:2 Link encap:Ethernet HWaddr 00:16:3E:00:01:03


inet addr:192.0.2.246 Bcast:192.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1 Link encap:Ethernet HWaddr 00:16:3E:00:01:13


inet addr:192.168.1.103 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:113/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:863871 errors:0 dropped:0 overruns:0 frame:0
TX packets:905537 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:430490837 (410.5 MiB) TX bytes:720920377 (687.5 MiB)

eth1:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:13


inet addr:169.254.1.180 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth2 Link encap:Ethernet HWaddr 00:16:3E:00:01:23


inet addr:192.168.2.103 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::216:3eff:fe00:123/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:532799 errors:0 dropped:0 overruns:0 frame:0
TX packets:569469 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:281101356 (268.0 MiB) TX bytes:347012713 (330.9 MiB)

eth2:1 Link encap:Ethernet HWaddr 00:16:3E:00:01:23
inet addr:169.254.148.19 Bcast:169.254.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:131363 errors:0 dropped:0 overruns:0 frame:0
TX packets:131363 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:100433099 (95.7 MiB) TX bytes:100433099 (95.7 MiB)

[root@host01 ~]#

The HAIP configuration on eth1:1 has been restored on all three nodes.

12. Close all terminal windows opened for these practices.

Practice 7-5: Working with SCANs, SCAN Listeners, and GNS
Overview
In this practice, you will remove and add GNS, SCAN, and SCAN listener components.

1. As the grid user, use srvctl to check the status and configuration of the GNS on your
cluster.
[vncuser@classroom_pc ~] ssh grid@host01
Password: *******

[grid@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ srvctl status gns
GNS is running on node host01.
GNS is enabled on node host01.

[grid@host01 ~]$ srvctl config gns -detail
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5353 to connect to mDNS
GNS status: OK
Domain served by GNS: cluster01.example.com
GNS version: 12.1.0.2.0
Globally unique identifier of the cluster where GNS is running:
b596525249286f61ffdcb4e07a8221ce
Name of the cluster where GNS is running: cluster01
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.0.2.155:50211.
GNS is individually enabled on nodes:
GNS is individually disabled on nodes:
[grid@host01 ~]$
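When scripting checks around GNS, it is often handier to pull a single field out of the verbose `srvctl config gns -detail` listing than to eyeball it. A minimal sketch, run here against a saved sample of the output above rather than a live cluster (the values in the here-document are the ones shown, not queried):

```shell
# Pull individual fields out of saved "srvctl config gns -detail" output.
cat > /tmp/gns_detail.txt <<'EOF'
GNS is listening for DNS server requests on port 53
GNS is using port 5353 to connect to mDNS
Domain served by GNS: cluster01.example.com
GNS version: 12.1.0.2.0
EOF

# The served subdomain follows the colon; the DNS port is the last
# field of the "listening" line.
awk -F': ' '/^Domain served by GNS/ {print "domain:", $2}' /tmp/gns_detail.txt
awk '/listening for DNS server requests/ {print "dns port:", $NF}' /tmp/gns_detail.txt
# prints:
# domain: cluster01.example.com
# dns port: 53
```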

2. Use srvctl to check the status and configuration of the SCAN VIPs on your cluster.
Note: Since the VIPs are cluster resources, they may not be running on the same hosts as
the example below.
[grid@host01 ~]$ srvctl status scan -verbose
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node host02
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node host03
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node host01

[grid@host01 ~]$ srvctl config scan -all


SCAN name: cluster01-scan.cluster01.example.com, Network: 1
Subnet IPv4: 192.0.2.0/255.255.255.0/eth0, dhcp
Subnet IPv6:
SCAN 0 IPv4 VIP: -/scan1-vip/192.0.2.251
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

SCAN 1 IPv4 VIP: -/scan2-vip/192.0.2.246


SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: -/scan3-vip/192.0.2.249
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

[grid@host01 ~]$

3. Use srvctl to check the status and configuration of the SCAN Listeners on your cluster.

[grid@host01 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node host02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node host03
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node host01

[grid@host01 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:

[grid@host01 ~]$

4. Use the host command to check the resolution of the SCANs on your cluster. Take note of
the IPs assigned to the SCANs and the GNS specifics.
[grid@host01 ~]$ host -a cluster01-scan.cluster01.example.com
Trying "cluster01-scan.cluster01.example.com"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61902
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;cluster01-scan.cluster01.example.com. IN ANY

;; ANSWER SECTION:
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.249
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.251
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.246
n s e
;; AUTHORITY SECTION:
cluster01.example.com. 86400 IN NS cluster01-gns.example.com.

;; ADDITIONAL SECTION:
cluster01-gns.example.com. 86400 IN A 192.0.2.155

Received 146 bytes from 192.0.2.1#53 in 10 ms

[grid@host01 ~]$
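If you only need the list of addresses the SCAN currently resolves to, the A records in the ANSWER SECTION can be filtered out of the lookup. A sketch over a captured fragment of the output above (the here-document reproduces those records; on a live system you would pipe `host -a` output in instead):

```shell
# List the A records a SCAN name resolves to, from saved "host -a"
# output. A SCAN should resolve to three addresses, one per SCAN VIP.
cat > /tmp/scan_answer.txt <<'EOF'
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.249
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.251
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.246
EOF

# Field 4 is the DNS record type; field 5 is the address.
awk '$4 == "A" {print $5}' /tmp/scan_answer.txt | sort
# prints:
# 192.0.2.246
# 192.0.2.249
# 192.0.2.251
```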
i s ฺco ent G
s s as Stud
5. Use srvctl to stop the GNS VIP.

[grid@host01 ~]$ srvctl stop gns
PRCN-2018 : Current user grid is not a privileged user

[grid@host01 ~]$

6. Stopping the GNS VIP must be done by a privileged user. Open a terminal to host01 as
the root user and stop the GNS. Check the status.
[vncuser@classroom_pc ~] ssh root@host01
Password: *******

[root@host01 ~]# . oraenv


ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# srvctl stop gns

[root@host01 ~]# srvctl status gns


GNS is not running.
GNS is enabled.

[root@host01 ~]# crsctl stat res -t -w "TYPE = ora.gns.type"

--------------------------------------------------------------------
Name Target State Server State
details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------

ora.gns
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------

[root@host01 ~]#

7. Use srvctl to remove the GNS VIP. Verify the GNS VIP removal.


[root@host01 ~]# srvctl remove gns
Remove GNS? (y/[n]) y

[root@host01 ~]# srvctl config gns
PRKF-1110 : Neither GNS server nor GNS client is configured on this
cluster

[root@host01 ~]#

8. Stop the SCAN Listeners on the cluster. Verify they have been stopped, then remove them.

[grid@host01 ~]$ srvctl stop scan_listener

[grid@host01 ~]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is not running
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is not running
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is not running

[grid@host01 ~]$ crsctl stat res -t -w "TYPE =
ora.scan_listener.type"
--------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 OFFLINE OFFLINE STABLE
ora.LISTENER_SCAN2.lsnr
1 OFFLINE OFFLINE STABLE
ora.LISTENER_SCAN3.lsnr
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------

[grid@host01 ~]$ srvctl remove scan_listener


Remove scan listener? (y/[n]) y
[grid@host01 ~]$ srvctl status scan_listener
PRCS-1103 : Could not find any Single Client Access Name (SCAN)
listener resources using filter TYPE=ora.scan_listener.type on
network 1

[grid@host01 ~]$
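The two-line layout of `crsctl stat res -t` (resource name on one line, instance state on the next) flattens easily for scripting. A sketch over a captured fragment of the table above; the here-document stands in for live crsctl output, and the field labels are an illustrative convention, not crsctl's own:

```shell
# Flatten "crsctl stat res -t" output into "resource target state" lines.
cat > /tmp/crs_res.txt <<'EOF'
ora.LISTENER_SCAN1.lsnr
1 OFFLINE OFFLINE STABLE
ora.LISTENER_SCAN2.lsnr
1 OFFLINE OFFLINE STABLE
ora.LISTENER_SCAN3.lsnr
1 OFFLINE OFFLINE STABLE
EOF

# A line starting with "ora." names the resource; the following line
# carries instance number, target, state, and state details.
awk '/^ora\./ {res=$1}
     /^[0-9]+ / {print res, "target="$2, "state="$3}' /tmp/crs_res.txt
# prints:
# ora.LISTENER_SCAN1.lsnr target=OFFLINE state=OFFLINE
# ora.LISTENER_SCAN2.lsnr target=OFFLINE state=OFFLINE
# ora.LISTENER_SCAN3.lsnr target=OFFLINE state=OFFLINE
```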

9. Use srvctl to stop the SCAN VIPs.


[grid@host01 ~]$ srvctl stop scan

[grid@host01 ~]$ srvctl status scan


SCAN VIP scan1 is enabled
SCAN VIP scan1 is not running
n s e
SCAN VIP scan2 is enabled
i ce
SCAN VIP scan2 is not running
b l el
SCAN VIP scan3 is enabled
fer a
SCAN VIP scan3 is not running
a n s
n - tr
[grid@host01 ~]$ crsctl stat res -t -w "TYPE = ora.scan_vip.type"
--------------------------------------------------------------------
Name Target State Server State
details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
ora.scan1.vip
1 OFFLINE OFFLINE STABLE
ora.scan2.vip
1 OFFLINE OFFLINE STABLE
ora.scan3.vip
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------

[grid@host01 ~]$

10. Remove the SCANs and recheck status and configuration.


[grid@host01 ~]$ srvctl remove scan
Remove the scan? (y/[n]) y
PRCS-1024 : Failed to remove Single Client Access Name Virtual
Internet Protocol(VIP) resources cluster01-
scan.cluster01.example.com
PRCN-2018 : Current user grid is not a privileged user
PRCN-2018 : Current user grid is not a privileged user
PRCN-2018 : Current user grid is not a privileged user

[grid@host01 ~]$

11. SCAN VIPs must be removed by a privileged user. Switch to the root terminal window and
remove the SCAN VIPs.
[root@host01 ~]# srvctl remove scan
Remove the scan? (y/[n]) y

[root@host01 ~]# srvctl config scan



PRCS-1102 : Could not find any Single Client Access Name (SCAN)
Virtual Internet Protocol (VIP) resources using filter
TYPE=ora.scan_vip.type on network 1
[root@host01 ~]#

12. Use the host command to re-check the operating system resolution of the SCANs on your
cluster. Use tnsping to check SCAN resolution on the database side. Note the
TNS-03505 error returned by the tnsping command.
[root@host01 ~]# host -a cluster01-scan.cluster01.example.com
Trying "cluster01-scan.cluster01.example.com"
Received 54 bytes from 192.0.2.1#53 in 198 ms
Trying "cluster01-scan.cluster01.example.com.example.com"
Host cluster01-scan.cluster01.example.com not found: 3(NXDOMAIN)
Received 115 bytes from 192.0.2.1#53 in 11 ms

[grid@host01 ~]$ tnsping cluster01-scan.cluster01.example.com

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 27-
MAY-2015 08:14:25

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:
/u01/app/12.1.0/grid/network/admin/sqlnet.ora

TNS-03505: Failed to resolve name

[root@host01 ~]#

13. Let's add back the components that were removed. Start by adding the GNS VIP back to
the cluster and starting it. Check the GNS VIP status and configuration.
[root@host01 ~]# srvctl add gns -vip 192.0.2.155 -domain
cluster01.example.com

[root@host01 ~]# srvctl start gns

[root@host01 ~]# srvctl status gns


GNS is running on node host02.
GNS is enabled on node host02.

[root@host01 ~]# srvctl config gns -detail


GNS is enabled.

GNS is listening for DNS server requests on port 53
GNS is using port 5353 to connect to mDNS
GNS status: OK
Domain served by GNS: cluster01.example.com
GNS version: 12.1.0.2.0
Globally unique identifier of the cluster where GNS is running:

b596525249286f61ffdcb4e07a8221ce
Name of the cluster where GNS is running: cluster01
Cluster type: server.
GNS log level: 1.
GNS listening addresses: tcp://192.0.2.155:61617.
GNS is individually enabled on nodes:
GNS is individually disabled on nodes:

[root@host01 ~]#
14. Add the SCAN VIPs and SCAN Listeners back to your cluster. Start the SCAN VIPs and
verify the status and configuration.

[root@host01 ~]# srvctl add scan -scanname cluster01-
scan.cluster01.example.com

[root@host01 ~]# srvctl start scan

[root@host01 ~]# srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node host02
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node host03
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node host01

[root@host01 ~]#

********************* Go to the grid terminal *********************

[grid@host01 ~]$ srvctl add scan_listener

[grid@host01 ~]$ srvctl start scan_listener

[grid@host01 ~]$ srvctl status scan_listener


SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node host02
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node host03
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node host01

[grid@host01 ~]$ crsctl stat res -t -w "TYPE = ora.scan_vip.type"

--------------------------------------------------------------------
Name Target State Server State
details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------

ora.scan1.vip
1 ONLINE ONLINE host02 STABLE
ora.scan2.vip
1 ONLINE ONLINE host03 STABLE
ora.scan3.vip
1 ONLINE ONLINE host01 STABLE
--------------------------------------------------------------------

[grid@host01 ~]$ srvctl config scan


SCAN name: cluster01-scan.cluster01.example.com, Network: 1
Subnet IPv4: 192.0.2.0/255.255.255.0/eth0, dhcp
Subnet IPv6:
SCAN 0 IPv4 VIP: -/scan1-vip/192.0.2.223
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 1 IPv4 VIP: -/scan2-vip/192.0.2.222
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: -/scan3-vip/192.0.2.221
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:

[grid@host01 ~]$

15. Use the host and tnsping commands to check the resolution of the SCANs. Execute
tnsping three times. It should resolve to each IP listed by the host command.
[grid@host01 ~]$ host -a cluster01-scan.cluster01.example.com
Trying "cluster01-scan.cluster01.example.com"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 63749
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;cluster01-scan.cluster01.example.com. IN ANY

;; ANSWER SECTION:
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.241
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.244
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.246

;; AUTHORITY SECTION:
cluster01.example.com. 86400 IN NS cluster01-
gns.example.com.
Copyright © 2015, Oracle and/or its affiliates. All rights reserved.

Practices for Lesson 7: Traditional Clusterware Management


;; ADDITIONAL SECTION:
cluster01-gns.example.com. 86400 IN A 192.0.2.155

Received 146 bytes from 192.0.2.1#53 in 53 ms



[grid@host01 ~]$ tnsping cluster01-scan.cluster01.example.com

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 23-JUN-2015 18:07:23

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:
/u01/app/12.1.0/grid/network/admin/sqlnet.ora

Used EZCONNECT adapter to resolve the alias
Attempting to contact
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(H fer a
OST=192.0.2.241)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.244 a n s
n - tr
)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.246)(PORT=1521)))
OK (30 msec)
a no
h a s eฺ
) id
[grid@host01 ~]$ tnsping cluster01-scan.cluster01.example.com

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 23-JUN-2015 18:07:29

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:
/u01/app/12.1.0/grid/network/admin/sqlnet.ora

Used EZCONNECT adapter to resolve the alias
Attempting to contact
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(H
OST=192.0.2.246)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.241
)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.244)(PORT=1521)))
OK (0 msec)

[grid@host01 ~]$ tnsping cluster01-scan.cluster01.example.com

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 23-JUN-2015 18:07:31

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:
/u01/app/12.1.0/grid/network/admin/sqlnet.ora

Used EZCONNECT adapter to resolve the alias

Attempting to contact
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(H
OST=192.0.2.244)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.246
)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.241)(PORT=1521)))
OK (10 msec)
[grid@host01 ~]$
16. Close all terminal windows opened for this practice.

Practice 7-6: Recovering From Voting Disk Corruptions
Overview
In this practice, you will recover from a Voting Disk corruption.
1. Open a terminal from your desktop to host01 as the grid user and set the environment.

Use crsctl to locate your voting disks.


[vncuser@classroom_pc ~] ssh -X grid@host01
Password: *******
[grid@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ crsctl query css votedisk
## STATE File Universal Id                  File Name Disk group
-- ----- -----------------                  --------- ---------
 1. ONLINE 5a70acc510c04f1abf34a4921c3c251c (/dev/asmdisk1p1) [DATA]
 2. ONLINE 0c90eb6166fb4f35bf61d5c3bbf3452c (/dev/asmdisk1p10) [DATA]
 3. ONLINE 2fa97a8fd6314f6bbf96fd504680c00e (/dev/asmdisk1p11) [DATA]
Located 3 voting disk(s).
[grid@host01 ~]$
a no
2. Use asmca to create a disk group called LABDG using normal redundancy. Use disks
   /dev/asmdisk2p3, /dev/asmdisk2p4, /dev/asmdisk2p5, and /dev/asmdisk2p6.
   Exit asmca when finished.
[grid@host01 ~]$ asmca s s a S
s @ i s

3. Use crsctl to view the LABDG disk group resource.
[grid@host01 ~]$ crsctl stat res ora.LABDG.dg -t
--------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------

Local Resources
--------------------------------------------------------------------
ora.LABDG.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
--------------------------------------------------------------------

[grid@host01 ~]$ n s e
li ce
4. Open a terminal from your desktop to host01 as the root user. Stop Clusterware on all
   nodes, then stop the high availability services on all nodes. Restart it in exclusive
   mode on host01.
[vncuser@classroom_pc ~] ssh root@host01
Password: *******

[root@host01~] $ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'host04'
CRS-2673: Attempting to stop 'ora.crsd' on 'host05'
CRS-2677: Stop of 'ora.crsd' on 'host04' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'host04'
...
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded

[root@host01 ~]# crsctl stop crs -f


CRS-2791: Starting shutdown of Oracle High Availability Services-
managed resources on 'host01'
...
CRS-4133: Oracle High Availability Services has been stopped.

[root@host01 ~]# ssh host02 /u01/app/12.1.0/grid/bin/crsctl stop crs -f


CRS-2791: Starting shutdown of Oracle High Availability Services-
managed resources on 'host02'
...

[root@host01 ~]# ssh host03 /u01/app/12.1.0/grid/bin/crsctl stop crs -f

CRS-2791: Starting shutdown of Oracle High Availability Services-
managed resources on 'host03'
...

[root@host01 ~]# ssh host04 /u01/app/12.1.0/grid/bin/crsctl stop crs -f


CRS-2791: Starting shutdown of Oracle High Availability Services-
managed resources on 'host04'

...

[root@host01 ~]# ssh host05 /u01/app/12.1.0/grid/bin/crsctl stop crs -f


CRS-2791: Starting shutdown of Oracle High Availability Services-
managed resources on 'host05'
...

[root@host01 ~]# crsctl start crs -excl


CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'host01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'host01'
CRS-2676: Start of 'ora.mdnsd' on 'host01' succeeded
CRS-2676: Start of 'ora.evmd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host01'
CRS-2676: Start of 'ora.gpnpd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'
CRS-2672: Attempting to start 'ora.gipcd' on 'host01'
CRS-2676: Start of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2676: Start of 'ora.gipcd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host01'
CRS-2672: Attempting to start 'ora.diskmon' on 'host01'
CRS-2676: Start of 'ora.diskmon' on 'host01' succeeded
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'host01'
CRS-2672: Attempting to start 'ora.ctssd' on 'host01'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'host01'
CRS-2676: Start of 'ora.crf' on 'host01' succeeded
CRS-2676: Start of 'ora.ctssd' on 'host01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host01'
CRS-2676: Start of 'ora.storage' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host01'
CRS-2676: Start of 'ora.crsd' on 'host01' succeeded

[root@host01 ~]#

5. From the grid terminal, list the ASM disk groups using asmcmd.
[grid@host01 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576     27000    17705             2700            7502              0             Y  DATA/
MOUNTED  EXTERN  N         512   4096  1048576      8100     7619                0            7619              0             N  FRA/
MOUNTED  NORMAL  N         512   4096  1048576      2600     2309              650             829              0             N  LABDG/
[grid@host01 ~]$

6. From the grid terminal, list the voting disks again, and then replace them on the LABDG
   disk group. List the voting disks again to verify the replacement.
[grid@host01 ~]$ crsctl query css votedisk
## STATE File Universal Id                  File Name Disk group
-- ----- -----------------                  --------- ---------
 1. ONLINE 5a70acc510c04f1abf34a4921c3c251c (/dev/asmdisk1p1) [DATA]
 2. ONLINE 0c90eb6166fb4f35bf61d5c3bbf3452c (/dev/asmdisk1p10) [DATA]
 3. ONLINE 2fa97a8fd6314f6bbf96fd504680c00e (/dev/asmdisk1p11) [DATA]
Located 3 voting disk(s).
a
[grid@host01 ~]$ crsctl replace votedisk +LABDG
Successful addition of voting disk 598eaa67e0094fddbf63fac177b9719c.
Successful addition of voting disk 9974000d4b564fa5bff895313fad4433.
Successful addition of voting disk 536c583578474f82bf6cf4791df02994.
Successful deletion of voting disk 5a70acc510c04f1abf34a4921c3c251c.
Successful deletion of voting disk 0c90eb6166fb4f35bf61d5c3bbf3452c.
Successful deletion of voting disk 2fa97a8fd6314f6bbf96fd504680c00e.
Successfully replaced voting disk group with +LABDG.
CRS-4266: Voting file(s) successfully replaced

[grid@host01 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 9974000d4b564fa5bff895313fad4433 (/dev/asmdisk2p4) [LABDG]
2. ONLINE 598eaa67e0094fddbf63fac177b9719c (/dev/asmdisk2p3) [LABDG]
3. ONLINE 536c583578474f82bf6cf4791df02994 (/dev/asmdisk2p5) [LABDG]
Located 3 voting disk(s).

[grid@host01 ~]$

7. Moving the voting disk to a new disk group constitutes a major change in the cluster
configuration. Perform a manual OCR backup as the root user.
[root@host01 ~]# ocrconfig -manualbackup

host02 2015/05/14 07:31:38


/u01/app/12.1.0/grid/cdata/cluster01/backup_20150514_073138.ocr 0

[root@host01 ~]#
8. Use asmcmd to list the LABDG disks, and then, as the root user, corrupt the LABDG
   disks with dd. Be very careful to use the correct device when corrupting the
   disks. Check the cluster resources. What do you see?
[grid@host01 ~]$ asmcmd lsdsk -G LABDG
Path
/dev/asmdisk2p3

/dev/asmdisk2p4
/dev/asmdisk2p5
/dev/asmdisk2p6

********************** Switch to root terminal *******************

[root@host01 dev]# dd if=/dev/zero of=/dev/asmdisk2p3 bs=1M count=10
10+0 records in
10+0 records out
1048576 bytes (1.0 MB) copied, 0.0292729 s, 35.8 MB/s

[root@host01 dev]# dd if=/dev/zero of=/dev/asmdisk2p4 bs=1M count=10
10+0 records in
10+0 records out
1048576 bytes (1.0 MB) copied, 0.00892324 s, 118 MB/s

[root@host01 dev]# dd if=/dev/zero of=/dev/asmdisk2p5 bs=1M count=10
10+0 records in
10+0 records out
1048576 bytes (1.0 MB) copied, 0.01389 s, 75.5 MB/s

[root@host01 dev]# dd if=/dev/zero of=/dev/asmdisk2p6 bs=1M count=10
10+0 records in
10+0 records out
[root@host01 ~]# crsctl stat res -t|more
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.

[root@host01 ~]#

Clusterware is now non-responsive on host01.
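The four dd invocations above follow one pattern and can be wrapped in a loop. A minimal sketch — demonstrated here on a mktemp scratch file rather than a real ASM device, so it can be tried safely outside the lab; in the practice environment the list would be the /dev/asmdisk2p3 through /dev/asmdisk2p6 devices:

```shell
# Hypothetical wrapper around the dd commands shown above.
# A scratch file stands in for the real ASM disks -- do NOT point
# this at a device you care about.
scratch=$(mktemp)
for dev in "$scratch"; do
    # Zero the first 10 MB of each target, destroying the disk header.
    dd if=/dev/zero of="$dev" bs=1M count=10 2>/dev/null
done
wc -c "$scratch"    # 10485760 bytes were written
rm -f "$scratch"
```

In the lab, the loop list simply becomes the four partitions, and the corruption happens in one step instead of four.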

9. Stop Clusterware on host01. Check the CRS logfile using adrci. Go to the bottom of the
   file (Shift-G). What do you observe? Quit the editor and exit adrci when finished.
[root@host01 catchup]# crsctl stop crs -f

CRS-2791: Starting shutdown of Oracle High Availability Services-


managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host01'
CRS-2673: Attempting to stop 'ora.storage' on 'host01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host01'
CRS-2673: Attempting to stop 'ora.crf' on 'host01'

CRS-2673: Attempting to stop 'ora.gpnpd' on 'host01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'host01' succeeded
CRS-2677: Stop of 'ora.storage' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.crf' on 'host01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host01' succeeded

CRS-2677: Stop of 'ora.gpnpd' on 'host01' succeeded


CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'host01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host01'
succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
n s e
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
i ce
CRS-2673: Attempting to stop 'ora.gipcd' on 'host01'
b l el
CRS-2677:
CRS-2793:
Stop of 'ora.gipcd' on 'host01' succeeded
Shutdown of Oracle High Availability Services-managed fer a
resources on 'host01' has completed a n s
CRS-4133: n - tr
Oracle High Availability Services has been stopped.
a no
h a s eฺ
[root@host01 ~]# adrci

ADRCI: Release 12.1.0.2.0 - Production on Thu Jun 18 05:43:01 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

ADR base = "/u01/app/grid"
adrci> show alert
Choose the home from which to view the alert log:

1: diag/asm/+asm/+ASM1
2: diag/crs/host01/crs
3: diag/rdbms/_mgmtdb/-MGMTDB
4: diag/clients/user_grid/host_2394698683_82
5: diag/clients/user_root/host_2394698683_82
6: diag/clients/user_oracle/host_2394698683_82
7: diag/tnslsnr/host01/listener
8: diag/tnslsnr/host01/asmnet1lsnr_asm
9: diag/tnslsnr/host01/listener_scan2
10: diag/tnslsnr/host01/listener_scan3
11: diag/tnslsnr/host01/listener_scan1
12: diag/tnslsnr/host01/mgmtlsnr
Q: to quit

Please select option: 2


Output the results to file: /tmp/alert_25579_1401_crs_1.ado

*********** Go to the bottom of the file <Shift-G> *****************

2015-06-18 05:48:08.368 [GPNPD(24641)]CRS-2339: GPNPD advertisement
with GNS failed. This may block some cluster configuration changes.
Advertisement attempts will continue.

2015-06-18 05:48:09.546000 +00:00


2015-06-18 05:48:09.546 [OCSSD(24734)]CRS-1714: Unable to discover
any voting files, retrying discovery in 15 seconds; Details at
(:CSSNM00070:) in /u01/app/grid/diag/crs/host01/crs/trace/ocssd.trc
2015-06-18 05:48:24.758000 +00:00
2015-06-18 05:48:24.758 [OCSSD(24734)]CRS-1714: Unable to discover
any voting files, retrying discovery in 15 seconds; Details at
(:CSSNM00070:) in /u01/app/grid/diag/crs/host01/crs/trace/ocssd.trc
2015-06-18 05:48:40.047000 +00:00
n s e
2015-06-18 05:48:40.047 [OCSSD(24734)]CRS-1714: Unable to discover
i ce
any voting files, retrying discovery in 15 seconds; Details at
b l el
(:CSSNM00070:) in /u01/app/grid/diag/crs/host01/crs/trace/ocssd.trc
2015-06-18 05:48:55.289000 +00:00 fer a
2015-06-18 05:48:55.289 [OCSSD(24734)]CRS-1714: Unable to discover a n s
n - tr
any voting files, retrying discovery in 15 seconds; Details at
no
(:CSSNM00070:) in /u01/app/grid/diag/crs/host01/crs/trace/ocssd.trc
a
2015-06-18 05:49:10.483000 +00:00
h a s eฺ
) id
2015-06-18 05:49:10.482 [OCSSD(24734)]CRS-1714: Unable to discover
m u
ฺco ent G
any voting files, retrying discovery in 15 seconds; Details at
i s
(:CSSNM00070:) in /u01/app/grid/diag/crs/host01/crs/trace/ocssd.trc
sas Stud
2015-06-18 05:49:25.799000 +00:00
s
s@ this
2015-06-18 05:49:25.799 [OCSSD(24734)]CRS-1714: Unable to discover
l e
any voting files, retrying discovery in 15 seconds; Details at
r ob use
(:CSSNM00070:) in /u01/app/grid/diag/crs/host01/crs/trace/ocssd.trc
b
( to
bl es
********************** Exit editor: :q! ****************************
R o
ul io Choose the home from which to view the alert log:
B r a
1: diag/asm/+asm/+ASM1
2: diag/crs/host01/crs
...
12: diag/tnslsnr/host01/mgmtlsnr
Q: to quit

Please select option: 2


Output the results to file: /tmp/alert_25579_1401_crs_2.ado

Please select option: q

adrci> exit

[root@host01 ~]#

10. Start CRS on host01 in exclusive mode. When the stack is up, restore the voting disks to
    the +DATA disk group from the OCR backup.
[root@host01 ~]# crsctl start crs -excl
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'host01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'host01'
CRS-2676: Start of 'ora.mdnsd' on 'host01' succeeded
CRS-2676: Start of 'ora.evmd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host01'
CRS-2676: Start of 'ora.gpnpd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'
CRS-2672: Attempting to start 'ora.gipcd' on 'host01'
CRS-2676: Start of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2676: Start of 'ora.gipcd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host01'
CRS-2672: Attempting to start 'ora.diskmon' on 'host01'
CRS-2676: Start of 'ora.diskmon' on 'host01' succeeded
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'host01'
CRS-2672: Attempting to start 'ora.ctssd' on 'host01'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'host01'
CRS-2676: Start of 'ora.crf' on 'host01' succeeded
CRS-2676: Start of 'ora.ctssd' on 'host01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'host01'
CRS-2676: Start of 'ora.storage' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host01'
CRS-2676: Start of 'ora.crsd' on 'host01' succeeded

[root@host01 ~]# crsctl replace votedisk +DATA
Successful addition of voting disk 44a9f26cdcef4fe4bf28419057f6009b.
Successful addition of voting disk 5ac02418f4a74fb8bf3d757397585e64.
Successful addition of voting disk 2423fd054b684f98bf56abae82e1a44c.
Successfully replaced voting disk group with +DATA.
CRS-4266: Voting file(s) successfully replaced

[root@host01 ~]# crsctl query css votedisk


## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 44a9f26cdcef4fe4bf28419057f6009b (/dev/asmdisk1p1) [DATA]
2. ONLINE 5ac02418f4a74fb8bf3d757397585e64 (/dev/asmdisk1p10) [DATA]
3. ONLINE 2423fd054b684f98bf56abae82e1a44c (/dev/asmdisk1p11) [DATA]
Located 3 voting disk(s).

[root@host01 ~]#

11. Since CRS is running on host01 in exclusive mode, stop it there and restart Clusterware
normally on all nodes.
[root@host01 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-
managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'host01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'host01'
CRS-2673: Attempting to stop 'ora.evmd' on 'host01'
CRS-2673: Attempting to stop 'ora.storage' on 'host01'
CRS-2677: Stop of 'ora.gpnpd' on 'host01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host01' succeeded
CRS-2677: Stop of 'ora.storage' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.crf' on 'host01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host01' succeeded
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'host01'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'host01'
CRS-2677: Stop of 'ora.gipcd' on 'host01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed
resources on 'host01' has completed
CRS-4133: Oracle High Availability Services has been stopped.

[root@host01 ~]#

[root@host01 ~]# crsctl start crs


CRS-4123: Oracle High Availability Services has been started.

[root@host01 ~]# ssh host02 /u01/app/12.1.0/grid/bin/crsctl start crs


CRS-4123: Oracle High Availability Services has been started.

[root@host01 ~]# ssh host03 /u01/app/12.1.0/grid/bin/crsctl start crs


CRS-4123: Oracle High Availability Services has been started.

[root@host01 ~]# ssh host04 /u01/app/12.1.0/grid/bin/crsctl start crs


CRS-4123: Oracle High Availability Services has been started.

[root@host01 ~]# ssh host05 /u01/app/12.1.0/grid/bin/crsctl start crs


CRS-4123: Oracle High Availability Services has been started.

[root@host01 ~]#
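The five per-node crsctl start crs invocations above follow a single pattern and can be driven from a loop. A dry-run sketch, with echo standing in for the real ssh call so it can be run anywhere; the node list is the one used in this practice:

```shell
# Dry run: print the command that would be run on each remote node.
# Remove the 'echo' in the lab to actually execute the ssh calls
# (host01 itself is handled by the local crsctl start crs above).
GRID_HOME=/u01/app/12.1.0/grid
for node in host02 host03 host04 host05; do
    echo ssh "$node" "$GRID_HOME/bin/crsctl start crs"
done
```

The same loop, with stop crs -f instead of start crs, covers the shutdown sequence in step 4.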

12. Wait a few moments and check Clusterware on all nodes.
[root@host01 ~]# crsctl stat res -t
--------------------------------------------------------------------
Name Target State Server State
details
--------------------------------------------------------------------

Local Resources
--------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.DATA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.FRA.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LABDG.dg
OFFLINE OFFLINE host01 STABLE
OFFLINE OFFLINE host02 STABLE
OFFLINE OFFLINE host03 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE host04 STABLE
OFFLINE OFFLINE host05 STABLE
ora.net1.network
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
ora.ons
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE host02 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE host03 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE host01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE host01 169.254.29.72 192.168.1.101 192.168.2.101,STABLE
ora.asm
1 ONLINE ONLINE host01

Started,STABLE
2 ONLINE ONLINE host02
Started,STABLE
3 ONLINE ONLINE host03
Started,STABLE
ora.cvu
1 ONLINE ONLINE host01 STABLE
ora.gns
1 ONLINE ONLINE host01 STABLE
n s e
ora.gns.vip
i ce
1 ONLINE ONLINE host01 STABLE
b l el
ora.host01.vip
1 ONLINE ONLINE host01 STABLE fer a
ora.host02.vip a n s
1 ONLINE ONLINE host02 n - tr STABLE
ora.host03.vip
a no
1 ONLINE ONLINE host03
h a s eฺ STABLE
ora.mgmtdb
m ) u id
ฺco ent G
1 ONLINE ONLINE host01 Open,STABLE
ora.oc4j i s
1
s sas Stud
ONLINE ONLINE host01 STABLE

s@ this
ora.orcl.db
1 l e
ONLINE ONLINE host02 Open,STABLE
2
b r ob use
ONLINE ONLINE host03 Open,STABLE
( to
es
3 ONLINE ONLINE host01 Open,STABLE
bl
ora.scan1.vip
o
io R 1 ONLINE ONLINE host02 STABLE

r a ul ora.scan2.vip
B 1
ora.scan3.vip
ONLINE ONLINE host03 STABLE

1 ONLINE ONLINE host01 STABLE


--------------------------------------------------------------------

[root@host01 ~]#
Note that the LABDG disk group is OFFLINE.

13. The LABDG resource is OFFLINE because the disk headers have been corrupted. As the
grid user, use asmcmd to list currently configured disk groups. The LABDG disk group is
no longer listed. Use asmca to recreate the disk group.
[root@host01 ~]# crsctl stat res -t -w "NAME = ora.LABDG.dg"
--------------------------------------------------------------------
Name Target State Server State
details
--------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------
ora.LABDG.dg
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
ONLINE ONLINE host03 STABLE
--------------------------------------------------------------------

[root@host01 ~]#

14. As the grid user, determine where the management repository is running. If the
management repository is not running on host01, then relocate it there.
[root@host01 ~]# crsctl stat res ora.mgmtdb -t
-------------------------------------------------------------------
Name Target State Server State details
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.mgmtdb
1 ONLINE ONLINE host02 Open,STABLE
-------------------------------------------------------------------

[grid@host01 ~]$ srvctl relocate mgmtdb -n host01

[grid@host01 ~]$ crsctl stat res ora.mgmtdb -t
-------------------------------------------------------------------
Name Target State Server State details
-------------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------------
ora.mgmtdb
1 ONLINE ONLINE host01 Open,STABLE
-------------------------------------------------------------------

[grid@host01 ~]$

15. Close all terminals opened for this practice.

Practices for Lesson 8:
Policy-Based Cluster Management
Chapter 8
Practices for Lesson 8: Overview
Practices Overview
In this practice you will configure server categories and the policy set. You will examine the
effect of various changes to verify the dynamic nature of policy-based cluster management.
Finally you will see how easy it is to activate policies.
Practice 8-1: Configuring and Using Policy-Based Cluster
Management
Overview
In this practice you will configure server categories and the policy set. You will examine the
effect of various changes to verify the dynamic nature of policy-based cluster management.

1. Connect to the first node of your cluster as the grid user. You can use the oraenv script
to set your environment correctly. Do not use the root account at any time during this
practice!
[vncuser@classroom_pc]# ssh grid@host01
grid@host01's password:
Last login: Mon May 20 20:01:43 2015 from dns.example.com
[grid@host01 ~]$
[grid@host01~] $ . oraenv n s e
i ce
el
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
a b l
2. Examine the extended server attributes associated with host01, host02, and host03.
   Notice the amount of physical memory associated with each server (3806MB+ for host01,
   3128MB+ for host02 and host03). Notice also that the attributes identify these nodes
   as Hub Nodes in the cluster.
[grid@host01 ~]$ crsctl status server host01 -f
NAME=host01
MEMORY_SIZE=3915
CPU_COUNT=1
CPU_CLOCK_RATE=2992
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=ora.orcldb
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub

[grid@host01 ~]$ crsctl status server host02 -f


NAME=host02
MEMORY_SIZE=3326
CPU_COUNT=1
CPU_CLOCK_RATE=2992
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=ora.orcldb
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub

[grid@host01 ~]$ crsctl status server host03 -f


NAME=host03
MEMORY_SIZE=3326
CPU_COUNT=1
CPU_CLOCK_RATE=2992
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=ora.orcldb
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub

[grid@host01 ~]$
a no
3. Examine the extended server attributes associated with host04 and host05. Notice that
these nodes contain less physical memory on each node (1557MB) and that they are
identified as Leaf Nodes in this Flex Cluster.
[grid@host01 ~]$ crsctl status server host04 -f
NAME=host04
MEMORY_SIZE=1557
CPU_COUNT=1
CPU_CLOCK_RATE=2992
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=leaf
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=Free
STATE_DETAILS=
ACTIVE_CSS_ROLE=leaf

[grid@host01 ~]$ crsctl status server host05 -f


NAME=host05
MEMORY_SIZE=1557
CPU_COUNT=1
CPU_CLOCK_RATE=2992
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=leaf
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=Free
STATE_DETAILS=
ACTIVE_CSS_ROLE=leaf
[grid@host01 ~]$
4. Examine the built-in category definitions. These are implicitly used to categorize the cluster
nodes as Hub Nodes or Leaf Nodes based on the ACTIVE_CSS_ROLE setting. Note that
these categories also exist in a standard cluster; however, in a standard cluster all of the
nodes are designated as Hub Nodes.
[grid@host01 ~]$ crsctl status category
NAME=ora.hub.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=

NAME=ora.leaf.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=leaf
EXPRESSION=

[grid@host01 ~]$
5. Examine the servers that are associated with the inbuilt categories. Confirm that the
server-to-category mappings are as expected.
[grid@host01 ~]$ crsctl status server -category ora.hub.category
NAME=host01
STATE=ONLINE

NAME=host02
STATE=ONLINE

NAME=host03
STATE=ONLINE
[grid@host01 ~]$ crsctl status server -category ora.leaf.category
NAME=host04
STATE=ONLINE

NAME=host05
STATE=ONLINE
[grid@host01 ~]$
6. Create a user-defined category that includes the servers containing more than 3000MB
of system memory.
[grid@host01 ~]$ crsctl add category big -attr
"EXPRESSION='(MEMORY_SIZE > 3000)'"
[grid@host01 ~]$
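The EXPRESSION attribute is a filter that Clusterware evaluates against each server's attributes. Its effect can be sketched outside the cluster using the MEMORY_SIZE values reported earlier in this practice (this is static sample data and a scratch file path, not a live query of the cluster):

```shell
# MEMORY_SIZE values (MB) as reported by 'crsctl status server -f' above.
cat <<'EOF' > /tmp/server_mem.txt
host01 3915
host02 3326
host03 3326
host04 1557
host05 1557
EOF

# Emulate the category expression (MEMORY_SIZE > 3000): list the servers
# that would belong to the 'big' category.
awk '$2 > 3000 { print $1 }' /tmp/server_mem.txt
# -> host01
# -> host02
# -> host03
```

Only host01, host02, and host03 satisfy the filter, which matches the category membership you confirm in step 8.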

7. Examine the category definition that you created in the previous step. Notice that the
category definition includes ACTIVE_CSS_ROLE=hub by default. You could modify the
ACTIVE_CSS_ROLE attribute if you wanted to categorize Leaf Nodes.
[grid@host01 ~]$ crsctl status category big
NAME=big
ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=(MEMORY_SIZE > 3000)
8. Examine the servers associated with the big category. Confirm that host01, host02 and
host03 are associated with the big category.
[grid@host01 ~]$ crsctl status server -category big
NAME=host01
STATE=ONLINE

NAME=host02
STATE=ONLINE

NAME=host03
STATE=ONLINE
[grid@host01 ~]$
9. Examine the categories associated with the host01 server. Notice that a server can be
associated with multiple categories.
[grid@host01 ~]$ crsctl status category -server host01
NAME=big
ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=(MEMORY_SIZE > 3000)

NAME=ora.hub.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=

[grid@host01 ~]$
At this point you have configured a category and examined category definitions along with
server-to-category associations. Next, you will make use of the category definitions as you
configure and exercise policy-based cluster management.
10. Examine the policy set. At this point, no policy set configuration has been performed so the
only policy listed is the Current policy. The Current policy is a special built-in policy
which contains all of the currently active server pool definitions.
[grid@host01 ~]$ crsctl status policyset
ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r--
LAST_ACTIVATED_POLICY=
SERVER_POOL_NAMES=Free
POLICY
NAME=Current
DESCRIPTION=This policy is built-in and managed automatically to
reflect current configuration
SERVERPOOL
NAME=Free
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=Generic
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
n s e
NAME=ora.orcldb
i ce
IMPORTANCE=0
b l el
MAX_SIZE=3
MIN_SIZE=0 fer a
SERVER_CATEGORY=ora.hub.category a n s
SERVER_NAMES= n - tr
[grid@host01 ~]$
11. Examine the policy set definition provided in the file located at
/stage/GRID/labs/less_08/policyset.txt. The policy set definition contains two
policies. The day policy enables the orcldb server pool to use all of the Hub Nodes while
not providing any allocation for the bigpool server pool. The night policy still prioritizes
the orcldb server pool; however, it also provides an allocation for the bigpool server pool.
Notice that the bigpool server pool references the big server category that you defined
earlier in this practice. Notice also that the policy set does not list the my_app_pool server
pool in the SERVER_POOL_NAMES attribute. This demonstrates how it is possible for you to
define server pools that are outside policy set management.
[grid@host01 ~]$ cat /stage/GRID/labs/less_08/policyset.txt
SERVER_POOL_NAMES=Free bigpool ora.orcldb
POLICY
NAME=day
DESCRIPTION=The day policy
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVERPOOL
NAME=bigpool
IMPORTANCE=0
MAX_SIZE=0
MIN_SIZE=0
SERVER_CATEGORY=big
POLICY
NAME=night
DESCRIPTION=The night policy
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
Unauthorized reproduction or distribution prohibitedฺ Copyright© 2016, Oracle and/or its affiliatesฺ

SERVERPOOL
NAME=bigpool
IMPORTANCE=5
MAX_SIZE=1
MIN_SIZE=1
SERVER_CATEGORY=big

[grid@host01 ~]$
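The file's structure is positional: a NAME line that follows a POLICY keyword names a policy, while a NAME after SERVERPOOL defines a pool within the most recent policy. That nesting can be summarized with standard text tools; the sketch below runs against an abbreviated copy of the day policy written to a scratch path (not the course file itself):

```shell
# Abbreviated copy of the day policy from the policy set file above.
cat <<'EOF' > /tmp/policyset_day.txt
SERVER_POOL_NAMES=Free bigpool ora.orcldb
POLICY
NAME=day
DESCRIPTION=The day policy
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVERPOOL
NAME=bigpool
IMPORTANCE=0
MAX_SIZE=0
MIN_SIZE=0
SERVER_CATEGORY=big
EOF

# Track whether the last keyword was POLICY or SERVERPOOL, then print one
# summary line per server pool definition.
awk -F= '
  /^POLICY$/     { ctx = "policy"; next }
  /^SERVERPOOL$/ { ctx = "pool";   next }
  /^NAME=/       { if (ctx == "policy") policy = $2; else pool = $2; next }
  /^MAX_SIZE=/   { max = $2 }
  /^MIN_SIZE=/   { min = $2 }
  /^SERVER_CATEGORY=/ { printf "%s %s min=%s max=%s\n", policy, pool, min, max }
' /tmp/policyset_day.txt
# -> day ora.orcldb min=1 max=3
# -> day bigpool min=0 max=0
```

The summary makes the intent of the day policy easy to read: orcldb may grow to three servers while bigpool is capped at zero.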
12. Modify the policy set to load the configuration file.
[grid@host01 ~]$ crsctl modify policyset -file
/stage/GRID/labs/less_08/policyset.txt
[grid@host01 ~]$
13. Reexamine the policy set to confirm that the configuration file was loaded in the previous
step. Notice that the LAST_ACTIVATED_POLICY attribute is not set and the Current
policy is unchanged from what you observed earlier. This is because no policies have been
activated yet.
[grid@host01 ~]$ crsctl status policyset
ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r--
LAST_ACTIVATED_POLICY=
SERVER_POOL_NAMES=Free bigpool ora.orcldb
POLICY
NAME=Current
DESCRIPTION=This policy is built-in and managed automatically to
reflect current configuration
SERVERPOOL
NAME=Free
IMPORTANCE=0
Bra IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=Generic
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=0
MAX_SIZE=3
MIN_SIZE=0
SERVER_CATEGORY=ora.hub.category
SERVER_NAMES=
POLICY
NAME=day
DESCRIPTION=The day policy
SERVERPOOL
NAME=Free
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=bigpool
IMPORTANCE=0
MAX_SIZE=0
MIN_SIZE=0
SERVER_CATEGORY=big
SERVER_NAMES=
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVER_NAMES=
POLICY
NAME=night
DESCRIPTION=The night policy
SERVERPOOL
NAME=Free
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=bigpool
IMPORTANCE=5
MAX_SIZE=1
MIN_SIZE=1
SERVER_CATEGORY=big
SERVER_NAMES=
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVER_NAMES=
[grid@host01 ~]$

14. Activate the day policy.
[grid@host01 ~]$ crsctl modify policyset -attr
"LAST_ACTIVATED_POLICY='day'"
[grid@host01 ~]$
15. Reexamine the policy set. Confirm that LAST_ACTIVATED_POLICY=day and that the
Current policy settings reflect the day policy.
[grid@host01 ~]$ crsctl status policyset


ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r--
LAST_ACTIVATED_POLICY=day
SERVER_POOL_NAMES=Free bigpool ora.orcldb
POLICY
NAME=Current
DESCRIPTION=This policy is built-in and managed automatically to
reflect current configuration
SERVERPOOL
NAME=Free
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=Generic
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=bigpool
IMPORTANCE=0
MAX_SIZE=0
MIN_SIZE=0
SERVER_CATEGORY=big
SERVER_NAMES=
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVER_NAMES=
POLICY
NAME=day
DESCRIPTION=The day policy
SERVERPOOL
NAME=Free
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=bigpool
IMPORTANCE=0
MAX_SIZE=0
MIN_SIZE=0
SERVER_CATEGORY=big
SERVER_NAMES=
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVER_NAMES=
POLICY
NAME=night
DESCRIPTION=The night policy
SERVERPOOL
NAME=Free
IMPORTANCE=0
MAX_SIZE=-1
MIN_SIZE=0
SERVER_CATEGORY=
SERVER_NAMES=
SERVERPOOL
NAME=bigpool
IMPORTANCE=5
MAX_SIZE=1
MIN_SIZE=1
SERVER_CATEGORY=big
SERVER_NAMES=
SERVERPOOL
NAME=ora.orcldb
IMPORTANCE=10
MAX_SIZE=3
MIN_SIZE=1
SERVER_CATEGORY=ora.hub.category
SERVER_NAMES=
[grid@host01 ~]$
16. Examine the server pool allocations. Confirm that the Hub Nodes are still allocated to the
orcldb server pool, in line with the day policy.
[grid@host01 ~]$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host04 host05

NAME=Generic
ACTIVE_SERVERS=

NAME=bigpool
ACTIVE_SERVERS=

NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03
[grid@host01 ~]$
17. Activate the night policy.
[grid@host01 ~]$ crsctl modify policyset -attr
"LAST_ACTIVATED_POLICY='night'"
[grid@host01 ~]$
18. Examine the server pool allocations. Confirm that the servers allocated to each pool are
consistent with the night policy.
[grid@host01 ~]$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host04 host05

NAME=Generic
ACTIVE_SERVERS=

NAME=bigpool
ACTIVE_SERVERS=host01

NAME=ora.orcldb
ACTIVE_SERVERS=host02 host03
[grid@host01 ~]$
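The allocation shown above follows from the pool attributes: minimum sizes are satisfied in descending IMPORTANCE order, and any leftover servers then fill pools up to MAX_SIZE, again by importance. The sketch below is a simplified illustration of that ordering arithmetic for the night policy, not the actual Clusterware algorithm (which also honors server categories and other attributes):

```shell
# Night policy applied to the three Hub Nodes.
hub=3                           # Hub Nodes available to the policy set
orcldb_min=1; orcldb_max=3      # ora.orcldb: IMPORTANCE=10
bigpool_min=1; bigpool_max=1    # bigpool:    IMPORTANCE=5

# 1) Satisfy MIN_SIZE in descending importance order.
orcldb=$orcldb_min;   hub=$((hub - orcldb))
bigpool=$bigpool_min; hub=$((hub - bigpool))

# 2) Fill remaining servers up to MAX_SIZE, highest importance first.
while [ "$hub" -gt 0 ] && [ "$orcldb" -lt "$orcldb_max" ]; do
  orcldb=$((orcldb + 1)); hub=$((hub - 1))
done

echo "ora.orcldb: $orcldb servers, bigpool: $bigpool server"
# -> ora.orcldb: 2 servers, bigpool: 1 server
```

The result (two servers for ora.orcldb, one for bigpool) matches the ACTIVE_SERVERS lists above.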
19. One of the side-effects of activating the night policy is that the RAC database (orcl) that
previously ran on host01, host02, and host03 now only runs on two nodes.
Confirm that only two instances of the orcl database are running. This demonstrates how
Oracle Clusterware automatically starts and stops required resources when a policy change
is made.
[grid@host01 ~]$ srvctl status database -d orcl
Instance orcl_1 is running on node host03
Instance orcl_3 is running on node host02
[grid@host01 ~]$
20. To illustrate the dynamic nature of policy-based cluster management, modify the big
category so that no servers can be associated with it. Notice that the category change
immediately causes a server re-allocation, which in turn causes an instance of the orcl
database to start up.
[grid@host01 ~]$ crsctl modify category big -attr
"EXPRESSION='(MEMORY_SIZE > 8000)'"
CRS-2672: Attempting to start 'ora.orcl.db' on 'host01'
CRS-2676: Start of 'ora.orcl.db' on 'host01' succeeded
[grid@host01 ~]$

21. Reexamine the server pool allocations. Notice that no servers are associated with the
bigpool server pool because of the change to the big category, which in turn results in
three servers being allocated to the orcldb server pool. Confirm that ora.orcl.db is
now placed on three servers.
[grid@host01 ~]$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host04 host05

NAME=Generic
ACTIVE_SERVERS=

NAME=bigpool
ACTIVE_SERVERS=

NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03

[grid@host01 ~]$ srvctl status database -d orcl
Instance orcl_1 is running on node host03
Instance orcl_2 is running on node host01
Instance orcl_3 is running on node host02
[grid@host01 ~]$
22. Close all terminal windows opened for this practice.

Practices for Lesson 9: Upgrading and Patching Grid Infrastructure
Chapter 9
Practices for Lesson 9

There are no practices for this lesson.


Practices for Lesson 10: Troubleshooting Oracle Clusterware
Chapter 10
Practices for Lesson 10: Overview
Practices Overview
In this practice, you will work with Oracle Clusterware log files and learn to use the ocrdump
and cluvfy utilities.
Practice 10-1: Working with Log Files
Overview
In this practice, you will examine the Oracle Clusterware alert log and then package various log
files into an archive format suitable to send to My Oracle Support.
1. Connect to host01 as the grid user and view the contents of the Oracle Clusterware alert
log.
[vncuser@classroom_pc ~]$ ssh grid@host01
grid@host01's password:
Last login: Wed May 22 15:37:18 2015 from dns.example.com

[grid@host01 ~]$ . oraenv


ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ adrci

ADRCI: Release 12.1.0.2.0 - Production on Thu Jun 18 11:26:04 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights
reserved.

ADR base = "/u01/app/grid"
adrci> show alert
Choose the home from which to view diagnostic logs:

1: diag/asm/+asm/+ASM1
2: diag/crs/host01/crs
3: diag/rdbms/_mgmtdb/-MGMTDB
4: diag/clients/user_grid/host_2394698683_82
5: diag/clients/user_root/host_2394698683_82
6: diag/clients/user_oracle/host_2394698683_82
7: diag/tnslsnr/host01/listener
8: diag/tnslsnr/host01/asmnet1lsnr_asm
9: diag/tnslsnr/host01/listener_scan2
10: diag/tnslsnr/host01/listener_scan3
11: diag/tnslsnr/host01/listener_scan1
12: diag/tnslsnr/host01/mgmtlsnr
Q: to quit

Please select option: 2

********************************************************************

2015-06-16 14:55:23.590000 +00:00


2015-06-16 14:55:23.589 [OCRCONFIG(16709)]CRS-2101: The OLR was
formatted using version 4.
2015-06-16 14:56:22.939000 +00:00
2015-06-16 14:56:22.937 [OHASD(17682)]CRS-8500: Oracle Clusterware
OHASD process is starting with operating system process ID 17682
2015-06-16 14:56:22.938 [OHASD(17682)]CRS-0714: Oracle Clusterware
Release 12.1.0.2.0.
2015-06-16 14:56:22.954 [OHASD(17682)]CRS-2112: The OLR service
started on node host01.
2015-06-16 14:56:22.990 [OHASD(17682)]CRS-1301: Oracle High
Availability Service started on node host01.
2015-06-16 14:56:32.632000 +00:00


2015-06-16 14:56:32.632 [CLSECHO(17964)]CRS-10001: 16-Jun-15 14:56
OKA-620: OKA is not supported on this operating system version:
'2.6.39-400.209.1.el6uek.x86_64'
2015-06-16 14:56:32.659 [CLSECHO(17966)]CRS-10001: 16-Jun-15 14:56
OKA-9201: Not Supported
2015-06-16 14:56:32.821 [CLSECHO(17976)]CRS-10001: 16-Jun-15 14:56
OKA-620: OKA is not supported on this operating system version:
'2.6.39-400.209.1.el6uek.x86_64'
n s e
2015-06-16 14:56:32.848 [CLSECHO(17978)]CRS-10001: 16-Jun-15 14:56
i ce
OKA-9201: Not Supported
b l el
2015-06-16 14:56:37.408000 +00:00
2015-06-16 14:56:37.408 [CLSECHO(18474)]CRS-10001: 16-Jun-15 14:56 fer a
ACFS-9200: Supported a n s
2015-06-16 14:56:49.716000 +00:00 n - tr
no
2015-06-16 14:56:49.712 [OHASD(18572)]CRS-8500: Oracle Clusterware
a
h a s eฺ
OHASD process is starting with operating system process ID 18572
) id
2015-06-16 14:56:49.713 [OHASD(18572)]CRS-0714: Oracle Clusterware
m u
ฺco ent G
Release 12.1.0.2.0.
i s
2015-06-16 14:56:49.721 [OHASD(18572)]CRS-2112: The OLR service
started on node host01.
s sas Stud
********************** :q! to quit the editor **********************
2. Navigate to the trace directory and determine whether any Oracle Cluster Synchronization
Services daemon trace archives exist.
[grid@host01 ~]$ ls -l $ORACLE_BASE/diag/crs/host01/crs/trace/ocssd*.trc
-rw-rw---- 1 grid oinstall 52429515 Jun 24 18:32 ocssd_1.trc
-rw-rw---- 1 grid oinstall 23848554 Jun 25 07:10 ocssd.trc

[grid@host01 ~]$
3. Switch to the root user and set up the environment variables for the Grid Infrastructure.
Change to the ORACLE_HOME/bin directory and run the diagcollection.pl script to
gather all log files that can be sent to My Oracle Support for problem analysis.
[grid@host01 ~]$ su -
Password:

[root@host01 ~]# . oraenv


ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# cd $ORACLE_HOME/bin

[root@host01 bin]# ./diagcollection.pl --collect $ORACLE_HOME


Production Copyright 2004, 2010, Oracle. All rights reserved
Cluster Ready Services (CRS) diagnostic collection tool
ORACLE_BASE is /u01/app/grid
The following CRS diagnostic archives will be created in the local
directory.
crsData_host01_20150612_0540.tar.gz -> logs,traces and cores from
CRS home. Note: core files will be packaged only with the --core
option.
baseData_host01_20150612_0540.tar.gz -> logs,traces and cores from
Oracle Base. Note: core files will be packaged only with the --core
option.
ocrData_host01_20150612_0540.tar.gz -> ocrdump, ocrcheck etc
coreData_host01_20150612_0540.tar.gz -> contents of CRS core files
in text format
osData_host01_20150612_0540.tar.gz -> logs from Operating System
lsInventory_host01_20150612_0540 ->Opatch lsinventory details
Collecting crs data fer a
Collecting Oracle base data a n s
Collecting OCR data n - tr
Collecting information from core files
a no
No corefiles found
h a s eฺ
Collecting lsinventory details
m ) u id
ฺco ent G
The following diagnostic archives will be created in the local
directory. i s
sas Stud
acfsData_host01_20150612_0540.tar.gz -> logs from acfs log.
s
s@ this
Collecting acfs data
Collecting OS logs l e
r ob use
Collecting sysconfig data

[root@host01 bin]#
4. List the resulting log file archives that were generated with the diagcollection.pl
script.
[root@host01 bin]# ls -la *.gz
-rw-r--r-- 1 root root 1090 Jun 12 05:43
acfsData_host01_20150612_0540.tar.gz
-rw-r--r-- 1 root root 24847453 Jun 12 05:42
baseData_host01_20150612_0540.tar.gz
-rw-r--r-- 1 root root 258689 Jun 12 05:40
crsData_host01_20150612_0540.tar.gz
-rw-r--r-- 1 root root 31678 Jun 12 05:43
ocrData_host01_20150612_0540.tar.gz
-rw-r--r-- 1 root root 207286 Jun 12 05:43
osData_host01_20150612_0540.tar.gz
[root@host01 bin]#
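Before sending the archives to My Oracle Support, it is worth confirming that each one is readable. The sketch below uses a stand-in archive built in a scratch directory; the hypothetical file name simply follows the <type>Data_<host>_<timestamp>.tar.gz pattern shown above:

```shell
# Build a tiny stand-in archive (hypothetical name and contents).
mkdir -p /tmp/diagtest && echo "sample log" > /tmp/diagtest/sample.log
tar -czf /tmp/crsData_host01_test.tar.gz -C /tmp/diagtest sample.log

# 'tar -tzf' lists the contents without extracting; a non-zero exit
# status would indicate a corrupt archive.
tar -tzf /tmp/crsData_host01_test.tar.gz
# -> sample.log
```

The same check can be looped over all of the generated *.gz files before uploading them.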

5. Exit the root account and return to the grid account.
[root@host01 bin]# exit
Logout

[grid@host01 ~]$
Practice 10-2: Working with OCRDUMP
Overview
In this practice, you will work with the OCRDUMP utility and dump the binary file into both text
and XML representations.
1. While connected to the grid account, dump the contents of the OCR to the standard output
and count the number of lines of output.
[grid@host01 ~]$ ocrdump -stdout | wc -l
1416
[grid@host01 ~]$
2. Switch to the root user, dump the contents of the OCR to the standard output and count
the number of lines of output. Compare your results with the previous step. How do the
results differ?
[grid@host01 ~]$ su -
Password:

[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# ocrdump -stdout | wc -l
4153
[root@host01 ~]#
3. Dump the first 25 lines of the OCR to standard output using XML format.
[root@host01 ~]# ocrdump -stdout -xml | head -25
<OCRDUMP>

<TIMESTAMP>06/12/2015 06:22:03</TIMESTAMP>
<COMMAND>/u01/app/12.1.0/grid/bin/ocrdump.bin -stdout -xml
</COMMAND>

<KEY>
<NAME>SYSTEM</NAME>
<VALUE_TYPE>UNDEF</VALUE_TYPE>
<VALUE><![CDATA[]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>

<KEY>
<NAME>SYSTEM.version</NAME>
<VALUE_TYPE>UB4 (10)</VALUE_TYPE>
<VALUE><![CDATA[5]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>

[root@host01 ~]#
4. Create an XML file dump of the OCR in the /tmp directory. Name the dump file
ocr_current_dump.xml.
[root@host01 ~]# ocrdump -xml /tmp/ocr_current_dump.xml


[root@host01 ~]#
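Because the dump is plain XML, standard text tools can answer quick questions about it without rerunning OCRDUMP. The sketch below works on a tiny hand-made sample in the same shape as the real dump (the real /tmp/ocr_current_dump.xml is far larger; the sample path and values are illustrative only):

```shell
# A two-key sample in the same shape as an ocrdump XML export.
cat <<'EOF' > /tmp/sample_ocr.xml
<OCRDUMP>
<KEY>
<NAME>SYSTEM</NAME>
<VALUE_TYPE>UNDEF</VALUE_TYPE>
</KEY>
<KEY>
<NAME>SYSTEM.version</NAME>
<VALUE_TYPE>UB4 (10)</VALUE_TYPE>
</KEY>
</OCRDUMP>
EOF

# Count exported keys and list their names.
grep -c '<KEY>' /tmp/sample_ocr.xml
# -> 2
grep -o '<NAME>[^<]*</NAME>' /tmp/sample_ocr.xml
# -> <NAME>SYSTEM</NAME>
# -> <NAME>SYSTEM.version</NAME>
```

The same two grep commands, pointed at /tmp/ocr_current_dump.xml, give a quick inventory of the keys you just exported.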
5. Find the node and the directory that contains the automatic backup of the OCR from 24
hours ago.
[root@host01 ~]# ocrconfig -showbackup

host01 2015/06/12 03:31:57


/u01/app/12.1.0/grid/cdata/cluster01/backup00.ocr 0

host01 2015/06/11 23:31:56
/u01/app/12.1.0/grid/cdata/cluster01/backup01.ocr 0

host01 2015/06/11 23:31:56
/u01/app/12.1.0/grid/cdata/cluster01/day.ocr 0

host01 2015/06/11 23:31:56
/u01/app/12.1.0/grid/cdata/cluster01/week.ocr 0
...
[root@host01 ~]#
i s
6. Copy the 24-hour-old automaticsbackup
s as S udOCR into the /tmp directory. This is not a
of tthe
s@
dump, but rather an actual backup ofis the OCR. (If the daily backup of the OCR is not there,
l e e t h
b
Note: It may(be
ob utosuse
use the oldest backup on
rnecessary
the list.)

s t o scp if the file is located on a different node.


b le
[root@host01
o ~]# cp /u01/app/12.1.0/grid/cdata/cluster01/day.ocr

io R
/tmp/day.ocr
u l
Bra [root@host01 ~]#
7. Dump the contents of the day.ocr backup OCR file in the XML format saving the file in the
/tmp directory. Name the file as day_ocr.xml.
[root@host01 ~]# ocrdump -xml -backupfile /tmp/day.ocr
/tmp/day_ocr.xml
[root@host01 ~]#
8. Compare the differences between the day_ocr.xml file and the
ocr_current_dump.xml file to determine all changes made to the OCR in the last 24
hours. Log out as root when finished.
[root@host01 ~]# diff /tmp/day_ocr.xml
/tmp/ocr_current_dump.xml|more
3,5c3,4

< <TIMESTAMP>06/12/2015 06:31:13</TIMESTAMP>
< <DEVICE>/tmp/day.ocr</DEVICE>
< <COMMAND>/u01/app/12.1.0/grid/bin/ocrdump.bin -xml -backupfile
/tmp/day.ocr /tmp/day_ocr.xml </COMMAND>
---
> <TIMESTAMP>06/12/2015 06:23:11</TIMESTAMP>
> <COMMAND>/u01/app/12.1.0/grid/bin/ocrdump.bin -xml


/tmp/ocr_current_dump.xml </COMMAND>
6563,6564c6562,6563
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[host01]]></VALUE>
6575,6576c6574,6575
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[2015/06/12 03:31:57]]></VALUE>
6587,6588c6586,6587
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[/u01/app/12.1.0/grid/cdata/cluster01]]></VALUE>
6599,6600c6598,6599
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[backup00.ocr]]></VALUE>
6611,6612c6610,6611
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
---
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[0]]></VALUE>
6635,6636c6634,6635
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[host01]]></VALUE>
6647,6648c6646,6647
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[2015/06/11 23:31:56]]></VALUE>
6659,6660c6658,6659
< <VALUE_TYPE>UNDEF</VALUE_TYPE>

< <VALUE><![CDATA[]]></VALUE>
---
> <VALUE_TYPE>ORATEXT</VALUE_TYPE>
> <VALUE><![CDATA[/u01/app/12.1.0/grid/cdata/cluster01]]></VALUE>
6671,6672c6670,6671
< <VALUE_TYPE>UNDEF</VALUE_TYPE>
< <VALUE><![CDATA[]]></VALUE>
---
...
[root@host01 ~]# exit
logout
[grid@host01 cssd]$ cd
[grid@host01 ~]$


Copyright © 2015, Oracle and/or its affiliates. All rights reserved.

Practices for Lesson 10: Troubleshooting Oracle Clusterware


Chapter 10 - Page 10
Practice 10-3: Working with CLUVFY
Overview
In this practice, you will work with CLUVFY to verify the state of various cluster components.
1. Determine the location of the cluvfy utility and its configuration file.

[grid@host01 ~]$ which cluvfy


/u01/app/12.1.0/grid/bin/cluvfy

[grid@host01 ~]$ cd $ORACLE_HOME/cv/admin

[grid@host01 admin]$ cat cvu_config


# Configuration file for Cluster Verification Utility(CVU)
# Version: 011405
#
# NOTE:
# 1._ Any line without a '=' will be ignored
# 2._ Since the fallback option will look into the environment variables,
#     please have a component prefix(CV_) for each property to define a
#     namespace.
#

#Nodes for the cluster. If CRS home is not installed, this list will
#be picked up when -n all is mentioned in the commandline argument.
#CV_NODE_ALL=

#if enabled, cvuqdisk rpm is required on all nodes
CV_RAW_CHECK_ENABLED=TRUE

# Fallback to this distribution id
#CV_ASSUME_DISTID=OEL5

#Complete file system path of sudo binary file, default is /usr/local/bin/sudo
CV_SUDO_BINARY_LOCATION=/usr/local/bin/sudo

#Complete file system path of pbrun binary file, default is /usr/local/bin/pbrun
CV_PBRUN_BINARY_LOCATION=/usr/local/bin/pbrun

# Whether X-Windows check should be performed for user equivalence with SSH
#CV_XCHK_FOR_SSH_ENABLED=TRUE

# To override SSH location
#ORACLE_SRVM_REMOTESHELL=/usr/bin/ssh

# To override SCP location
#ORACLE_SRVM_REMOTECOPY=/usr/bin/scp

# To override version used by command line parser


CV_ASSUME_CL_VERSION=12.1

# Location of the browser to be used to display HTML report



#CV_DEFAULT_BROWSER_LOCATION=/usr/bin/mozilla

[grid@host01 admin]$
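The effective CVU settings are just the uncommented lines of cvu_config. The sketch below filters them out; the `cvu_active_settings` helper name is made up for illustration, and it runs against a small sample file rather than a live Grid home (against a real install you would pass `$ORACLE_HOME/cv/admin/cvu_config`):

```shell
# cvu_active_settings: print only the effective (uncommented) settings
# from a CVU-style configuration file, skipping comments and blank lines.
cvu_active_settings() {
    grep -E '^[^#]+=' "$1"
}

# Demonstration on a small sample written to a scratch file.
sample=$(mktemp)
cat > "$sample" <<'EOF'
# Configuration file for Cluster Verification Utility(CVU)
#CV_NODE_ALL=
CV_RAW_CHECK_ENABLED=TRUE
CV_ASSUME_CL_VERSION=12.1
EOF

cvu_active_settings "$sample"
rm -f "$sample"
```

Commented-out properties such as `#CV_NODE_ALL=` are excluded because the pattern requires the first character of the line to be something other than `#`.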
2. Display the stage options and stage names that can be used with the cluvfy utility.
[grid@host01 admin]$ cluvfy stage -list

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

Valid Stages are:
      -pre cfs      : pre-check for CFS setup
      -pre crsinst  : pre-check for CRS installation
      -pre acfscfg  : pre-check for ACFS Configuration.
      -pre dbinst   : pre-check for database installation
      -pre dbcfg    : pre-check for database configuration
      -pre hacfg    : pre-check for HA configuration
      -pre nodeadd  : pre-check for node addition.
      -post hwos    : post-check for hardware and operating system
      -post cfs     : post-check for CFS setup
      -post crsinst : post-check for CRS installation
      -post acfscfg : post-check for ACFS Configuration.
      -post hacfg   : post-check for HA configuration
      -post nodeadd : post-check for node addition.
      -post nodedel : post-check for node deletion.

[grid@host01 admin]$
3. Perform a postcheck for the ACFS configuration on all nodes.
[grid@host01 admin]$ cluvfy stage -post acfscfg -n all

Performing post-checks for ACFS Configuration

Checking node reachability...


Node reachability check passed from node "host01"

Checking user equivalence...


User equivalence check passed for user "grid"
Task ASM Integrity check started...

Checking if connectivity exists across cluster nodes on the ASM


network

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.2.0"


Node connectivity passed for subnet "192.168.2.0" with node(s) host03,host02,host01
TCP connectivity check passed for subnet "192.168.2.0"

Check: Node connectivity using interfaces on subnet "192.168.1.0"


Node connectivity passed for subnet "192.168.1.0" with node(s) host03,host01,host02
TCP connectivity check passed for subnet "192.168.1.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.

Node connectivity check passed

Network connectivity check across cluster nodes on the ASM network passed

Starting check to see if ASM is running on all cluster nodes...
ASM Running check passed. ASM is running on all specified nodes
Disk Group Check passed. At least one Disk Group configured

Task ASM Integrity check passed...

Task ACFS Integrity check started...
Task ACFS Integrity check passed

UDev attributes check for ACFS started...
UDev attributes check passed for ACFS

Post-check for ACFS Configuration was successful.

[grid@host01 admin]$
4. Display a list of the component names that can be checked with the cluvfy utility.
[grid@host01 admin]$ cluvfy comp -list

USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

Valid Components are:
nodereach : checks reachability between nodes
nodecon : checks node connectivity
cfs : checks CFS integrity
ssa : checks shared storage accessibility
space : checks space availability
sys : checks minimum system requirements


clu : checks cluster integrity
clumgr : checks cluster manager integrity
ocr : checks OCR integrity
olr : checks OLR integrity
ha : checks HA integrity
freespace : checks free space in CRS Home
crs : checks CRS integrity
nodeapp : checks node applications existence
admprv      : checks administrative privileges
peer        : compares properties with peers
software    : checks software distribution
acfs        : checks ACFS integrity
asm         : checks ASM integrity
gpnp        : checks GPnP integrity
gns         : checks GNS integrity
scan        : checks SCAN configuration
ohasd       : checks OHASD integrity
clocksync   : checks Clock Synchronization
vdisk       : checks Voting Disk configuration and UDEV settings
healthcheck : checks mandatory requirements and/or best practice recommendations
dhcp        : checks DHCP configuration
dns         : checks DNS configuration
baseline    : collect and compare baselines

[grid@host01 admin]$
5. Display the syntax usage help for the space component check of the cluvfy utility.
[grid@host01 admin]$ cluvfy comp space -help

USAGE:
cluvfy comp space [-n <node_list>] -l <storage_location> -z <disk_space>{B|K|M|G} [-verbose]

<node_list> is the comma-separated list of non-domain qualified node
names on which the test should be conducted. If "all" is specified,
then all the nodes in the cluster will be used for verification.
<storage_location> is the storage path.
<disk_space> is the required disk space, in units of
bytes(B),kilobytes(K),megabytes(M) or gigabytes(G).

DESCRIPTION:
Checks for free disk space at the location provided by '-l' option
on all the nodes in the nodelist. If no '-n' option is given, local
node is used for this check.

[grid@host01 admin]$

6. Verify that on each node of the cluster the /tmp directory has at least 200 MB of free space
in it using the cluvfy utility. Use verbose output.
[grid@host01 admin]$ cluvfy comp space -n host01,host02,host03 -l
/tmp -z 200M -verbose

Verifying space availability

Checking space availability...


Check: Space available on "/tmp"
Node Name     Available                 Required              Status
----------    ------------------------  --------------        ----------
host03        5.9636GB (6253308.0KB)    200MB (204800.0KB)    passed
host02        5.965GB (6254796.0KB)     200MB (204800.0KB)    passed
host01        6.7903GB (7120168.0KB)    200MB (204800.0KB)    passed

Result: Space availability check passed for "/tmp"

Verification of space availability was successful.

[grid@host01 admin]$

Practice 10-4: Working with Cluster Health Monitor
Overview
In this practice, you will work with Cluster Health Monitor: you will locate the cluster logger
service, collect repository data, dump node views with oclumon, and manage the repository
retention and size.

1. Open a terminal session to host01 as the grid user. Use the oraenv script to set the
Grid environment. Display Clusterware resource information for the management repository
database and the associated listener.
$ ssh grid@host01
grid@host01's password:
Last login: Thu May 30 20:13:28 2015 from dns.example.com

[grid@host01 ~]$ . oraenv


ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$ crsctl stat res ora.mgmtdb -t
--------------------------------------------------------------------
Name           Target  State        Server        State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
ora.mgmtdb
      1        ONLINE  ONLINE       host01        Open,STABLE
--------------------------------------------------------------------

[grid@host01 ~]$ crsctl stat res ora.MGMTLSNR -t
--------------------------------------------------------------------
Name           Target  State        Server        State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
ora.MGMTLSNR
      1        ONLINE  ONLINE       host01        169.254.147.164 192.168.1.101,STABLE
--------------------------------------------------------------------

[grid@host01 ~]$
2. Determine which node in the cluster is hosting the cluster logger service. Retrieve detailed
logger information.
[grid@host01 ~]$ oclumon manage -get master

Master = host01

[grid@host01 ~]$ oclumon manage -get alllogger -details

Logger = host01
Nodes = host01,host03,host02,host04,host05

[grid@host01 ~]$
3. Determine where the management repository database files are stored.
[grid@host01 ~]$ oclumon manage -get reppath

CHM Repository Path =


+DATA/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysmgmtdata.
269.882743737

[grid@host01 ~]$
4. Run the diagcollection.pl script from the /tmp directory as the root user to collect
all the data in the Grid management repository.
[grid@host01 ~]$ su -
Password:

[root@host01 ~]# cd /tmp


n s e
i ce
[root@host01 tmp]# /u01/app/12.1.0/grid/bin/diagcollection.pl -collect
fer a
Production Copyright 2004, 2010, Oracle. All rights reserved
a n s
Cluster Ready Services (CRS) diagnostic collection tool
ORACLE_BASE is /u01/app/grid n - tr
a no
The following CRS diagnostic archives will be created in the local
directory.
h a s eฺ
) id
crsData_host01_20150618_1317.tar.gz -> logs,traces and cores from
m u
i s ฺco ent G
CRS home. Note: core files will be packaged only with the --core

sas Stud
option.
baseData_host01_20150618_1317.tar.gz -> logs,traces and cores from
s
l e s@ this
Oracle Base. Note: core files will be packaged only with the --core

ob use
option.
b r
ocrData_host01_20150618_1317.tar.gz -> ocrdump, ocrcheck etc
(
es to
coreData_host01_20150618_1317.tar.gz -> contents of CRS core files
o bl
in text format
R
io osData_host01_20150618_1317.tar.gz
r a ul -> logs from Operating System
B lsInventory_host01_20150618_1317 ->Opatch lsinventory details
Collecting crs data
Collecting Oracle base data
Collecting OCR data
Collecting information from core files
No corefiles found
Collecting lsinventory details
The following diagnostic archives will be created in the local
directory.
acfsData_host01_20150618_1317.tar.gz -> logs from acfs log.
Collecting acfs data
Collecting OS logs
Collecting sysconfig data

[root@host01 tmp]# exit

[grid@host01 ~]$
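The archives are written to the directory where the script was run and can be inspected without extracting them. The sketch below builds a small stand-in archive (the real files carry names like `crsData_host01_20150618_1317.tar.gz`) and lists its contents:

```shell
# Create a stand-in diagnostic archive and list its contents without extracting.
demo_dir=$(mktemp -d)
echo "sample log line" > "$demo_dir/sample.log"
tar -czf "$demo_dir/crsData_demo.tar.gz" -C "$demo_dir" sample.log

tar -tzf "$demo_dir/crsData_demo.tar.gz"   # prints: sample.log
rm -rf "$demo_dir"
```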

5. Use the oclumon command to dump node views for host01, host02 and host03
collected over the last hour.
[grid@host01 ~]$ oclumon dumpnodeview -n host01 host02 host03 -last
"01:00:00"|more

----------------------------------------

Node: host01 Clock: '15-06-19 04.49.35 UTC' SerialNo:4916


----------------------------------------

SYSTEM:
#pcpus: 1 #vcpus: 1 cpuht: N chipname: Intel(R) cpu: 7.87 cpuq: 1
physmemfree: 272752 physmemtotal: 3606464 mcache: 1359052 swapfree: 8743908
swaptotal: 8843776 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048
ior: 41 iow: 103 ios: 22 swpin: 0 swpout: 0 pgin: 40 pgout: 103
netr: 21.874 netw: 21.862 procs: 207 procsoncpu: 1 rtprocs: 10
rtprocsoncpu: N/A #fds: 16480 #sysfdlimit: 6815744 #disks: 5 #nics: 4
nicErrors: 0

TOP CONSUMERS:
topcpu: 'gipcd.bin(22079) 1.79' topprivmem: 'java(17133) 212708'
topshm: 'oracle_22666_-m(22666) 278172' topfd: 'crsd.bin(23380) 258'
topthread: 'crsd.bin(23380) 47'

----------------------------------------
Node: host01 Clock: '15-06-19 04.49.40 UTC' SerialNo:4917
----------------------------------------

SYSTEM:
#pcpus: 1 #vcpus: 1 cpuht: N chipname: Intel(R) cpu: 6.88 cpuq: 1
physmemfree: 272760 physmemtotal: 3606464 mcache: 1359068 swapfree: 8743908
swaptotal: 8843776 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048
ior: 189 iow: 81 ios: 44 swpin: 0 swpout: 0 pgin: 189 pgout: 81
netr: 18.613 netw: 20.778 procs: 208 procsoncpu: 1 rtprocs: 10
rtprocsoncpu: N/A #fds: 16480 #sysfdlimit: 6815744 #disks: 5 #nics: 4
nicErrors: 0

TOP CONSUMERS:
topcpu: 'mdb_vktm_-mgmtd(9173) 2.00' topprivmem: 'java(17133) 212708'
topshm: 'oracle_22666_-m(22666) 278172' topfd: 'crsd.bin(23380) 258'
topthread: 'crsd.bin(23380) 47'

----------------------------------------
Node: host01 Clock: '15-06-19 04.49.45 UTC' SerialNo:4918
----------------------------------------

SYSTEM:
#pcpus: 1 #vcpus: 1 cpuht: N chipname: Intel(R) cpu: 33.70 cpuq: 12
physmemfree: 272588 physmemtotal: 3606464 mcache: 1359084 swapfree: 8743908
swaptotal: 8843776 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048
ior: 41 iow: 100 ios: 25 swpin: 0 swpout: 0 pgin: 41 pgout: 100
netr: 62.184 netw: 42.899 procs: 207 procsoncpu: 1 rtprocs: 10
rtprocsoncpu: N/A #fds: 16480 #sysfdlimit: 6815744 #disks: 5 #nics: 4
nicErrors: 0

TOP CONSUMERS:
topcpu: 'gipcd.bin(22079) 1.80' topprivmem: 'java(17133) 212708'
topshm: 'oracle_22666_-m(22666) 278172' topfd: 'crsd.bin(23380) 258'
topthread: 'crsd.bin(23380) 47'

----------------------------------------
Node: host01 Clock: '15-06-19 04.49.50 UTC' SerialNo:4919
----------------------------------------

SYSTEM:
#pcpus: 1 #vcpus: 1 cpuht: N chipname: Intel(R) cpu: 8.23 cpuq: 2
physmemfree: 271984 physmemtotal: 3606464 mcache: 1359104 swapfree: 8743908
swaptotal: 8843776 hugepagetotal: 0 hugepagefree: 0 hugepagesize: 2048
ior: 41 iow: 98 ios: 20 swpin: 0 swpout: 0 pgin: 41 pgout: 98
netr: 22.619 netw: 22.695 procs: 207 procsoncpu: 1 rtprocs: 10
rtprocsoncpu: N/A #fds: 16480 #sysfdlimit: 6815744 #disks: 5 #nics: 4
nicErrors: 0

TOP CONSUMERS:
topcpu: 'gipcd.bin(22079) 2.00' topprivmem: 'java(17133) 212708'
topshm: 'oracle_22666_-m(22666) 278172' topfd: 'crsd.bin(23380) 258'
topthread: 'crsd.bin(23380) 47'

----------------------------------------
Node: host01 Clock: '15-06-19 04.49.55 UTC' SerialNo:4920
...

[grid@host01 ~]$
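Node views are plain text, so individual metrics can be scraped with awk. The sketch below runs against a captured sample for illustration; in practice you would pipe the `oclumon dumpnodeview` output in instead:

```shell
# Extract the node name and free physical memory (KB) from each
# node-view block of oclumon dumpnodeview output.
awk '
/^Node:/      { node = $2 }
/physmemfree/ { for (i = 1; i <= NF; i++)
                    if ($i == "physmemfree:") print node, $(i+1) }
' <<'EOF'
----------------------------------------
Node: host01 Clock: '15-06-19 04.49.35 UTC' SerialNo:4916
----------------------------------------

SYSTEM:
#pcpus: 1 #vcpus: 1 cpuht: N chipname: Intel(R) cpu: 7.87 cpuq: 1
physmemfree: 272752 physmemtotal: 3606464 mcache: 1359052
EOF
# Output: host01 272752
```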
6. Use oclumon to retrieve the current retention time for the management database. Increase
it by 2 hours (7200 seconds). The new retention time should be 98100 (90900 + 7200).
What happens?
[grid@host01 ~]$ oclumon manage -get repsize

CHM Repository Size = 90900 seconds

[grid@host01 ~]$ oclumon manage -repos changeretentiontime 98100

The Cluster Health Monitor repository is too small for the desired
retention. Please first resize the repository to 2212 MB

[grid@host01 ~]$
The repository is too small to support your new retention time.
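The arithmetic behind the error is straightforward. A minimal sketch of the sizing estimate, assuming the 90900-second retention and 2048 MB default size reported above; note that CHM's own figure (2212 MB) includes some overhead, so the linear estimate below comes in slightly lower:

```shell
# Sizing arithmetic for the CHM repository resize in this practice.
current_retention=90900      # seconds, from 'oclumon manage -get repsize'
current_size=2048            # MB, the default repository size
extra=7200                   # two more hours, in seconds

desired_retention=$((current_retention + extra))
# Linear estimate of the repository size needed for the new retention.
needed_size=$((current_size * desired_retention / current_retention))

echo "desired retention: ${desired_retention} seconds"   # 98100
echo "estimated size:    ${needed_size} MB"              # 2210
```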

7. The repository default size is 2048 MB and is too small to support the desired retention
time. Resize the repository to 2212 MB.
[grid@host01 ~]$ oclumon manage -repos changerepossize 2212

The Cluster Health Monitor repository was successfully resized.The


new retention is 98160 seconds.

8. Resize the repository to 1500 MB. What happens?


[grid@host01 ~]$ oclumon manage -repos changerepossize 1500
Size less than default value of 2048 MB specified. Aborting resize
operation.
[grid@host01 ~]$
The resize operation is not permitted for values less than the default (minimum) size.
9. Resize the repository to its original size. Again, what do you observe?
n s e
[grid@host01 ~]$ oclumon manage -repos changerepossize 2048
li ce
l e
b be
Warning: Entire data in Cluster Health Monitor repository will
e r a
deleted.Do you want to continue(Yes/No)? Yes
a n sf
The Cluster Health Monitor repository was successfully n - tr resized.The
new retention is 90900 seconds.
a no
[grid@host01 ~]$
h a s eฺ
A resize operation that reduces the repository size causes all repository data to be dropped.
10. Exit all terminal windows opened up for this practice.

Practices for Lesson 11:
Making Applications Highly Available
Chapter 11

Copyright © 2015, Oracle and/or its affiliates. All rights reserved.

Practices for Lesson 11: Making Applications Highly Available


Chapter 11 - Page 1
Practices for Lesson 11: Overview
Practices Overview
In this practice, you will:
• Configure highly available application resources on Flex Cluster leaf nodes.

• Use Oracle Clusterware to protect the Apache application.


Practice 11-1: Configuring Highly Available Application Resources on
Flex Cluster Leaf Nodes
Overview
In this practice, you will create a series of highly available application resources running on one
of the Flex Cluster Leaf Nodes.

1. Establish a terminal session connected to host01 using the grid user. Configure the
environment with the oraenv script.
[vncuser@classroom_pc ~]$ ssh grid@host01
grid@host01's password:

[grid@host01 ~]$ . oraenv


ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid

[grid@host01 ~]$
2. Examine the cluster to identify the role for each node in the cluster.
[grid@host01 ~]$ crsctl get node role status -all
Node 'host01' active role is 'hub'
Node 'host02' active role is 'hub'
Node 'host03' active role is 'hub'
Node 'host04' active role is 'leaf'
Node 'host05' active role is 'leaf'
[grid@host01 ~]$
3. Examine the cluster to identify the currently defined server pools and server
allocations. Notice that currently the Hub Nodes are allocated to the orcldb server
pool, which contains the RAC database. Also, both Leaf Nodes are currently in the
Free pool.
[grid@host01 ~]$ crsctl status serverpool

NAME=Free
ACTIVE_SERVERS=host04 host05

NAME=Generic
ACTIVE_SERVERS=

NAME=bigpool
ACTIVE_SERVERS=

NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03

[grid@host01 ~]$
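The serverpool listing is a set of NAME/ACTIVE_SERVERS pairs, which makes it easy to post-process. A minimal sketch, run here against sample text (in practice you would pipe the `crsctl status serverpool` output in):

```shell
# Pair up NAME and ACTIVE_SERVERS lines from 'crsctl status serverpool' output.
awk -F= '
$1 == "NAME"           { pool = $2 }
$1 == "ACTIVE_SERVERS" { printf "%-16s -> %s\n", pool, ($2 == "" ? "(none)" : $2) }
' <<'EOF'
NAME=Free
ACTIVE_SERVERS=host04 host05

NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03
EOF
```

Pools with no allocated servers (an empty ACTIVE_SERVERS value) are printed as "(none)".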

4. Examine the cluster to identify the currently defined server categories. Note the existence of
the built-in category ora.leaf.category. All the Leaf Nodes in the cluster are implicitly
associated with this category.
[grid@host01 ~]$ crsctl status category
NAME=big
ACL=owner:grid:rwx,pgrp:oinstall:rwx,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=(MEMORY_SIZE > 8000)

NAME=ora.hub.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=

NAME=ora.leaf.category
n s e
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
i ce
ACTIVE_CSS_ROLE=leaf
b l el
EXPRESSION=
fer a
a n s
[grid@host01 ~]$
n - tr
5. Create a new server pool to support a series of highly available application resources on
one of the Flex Cluster Leaf Nodes.
[grid@host01 ~]$ srvctl add srvpool -serverpool my_app_pool -min 1 -max 1 -category "ora.leaf.category"

[grid@host01 ~]$
6. Reexamine the server pools. Notice that one of the leaf nodes has been allocated to the
newly created server pool (my_app_pool).
[grid@host01 ~]$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host05

NAME=Generic
ACTIVE_SERVERS=

NAME=bigpool
ACTIVE_SERVERS=

NAME=ora.my_app_pool
ACTIVE_SERVERS=host04

NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03

[grid@host01 ~]$
7. Navigate to the directory that contains the scripts for this practice.
[grid@host01 ~]$ cd /stage/GRID/labs/less_11

[grid@host01 less_11]$

8. Examine the provided action script (action.scr). You will use this script to create a series
of highly available application resources on a Flex Cluster Leaf Node. The application
resources will all perform the following set of simple tasks.
• When the application resource is started, it creates an empty file at
/tmp/<resource_name> on the node running the application resource.
• When the application resource is stopped or cleaned, the file at
/tmp/<resource_name> is deleted.
• When the application resource is checked, a test is performed to check the existence
of the file at /tmp/<resource_name>.
• In addition, regardless of the action performed, log entries are written to the CRSD
agent log file, which can be found at
/u01/app/12.1.0/grid/log/<hostname>/agent/crsd/scriptagent_grid/scriptagent_grid.log.
[grid@host01 less_11]$ cat action.scr
#!/bin/sh
TOUCH=/bin/touch
RM=/bin/rm
PATH_NAME=/tmp/$_CRS_NAME
#
# These messages go into the CRSD agent log file.
echo " ******* `date` ********** "
echo "Action script '$_CRS_ACTION_SCRIPT' for resource[$_CRS_NAME] called for action $1"
#
case "$1" in
'start')
echo "START entry point has been called.."
echo "Creating the file: $PATH_NAME"
$TOUCH $PATH_NAME
exit 0
;;

'stop')
echo "STOP entry point has been called.."
echo "Deleting the file: $PATH_NAME"
$RM $PATH_NAME
exit 0
;;

'check')
echo "CHECK entry point has been called.."
if [ -e $PATH_NAME ]; then
echo "Check -- SUCCESS"
exit 0
else
echo "Check -- FAILED"
exit 1
fi

;;

'clean')
echo "CLEAN entry point has been called.."
echo "Deleting the file: $PATH_NAME"
$RM -f $PATH_NAME

exit 0
;;

esac

[grid@host01 less_11]$
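The check logic in action.scr reduces to a file-existence test. The sequence below walks the same start/check/stop cycle with plain shell commands, using a scratch file with a made-up name rather than a registered resource:

```shell
# Mimic the action script's entry points for a hypothetical resource name.
PATH_NAME=$(mktemp -u /tmp/my_demo_res.XXXXXX)   # stand-in for /tmp/$_CRS_NAME

touch "$PATH_NAME"                               # 'start': create the marker file
[ -e "$PATH_NAME" ] && echo "Check -- SUCCESS"   # 'check': file present
rm -f "$PATH_NAME"                               # 'stop'/'clean': remove the file
[ -e "$PATH_NAME" ] || echo "Check -- FAILED"    # 'check' after stop: file gone
```

The exit status of the existence test is what the script agent uses to decide whether the resource is ONLINE (0) or OFFLINE (nonzero).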
9. Examine the file create_res.sh. This file contains the commands which you will next use
to create three application resources named my_dep_res1, my_dep_res2, and
my_resource. Notice that all three resources are associated with the my_app_pool
server pool. Notice also that my_resource has a mandatory (hard) dependency on
my_dep_res1, and an optional (soft) dependency on my_dep_res2. Finally notice that
my_resource also depends on the RAC database (ora.orcl.db). This illustrates how
you can unify the management of databases and applications within a Flex Cluster.
[grid@host01 less_11]$ cat create_res.sh
crsctl add resource my_dep_res1 -type cluster_resource -attr
"ACTION_SCRIPT=/stage/GRID/labs/less_11/action.scr,PLACEMENT=restricted,SERVER_POOLS=ora.my_app_pool"

crsctl add resource my_dep_res2 -type cluster_resource -attr
"ACTION_SCRIPT=/stage/GRID/labs/less_11/action.scr,PLACEMENT=restricted,SERVER_POOLS=ora.my_app_pool"

crsctl add resource my_resource -type cluster_resource -attr
"ACTION_SCRIPT=/stage/GRID/labs/less_11/action.scr,PLACEMENT=restricted,SERVER_POOLS=ora.my_app_pool,START_DEPENDENCIES='hard(my_dep_res1, global:uniform:ora.orcl.db) pullup:always(my_dep_res1, global:ora.orcl.db) weak(my_dep_res2)',STOP_DEPENDENCIES='hard(my_dep_res1, global:ora.orcl.db)'"
[grid@host01 less_11]$
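A START_DEPENDENCIES value packs several modifier clauses into one attribute string, which can be hard to read. The sketch below simply pulls the clauses apart with grep; the string is typed in directly for illustration, and the exact spacing may differ from what crsctl stores:

```shell
# Split a START_DEPENDENCIES-style attribute value into individual clauses.
deps="hard(my_dep_res1, global:uniform:ora.orcl.db) pullup:always(my_dep_res1, global:ora.orcl.db) weak(my_dep_res2)"

# Each clause has the shape <modifier>(<targets>); print one per line.
echo "$deps" | grep -oE '[a-z:]+\([^)]*\)'
# Output:
#   hard(my_dep_res1, global:uniform:ora.orcl.db)
#   pullup:always(my_dep_res1, global:ora.orcl.db)
#   weak(my_dep_res2)
```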
10. Execute create_res.sh to create the application resources.
[grid@host01 less_11]$ ./create_res.sh
[grid@host01 less_11]$
11. Examine the newly created cluster resources. Note that currently the resources exist but
have not been started.
[grid@host01 less_11]$ crsctl status resource -t -w "NAME co my"
--------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
my_dep_res1
1 OFFLINE OFFLINE STABLE

my_dep_res2
1 OFFLINE OFFLINE STABLE
my_resource
1 OFFLINE OFFLINE STABLE
--------------------------------------------------------------------
[grid@host01 less_11]$

12. Establish another terminal session connected to host01 as the oracle user. Configure
the oracle environment, setting the ORACLE_SID variable to orcl.
[vncuser@classroom_pc ~]$ ssh oracle@host01
oracle@host01's password:

[oracle@host01 ~]$ . oraenv

ORACLE_SID = [oracle] ? orcl
The Oracle base has been set to /u01/app/oracle

[oracle@host01 ~]$
13. Stop the orcl database so that you can exercise the dependency between my_resource
and your RAC database. Confirm the database has been stopped on all nodes.
[oracle@host01 ~]$ srvctl stop database -d orcl

[oracle@host01 ~]$ srvctl status database -d orcl
Instance orcl_1 is not running on node host03
Instance orcl_2 is not running on node host01
Instance orcl_3 is not running on node host02

[oracle@host01 ~]$
14. Back in your grid terminal session, start the my_resource application resource. Notice
that my_dep_res1 and my_dep_res2 are started automatically to fulfill the dependency
definitions, and that all three resources are started on the server associated with the
my_app_pool server pool. Note also that the orcl database is also started as defined in
the dependency definitions for the my_resource application resource. This illustrates how
a Flex Cluster can be used to manage a RAC database and associated application
resources.
[grid@host01 less_11]$ crsctl start resource my_resource
CRS-2672: Attempting to start 'my_dep_res1' on 'host04'
CRS-2672: Attempting to start 'my_dep_res2' on 'host04'
CRS-2672: Attempting to start 'ora.orcl.db' on 'host03'
CRS-2672: Attempting to start 'ora.orcl.db' on 'host01'
CRS-2672: Attempting to start 'ora.orcl.db' on 'host02'
CRS-2676: Start of 'my_dep_res1' on 'host04' succeeded
CRS-2676: Start of 'my_dep_res2' on 'host04' succeeded
CRS-2676: Start of 'ora.orcl.db' on 'host03' succeeded
CRS-2676: Start of 'ora.orcl.db' on 'host01' succeeded
CRS-2676: Start of 'ora.orcl.db' on 'host02' succeeded
CRS-2672: Attempting to start 'my_resource' on 'host04'
CRS-2676: Start of 'my_resource' on 'host04' succeeded
[grid@host01 less_11]$

15. Re-examine the application resources to confirm their status.
[grid@host01 less_11]$ crsctl status resource -t -w "NAME co my"
--------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------
Cluster Resources
Unauthorized reproduction or distribution prohibitedฺ Copyright© 2016, Oracle and/or its affiliatesฺ

--------------------------------------------------------------------
my_dep_res1
1 ONLINE ONLINE host04 STABLE
my_dep_res2
1 ONLINE ONLINE host04 STABLE
my_resource
1 ONLINE ONLINE host04 STABLE
--------------------------------------------------------------------

n s e
[grid@host01 less_11]$
16. Check the files under /tmp on the node running the application resources. You should
expect to see a series of empty files, each bearing the name of one of the application
resources.
[grid@host01 less_11]$ ssh host04 ls -la /tmp/my*
-rw-r--r-- 1 grid oinstall 0 Jun 12 06:54 /tmp/my_dep_res1
-rw-r--r-- 1 grid oinstall 0 Jun 12 06:54 /tmp/my_dep_res2
-rw-r--r-- 1 grid oinstall 0 Jun 12 06:57 /tmp/my_resource

[grid@host01 less_11]$
17. From the oracle terminal session, confirm that your RAC database (orcl) also started as
part of starting the my_resource application resource.
[oracle@host01 ~]$ srvctl status database -d orcl
Instance orcl_1 is running on node host03
Instance orcl_2 is running on node host01
Instance orcl_3 is running on node host02

[oracle@host01 ~]$
18. Switch user (su) to root and shut down Oracle Clusterware on the node hosting the
application resources. Don't forget to set the Grid environment for root.
[grid@host01 less_11]$ su -
Password:
[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# crsctl stop cluster -n host04


CRS-2673: Attempting to stop 'ora.crsd' on 'host04'
CRS-2790: Starting shutdown of Cluster Ready Services-managed
resources on 'host04'
CRS-2673: Attempting to stop 'my_resource' on 'host04'
CRS-2673: Attempting to stop 'my_dep_res2' on 'host04'
CRS-2677: Stop of 'my_dep_res2' on 'host04' succeeded
CRS-2677: Stop of 'my_resource' on 'host04' succeeded

CRS-2673: Attempting to stop 'my_dep_res1' on 'host04'
CRS-2677: Stop of 'my_dep_res1' on 'host04' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on
'host04' has completed
CRS-2677: Stop of 'ora.crsd' on 'host04' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on
'host04'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host04'
CRS-2673: Attempting to stop 'ora.evmd' on 'host04'
CRS-2673: Attempting to stop 'ora.storage' on 'host04'
CRS-2677: Stop of 'ora.storage' on 'host04' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'host04'
succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host04' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host04' succeeded
n s e
CRS-2673: Attempting to stop 'ora.cssd' on 'host04'
i ce
CRS-2677: Stop of 'ora.cssd' on 'host04' succeeded
b l el
[root@host01 ~]#
fer a
19. Wait a few moments, then examine the status of the server pools. You should see that the
previously unallocated Leaf Node has been moved to the my_app_pool server pool. Exit
the root account when finished.
[root@host01 ~]# crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=

NAME=bigpool
ACTIVE_SERVERS=

NAME=ora.my_app_pool
ACTIVE_SERVERS=host05

NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03

[root@host01 ~]# exit
logout

[grid@host01 less_11]$
20. Re-examine the status of the application resources. Notice that they have failed over
to the surviving Leaf Node.
[grid@host01 less_11]$ crsctl status resource -t -w "NAME co my"
--------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
my_dep_res1
1 ONLINE ONLINE host05 STABLE
my_dep_res2
1 ONLINE ONLINE host05 STABLE
my_resource
1 ONLINE ONLINE host05 STABLE
--------------------------------------------------------------------
[grid@host01 less_11]$
21. Re-examine the contents of the /tmp directory on both Leaf Nodes. Notice that the files
you saw in step 16 no longer exist and that there is a newer set of files on the other Leaf
Node.
[grid@host01 less_11]$ ssh host04 ls -la /tmp/my*
ls: cannot access /tmp/my*: No such file or directory

[grid@host01 less_11]$ ssh host05 ls -la /tmp/my*


-rw-r--r-- 1 grid oinstall 0 Jun 12 06:59 /tmp/my_dep_res1
n s e
ce
-rw-r--r-- 1 grid oinstall 0 Jun 12 06:59 /tmp/my_dep_res2
-rw-r--r-- 1 grid oinstall 0 Jun 12 06:59 /tmp/my_resource
eli
a b l
[grid@host01 less_11]$
22. Restart Oracle Clusterware as the root user on the inactive Leaf Node.
[grid@host01 less_11]$ su -
Password:

[root@host01 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid

[root@host01 ~]# crsctl start cluster -n host04
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host04'
CRS-2672: Attempting to start 'ora.evmd' on 'host04'
CRS-2676: Start of 'ora.cssdmonitor' on 'host04' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host04'
CRS-2672: Attempting to start 'ora.diskmon' on 'host04'
CRS-2676: Start of 'ora.diskmon' on 'host04' succeeded
CRS-2676: Start of 'ora.evmd' on 'host04' succeeded
CRS-2676: Start of 'ora.cssd' on 'host04' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on
'host04'
CRS-2672: Attempting to start 'ora.storage' on 'host04'
CRS-2672: Attempting to start 'ora.ctssd' on 'host04'
CRS-2676: Start of 'ora.storage' on 'host04' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'host04'
succeeded
CRS-2676: Start of 'ora.ctssd' on 'host04' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host04'
CRS-2676: Start of 'ora.crsd' on 'host04' succeeded

[root@host01 ~]#

23. Check the server pool status to see which pool the server you just started the
Clusterware stack on currently belongs to.
[root@host01 ~]# crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host04

NAME=Generic
ACTIVE_SERVERS=
NAME=bigpool
ACTIVE_SERVERS=

NAME=ora.my_app_pool
ACTIVE_SERVERS=host05

NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03

[root@host01 ~]#
Congratulations! You have successfully configured highly available application resources
on Flex Cluster Leaf Nodes.
24. To finish, please exit the root account. Stop and delete the my_resource, my_dep_res1,
and my_dep_res2 resources. Delete the ora.my_app_pool server pool. Exit the terminal
session.
[root@host01 ~]# exit
logout

[grid@host01 less_11]$ crsctl stop resource my_resource
CRS-2673: Attempting to stop 'my_resource' on 'host05'
CRS-2677: Stop of 'my_resource' on 'host05' succeeded

[grid@host01 less_11]$ crsctl stop resource my_dep_res1
CRS-2673: Attempting to stop 'my_dep_res1' on 'host05'
CRS-2677: Stop of 'my_dep_res1' on 'host05' succeeded

[grid@host01 less_11]$ crsctl stop resource my_dep_res2
CRS-2673: Attempting to stop 'my_dep_res2' on 'host05'
CRS-2677: Stop of 'my_dep_res2' on 'host05' succeeded

[grid@host01 less_11]$ crsctl delete resource my_resource

[grid@host01 less_11]$ crsctl delete resource my_dep_res1

[grid@host01 less_11]$ crsctl delete resource my_dep_res2

[grid@host01 less_11]$ crsctl delete serverpool ora.my_app_pool

[grid@host01 less_11]$ exit


logout
Connection to host01 closed.

[vncuser@classroom_pc ~]$

Practice 11-2: Protecting the Apache Application
Overview
In this practice, you use Oracle Clusterware to protect the Apache application. To do this, you
create an application VIP for Apache (HTTPD), an action script, and a resource.

1. As the root user, verify that the Apache RPMs (httpd and httpd-tools) are installed
on host04 and host05.
[vncuser@classroom_pc ~]$ ssh root@host04
root@host04's password:
Last login: Thu Jun 11 19:03:00 2015 from 192.0.2.1

[root@host04 ~]# rpm -qa|grep httpd


httpd-2.2.15-15.0.1.el6_2.1.x86_64
httpd-tools-2.2.15-15.0.1.el6_2.1.x86_64

[root@host04 ~]# ssh host05 rpm -qa|grep httpd
httpd-2.2.15-15.0.1.el6_2.1.x86_64
httpd-tools-2.2.15-15.0.1.el6_2.1.x86_64

[root@host04 ~]#
2. As the root user, start the Apache application on host04 with the apachectl start
command:
[root@host04 ~]# apachectl start
[root@host04 ~]#

Open a browser on your desktop and access the Apache test page on host04. The HTTP
address would look something like this: http://host04.example.com

After you have determined that Apache is working properly, stop Apache using the
apachectl stop command.

[root@host04 ~]# apachectl stop

[root@host04 ~]#
3. As the root user, start the Apache application on host05 with the apachectl start
command.

[root@host04 ~]# ssh host05 apachectl start


[root@host04 ~]#
Open a browser on your desktop and access the Apache test page on host05. The HTTP
address would look something like this: http://host05.example.com

After you have determined that Apache is working properly, stop Apache using the
apachectl stop command.
[root@host04 ~]# ssh host05 apachectl stop
[root@host04 ~]#
4. Create an action script to control the application. This script must be accessible by all
nodes on which the application resource can be located. As the root user, create a script
on host04 called apache.scr in /usr/local/bin that will start, stop, check status, and
clean up if the application does not exit cleanly. Make sure that the host specified in the
WEBPAGECHECK variable is host04. Use the
/stage/GRID/labs/less_11/apache.tpl file as a template for creating the script.
Make the script executable and test the script.
[root@host04 ~]# cp /stage/GRID/labs/less_11/apache.tpl
/usr/local/bin/apache.scr
[root@host04 ~]# vi /usr/local/bin/apache.scr
#!/bin/bash

HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
Copyright © 2015, Oracle and/or its affiliates. All rights reserved.

Practices for Lesson 11: Making Applications Highly Available


Chapter 11 - Page 13
WEBPAGECHECK=http://host04.example.com:80/icons/apache_pb.gif
case $1 in
'start')
/usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
RET=$?
;;
'stop')
/usr/sbin/apachectl -k stop
RET=$?
;;
'clean')
/usr/sbin/apachectl -k stop
RET=$?
;;
'check')
n s e
/usr/bin/wget -q --delete-after $WEBPAGECHECK
i ce
RET=$?
b l el
;;
*) fer a
RET=0 a n s
;; n - tr
esac
a no
# 0: success; 1 : error
h a s eฺ
if [ $RET -eq 0 ]; then
m ) u id
ฺco ent G
exit 0
else i s
exit 1
s sas Stud
s@ this
fi
l e
Save the file
[root@host04 ~]# chmod 755 /usr/local/bin/apache.scr

[root@host04 ~]# apache.scr start

Verify web page

[root@host04 ~]# apache.scr stop

Refresh http://host04.example.com, Web page should no longer display
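The action protocol that apache.scr implements can be exercised without Apache at all. The stand-alone sketch below is not part of the course materials: do_action is a hypothetical stub that replaces the apachectl and wget calls with fixed return codes, purely to illustrate the contract the Clusterware script agent relies on — the action name arrives in $1, and an exit code of 0 means success while 1 means error.

```shell
# Hypothetical stand-in for apache.scr: same case-statement dispatch,
# with the real apachectl/wget calls replaced by fixed return codes.
do_action() {
  case $1 in
    'start'|'stop'|'clean') RET=0 ;;  # pretend the apachectl call worked
    'check')                RET=1 ;;  # pretend the wget page check failed
    *)                      RET=0 ;;
  esac
  # 0: success; 1: error -- mirrors the exit logic in apache.scr
  if [ $RET -eq 0 ]; then return 0; else return 1; fi
}

do_action start; echo "start -> $?"
do_action check; echo "check -> $?"
```

The same four entry points (start, stop, check, clean) are what crsctl exercises once the script is registered as a resource's ACTION_SCRIPT.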

5. Let’s repeat the actions in the previous step now on host05. As the root user, log into
host05 and create a script called apache.scr in /usr/local/bin that will start, stop,
check status, and clean up if the application does not exit cleanly. Make sure that the host
specified in the WEBPAGECHECK variable is host05. Use the
/stage/GRID/labs/less_11/apache.tpl file as a template for creating the script.
Make the script executable and test the script. When finished, exit host05, returning to
host04.
[root@host04 ~]# ssh host05

[root@host05 ~]# cp /stage/GRID/labs/less_11/apache.tpl


/usr/local/bin/apache.scr

[root@host05 ~]# vi /usr/local/bin/apache.scr


#!/bin/bash

HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://host05.example.com:80/icons/apache_pb.gif
case $1 in
'start')
/usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
RET=$?
;;
'stop')
/usr/sbin/apachectl -k stop
RET=$?
;;
'clean')
/usr/sbin/apachectl -k stop
RET=$?
;;
'check')
/usr/bin/wget -q --delete-after $WEBPAGECHECK
RET=$?
;;
*)
RET=0
;;
esac
# 0: success; 1 : error
if [ $RET -eq 0 ]; then
exit 0
else
exit 1
fi

Save the file

[root@host05 ~]# chmod 755 /usr/local/bin/apache.scr

[root@host05 ~]# apache.scr start

Verify web page
[root@host05 ~]# apache.scr stop

Refresh http://host05.example.com, Web page should no longer display


[root@host05 ~]# exit

[root@host04 ~]#
6. Next, validate the return code of a check failure using the new script. The Apache server
should NOT be running on either node. Run apache.scr check and immediately test the
return code by issuing an echo $? command. This must be run immediately after the
apache.scr check command because the shell variable $? holds the exit code of the
previous command run from the same shell. An unsuccessful check should return an exit
code of 1. You should do this on both nodes.
[root@host04 ~]# apache.scr check
n s e
i ce
[root@host04 ~]# echo $?
b l el
1
fer a
a n s
[root@host04 ~]# ssh host05
n - tr
[root@host05 ~]# apache.scr check
a no
h a s eฺ
[root@host05 ~]# echo $?
m ) u id
ฺco ent G
1
i s
sas Stud
[root@host05 ~]# exit
logout s is
Connection to host05
l e s@closed.
t h
r b use
o~]#
[root@host04
7. As the grid user, create a server pool for the resource called myApache_sp. This pool
contains host04 and host05 and is a child of the Free pool.
[root@host04 ~]# su - grid
[grid@host04 ~]$ /u01/app/12.1.0/grid/bin/crsctl add serverpool
myApache_sp -attr "PARENT_POOLS=Free, SERVER_NAMES=host04 host05"
[grid@host04 ~]$
8. Check the status of the new pool on your cluster. Exit the grid account when finished.
[grid@host04 ~]$ /u01/app/12.1.0/grid/bin/crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=host04 host05
NAME=Generic
ACTIVE_SERVERS=
NAME=bigpool
ACTIVE_SERVERS=
NAME=myApache_sp
ACTIVE_SERVERS=host04 host05

NAME=ora.orcldb
ACTIVE_SERVERS=host01 host02 host03
[grid@host04 ~]$ exit
logout

[root@host04 ~]#

9. Add the Apache resource, which will be called myApache, to the myApache_sp subpool.
This must be performed as root because the resource listens on the default privileged
port 80 and therefore requires root authority. Set CHECK_INTERVAL to 30,
RESTART_ATTEMPTS to 2, and PLACEMENT to restricted.
[root@host04 ~]# /u01/app/12.1.0/grid/bin/crsctl add resource
myApache -type cluster_resource -attr
"ACTION_SCRIPT=/usr/local/bin/apache.scr, PLACEMENT='restricted',
n s e
ce
SERVER_POOLS=myApache_sp, CHECK_INTERVAL='30', RESTART_ATTEMPTS='2'"
[root@host04 ~]#
eli
10. View the static attributes of the myApache resource.
[root@host04 ~]# /u01/app/12.1.0/grid/bin/crsctl status resource
myApache -f
NAME=myApache
TYPE=cluster_resource
STATE=OFFLINE
TARGET=OFFLINE
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIONS=
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=/usr/local/bin/apache.scr
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
ALERT_TEMPLATE=
ALIAS_NAME=
AUTO_START=restore
CARDINALITY=1
CARDINALITY_ID=0
CHECK_INTERVAL=30
CHECK_TIMEOUT=0
CLEAN_TIMEOUT=60
CREATION_SEED=243
DEFAULT_TEMPLATE=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
ID=myApache
INSTANCE_COUNT=1
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RELOCATE_BY_DEPENDENCY=1
RESTART_ATTEMPTS=2
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
SERVER_POOLS=myApache_sp
START_CONCURRENCY=0
n s e
START_DEPENDENCIES=
i ce
START_TIMEOUT=0
b l el
STATE_CHANGE_TEMPLATE=
STOP_CONCURRENCY=0 fer a
STOP_DEPENDENCIES= a n s
STOP_TIMEOUT=0 n - tr
UPTIME_THRESHOLD=1h
a no
USER_WORKLOAD=no
h a s eฺ
USE_STICKINESS=0
m ) u id
[root@host04 ~]#
11. Start the new resource. Confirm that the resource is online on one of the leaf nodes. If you
like, open a browser and point it to the node on which the myApache resource is placed.
[root@host04 ~]# /u01/app/12.1.0/grid/bin/crsctl start resource
myApache
CRS-2672: Attempting to start 'myApache' on 'host04'
CRS-2676: Start of 'myApache' on 'host04' succeeded

[root@host04 ~]# /u01/app/12.1.0/grid/bin/crsctl status resource
myApache

NAME=myApache
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on host04

[root@host04 ~]#

Open the indicated host in a browser to verify the test page is displayed:

http://host04.example.com

12. Confirm that Apache is NOT running on the other leaf node. The easiest way to do this is to
check for the running /usr/sbin/httpd -k start -f
/etc/httpd/conf/httpd.conf processes with the ps command. Check on the host
where the resource is currently placed first. Then check the node where it should NOT be
running.
[root@host04 ~]# ps -ef|grep -i "httpd -k"
root 16977 1 0 20:46 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf
apache 16978 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf
apache 16979 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf
apache 16980 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf
apache 16981 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf
apache 16982 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf
apache 16986 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf
n s e
apache 16987 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf
apache 16988 16977 0 20:46 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf
root 17081 16662 0 20:53 pts/0 00:00:00 grep -i httpd -k

[root@host04 ~]# ssh host05 ps -ef|grep -i "httpd -k"

[root@host04 ~]#
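One detail worth noting in the host04 output, though it is not part of the course steps: the final line is the grep command matching its own process, because in a ps -ef | grep pipeline the grep process usually appears in the listing. A common workaround is to bracket one character of the pattern so the grep command line no longer contains the literal text being searched for:

```shell
# "[h]ttpd -k" still matches any line containing "httpd -k", but the grep
# process's own command line shows "[h]ttpd -k", so grep does not match itself.
ps -ef | grep -i "[h]ttpd -k" > /dev/null
bracketed=$?
echo "bracketed grep exit code: $bracketed"  # 1 unless an httpd -k process is running
```

This is a convenience only; the course's plain pattern works too, as long as you mentally discount the grep line itself.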
13. Simulate a node failure on the node hosting the myApache resource using the reboot
command as root. Before issuing the reboot on the first node, open a terminal session on
the second node and, as the root user, execute the
/stage/GRID/labs/less_11/monitor.sh script so that you can monitor the failover.
( b
$ ssh root@host05
Last login: Thu May 30 21:08:47 2015 from 192.0.2.1

[root@host05 ~]#

Switch terminal windows to the node hosting the myApache resource

[root@host04 ~]# reboot

Broadcast message from root@host04


(/dev/pts/0) at 21:11 ...

The system is going down for reboot NOW!


[root@host04 ~]# Connection to host04 closed by remote host
Connection to host04 closed.

Switch terminal windows to the other leaf node

[root@host05 ~]# /stage/GRID/labs/less_11/monitor.sh

root 21940 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k


root 21948 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k
root 21951 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k
...
apache 22123 22117 0 11:01 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf

apache 22124 22117 0 11:01 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf
apache 22125 22117 0 11:01 ? 00:00:00 /usr/sbin/httpd -k
start -f /etc/httpd/conf/httpd.conf
...
Issue a Ctrl-C to stop the monitoring.
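The monitor.sh script itself is not reproduced in this guide. The sketch below is a hypothetical reconstruction of what such a polling loop might look like; it is bounded to three iterations here so it terminates on its own, whereas the real script evidently loops until interrupted with Ctrl-C.

```shell
#!/bin/bash
# Hypothetical monitor loop: repeatedly list Apache worker processes so a
# failover can be watched as the myApache resource starts on this node.
polls=0
for i in 1 2 3; do
  polls=$((polls + 1))
  echo "--- poll $i ---"
  # "[h]ttpd" keeps this grep from matching its own command line
  ps -ef | grep -i "[h]ttpd -k" || echo "no httpd processes yet"
  sleep 1
done
echo "finished after $polls polls"
```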

14. Verify that the myApache resource failed over from the first leaf node to the second. Open
the indicated host in a browser to verify the test page is displayed.
n s e
[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl stat resource
myApache -t
--------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
myApache
1 ONLINE ONLINE host05 STABLE
--------------------------------------------------------------------

Open the indicated host in a browser to verify the test page is displayed:

http://host05.example.com

[root@host05 ~]#
15. After host04 is back up, use the crsctl relocate resource command to move the
myApache resource back to the original server. Verify that the myApache resource has
been relocated back to the original node.
[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl relocate resource
myApache
CRS-2673: Attempting to stop 'myApache' on 'host05'
CRS-2677: Stop of 'myApache' on 'host05' succeeded
CRS-2672: Attempting to start 'myApache' on 'host04'
CRS-2676: Start of 'myApache' on 'host04' succeeded

[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl stat resource
myApache -t

--------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------
myApache
1 ONLINE ONLINE host04 STABLE

--------------------------------------------------------------------

[root@host05 ~]#

16. Now, stop the myApache resource. When stopped, delete the resource along with the
myApache_sp server pool. Exit the terminal session when finished.

[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl stop resource
myApache
CRS-2673: Attempting to stop 'myApache' on 'host04'
CRS-2677: Stop of 'myApache' on 'host04' succeeded

[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl delete resource
myApache

[root@host05 ~]# /u01/app/12.1.0/grid/bin/crsctl delete serverpool
myApache_sp

[root@host05 ~]#
17. Close all terminal windows opened for these practices.