
The CLUSTER_INTERCONNECTS parameter defines a private network and affects the choice of the network interface used by the Global Cache Service (GCS) and Global Enqueue Service (GES).

This parameter is mainly used for the following purposes:

1. Override the default interconnect network.

2. Increase bandwidth when a single network cannot satisfy the bandwidth requirements of a RAC database.

Setting CLUSTER_INTERCONNECTS overrides the information stored in the cluster registry, specifically:
1. The network classifications stored in the OCR, which can be viewed with the oifcfg command.
2. The interconnect that Oracle selects by default.

The default value of this parameter is empty. It can contain one or more IP addresses, separated by colons.
CLUSTER_INTERCONNECTS is a static parameter; when changing it, it must be modified for each instance:
alter system set cluster_interconnects = '192.168.100.2' scope=spfile sid = 'rac1';
alter system set cluster_interconnects = '192.168.100.3' scope=spfile sid = 'rac2';

You can specify multiple network interfaces as the interconnect:


alter system set cluster_interconnects = '192.168.100.2:192.168.101.2' scope=spfile sid = 'rac1';
alter system set cluster_interconnects = '192.168.100.3:192.168.101.3' scope=spfile sid = 'rac2';
But for the availability of the RAC as a whole, using HAIP or OS-level bonding is a better choice.
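If you later want to fall back to the default interconnect selection (HAIP or the network registered in the OCR), the parameter can simply be removed again. A minimal sketch for the two instances above; each instance must be restarted for the change to take effect:

alter system reset cluster_interconnects scope=spfile sid='rac1';
alter system reset cluster_interconnects scope=spfile sid='rac2';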

Set Cluster Interconnects in Oracle RAC


To set the cluster interconnects in the RAC:
Delete any references to the cluster_interconnect from the interfaces.
Before
host1$ oifcfg getif
ce0 192.168.1.0 global cluster_interconnect
ce4 10.0.102.0 global public

Delete cluster interconnects using oifcfg.


host1$ oifcfg delif -global ce0

After
host1$ oifcfg getif
ce4 10.0.102.0 global public
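If the interface classification ever needs to be restored, oifcfg setif can re-register it. A sketch using the values deleted above:

host1$ oifcfg setif -global ce0/192.168.1.0:cluster_interconnect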

The cluster_interconnects initialization parameter must then be set manually to override the default value taken from the OCR.
Before
SQL> select * from gv$cluster_interconnects;
INST_ID NAME IP_ADDRESS IS_ SOURCE
---------- --------------- ---------------- --- -------------------------------
1 ce0 192.168.1.50 NO Oracle Cluster Repository
2 ce0 192.168.1.51 NO Oracle Cluster Repository
Update the initialization parameters in both the ASM and the RAC database instances.
alter system set cluster_interconnects = '192.168.1.50' scope=spfile sid='RAC1' ;
alter system set cluster_interconnects = '192.168.1.51' scope=spfile sid='RAC2' ;
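The corresponding commands for the ASM instances would look like this; the instance names +ASM1 and +ASM2 are assumptions for illustration:

alter system set cluster_interconnects = '192.168.1.50' scope=spfile sid='+ASM1';
alter system set cluster_interconnects = '192.168.1.51' scope=spfile sid='+ASM2';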

After
SQL> select * from gv$cluster_interconnects;
INST_ID NAME IP_ADDRESS IS_ SOURCE
---------- --------------- ---------------- --- -------------------------------
1 ce0 192.168.1.50 NO cluster_interconnects parameter <== Source is changed
2 ce0 192.168.1.51 NO cluster_interconnects parameter <== Source is changed

Oracle CLUSTER_INTERCONNECTS Parameter


This note attempts to clarify the cluster_interconnects parameter and the platforms on which it has been implemented. A brief explanation of how the parameter works is also presented.
This is also one of the most frequently asked questions related to cluster and RAC installations on most sites, and it forms part of the prerequisites as well.

ORACLE 9I RAC – Parameter CLUSTER_INTERCONNECTS


———————————————–

FREQUENTLY ASKED QUESTIONS


————————–
November 2002

QUESTIONS & ANSWERS


——————-
1. What is the parameter CLUSTER_INTERCONNECTS for ?

Answer
——
This parameter is used to influence the selection of the network interface
for Global Cache Service (GCS) and Global Enqueue Service (GES) processing.

This note does not compare the other elements of 8i OPS with 9i RAC
because of substantial differences in the behaviour of both architectures.
Oracle9i RAC has certain optimizations which attempt to transfer most of
the required information via the interconnects so that the number of disk
reads is minimized. This behaviour, known as Cache Fusion phase 2, is summarised
in Note 139436.1.
The interconnect is by definition a private network which is used to transfer
the cluster traffic as well as Oracle resource directory information and
blocks to satisfy queries. The technical term for this is Cache Fusion.

CLUSTER_INTERCONNECTS should be used when

- you want to override the default network selection
- the bandwidth of a single interconnect does not meet the bandwidth requirements of
a Real Application Clusters database

The syntax of the parameter is:


CLUSTER_INTERCONNECTS = if1:if2:...:ifn
where each if is an IP address in standard dotted-decimal format, for example
144.25.16.214. Subsequent platform implementations may specify interconnects
with different syntaxes.
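As an illustration, a pfile (init.ora) entry for an instance with two private interfaces might look like this (the addresses are examples only):

CLUSTER_INTERCONNECTS = 144.25.16.214:144.25.17.214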

2. Is the parameter CLUSTER_INTERCONNECTS available for all platforms ?


Answer
——
This parameter is configurable on most platforms.
It cannot be used on Linux before 9.2.0.8 (see question 3 below).

The following matrix shows when the parameter was introduced on which platform:
Operating System   Available since
AIX                9.2.0
HP/UX              9.0.1
HP Tru64           9.0.1
HP OpenVMS         9.0.1
Sun Solaris        9.0.1

References
———-
Bug <2119403> ORACLE9I RAC ADMINISTRATION SAYS CLUSTER_INTERCONNECTS IS
SOLARIS ONLY.
Bug <2359300> ENHANCE CLUSTER_INTERCONNECTS TO WORK WITH 9I RAC ON IBM

3. How is the Interconnect recognized on Linux ?


Answer
——
Since Oracle9i 9.2.0.8, CLUSTER_INTERCONNECTS can be used to change the interconnect.
A patch is also available for 9.2.0.7 under Patch 4751660.
Before 9.2.0.8, the Oracle implementation for the interface selection reads the 'private hostname'
in the cmcfg.ora file and uses the corresponding IP address for the interconnect.
If no private hostname is available, the public hostname is used.

4. Where could I find information on this parameter ?


Answer
——

The parameter is documented in the following books:


Oracle9i Database Reference Release 2 (9.2)
Oracle9i Release 1 (9.0.1) New Features in Oracle9i Database Reference -
What’s New in Oracle9i Database Reference?
Oracle9i Real Application Clusters Administration Release 2 (9.2)
Oracle9i Real Application Clusters Deployment and Performance Release 2 (9.2)

Port-specific documentation may also contain information about the usage of
the cluster_interconnects parameter.

Documentation can be viewed on


http://tahiti.oracle.com
http://otn.oracle.com/documentation/content.html
References:
———–
Note 162725.1: OPS/RAC VMS: Using alternate TCP Interconnects on 8i OPS
and 9i RAC on OpenVMS

Note 151051.1: Init.ora Parameter “CLUSTER_INTERCONNECTS” Reference Note

5. How to detect which interconnect is used ?


The following commands show which interconnect is used for UDP or TCP:
sqlplus> connect / as sysdba
oradebug setmypid
oradebug ipc
exit

The corresponding trace file can be found in the user_dump_dest directory and, for
example, contains the following information in its last couple of lines:

SKGXPCTX: 0x32911a8 ctx


admno 0x12f7150d admport:
SSKGXPT 0x3291db8 flags SSKGXPT_READPENDING info for network 0
socket no 9 IP 172.16.193.1 UDP 43307
sflags SSKGXPT_WRITESSKGXPT_UP
info for network 1
socket no 0 IP 0.0.0.0 UDP 0
sflags SSKGXPT_DOWN
context timestamp 0x1ca5
no ports
Please note that on some platforms and versions (e.g. Oracle9i 9.2.0.1 on Windows)
you might see an ORA-70 when the oradebug ipc command has not been implemented.

When other protocols such as LLT, HMP or RDG are used, then the trace file will not
reveal an IP address.
6. Cluster_Interconnects is mentioned in the 9i RAC administration
guide as a Solaris specific parameter, is this the only platform
where this parameter is available ?

Answer
—–

The information that this parameter works on Solaris only is incorrect. Please
check the answer to question 2 for the complete list of platforms.

References:
———–
bug <2119403> ORACLE9I RAC ADMINISTRATION SAYS CLUSTER_INTERCONNECTS IS
SOLARIS ONLY.
7. Are there any side effects for this parameter, namely affecting normal
operations ?

Answer
—–
When you set CLUSTER_INTERCONNECTS in cluster configurations, the
interconnect high availability features are not available. In other words,
an interconnect failure that would normally be unnoticeable instead causes
an Oracle cluster failure, because Oracle keeps attempting to access the
network interface that has gone down. With this parameter you are explicitly
specifying the interface or list of interfaces to be used.
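Whether the override is in effect can be verified from the SOURCE column of GV$CLUSTER_INTERCONNECTS, as shown earlier in this document; once the parameter is set, the source changes from 'Oracle Cluster Repository' to 'cluster_interconnects parameter':

SQL> select inst_id, name, ip_address, source from gv$cluster_interconnects;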

8. Is the parameter OPS_INTERCONNECTS which was available in 8i similar to this parameter ?

Answer
——
Yes, the parameter OPS_INTERCONNECTS was used to influence the network selection
for the Oracle 8i Parallel Server.

Reference
———
Note <120650.1> Init.ora Parameter “OPS_INTERCONNECTS” Reference Note
9. Does Cluster_interconnect allow failover from one Interconnect to another
Interconnect ?

Answer
——
Failover capability is not implemented at the Oracle level. In general this
functionality is delivered by hardware and/or operating system software.
For platform details, please see the Oracle platform-specific documentation
and the operating system documentation.
10. Is the size of messages limited on the Interconnect ?

Answer
——
The message size depends on the protocol and platform.
UDP: In Oracle9i Release 2 (9.2.0.1) the message size for UDP was limited to 32K.
Oracle9i 9.2.0.2 allows bigger UDP message sizes, depending on the platform.
To increase throughput on an interconnect you have to adjust the UDP kernel
parameters (see the sketch after this list).
TCP: There is no need to set the message size for TCP.
RDG: The recommendations for RDG are documented in the
Oracle9i Administrator’s Reference – Part No. A97297-01.
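As an illustration of adjusting the UDP kernel parameters, on Linux the socket buffer limits can be raised with sysctl; the values below are examples only and must be sized for your platform and workload:

# raise the maximum socket receive and send buffer sizes (illustrative values)
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=4194304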
References
———-
Bug <2475236> RAC multiblock read performance issue using UDP IPC
11. How can you see which protocol is being used by the instances ?

Answer
——
Please see the alert file(s) of your RAC instances. During startup you will
find a message in the alert file that shows the protocol being used.

Wed Oct 30 05:28:55 2002


cluster interconnect IPC version:Oracle UDP/IP with Sun RSM disabled
IPC Vendor 1 proto 2 Version 1.0
12. Can the parameter CLUSTER_INTERCONNECTS be changed dynamically at runtime ?

Answer
——
No. CLUSTER_INTERCONNECTS is a static parameter and can only be set in the
spfile or pfile (init.ora).
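For example (a sketch; the exact message text may vary by version), attempting an in-memory change fails, while a deferred change through the spfile succeeds:

SQL> alter system set cluster_interconnects='192.168.100.2' scope=memory sid='rac1';
-- fails with ORA-02095: specified initialization parameter cannot be modified

SQL> alter system set cluster_interconnects='192.168.100.2' scope=spfile sid='rac1';
-- succeeds; takes effect at the next instance restart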

Demonstrating the influence of the CLUSTER_INTERCONNECTS parameter on instances through experiments

In an Oracle RAC environment, Cache Fusion between RAC instances normally uses the Clusterware
private heartbeat network. From version 11.2.0.2 onwards, HAIP technology is commonly used: it
increases bandwidth (up to 4 heartbeat networks) while also providing fault tolerance for the
heartbeat network. For example, if a RAC node server has 4 heartbeat networks and 3 of them fail
at the same time, neither Oracle RAC nor Clusterware goes down.

But when multiple databases are deployed in one RAC environment, the Cache Fusion activity of the
different database instances can interfere with each other: some databases may require more
bandwidth, others less. To prevent the heartbeat traffic of multiple databases in the same RAC
environment from affecting each other, Oracle provides the cluster_interconnects parameter at the
database level. This parameter overrides the default heartbeat network and makes the database
instance use the specified network for its Cache Fusion activity. However, the parameter is not
fault tolerant, as the following experiments illustrate:

Oracle RAC environment: 12.1.0.2.0 standard cluster on Oracle Linux 5.9 x64.

1. Network configuration

Node 1:
[root@rhel1 ~]# ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:50:56:A8:16:15 <<<< eth0: management network
inet addr:172.168.4.20 Bcast:172.168.4.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13701 errors:0 dropped:522 overruns:0 frame:0
TX packets:3852 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1122408 (1.0 MiB) TX bytes:468021 (457.0 KiB)

eth1 Link encap:Ethernet HWaddr 00:50:56:A8:25:6B <<<< eth1: public network
inet addr:10.168.4.20 Bcast:10.168.4.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:23074 errors:0 dropped:520 overruns:0 frame:0
TX packets:7779 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:15974971 (15.2 MiB) TX bytes:2980403 (2.8 MiB)

eth1:1 Link encap:Ethernet HWaddr 00:50:56:A8:25:6B


inet addr:10.168.4.22 Bcast:10.168.4.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth1:2 Link encap:Ethernet HWaddr 00:50:56:A8:25:6B


inet addr:10.168.4.24 Bcast:10.168.4.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth2 Link encap:Ethernet HWaddr 00:50:56:A8:21:0A <<<< eth2: heartbeat network, one of the Clusterware HAIP interfaces
inet addr:10.0.1.20 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:11322 errors:0 dropped:500 overruns:0 frame:0
TX packets:10279 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6765147 (6.4 MiB) TX bytes:5384321 (5.1 MiB)

eth2:1 Link encap:Ethernet HWaddr 00:50:56:A8:21:0A


inet addr:169.254.10.239 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth3 Link encap:Ethernet HWaddr 00:50:56:A8:F7:F7 <<<< eth3: heartbeat network, one of the Clusterware HAIP interfaces
inet addr:10.0.2.20 Bcast:10.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:347096 errors:0 dropped:500 overruns:0 frame:0
TX packets:306170 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:210885992 (201.1 MiB) TX bytes:173504069 (165.4 MiB)

eth3:1 Link encap:Ethernet HWaddr 00:50:56:A8:F7:F7


inet addr:169.254.245.28 Bcast:169.254.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth4 Link encap:Ethernet HWaddr 00:50:56:A8:DC:CC <<<< eth4~eth9: heartbeat networks, but not part of Clusterware HAIP
inet addr:10.0.3.20 Bcast:10.0.3.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7247 errors:0 dropped:478 overruns:0 frame:0
TX packets:6048 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3525191 (3.3 MiB) TX bytes:2754275 (2.6 MiB)

eth5 Link encap:Ethernet HWaddr 00:50:56:A8:A1:86


inet addr:10.0.4.20 Bcast:10.0.4.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:40028 errors:0 dropped:480 overruns:0 frame:0
TX packets:23700 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:15139172 (14.4 MiB) TX bytes:9318750 (8.8 MiB)

eth6 Link encap:Ethernet HWaddr 00:50:56:A8:F7:53


inet addr:10.0.5.20 Bcast:10.0.5.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13324 errors:0 dropped:470 overruns:0 frame:0
TX packets:128 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1075873 (1.0 MiB) TX bytes:16151 (15.7 KiB)
eth7 Link encap:Ethernet HWaddr 00:50:56:A8:E4:78
inet addr:10.0.6.20 Bcast:10.0.6.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13504 errors:0 dropped:457 overruns:0 frame:0
TX packets:120 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1158553 (1.1 MiB) TX bytes:14643 (14.2 KiB)

eth8 Link encap:Ethernet HWaddr 00:50:56:A8:C0:B0


inet addr:10.0.7.20 Bcast:10.0.7.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13272 errors:0 dropped:442 overruns:0 frame:0
TX packets:126 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1072609 (1.0 MiB) TX bytes:15999 (15.6 KiB)

eth9 Link encap:Ethernet HWaddr 00:50:56:A8:5E:F6


inet addr:10.0.8.20 Bcast:10.0.8.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:14316 errors:0 dropped:431 overruns:0 frame:0
TX packets:127 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1169023 (1.1 MiB) TX bytes:15293 (14.9 KiB)

Node 2:
[root@rhel2 ~]# ifconfig -a <<<< network configuration identical to node 1
eth0 Link encap:Ethernet HWaddr 00:50:56:A8:C2:66
inet addr:172.168.4.21 Bcast:172.168.4.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:19156 errors:0 dropped:530 overruns:0 frame:0
TX packets:278 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4628107 (4.4 MiB) TX bytes:37558 (36.6 KiB)

eth1 Link encap:Ethernet HWaddr 00:50:56:A8:18:1A


inet addr:10.168.4.21 Bcast:10.168.4.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:21732 errors:0 dropped:531 overruns:0 frame:0
TX packets:7918 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4110335 (3.9 MiB) TX bytes:14783715 (14.0 MiB)

eth1:2 Link encap:Ethernet HWaddr 00:50:56:A8:18:1A


inet addr:10.168.4.23 Bcast:10.168.4.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth2 Link encap:Ethernet HWaddr 00:50:56:A8:1B:DD


inet addr:10.0.1.21 Bcast:10.0.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:410244 errors:0 dropped:524 overruns:0 frame:0
TX packets:433865 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:206461212 (196.8 MiB) TX bytes:283858870 (270.7 MiB)

eth2:1 Link encap:Ethernet HWaddr 00:50:56:A8:1B:DD


inet addr:169.254.89.158 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth3 Link encap:Ethernet HWaddr 00:50:56:A8:2B:68


inet addr:10.0.2.21 Bcast:10.0.2.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:323060 errors:0 dropped:512 overruns:0 frame:0
TX packets:337911 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:176652414 (168.4 MiB) TX bytes:212347379 (202.5 MiB)

eth3:1 Link encap:Ethernet HWaddr 00:50:56:A8:2B:68


inet addr:169.254.151.103 Bcast:169.254.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

eth4 Link encap:Ethernet HWaddr 00:50:56:A8:81:DB


inet addr:10.0.3.21 Bcast:10.0.3.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:37308 errors:0 dropped:507 overruns:0 frame:0
TX packets:27565 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10836885 (10.3 MiB) TX bytes:14973305 (14.2 MiB)

eth5 Link encap:Ethernet HWaddr 00:50:56:A8:43:EA


inet addr:10.0.4.21 Bcast:10.0.4.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:38506 errors:0 dropped:496 overruns:0 frame:0
TX packets:27985 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10940661 (10.4 MiB) TX bytes:14859794 (14.1 MiB)

eth6 Link encap:Ethernet HWaddr 00:50:56:A8:84:76


inet addr:10.0.5.21 Bcast:10.0.5.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13653 errors:0 dropped:484 overruns:0 frame:0
TX packets:114 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1102617 (1.0 MiB) TX bytes:14161 (13.8 KiB)

eth7 Link encap:Ethernet HWaddr 00:50:56:A8:B6:4F


inet addr:10.0.6.21 Bcast:10.255.255.255 Mask:255.0.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13633 errors:0 dropped:474 overruns:0 frame:0
TX packets:115 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1101251 (1.0 MiB) TX bytes:14343 (14.0 KiB)

eth8 Link encap:Ethernet HWaddr 00:50:56:A8:97:62


inet addr:10.0.7.21 Bcast:10.0.7.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13633 errors:0 dropped:459 overruns:0 frame:0
TX packets:115 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1102065 (1.0 MiB) TX bytes:14343 (14.0 KiB)

eth9 Link encap:Ethernet HWaddr 00:50:56:A8:28:10


inet addr:10.0.8.21 Bcast:10.0.8.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13764 errors:0 dropped:446 overruns:0 frame:0
TX packets:115 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1159479 (1.1 MiB) TX bytes:14687 (14.3 KiB)

2. Current heartbeat network configuration of the cluster

[grid@rhel1 ~]$ oifcfg getif


eth1 10.168.4.0 global public
eth2 10.0.1.0 global cluster_interconnect
eth3 10.0.2.0 global cluster_interconnect

3. Before adjusting the cluster_interconnects parameter

SQL> show parameter cluster_interconnect

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
cluster_interconnects string

cluster_interconnects defaults to empty.

SQL> select * from v$cluster_interconnects;

NAME IP_ADDRESS IS_ SOURCE CON_ID


--------------- ---------------- --- ------------------------------- ----------
eth2:1 169.254.10.239 NO 0
eth3:1 169.254.245.28 NO 0

V$CLUSTER_INTERCONNECTS displays one or more interconnects that are being used for cluster
communication.

Querying v$cluster_interconnects shows that the current RAC environment uses HAIP. Note that the
addresses shown here are the HAIP addresses, not the addresses configured at the OS level; this
differs from the output shown later.
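The HAIP facility itself is a Clusterware resource and can also be inspected at the cluster level. A sketch, assuming the resource name used in 11.2/12c:

[grid@rhel1 ~]$ crsctl stat res ora.cluster_interconnect.haip -init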

4. Adjusting the cluster_interconnects parameter

We adjust the cluster_interconnects parameter to maximize heartbeat bandwidth, since we have 9 heartbeat networks on each machine:
SQL> alter system set cluster_interconnects="10.0.1.20:10.0.2.20:10.0.3.20:10.0.4.20:10.0.5.20:10.0.6.20:10.0.7.20:10.0.8.20:10.0.9.20" scope=spfile sid='orcl1';
<<<< Note: the IPs are separated by colons and enclosed in double quotation marks. Setting the
cluster_interconnects parameter overrides the Clusterware heartbeat network shown by the oifcfg
getif command, which is otherwise the default network for RAC heartbeat communication.

System altered.

SQL> alter system set cluster_interconnects="10.0.1.21:10.0.2.21:10.0.3.21:10.0.4.21:10.0.5.21:10.0.6.21:10.0.7.21:10.0.8.21:10.0.9.21" scope=spfile sid='orcl2';

System altered.

Restarting the database instances produced the following error:
[oracle@rhel1 ~]$ srvctl stop database -d orcl
[oracle@rhel1 ~]$ srvctl start database -d orcl
PRCR-1079 : Failed to start resource ora.orcl.db
CRS-5017: The resource action "ora.orcl.db start" encountered the following error:
ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:ip_list failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxpcini
ORA-27303: additional information: Too many IPs specified to SKGXP. Max supported is 4, given
9.
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/rhel2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.orcl.db' on 'rhel2' failed


CRS-5017: The resource action "ora.orcl.db start" encountered the following error:
ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:ip_list failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxpcini
ORA-27303: additional information: Too many IPs specified to SKGXP. Max supported is 4, given
9.
. For details refer to "(:CLSN00107:)" in
"/u01/app/grid/diag/crs/rhel1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.orcl.db' on 'rhel1' failed


CRS-2632: There are no more servers to try to place resource 'ora.orcl.db' on that would satisfy its
placement policy

It appears that even with cluster_interconnects, the number of network addresses cannot exceed 4,
which is consistent with HAIP.

Therefore, drop the last 5 IPs and keep the first 4 for the heartbeat networks:
node 1: 10.0.1.20:10.0.2.20:10.0.3.20:10.0.4.20
node 2: 10.0.1.21:10.0.2.21:10.0.3.21:10.0.4.21
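The corrected settings, keeping only the first 4 addresses, would therefore be:

SQL> alter system set cluster_interconnects="10.0.1.20:10.0.2.20:10.0.3.20:10.0.4.20" scope=spfile sid='orcl1';
SQL> alter system set cluster_interconnects="10.0.1.21:10.0.2.21:10.0.3.21:10.0.4.21" scope=spfile sid='orcl2';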

5. Testing the fault tolerance of the cluster_interconnects parameter

Now let's test the fault tolerance of cluster_interconnects:
SQL> set linesize 200
SQL> select * from v$cluster_interconnects;

NAME IP_ADDRESS IS_ SOURCE CON_ID


--------------- ---------------- --- ------------------------------- ----------
eth2 10.0.1.20 NO cluster_interconnects parameter 0
eth3 10.0.2.20 NO cluster_interconnects parameter 0
eth4 10.0.3.20 NO cluster_interconnects parameter 0
eth5 10.0.4.20 NO cluster_interconnects parameter 0

After restarting the instances, the RAC database now uses the 4 previously specified IPs for its
heartbeat networks.

Both RAC node instances are working properly:


[oracle@rhel1 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node rhel1
Instance orcl2 is running on node rhel2

Manually bring down one of node 1's heartbeat NICs:
[root@rhel1 ~]# ifdown eth4 <<<< this NIC is not one of the HAIP NICs

[oracle@rhel1 ~]$ srvctl status database -d orcl


Instance orcl1 is running on node rhel1
Instance orcl2 is running on node rhel2
According to the srvctl tool, the instances are still running.

Log in locally with sqlplus:


[oracle@rhel1 ~]$ sql

SQL*Plus: Release 12.1.0.2.0 Production on Tue Oct 20 18:11:35 2015

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected.
SQL>
This state is obviously not right.

Checking the alert log, the following errors were recorded:


2015-10-20 18:10:22.996000 +08:00
SKGXP: ospid 32107: network interface query failed for IP address 10.0.3.20.
SKGXP: [error 32607]
2015-10-20 18:10:31.600000 +08:00
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_qm03_453.trc (incident=29265)
(PDBNAME=CDB$ROOT):
ORA-00603: ORACLE server session terminated by fatal error
ORA-27501: IPC error creating a port
ORA-27300: OS system dependent operation:bind failed with status: 99
ORA-27301: OS failure message: Cannot assign requested address
ORA-27302: failure occurred at: sskgxpsock
Incident details in:
/u01/app/oracle/diag/rdbms/orcl/orcl1/incident/incdir_29265/orcl1_qm03_453_i29265.trc
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_cjq0_561.trc (incident=29297)
(PDBNAME=CDB$ROOT):
ORA-00603: ORACLE server session terminated by fatal error
ORA-27544: Failed to map memory region for export
ORA-27300: OS system dependent operation:bind failed with status: 99
ORA-27301: OS failure message: Cannot assign requested address
ORA-27302: failure occurred at: sskgxpsock
Incident details in:
/u01/app/oracle/diag/rdbms/orcl/orcl1/incident/incdir_29297/orcl1_cjq0_561_i29297.trc
2015-10-20 18:10:34.724000 +08:00
Dumping diagnostic data in directory=[cdmp_20151020181034], requested by (instance=1, osid=561
(CJQ0)), summary=[incident=29297].
2015-10-20 18:10:35.819000 +08:00
Dumping diagnostic data in directory=[cdmp_20151020181035], requested by (instance=1, osid=453
(QM03)), summary=[incident=29265].

From the log, the instance has not gone down; it is hung. Checking the database instance log of the
other node shows that the other RAC instance reported no errors and is unaffected.

Manually recover the NIC:


[root@rhel1 ~]# ifup eth4

The instance then returns to normal; at no point during the whole process did the instance go down.

Would bringing down a network port that does belong to HAIP affect the instance? Let's bring eth2 down:
[root@rhel1 ~]# ifdown eth2

The test shows that the instance again hangs, exactly as when a non-HAIP port was brought down, and
it returns to normal once the port is restored.

Summary: the tests show that regardless of whether the specified NIC is a HAIP NIC or not, setting
the cluster_interconnects parameter removes the fault tolerance of the heartbeat network. A problem
with any of the specified network ports will hang the instance until the port returns to normal, at
which point the instance recovers. In addition, the cluster_interconnects parameter supports at
most 4 IP addresses.
Although in a multi-database RAC environment the cluster_interconnects initialization parameter can
override the default Clusterware heartbeat network and isolate the heartbeat traffic of the
database instances from one another, the failure of any specified NIC will hang the instance, so
high availability is not guaranteed.
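Since the failure of any NIC in the list will hang the instance, a simple operational safeguard is to verify that every configured address is up before (re)starting an instance. A hypothetical pre-start check for node 1, using the addresses from the experiment above:

#!/bin/sh
# Warn if any address listed in cluster_interconnects is not configured on this node.
for ip in 10.0.1.20 10.0.2.20 10.0.3.20 10.0.4.20; do
    if ! ip -o addr show | grep -q "inet $ip/"; then
        echo "WARNING: interconnect address $ip is not up - the instance may hang"
    fi
done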
