How To Clone An 11.2.0.3 Grid Infrastructure Home and Clusterware
In this Document
Goal
Solution
Install Grid Infrastructure Clusterware + any required patches
Prepare the new cluster nodes
Run clone.pl on the Destination Node
Launch the Configuration Wizard
Applies to:
Oracle Grid Infrastructure 11.2.0.3 (and later) Clusterware installations on IBM: Linux on System z (SUSE and Red Hat).
Goal
This article describes how to clone an Oracle Grid Infrastructure home and use the cloned home to
create a cluster. You perform the cloning procedures by running scripts in silent mode. The cloning
procedures are applicable to 11.2.0.3 (and later) Clusterware installations on IBM: Linux on System z,
on both SUSE and Red Hat.
This document does not cover using Cloning to Add Nodes to a Cluster.
This article assumes that you are cloning an Oracle Clusterware 11g release 2 (11.2) installation
configured as follows:
Voting disk and Oracle Cluster Registry (OCR) are stored in Oracle Automatic Storage Management
(ASM)
Please note that a fresh installation of Grid Infrastructure is recommended and is always less
problematic than cloning, with the GUI-based Grid Infrastructure Oracle Universal Installer guiding you
through the configuration. However, silent installs can save DBAs time and effort when a large number
of nodes need to be configured.
Solution
Cloning is the process of copying an existing Oracle Clusterware installation to a different location and
then updating the copied installation to work in the new environment. Changes made by one-off
patches applied on the source Oracle Grid Infrastructure home are also present after cloning. During
cloning, you run a script that replays the actions that installed the Oracle Grid Infrastructure home.
Cloning requires that you start with a successfully installed Oracle Grid Infrastructure home. You use this
home as the basis for implementing a script that extends the Oracle Grid Infrastructure home to create a
cluster based on the original Grid home.
Advantages
Install once and deploy to many without the need for a GUI interface.
Cloning enables you to create an installation (copy of a production, test, or development installation)
with all patches applied to it in a single step. Once you have performed the base installation and applied
all patch sets and patches on the source system, cloning performs all of these individual steps as a single
procedure.
Installing Oracle Clusterware by cloning is a quick process: a few minutes to install the software, plus
the time to run the Configuration Wizard.
Cloning provides a guaranteed method of accurately repeating the same Oracle Clusterware installation
on multiple clusters.
Install Grid Infrastructure Clusterware + any required patches
Before copying the source Oracle Grid Infrastructure home, shut down all of the services, databases,
listeners, applications, Oracle Clusterware, and Oracle ASM instances that run on the node. Oracle
recommends that you use the Server Control (SRVCTL) utility to first shut down the databases, and then
the Oracle Clusterware Control (CRSCTL) utility to shut down the rest of the components.
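For example, the shutdown sequence on the source node might look like this (a sketch; the database
name "orcl" is a placeholder for your own database, and the Grid home path is the one used throughout
this article):
==================================================================
# As the database software owner, stop the database(s) with SRVCTL:
srvctl stop database -d orcl
# As root, stop Oracle Clusterware, ASM, and the remaining components with CRSCTL:
/u01/11.2.0/grid/bin/crsctl stop crs
==================================================================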
It is recommended that you create a “copy” of the source Grid Infrastructure home. This may appear to
be an unnecessary step, but by doing so we can delete unwanted (node-specific) files and logs, leaving
the original Grid Infrastructure home intact whilst ensuring that the “cloned” software is clean. If this is
going to be the “master” copy of the Grid Infrastructure software to be rolled out to many clusters, it is
worth taking a little time to do this.
As root user:
The next commands assume that our Grid Infrastructure source is “/u01/11.2.0/grid” and we are going
to use a copy path of /mnt/sware. First copy the source home, then remove the node-specific files
(where you see “host_name”, replace it with the hostname of your source server):
cp -prf /u01/11.2.0/grid /mnt/sware
cd /mnt/sware/grid
rm -rf host_name
rm -rf log/host_name
rm -rf gpnp/host_name
Finally, create a compressed archive of the cleaned copy:
cd /mnt/sware/grid
tar -zcvpf /mnt/sware/gridHome.tgz .
Prepare the new cluster nodes
This article does not go into specific details as to what is required. It is assumed that all nodes of the
new cluster have been set up with the correct kernel parameters, meet all networking requirements,
have all ASM devices configured, shared, and available, and that the Cluster Verification Utility (CVU)
has been run successfully to verify the OS and hardware setup.
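For reference, a typical CVU pre-installation check might look like the following (a sketch; run as the
Grid installation owner from a staged software location, with node1 and node2 as placeholder
hostnames):
==================================================================
# Verify OS and hardware prerequisites on all new cluster nodes
./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
==================================================================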
Create the same directory structure on each of the new nodes of the new cluster into which you will
restore the copy of the Grid Infrastructure Home. You should ensure that the permissions are correct for
both the new Grid Home and the oraInventory directory.
In the example below it is assumed that the Grid Infrastructure installation owner is “oracle” and the
Oracle Inventory group is “oinstall” - hence owner:group is “oracle:oinstall”
As root user:
mkdir -p /u01/11.2.0/grid
cd /u01/11.2.0/grid
tar -zxvf /mnt/sware/gridHome.tgz
mkdir -p /u01/oraInventory
chown oracle:oinstall /u01/oraInventory
chown -R oracle:oinstall /u01/11.2.0/grid
It is necessary to restore the setuid and setgid bits on certain binaries, so as root you should run the
appropriate chmod commands.
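The command list itself is missing from this copy of the article; the following is the set given in the
Oracle Clusterware cloning documentation for an 11.2 Grid home (adjust the path to your own Grid
home):
==================================================================
# As root - restore setuid/setgid on the Grid home binaries
chmod u+s /u01/11.2.0/grid/bin/oracle
chmod g+s /u01/11.2.0/grid/bin/oracle
chmod u+s /u01/11.2.0/grid/bin/extjob
chmod u+s /u01/11.2.0/grid/bin/jssu
chmod u+s /u01/11.2.0/grid/bin/oradism
==================================================================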
Just to clarify: at this point we are working on our new node; we have extracted the copied software,
ensured that all permissions are correct, and removed any unwanted files.
Run clone.pl on the Destination Node
We need to run clone.pl with the relevant parameters, e.g.:
Parameter: ORACLE_BASE=ORACLE_BASE
Description: The complete path to the Oracle base to be cloned. If you specify an invalid path, then the
script exits. This parameter is required.
$ cd /u01/11.2.0/grid/clone/bin
$ perl clone.pl -silent ORACLE_BASE=/u01/base ORACLE_HOME=/u01/11.2.0/grid \
    ORACLE_HOME_NAME=OraHome1Grid INVENTORY_LOCATION=/u01/oraInventory \
    -O'"CLUSTER_NODES={node1, node2}"' -O'"LOCAL_NODE=node1"' CRS=TRUE
Just to clarify the quoting in the command above for LOCAL_NODE and CLUSTER_NODES, e.g.
-O'"CLUSTER_NODES={node1, node2}"'
this is a single quote followed by a double quote after the -O, and a double quote followed by a single
quote at the end. The shell strips the outer single quotes, so the value the installer receives still carries
the inner double quotes.
The clone command needs to be run on each node of the new cluster. This command prepares the new
Grid Infrastructure Home for entry into the central inventory (/u01/oraInventory) and relinks the
binaries.
==================================================================
as root user:
/u01/oraInventory/orainstRoot.sh
/u01/11.2.0/grid/root.sh
==================================================================
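After orainstRoot.sh has run, the cloned home should be registered in the central inventory. A quick
sanity check (a sketch; the home name matches the ORACLE_HOME_NAME used above):
==================================================================
# The cloned home should appear as a HOME entry in the central inventory
grep OraHome1Grid /u01/oraInventory/ContentsXML/inventory.xml
==================================================================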
In practice, if there was a requirement to roll out Clusterware software on a large number of nodes, this
would be further automated by generating shell scripts which call clone.pl passing relevant parameters.
Here is an example:-
Filename start.sh
==================================================================
#!/bin/sh
export PATH=/u01/11.2.0/grid/bin:$PATH
# Determine the short hostname of the node this script is running on
export THIS_NODE=`/bin/hostname -s`
echo $THIS_NODE
ORACLE_BASE=/u01/base
GRID_HOME=/u01/11.2.0/grid
# Environment parameters passed to clone.pl
E01=ORACLE_BASE=${ORACLE_BASE}
E02=ORACLE_HOME=${GRID_HOME}
E03=ORACLE_HOME_NAME=OraGridHome1
E04=INVENTORY_LOCATION=/u01/oraInventory
# Options passed through to the installer via -O
C00=-O'"-debug"'
C01=-O"\"CLUSTER_NODES={strkf42,strkf43}\""
C02="-O\"LOCAL_NODE=$THIS_NODE\""
perl ${GRID_HOME}/clone/bin/clone.pl -silent $E01 $E02 $E03 $E04 $C00 $C01 $C02
==================================================================
Run ./start.sh on each node of the new cluster; on successful completion it will prompt you to run
orainstRoot.sh and root.sh on each node of your new cluster.
Launch the Configuration Wizard
It is now time to configure the new cluster. This can be done via the Configuration Wizard (a GUI
interface) or silently via a response file.
The Configuration Wizard helps you to prepare the new crsconfig_params file (which is copied across all
nodes of the cluster), prompts you to run the root.sh script (which calls the rootconfig script), and runs
cluster post-install verifications. You will need to have the list of public, private, and virtual IP
addresses, ASM devices, SCAN names, etc. This article assumes that you are familiar with these
requirements and does not go into further detail.
/u01/11.2.0/grid/crs/config/config.sh
The Configuration Wizard allows you to record a responseFile. The following is an example responseFile
generated by the 11.2.0.3 Configuration Wizard.
Filename config.rsp
==================================================================
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v11_2_0
INVENTORY_LOCATION=/u01/oraInventory
SELECTED_LANGUAGES=en
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/base
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=oinstall
oracle.install.crs.config.gpnp.scanName=strkf-scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.clusterName=strkf
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.clusterNodes=strkf42.us.oracle.com:strkf42-vp.us.oracle.com,strkf43.us.oracle.com:strkf43-vp.us.oracle.com
#-------------------------------------------------------------------------------
# The value should be a comma separated strings where each string is as shown below
# InterfaceName:SubnetMask:InterfaceType
# where InterfaceType can be either "1", "2", or "3"
# (1 indicates public, 2 indicates private, and 3 indicates the interface is not used)
#
# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.networkInterfaceList=eth0:130.xx.xx.0:1,eth1:10.xx.xx.0:2
oracle.install.crs.config.storageOption=ASM_STORAGE
oracle.install.asm.SYSASMPassword=Oracle_11
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.AUSize=8
oracle.install.asm.diskGroup.disks=/dev/mapper/lun01,/dev/mapper/lun02
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/mapper/lun0*
oracle.install.asm.monitorPassword=Oracle_11
oracle.install.asm.upgradeASM=false
[ConfigWizard]
oracle.install.asm.useExistingDiskGroup=false
==================================================================
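Putting this together, a silent invocation of the Configuration Wizard using the response file above
might look like the following (a sketch; the response-file path is a placeholder, and the flags used here
are explained in the note below):
==================================================================
/u01/11.2.0/grid/crs/config/config.sh -silent -responseFile /home/oracle/config.rsp \
    -ignorePrereq -ignoreSysPrereqs
==================================================================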
Note that -ignoreSysPrereqs and -ignorePrereq are required, or config.sh will fail due to an incorrectly
flagged missing rpm. In addition, the SYSASM/ASMSNMP passwords used in this example do not
conform to the Oracle recommended password standards, which produces the warnings shown below.
==================================================================
Example output:-
[WARNING] [INS-30011] The SYS password entered does not conform to the Oracle recommended
standards.
CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain
at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
ACTION: Provide a password that conforms to the Oracle recommended standards.
[WARNING] [INS-30011] The ASMSNMP password entered does not conform to the Oracle
recommended standards.
CAUSE: Oracle recommends that the password entered should be at least 8 characters in length, contain
at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
ACTION: Provide a password that conforms to the Oracle recommended standards.
At this point you can see that it is necessary to run root.sh (for a second time). The first time it was run
was during the clone.pl process; at that point, the $GRID_HOME/crs/crsconfig/rootconfig.sh file was
empty. Now that config.sh has been run, the root.sh and rootconfig.sh files will be populated.
root.sh takes a little time to run; you should ensure that it has completed successfully on the first node
before running root.sh on any other nodes.
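Once root.sh has completed successfully on every node, the state of the new cluster can be verified.
The following is a minimal check, assuming the Grid home path used throughout this article:
==================================================================
# Check that Clusterware is up on all nodes of the new cluster
/u01/11.2.0/grid/bin/crsctl check cluster -all
# List the cluster resources and their states
/u01/11.2.0/grid/bin/crsctl stat res -t
==================================================================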