Buffer Cache Hit Ratio: Useful DBA Monitoring Scripts


Useful DBA Monitoring Scripts


Buffer Cache Hit Ratio

select round((1-(pr.value/(bg.value+cg.value)))*100,2)
from v$sysstat pr, v$sysstat bg, v$sysstat cg
where pr.name='physical reads'
and bg.name='db block gets'
and cg.name='consistent gets'

Dictionary Cache Hit Ratio


select sum(gets-getmisses)*100/sum(gets) 
from v$rowcache

Sorts in Memory
select round((mem.value/(mem.value+dsk.value))*100,2)
from v$sysstat mem, v$sysstat dsk
where mem.name='sorts (memory)'
and dsk.name='sorts (disk)'

Shared Pool Free


select round((sum(decode(name,'free memory',bytes,0))/sum(bytes))*100,2)
from v$sgastat

Shared Pool Reloads


select round(sum(reloads)/sum(pins)*100,2)
from v$librarycache
where namespace in ('SQL AREA','TABLE/PROCEDURE','BODY','TRIGGER')

Library Cache Get Hit Ratio


The proportion of requests for a lock on an object which were satisfied by finding that object's handle already in memory. 
select round(sum(gethits)/sum(gets)*100,2)
from v$librarycache
Tuesday, December 14, 2010

Implementation of RMAN (Recovery Manager)


To get the full benefit of RMAN, we should use a recovery catalog.

Recovery Catalog

The recovery catalog is an optional repository of information about your target databases that RMAN uses and maintains. 

Physical location of the catalog

We should place the catalog database on a different server than the target database. If we fail to do this, we jeopardize backup and recovery options, because
we make it possible to lose both the catalog and the target database.

The catalog database should be created with the latest version of Oracle in your production environment. This helps to minimize compatibility issues and
complexity when you start to back up your target databases. 

Creating a catalog

The examples in this section provide details for creating a catalog database and registering a target database within the catalog. These examples assume that
your catalog database is on a different host than your target database.

Catalog database: Oracle 9.2.0.4 (GEK1) on gecko


Target database: Oracle 10.1.0.2 (AKI1) on akira

To create a recovery catalog follow these steps

1. Create a specific tablespace to hold the catalog objects.


2. Create a catalog schema.
3. Issue appropriate grants
4. Create the schema objects.
Oracle@akira:~> sqlplus system/manager@GEK1

CREATE TABLESPACE rman_cat
DATAFILE '/u01/oracle/db/GEK1/CAT/rman_cat_01.dbf'
SIZE 50M;

Now that we have a tablespace to store our schema objects, we can create the schema

CREATE USER rmancat


IDENTIFIED BY rmancat
DEFAULT TABLESPACE rman_cat
TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON rman_cat;

Before we create the catalog objects, we need to grant special privileges to the new schema. These privileges, granted through the
RECOVERY_CATALOG_OWNER role, let the schema manage its catalog objects.

GRANT RECOVERY_CATALOG_OWNER TO rmancat;


GRANT CREATE TYPE TO rmancat;

We can now create the catalog objects within our new schema. To perform this step, invoke RMAN, connect to the newly created catalog schema, and
issue the create catalog command. If we don't specify a tablespace with the create catalog command, the catalog objects are created in the default tablespace
assigned to the catalog owner.

Oracle@akira:~> rman catalog rmancat/rmancat@GEK1

Recovery Manager: Release 10.1.0.2.0 - Production


Copyright (c) 1995,2004, Oracle. All rights reserved.
connected to recovery catalog database
recovery catalog is not installed

RMAN> create catalog;


Recovery catalog created

RMAN> exit;

At this point we have an operational RMAN catalog.

Registering a target database

After creating a catalog, the next logical step is to register a target database. We won't be able to back up the target with the catalog unless the target database
is registered.

Invoke RMAN, connect to both the target and the catalog and issue the register database command.

Oracle@akira:~> rman target / catalog rmancat/rmancat@GEK1

RMAN> Register Database;

RMAN> Exit;

Configuring the RMAN Environment

Configure command

We can configure persistent settings in the RMAN environment. Each configuration setting is made once, and is then used by RMAN for all subsequent
operations.

To display the configured settings, type the command SHOW ALL:

RMAN> SHOW ALL;

There are various parameters that can be used to configure RMAN operations to suit our needs.

Some of the things that we can configure are

1. Required number of backups for each datafile.


2. Number of server processes that will do backup/restore operations in parallel.
3. Directory where on disk backups will be stored.
Etc.,
We can return any CONFIGURE command to its default setting by running the command with the CLEAR option, as shown below.
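For example, a minimal sketch (any CONFIGURE setting can be cleared the same way):

RMAN> CONFIGURE RETENTION POLICY CLEAR;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK CLEAR;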

$ rman target / catalog rmancat/rmancat@GEK1

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 3;

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT
'/u01/oracle/db/AKI1/bck/ora_df%t_s%s_p%p';

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO


'/u01/oracle/db/AKI1/bck/ora_cf%F';

RMAN> CONFIGURE BACKUP OPTIMIZATION ON;

RMAN> SHOW ALL;

Working with RMAN

There are a few things we need to have in place before instructing RMAN to connect to the target.

1. Appropriate target environment variables must be established.


2. We must have access to an O/S account or a schema that has SYSDBA privilege.

Before you connect to your target database, you must ensure that the standard Unix environment variables are established. These variables include
ORACLE_SID, ORACLE_HOME, PATH, NLS_LANG, and NLS_DATE_FORMAT. They govern the name of the instance, the path to the RMAN
executable, and the behavior of backup, restore, and reporting commands.

When using RMAN, NLS_LANG should be set to the character set that your database was created with. If you do not set NLS_LANG, you may encounter
problems when issuing BACKUP, RESTORE, and RECOVER commands.
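A minimal sketch for the AKI1 target (the ORACLE_HOME path and character set below are assumptions; use your own values):

$ export ORACLE_SID=AKI1
$ export ORACLE_HOME=/u01/oracle/product/10.1.0    # assumed install path
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1    # your database character set
$ export NLS_DATE_FORMAT='YYYY-MM-DD:HH24:MI:SS'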

Once you have the appropriate environment variables set, you then need access to an O/S account or a database schema that has SYSDBA privileges. You
must have access to the SYSDBA privilege before you can connect to the target database using RMAN. There are two methods of administering the
SYSDBA privilege: 

1. Locally via O/S authentication


2. Remotely via password file

For local connections, RMAN automatically connects you to the target database with SYSDBA privileges. 

Setting up a password file is the other method by which we can administer the SYSDBA privilege. There are two good reasons to use RMAN with a
password file.

1. Oracle has deprecated the use of CONNECT INTERNAL and Server Manager.
2. We may want to administer RMAN remotely through a network connection.

For example, if you're in an environment where you want to back up all of your target databases from one place and not have to log on to each host to back
up each database, you must do it via a network connection. To remotely administer RMAN through a network connection, you need to do the following:

• Create a password file 


• Enable remote logins for password file users 

Create a password file for Target

Create the password file as the Oracle software owner or as a member of the dba group:

$ cd $ORACLE_HOME/dbs
$ orapwd file=sidname password=password entries=n

There are three user-provided variables in this example:

1. sidname : The SID of the target instance


2. password : The password to be used when we connect as user SYS with the SYSDBA privilege.
3. n : The maximum number of schemas allowed in the password file.

Example

$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwAKI1 password=goofi entries=30

After we create a password file, we need to enable remote logins. To do this, set the instance’s REMOTE_LOGIN_PASSWORDFILE initialization
parameter to EXCLUSIVE.

Setting this parameter to EXCLUSIVE signifies that only one database can use the password file and that users other than SYS and INTERNAL can reside in it. We
can now use a network connection to connect to our target database as SYSDBA.
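As a minimal sketch, assuming the instance uses an spfile (with a plain init.ora, edit the file instead, since this is a static parameter):

SQL> ALTER SYSTEM SET remote_login_passwordfile=EXCLUSIVE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP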

To test the connection, try to connect from a PC to the remote database as SYS with SYSDBA privileges:

$ sqlplus "sys/goofi@AKI1 as sysdba"

Note that we have to create a password file only for the target database and not for the catalog. This is because when you connect to the target, you need to
connect as an account that has the SYSDBA privilege. When you connect remotely to a target database, the SYSDBA privilege is enabled through the
password file. This is unlike a connection to the catalog, for which SYSDBA is not required, because you log in as the owner of the catalog schema. 

When the SYSDBA privilege is granted to a specified user, the user can be queried in the V$PWFILE_USERS view.

SQL> GRANT SYSDBA TO rmancat;


SQL> select * from v$pwfile_users where username='RMANCAT';

Invoking the RMAN Executable

In order to use RMAN we have to invoke the executable. Once we have invoked it, we get an RMAN prompt, from which we can execute
RMAN commands.

The RMAN executable is located with all of the other Oracle executables, in the bin directory of the Oracle installation.

From the O/S command prompt, issue the command rman:

$ rman

Connecting to target with no catalog

O/S Authentication

$ rman target / nocatalog

We can use O/S authentication only from an O/S account on the database server

Password file authentication

client-pc> rman target sys/goofi@AKI1 nocatalog

Hiding the password

Connecting to the database after RMAN has been invoked prevents any password information from showing up in a process list.

$ rman nocatalog


RMAN> connect target sys/pwd@SID

Connecting to both Target and Catalog

If we are using a catalog, we will typically connect to the target and the catalog at the same time, because when we perform backup and
recovery operations both the target and the catalog need to be aware of our activities.

O/S authentication

$ rman target / catalog rmancat/rmancat@GEK1

This connects us to the target and catalog database at the same time. Alternatively we can invoke RMAN first and then issue connect commands for the
target and catalog separately.

$ rman
RMAN> connect catalog rmancat/rmancat@GEK1
RMAN> connect target /

Password Authentication

client-pc> rman target sys/goofi@AKI1 catalog rmancat/rmancat@GEK1


Wednesday, December 8, 2010
Move/rename datafiles in Oracle
Moving datafiles of a database: the datafiles reside under /home/oracle/ORACLE_HOME/database/OLD_LOCATION and have to go to
/home/oracle/ORACLE_HOME/database/NEW_LOCATION/.

SQL> select tablespace_name, substr(file_name,1,70) from dba_data_files;

TABLESPACE_NAME SUBSTR(FILE_NAME,1,70)
----------------------------------------------------------------------
SYSTEM /home/oracle/ORACLE_HOME/database/OLD_LOCATION/system.dbf
UNDO /home/oracle/ORACLE_HOME/database/OLD_LOCATION/undo.dbf
DATA /home/oracle/ORACLE_HOME/database/OLD_LOCATION/data.dbf

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo1.ora
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo2.ora
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo3.ora

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_1.ora
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_2.ora
/home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_3.ora

Now, as the files to be moved are known, the database can be shut down:

SQL> shutdown

The files can be copied to their destination:

$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/system.dbf /home/oracle/ORACLE_HOME/database/NEW_LOCATION/system.dbf
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/undo.dbf /home/oracle/ORACLE_HOME/database/NEW_LOCATION/undo.dbf
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/data.dbf /home/oracle/ORACLE_HOME/database/NEW_LOCATION/data.dbf

$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo1.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/redo1.ora
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo2.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/redo2.ora
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo3.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/redo3.ora

$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_1.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_1.ora
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_2.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_2.ora
$ cp /home/oracle/ORACLE_HOME/database/OLD_LOCATION/ctl_3.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_3.ora

The init.ora file is also copied because it references the control files. I name the copied file just init.ora because it is not in a standard place anymore and it
will have to be named explicitly anyway when the database is started up.

$ cp /home/oracle/ORACLE_HOME/dbs/initOLD.ora /home/oracle/ORACLE_HOME/database/NEW_LOCATION/init.ora

The new location for the control files must be written into the (copied) init.ora file:
/home/oracle/ORACLE_HOME/database/NEW_LOCATION/init.ora

control_files = (/home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_1.ora,
                        /home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_2.ora,
                       /home/oracle/ORACLE_HOME/database/NEW_LOCATION/ctl_3.ora)

$ sqlplus "/ as sysdba"

SQL> startup exclusive mount pfile=/home/oracle/ORACLE_HOME/database/NEW_LOCATION/init.ora


SQL> alter database rename file '/home/oracle/ORACLE_HOME/database/OLD_LOCATION/system.dbf' to
'/home/oracle/ORACLE_HOME/database/NEW_LOCATION/system.dbf';
SQL> alter database rename file '/home/oracle/ORACLE_HOME/database/OLD_LOCATION/undo.dbf' to
'/home/oracle/ORACLE_HOME/database/NEW_LOCATION/undo.dbf';
SQL> alter database rename file '/home/oracle/ORACLE_HOME/database/OLD_LOCATION/data.dbf' to
'/home/oracle/ORACLE_HOME/database/NEW_LOCATION/data.dbf';

SQL> alter database rename file '/home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo1.ora' to


'/home/oracle/ORACLE_HOME/database/NEW_LOCATION/redo1.ora';
SQL> alter database rename file '/home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo2.ora' to
'/home/oracle/ORACLE_HOME/database/NEW_LOCATION/redo2.ora';
SQL> alter database rename file '/home/oracle/ORACLE_HOME/database/OLD_LOCATION/redo3.ora' to
'/home/oracle/ORACLE_HOME/database/NEW_LOCATION/redo3.ora';

SQL> shutdown

SQL> startup pfile=/home/oracle/ORACLE_HOME/database/NEW_LOCATION/init.ora

Thanks :) 

Please share your really important comments and views for improvement.
Tuesday, August 31, 2010

Why is excessive redo generated during an Online/Hot Backup in Oracle


There is no excessive redo generated; rather, additional information is logged into the online redo log during a hot backup the first time a block is
modified in a tablespace that is in hot backup mode.

In hot backup mode only 2 things are different:

1. The first time a block is changed in a datafile that is in hot backup mode, the ENTIRE BLOCK is written to the redo log files, not just the changed bytes. Normally only the changed bytes (a redo vector) are written. In hot backup mode, the entire block is logged the FIRST TIME. This is because you can get into a situation where the process copying the datafile and DBWR are working on the same block simultaneously.

2. The datafile headers, which contain the SCN of the last completed checkpoint, are NOT updated while a file is in hot backup mode. This lets the recovery process understand which archived redo log files might be needed to fully recover this file.

To limit the effect of this additional logging, you should ensure you place only one tablespace at a time in backup mode and bring the
tablespace out of backup mode as soon as you have backed it up. This reduces the number of blocks that may have to be logged to the
minimum possible.
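A rough way to see the extra logging for yourself, on a quiet test instance (a sketch; the USERS tablespace and the scott.emp table are assumptions):

SQL> select value from v$sysstat where name = 'redo size';
SQL> alter tablespace users begin backup;
SQL> update scott.emp set sal = sal;   -- first change to each block logs the whole block
SQL> commit;
SQL> select value from v$sysstat where name = 'redo size';   -- compare the delta with and without backup mode
SQL> alter tablespace users end backup;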

Please share your really important comments and views for improvement. Thanks :)
Sunday, August 29, 2010
Step by Step MMR Replication Setup

Oracle 8i  Replication Step By Step


1. PRE-REPLICATION ENVIRONMENT SETUP

BACKUP PRIMARY/SECONDARY SITE DB

Back up all the DB links and synonyms for the SECONDARY environment.

Shut down the existing SECONDARY database normally.


[ ] Shutdown

Take the cold backup from the existing SECONDARY Database Server (O/s Level).

Clean up the SECONDARY database environment.

(Remove all the SECONDARY DB from different mount point for refresh new DB from PRIMARY)

Cross-check that the mount points on the SECONDARY database server have sufficient space.

Take a controlfile backup on the existing PRIMARY database as follows:

Alter database backup controlfile to trace;

(This will create a backup script of the controlfile at the destination user_dump_dest, with a .trc extension; kindly find the file with the
latest timestamp and rename it 'create_control_file.sql'.)

Note: This file has information about PRIMARY Environment.
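For example (a sketch; the udump path below is an assumption, so check the value of user_dump_dest first):

$ cd /u01/oracle/admin/PRIMARY/udump      # value of user_dump_dest
$ cp $(ls -t *.trc | head -1) create_control_file.sql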

Take all the DB Links and synonyms Backup.

List all the datafiles:

Select file_name from dba_data_files; OR select name from v$datafile;

Shut down the existing PRIMARY database normally.


  
            [ ] Shutdown

Take a Cold Backup of PRIMARY Database.

2. REPLICATION ENVIRONMENT SETUP

DB Refresh & Replication Setup, Step by Step:

Restore the backup to the new database server to the relevant mount points.

Copy the backup controlfile to the new server $ORACLE_HOME/dbs directory.

     Note: copy the redo.log files and multiplex the files in different mount points.

Copy the pfile from the source PRIMARY database server to the new server.

Change the paths of:

- Controlfile location. (The control files must be multiplexed.)


- user_dump_dest.
- core_dump_dest.
- background_dump_dest.

Log in to the new PRIMARY database server as the SYS user:

$ sqlplus '/ as sysdba'

      Note: this connects to the idle Oracle instance.

Start the database in nomount mode.

[ ] startup nomount pfile='pfile_path/pfilename'   (this sets the parameters specified in the parameter file)
i.e. $ cd $ORACLE_HOME/dbs/

Create the new control file.

Run the script 'create_control_file.sql' that was created earlier:

[ ] @create_control_file.sql;

Note: Make changes according to the SECONDARY environment, i.e. change the datafile paths as below.

Bring the database into mount state:

Alter database mount;

Bring the database into open state:

Alter database open resetlogs;

To check that the database is in open state:

[ ] select name,open_mode From v$database;

To check that the database was refreshed properly:

[ ] select *  from V$RECOVER_FILE;

Configure the following steps in each environment (PRIMARY & SECONDARY):

1.      Connect to the SECONDARY QA DB as sysdba as follows

$ ORACLE_SID=SECONDARY
$ export ORACLE_SID
$ sqlplus '/ as sysdba'

SQL*Plus: Release 8.1.7.0.0 - Production on Tue Aug 10 07:16:17 2010

(c) Copyright 2000 Oracle Corporation.  All rights reserved.

Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
Jserver Release 8.1.7.4.0 - Production

SQL>

2.      Create replication administrator / propagator / receiver

CREATE USER REPADMIN


             IDENTIFIED BY REPADMIN
             DEFAULT TABLESPACE USERS
             TEMPORARY TABLESPACE TEMP
             PROFILE DEFAULT
             ACCOUNT UNLOCK;
GRANT CONNECT TO REPADMIN;
  GRANT DBA TO REPADMIN;
  GRANT RESOURCE TO REPADMIN;
  GRANT SELECT_CATALOG_ROLE TO REPADMIN;
  ALTER USER REPADMIN DEFAULT ROLE ALL;
  -- 45 System Privileges for REPADMIN
  GRANT ALTER ANY CLUSTER TO REPADMIN;
  GRANT ALTER ANY INDEX TO REPADMIN;
  GRANT ALTER ANY PROCEDURE TO REPADMIN;
  GRANT ALTER ANY SEQUENCE TO REPADMIN;
  GRANT ALTER ANY SNAPSHOT TO REPADMIN;
  GRANT ALTER ANY TABLE TO REPADMIN;
  GRANT ALTER ANY TRIGGER TO REPADMIN;
  GRANT ALTER SESSION TO REPADMIN;
  GRANT COMMENT ANY TABLE TO REPADMIN;

3.      Grant privs to the propagator, to propagate changes to remote sites


 
               EXECUTE Dbms_Defer_Sys.Register_Propagator(username=>'REPADMIN');
 
4.      Grant privs to the receiver to apply deferred transactions
              
               GRANT EXECUTE ANY PROCEDURE TO repadmin;
5.      Authorize the administrator to administer replication groups and schemas
              
               EXECUTE Dbms_Repcat_Admin.Grant_Admin_Any_Repgroup('REPADMIN');
               EXECUTE Dbms_Repcat_Admin.Grant_Admin_Any_Schema (username => 'REPADMIN');
 
6.      Authorize the administrator to lock and comment tables

               GRANT LOCK ANY TABLE TO repadmin;


               GRANT COMMENT ANY TABLE TO repadmin;

7.       Connect to the replication administrator


              
               CONNECT repadmin/repadmin
 
8.      Create private db links for all repadmin users

Create database link "IAS.QA.SECONDARY.COM"
Using '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(COMMUNITY=TCP)(PROTOCOL=TCP)(Host=10.277.93.169)(Port=1521)))
(CONNECT_DATA=(SID=SECONDARY)))';

Create database link " IAS.QA.PRIMARY.COM"
Using '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(COMMUNITY=TCP)(PROTOCOL=TCP)(Host=10.237.93.141)(Port=1521)))
(CONNECT_DATA=(SID=SECONDARY)))';

9.      Schedule job to push transactions to QA master sites with appropriate intervals

EXECUTE Dbms_Defer_Sys.Schedule_Push(        -
        Destination   => 'IAS.QA.PRIMARY.COM',  -
        Interval      => 'sysdate+1/24/60',  -
        Next_date     => sysdate+1/24/60,    -
        Stop_on_error => FALSE,              -
        Delay_seconds => 0,                  -
        Parallelism   => 1);

10.  Schedule job to push transactions to INT master sites with appropriate intervals

EXECUTE Dbms_Defer_Sys.Schedule_Push(        -
        Destination   => 'IAS.QA.SECONDARY.COM',  -
        Interval      => 'sysdate+1/24/60',  -
        Next_date     => sysdate+1/24/60,    -
        Stop_on_error => FALSE,              -
        Delay_seconds => 0,                  -
        Parallelism   => 1);

11.  Schedule job to delete successfully replicated transactions

EXECUTE Dbms_Defer_Sys.Schedule_Purge(       -
        Next_date     => sysdate+1/24,       -
        Interval      => 'sysdate+1/24');

CONNECT repadmin/repadmin

12.  Create replication group for QA site

EXECUTE Dbms_Repcat.Create_Master_Repgroup('REPLICATION');

13.  Add master destination sites

EXECUTE Dbms_Repcat.Add_Master_Database('REPLICATION', 'IAS.QA.SECONDARY.COM');

14.   Wait until the REPLICATION group appears in the DBA_REPSITES view

SELECT * FROM dba_repsites WHERE gname = 'REPLICATION';

15.  Register objects within the group

BEGIN
DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
sname => 'TEST',
oname => 'NEW_CHECKPOINT',
type => 'TABLE',
min_communication => TRUE);
END;
/

BEGIN
DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT (
sname => 'TEST',
oname => 'NEW_CORRELATION',
type => 'TABLE',
min_communication => TRUE);
END;
/

1.2.2 Start/Stop Replication  (PRIMARY & SECONDARY)

Once replication support has been generated for all relevant objects, replication can be started or stopped as follows:

-- Start Replication
EXECUTE Dbms_Repcat.Resume_Master_Activity(gname => 'REPLICATION');

-- Stop Replication
EXECUTE Dbms_Repcat.Suspend_Master_Activity(gname => 'REPLICATION');

1.2.3 Check  Replication Status  (PRIMARY & SECONDARY)

Select gname, status from dba_repgroup where gname = 'REPLICATION';

FLASHBACK DATA ARCHIVE


To start with, this feature is not available in Oracle Standard, Personal, or Express Editions.
From Oracle 10g onwards, the flashback features mainly depend on undo segments (except flashback
database, which uses flashback logs). This limits how far back we can retrieve data, since it
depends on the undo data in the undo tablespace. To avoid this, Oracle has provided a new feature:
FLASHBACK DATA ARCHIVE.
It is a database object (that may be confusing, but yes, it is!) that holds historical data for
one or many tables. An additional advantage of Flashback Data Archive is that it also has a space
retention and purging policy.
For Flashback Data Archive, Oracle has the background process FBDA to deal with all flashback-related
work.
It appears as something like ora_fbda_padwsdpr, where PADWSDPR is the name of the instance to
which this process is attached.
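You can confirm the process is running at the OS level, for example:

$ ps -ef | grep ora_fbda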
Let's see how to create and use a flashback data archive.
To deal with flashback data archive, the user must have the FLASHBACK ARCHIVE ADMINISTER privilege.
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
SQL> select * from dba_sys_privs where privilege like '%FLASH%';

GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
SYS                            FLASHBACK ANY TABLE                      NO
DBA                            FLASHBACK ANY TABLE                      YES
SYS                            FLASHBACK ARCHIVE ADMINISTER             NO
DBA                            FLASHBACK ARCHIVE ADMINISTER             YES
MDSYS                          FLASHBACK ANY TABLE                      NO

SQL> grant flashback archive administer to Mohit_flash;

Grant succeeded.

Two main parameters used with the CREATE FLASHBACK ARCHIVE statement are retention and
quota. Retention specifies the maximum time for which flashback data is held, and quota limits
the space that will be used in the specified tablespace.

Once the quota is reached, Oracle will issue an out-of-space alert.


SQL> create flashback archive flash2
  2  tablespace users quota 1024M retention 5 year;

Flashback archive created.

Other options available when altering a flashback archive are:

alter flashback archive flash2 set default;

alter flashback archive flash2 add tablespace user2;

alter flashback archive flash2 modify tablespace user2;

Now let's see how to use this flashback data with tables.
SQL> create table test11 ( name varchar2(30), address varchar2(50))
  2  flashback archive flash2;

Table created.

An already-created table can be altered this way:


SQL> alter table test12 flashback archive flash2;

Table altered.

Checking the flashback data of the tables:


SQL> select * from dba_flashback_archive_tables;

TABLE_NAME        OWNER_NAME   FLASHBACK_ARCHIVE_NAME   ARCHIVE_TABLE_NAME    STATUS
----------------- ------------ ------------------------ --------------------- --------
TEST11            DBCHECK      FLASH2                   SYS_FBA_HIST_366239   ENABLED

SQL> select flashback_archive_name,retention_in_days from dba_flashback_archive;

FLASHBACK_ARCHIVE_NAME    RETENTION_IN_DAYS
------------------------- -----------------
FLASH2                                 1825

SQL>
Now that the table is created with a flashback archive, let's try to use it. Here I will update the table and
then try to read the before image of the updated column from the flashback data.
SQL> select * from test11;

NAME                           ADDRESS
------------------------------ --------------------------------------------------
MOHIT                            A
MOHIT                            A
MOHIT                            A
MOHIT                            A
MOHIT                            A

SQL>
SQL> !date
Wed Dec 21 05:14:17 EST 2011

SQL> !date
Wed Dec 21 05:15:35 EST 2011

SQL> update test11 set address='B' where name='MOHIT';

5 rows updated.

SQL> commit;

Commit complete.

SQL> select * from test11;

NAME                           ADDRESS
------------------------------ --------------------------------------------------
MOHIT                            B
MOHIT                            B
MOHIT                            B
MOHIT                            B
MOHIT                            B

The before image should now be available in the flashback data, so let's check:
SQL> select * from SYS_FBA_HIST_366239;

no rows selected

SQL> /

no rows selected.

It should be available, but unfortunately it is not yet! Here is why: the background process FBDA
wakes up at a system-determined interval (the default is 5 minutes) and only then copies the
corresponding undo data into the archive. So do not be surprised if changes are not reflected
immediately.

Let's try again:


SQL> !date
Wed Dec 21 05:45:59 EST 2011

SQL> select NAME,ADDRESS from SYS_FBA_HIST_366239;

NAME                           ADDRESS
------------------------------ --------------------------------------------------
MOHIT                            A
MOHIT                            A
MOHIT                            A
MOHIT                            A
MOHIT                            A

SQL> select * from test11;

NAME                           ADDRESS
------------------------------ --------------------------------------------------
MOHIT                            B
MOHIT                            B
MOHIT                            B
MOHIT                            B
MOHIT                            B

Now the data is available in the flashback archive. One more point: we cannot modify the
flashback data; it is allowed read-only access only, as the attempt below illustrates.
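A quick sketch of the restriction (the history table name comes from the example above; in 11.2 direct DML on it typically fails with an ORA-55622 error, though the exact message may vary by version):

SQL> delete from SYS_FBA_HIST_366239;   -- fails: history tables are read-only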

Now if needed, we can also purge the flashback data.


SQL> alter flashback archive FLASH2 purge all;

Flashback archive altered.


Other options are also available for purging, some of them being:
alter flashback archive flash1 purge before timestamp (systimestamp - interval '2'
day);

alter flashback archive flash1 purge before scn 123456; 

In the next section, we will take a short look at some limitations of this feature.

The following operations are not allowed on tables enabled for the flashback archive feature:

• ALTER TABLE statements that do any of the following:
        - Drop, rename, or modify a column
        - Perform partition or subpartition operations
        - Convert a LONG column to a LOB column
        - Include an UPGRADE TABLE clause, with or without an INCLUDING DATA clause
• DROP TABLE statements
• RENAME TABLE statements
• TRUNCATE TABLE statements

An attempt to drop a table that is enabled for flashback data archive:
SQL> drop table test11;
drop table test11
           *
ERROR at line 1:
ORA-55610: Invalid DDL statement on history-tracked table

Solution:

SQL> alter table test11 no flashback archive;

Table altered.

SQL> drop table test11;

Table dropped.

Finally, to drop the flashback data archive itself:


SQL> drop flashback archive FLASH2;
Flashback archive dropped.

SQL>

Also, please read the Oracle documentation for more information.

Thanks

Thursday, February 10, 2011

Basic differences between Data pump and Export/Import


If you have worked with databases prior to 10g, you are probably familiar with the exp/imp utilities of the Oracle database.
Oracle 10g introduces a new feature called Data Pump export and import. Data Pump export/import differs
from the original export/import. The differences are listed below.

1)Impdp/Expdp are self-tuning utilities. Tuning parameters that were used in original Export and Import, such
as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

2)Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.

3)Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance.

4)In Data Pump, expdp full=y and then impdp schemas=prod is the same as expdp schemas=prod and then
impdp full=y, whereas the original export/import does not always exhibit this behavior.

5)Expdp/Impdp access files on the server rather than on the client. 

6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

7)Sequential media, such as tapes and pipes, are not supported in Oracle Data Pump. But in the original
export/import we could directly compress the dump by using pipes.

8)The Data Pump method for moving data between different database versions is different than the method
used by original Export/Import.

9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates
an active constraint, the load is discontinued and no data is loaded. This is different from original Import,
which logs any rows that are in violation and continues with the load.

10)Expdp/Impdp consume more undo tablespace than original Export and Import. 

11)If a table has compression enabled, Data Pump Import attempts to compress the data being loaded.
The original Import utility, by contrast, loaded data in such a way that even if a table had compression enabled,
the data was not compressed upon import.

12)Data Pump supports character set conversion for both direct path and external tables. Most of the
restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump.
The one case in which character set conversions are not supported under the Data Pump is when using
transportable tablespaces.

13)There is no option to merge extents when you re-create tables. In original Import, this was provided by
the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target
table.
Differences between Data Pump impdp and import utility
The original import utility dates back to the earliest releases of Oracle, and it's quite slow and primitive
compared to Data Pump.  While the old import (imp) and Data Pump import (impdp) do the same thing, they
are completely different utilities, with different syntax and characteristics.  
Here are the major syntax differences between import and Data Pump impdp:
• Data Pump does not use the BUFFERS parameter
• Data Pump export represents the data in XML format
• A Data Pump schema import will recreate the user and execute all of the associated security privileges (grants, user password history).
• Data Pump's parallel processing feature is dynamic. You can connect to a Data Pump job that is currently running and dynamically alter the number of parallel processes.
• Data Pump will recreate the user, whereas the old imp utility required the DBA to create the user ID before importing.
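For comparison, a minimal pair of Data Pump commands (a sketch; the dpump_dir directory object, its path, and the SCOTT schema are assumptions):

SQL> create directory dpump_dir as '/u01/exports';
SQL> grant read, write on directory dpump_dir to system;
$ expdp system/manager schemas=SCOTT directory=DPUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log parallel=2
$ impdp system/manager schemas=SCOTT directory=DPUMP_DIR dumpfile=scott.dmp logfile=scott_imp.log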

Analyze vs. DBMS_STATS


The following is a quick overview of the two. 

Analyze
• The only method available for collecting statistics in Oracle 8.0 and lower.
• ANALYZE can only run serially.
• ANALYZE cannot overwrite or delete certain types of statistics that were generated by DBMS_STATS.
• ANALYZE calculates global statistics for partitioned tables and indexes instead of gathering them directly. This can lead to inaccuracies for some statistics, such as the number of distinct values.
  - For partitioned tables and indexes, ANALYZE gathers statistics for the individual partitions and then calculates the global statistics from the partition statistics.
  - For composite partitioning, ANALYZE gathers statistics for the subpartitions and then calculates the partition statistics and global statistics from the subpartition statistics.
• ANALYZE can gather additional information that is not used by the optimizer, such as information about chained rows and the structural integrity of indexes, tables, and clusters. DBMS_STATS does not gather this information.
• There is no easy way of knowing which tables or how much data within the tables has changed. The DBA would generally re-analyze all of their tables on a semi-regular basis.

DBMS_STATS
• Only available for Oracle 8i and higher.
• Statistics can be generated to a statistics table and can then be imported or exported between databases and re-loaded into the data dictionary at any time. This allows the DBA to experiment with various statistics.
• DBMS_STATS routines have the option to run via parallel query or operate serially.
• Can gather statistics for sub-partitions or partitions.
• Certain DDL commands (i.e. create index) automatically generate statistics, eliminating the need to generate statistics explicitly after the DDL command.
• DBMS_STATS does not generate information about chained rows or the structural integrity of segments.
• The DBA can set a particular table, a whole schema, or the entire database to be automatically monitored for modifications. When enabled, any change (insert, update, delete, direct load, truncate, etc.) that occurs on a table is tracked in the SGA. This information is incorporated into the data dictionary by the SMON process at a pre-set interval (every 3 hours in Oracle 8.1.x, and every 15 minutes in Oracle 9i). The information collected by this monitoring can be seen in the DBA_TAB_MODIFICATIONS view. Oracle 9i introduced a new function in the DBMS_STATS package called FLUSH_DATABASE_MONITORING_INFO. The DBA can use this function to flush the monitored table data more frequently. Oracle 9i also automatically calls this procedure prior to executing DBMS_STATS for statistics-gathering purposes. Note that this function is not included with Oracle 8i.
• DBMS_STATS provides a more efficient, scalable solution for statistics gathering and should be used over the traditional ANALYZE command, which does not support features such as parallelism and stale statistics collection.
• Use of table monitoring in conjunction with DBMS_STATS stale object statistics generation is highly recommended for environments with large, random and/or sporadic data changes. These features allow the database to more efficiently determine which tables should be re-analyzed, versus the DBA having to force statistics collection for all tables (including those that have not changed enough to merit a re-scan).
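A typical DBMS_STATS call, for contrast with the older ANALYZE command (a sketch; the SCOTT.EMP table is an assumption):

SQL> ALTER TABLE scott.emp MONITORING;
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE, degree => 4);
SQL> ANALYZE TABLE scott.emp COMPUTE STATISTICS;   -- the older, serial alternative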

Thursday, January 5, 2012

Oracle Real time Interview Questions with Answer


                     Oracle Real time questions
1)      How can you see the Current SCN number of the database?
Select current_scn from v$database;
2)      How can you see the Current log sequence number the logwriter is writing in to?
Select * from v$log;

3)      If you are given a database, how will you know how many datafiles each tablespace contains?
Select distinct tablespace_name,file_name from dba_data_files;

4). How will you know which temporary tablespace is allocated to which user?
Select temporary_tablespace from dba_users where username='SCOTT';

5) If you are given a database, how will you know whether it is locally managed or dictionary managed?
Select extent_management from dba_tablespaces where tablespace_name='USERS';

6) How will you list all the tablespaces and their status in a database?
Select tablespace_name,status from dba_tablespaces;

7) How will you find the system wide 1) default permanent tablespace, 2) default temporary tablespace 3) Database time zone?
Select property_name,property_value from database_properties where property_name like '%DEFAULT%';

8) How will you find the current users who are using temporary tablespace segments?
V$TEMPSEG_USAGE
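For example, a quick look at who is using temporary segments:

Select username, tablespace, segtype, blocks from v$tempseg_usage;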

9) How will you convert an existing dictionary managed permanent tablespace to temporary tablespace?
Not possible

10) Is media recovery required if a tablespace is taken offline immediate?


Not required

11) How will you convert dictionary managed tablespace to locally managed tablespace?
Exec dbms_space_admin.tablespace_migrate_to_local('TABLESPACE_NAME');

12) If you have given the command to make a tablespace offline normal, but it's not happening and it is in transactional read-only mode,
how will you find the transactions that are preventing the conversion?
By looking at the queries being run by those SIDs (you can get a script from the net). I suspect the question is not clear.

13) If you drop a tablespace containing 4 datafiles, how many datafiles will be dropped at a time by giving a single drop tablespace
command?

All datafiles

14) If the database is not in OMF, how will you drop all the datafiles of a tablespace without dropping the tablespace itself?
Alter database datafile 'PATH' offline drop;

15) How will you convert a locally managed tablespace to dictionary managed? What are the limitations?
Exec dbms_space_admin.tablespace_migrate_from_local('TABLESPACE_NAME');

The SYSTEM tablespace should be dictionary managed.

16) Which parameter defines the max number of datafile in database?


Db_files and MAXDATAFILES in control file

17) Can a single datafile be allocated to two tablespaces? Why?

No, because a datafile can belong to only one tablespace.

18) How will you check if a datafile is autoextensible?

Select autoextensible from dba_data_files where file_name='';

19) Write a command to make all datafiles of a tablespace offline without making the tablespace offline itself?
Alter database datafile 'PATH' offline normal;

20) In 10g, How to allocate more than one temporary tablespace as default temporary tablespace to a single user?
By using temporary tablespace group
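For example (a sketch; the group and tablespace names are assumptions):

Create temporary tablespace temp2 tempfile '/u01/oradata/temp02.dbf' size 100M tablespace group temp_grp;
Alter tablespace temp tablespace group temp_grp;
Alter user scott temporary tablespace temp_grp;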

21) What is the relation between db_files and maxdatafiles parameters?


Both will restrict no of datafiles in the database

22) Is it possible to make tempfiles as read only?


yes

23) What is the common column between dba_tablespaces and dba_datafiles?


Tablespace_name

24) Write a query to display the names of all dynamic performance views?
Select table_name from dictionary where table_name like 'V$%';

25) Name the script that needs to be executed to create the data dictionary views after database creation?
Catalog.sql
26) Grant to the user SCOTT the RESTRICTED SESSION privilege?
SQL> grant restricted session to scott;
Grant succeeded.

27) How are privileged users being authenticated on the database you are currently working on? Which initialization parameter
would give me this information?
Question not clear

28) Which dynamic performance view gives you information about all privileged users who have been granted sysdba or sysoper
roles? Query the view?
SQL> desc v$pwfile_users

29) What is the purpose of the DICTIONARY table?


To know data dictionary and dynamic performance view names

30) Write a query to display the file# and the status of all datafiles that are offline?
Select file#,status from v$datafile where status='OFFLINE';

31) Write the statement to display the size of the System Global Area (SGA)?
Show parameter sga
Or
Show sga

32) Obtain the information about the current database? What is its name and creation date?
Select name,created from v$database;

33) What is the size of the database buffer cache? Which two initialization Parameters are used to determine this value?

Db_cache_size or db_block_buffers

34) What value should the REMOTE_LOGIN_PASSWORDFILE take if you need to set up Operating System authentication?
exclusive

35)  Which initialization parameter holds this value? What does the shared pool comprise of?
Library cache and data dictionary cache.
Parameter : shared_pool_size

36) Which initialization parameter holds the name of the database?


Db_name

37) Which dynamic performance view displays information about the active transactions in the database? Which view returns
session related information?
V$transaction, v$session

38) Which dynamic performance view is useful for killing user sessions? Which columns of the view will you require to kill a user
session? Write the statement to kill any of the currently active sessions in your database?
V$session (SID, SERIAL#)
Alter system kill session 'SID,SERIAL#';

39) What is the difference between the ALTER SYSTEM and ALTER SESSION commands?
Changes performed using ALTER SYSTEM apply to the whole instance, either in memory or persistently in the database. ALTER SESSION
changes apply only to that session.

40) Write down the mandatory steps that a DBA would need to perform before the CREATE DATABASE command may be used to
create a database?
Create a pfile or spfile
Create password file
If windows, create instance using ORADIM utility

41) What does the script utlexcpt.sql create? What is this table used for?

It will create the EXCEPTIONS table, which is used to hold rows that violate a constraint when it is enabled with the EXCEPTIONS INTO clause.

42) In which Oracle subdirectory are all the SQL scripts such as catalog.sql/ catproc.sql /utlexcpt.sql etc...? Located?
$ORACLE_HOME/rdbms/admin/

43) Which dynamic performance view would you use to display the OPTIMAL size of the rollback segment RBS2. Write a query to
retrieve the OPTIMAL size and Rollback segment name?
V$ROLLSTAT (column OPTSIZE), joined to V$ROLLNAME for the segment name (many ready-made scripts are available on the net).

44) During a long-running transaction, you receive an error message indicating you have insufficient space in rollback segment RO4.
Which storage parameter would you modify to solve this problem?
MAXEXTENTS (or increase the extent size)

45) How would I start the database if only users with the RESTRICTED SESSION privilege need to access it?
Startup restrict

46) Which data dictionary view would you query to find out information about free extents in your database? Write a query to
display a count of the number of free extents in your database?
DBA_FREE_SPACE. For example: select count(*) from dba_free_space;

47) Write a query to display the tablespace name, datafile name and type of extent management (local or dictionary) from the data
dictionary?
You need to combine dba_data_files and dba_tablespaces

48) Which two types of tablespace cannot be taken offline or dropped?


SYSTEM and UNDO

49) When a tablespace is offline can it be made read only? Perform the
Required steps to confirm your answer?
No; a tablespace must be online before it can be made read only.

50) Which parameter specifies the percentage of space in each data block that is reserved for future updates?
PCTFREE
51) write down two reasons why automatic extent allocation for an extent may fail?
If the disk space reached max limit
If autoextend reached maxsize limit

52) Query the DBA_CONSTRAINTS view and display the names of all the constraints that are created on the CUSTOMER table?
Select constraint_name from dba_constraints where table_name='CUSTOMER';

53) Write a command to display the names of all BITMAP indexes created in the database?
Select index_name from dba_indexes where index_type='BITMAP';

54) Write a command to coalesce the extents of any index of your choice?
Alter tablespace <tablespace_name> coalesce;
Don't know for extents

55) What happens to a row that is bigger than a single block? What is this called? Which data dictionary view can be queried to
obtain information about such blocks?
The row will be chained into multiple blocks; this is called row chaining. CHAINED_ROWS is the view, as shown below.
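For example, after creating the CHAINED_ROWS table with the utlchain.sql script (the scott.emp table is an assumption):

Analyze table scott.emp list chained rows into chained_rows;
Select head_rowid from chained_rows where table_name = 'EMP';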

56) Write a query to retrieve the employee number and ROWIDs of all rows that belong to the EMP table belonging to user SCOTT?
Select rowid,empno from scott.emp;

57) During a long-running transaction, you receive an error message indicating you have insufficient space in rollback segment RO4.
Which storage parameter would you modify to solve this problem?
Repeated question

58) How to compile a view?  How to compile a table?


Alter view <view_name> compile;
Tables cannot be compiled

59) What is the block size of your database and how do you see it?
Db_block_size

60) At one time you lost the parameter file accidentally and you don't have any backup. How will you recreate a new parameter file with
the parameters set to the previous values?
We can recover it from the alert log file, which records the non-default parameter values used at startup.

61) You want to retain only the last 3 backups of datafiles. How do you go about it in RMAN?
By configuring the backup retention policy to redundancy 3

Tuesday, January 3, 2012

RMAN Backup and Recovery Scenarios

RMAN Backup and Recovery Scenarios


 
Complete Closed Database Recovery. System tablespace is missing

If the system tablespace is missing or corrupted the database cannot be started up so a complete closed database recovery must be
performed.
Pre requisites: A closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted system datafile to its original location, ie:
cp -p /usr/backup/RMAN/system01.dbf  /usr/oradata/u01/IASDB/system01.dbf
2. startup mount;
3. recover datafile 1;
4. alter database open;

Complete Open Database Recovery. Non system tablespace is missing

If a non system tablespace is missing or corrupted while the database is open, recovery can be performed while the database
remain open.
Pre requisites: A closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted datafile to its original location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u01/IASDB/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;

Complete Open Database Recovery (when the database is initially closed).Non system tablespace is missing

If a non system tablespace is missing or corrupted and the database crashed,recovery can be performed after the database is open.
Pre requisites: A closed or open database backup and archived logs.
1.   startup; (you will get ora-1157 ora-1110 and the name of the missing datafile, the database will remain mounted)
2.   Use OS commands to restore the missing or corrupted datafile to its original location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u01/IASDB/user01.dbf
3.   alter database datafile 6 offline; (tablespace cannot be used because the database is not open)
4.   alter database open;
5.   recover datafile 6;
6.   alter tablespace <tablespace_name> online;

Recovery of a Missing Datafile that has no backups (database is open).

If a non system datafile that was not backed up since the last backup is missing,recovery can be performed if all archived logs since
the creation of the missing datafile exist.
Pre requisites: All relevant archived logs.
1.   alter tablespace <tablespace_name> offline immediate;
2.   alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf';
3.   recover tablespace <tablespace_name>;
4.   alter tablespace <tablespace_name> online;
If the create datafile command needs to be executed to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf' as '/user/oradata/u02/IASDB/newdata01.dbf';

Restore and Recovery of a Datafile to a different location.

If a non system datafile is missing and its original location not available, restore can be made to a different location and recovery
performed.
Pre requisites: All relevant archived logs.
1.    Use OS commands to restore the missing or corrupted datafile to the new location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u02/IASDB/user01.dbf
2.    alter tablespace <tablespace_name> offline immediate;
3.    alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/IASDB/user01.dbf' to
'/user/oradata/u02/IASDB/user01.dbf';
4.    recover tablespace <tablespace_name>;
5.    alter tablespace <tablespace_name> online;

Control File Recovery

Always multiplex your controlfiles. Controlfiles are missing, database crash.


Pre requisites: A backup of your controlfile and all relevant archived logs.
1.    startup; (you get ora-205, missing controlfile, instance start but database is not mounted)
2.    Use OS commands to restore the missing controlfile to its original location:
cp -p /usr/backup/RMAN/control01.dbf /usr/oradata/u01/IASDB/control01.dbf
cp -p /usr/backup/RMAN/control02.dbf /usr/oradata/u01/IASDB/control02.dbf
3.    alter database mount;
4.    recover automatic database using backup controlfile;
5.    alter database open resetlogs;
6.    make a new complete backup, as the database is open in a new incarnation and previous archived log are not relevant.

Incomplete Recovery, Until Time/Sequence/Cancel

Incomplete recovery may be necessary when an archived log is missing, so recovery can only be made until the previous sequence,
or when an important object was dropped, and recovery needs to be made until just before the object was dropped.
Pre requisites: A closed or open database backup and archived logs, plus the time or sequence at which the 'until' recovery needs to be
performed.
1.  If the database is open, shutdown abort
2.  Use OS commands to restore all datafiles to its original locations:
cp -p /usr/backup/RMAN/u01/*.dbf /usr/oradata/u01/IASDB/
cp -p /usr/backup/RMAN/u02/*.dbf /usr/oradata/u01/IASDB/
cp -p /usr/backup/RMAN/u03/*.dbf /usr/oradata/u01/IASDB/
cp -p /usr/backup/RMAN/u04/*.dbf /usr/oradata/u01/IASDB/
etc…
3.  startup mount;
4.  recover automatic database until time '2004-03-31:14:40:45';
5.  alter database open resetlogs;
6.  make a new complete backup, as the database is open in a new incarnation and the previous archived logs are no longer
relevant. Alternatively, instead of until time, you may use until sequence or until cancel:
recover automatic database until sequence 120 thread 1; OR
recover database until cancel;

Rman Recovery Scenarios

Rman recovery scenarios require that the database is in archivelog mode, and that backups of datafiles, control files and archived
redo log files are made using Rman. Incremental Rman backups may be used also.
Rman can be used with its repository stored in the target database's control file, or with a recovery catalog that may be installed in the
same or another database.
Configuration and operation recommendations:
Set the parameter controlfile autobackup to ON to have with each backup a
controlfile backup also:
configure controlfile autobackup on;
set the retention policy parameter to the recovery window you want to have;
ie redundancy 2 will keep the last two backups available, after executing delete obsolete commands:
configure retention policy to redundancy 2;
Execute your full backups with the option ‘plus archivelogs’ to include your archivelogs with every backup:
backup database plus archivelog;
Perform daily maintenance routines to keep only the number of backups you need in your backup directory:
crosscheck backup;
crosscheck archivelog all;
delete noprompt obsolete;
To work with Rman and a database based catalog follow these steps:
1. sqlplus /
2. create tablespace repcat;
3. create user rmuser identified by rmuser default tablespace repcat temporary tablespace temp;
4. grant connect, resource, recovery_catalog_owner to rmuser;
5. exit
6. rman catalog rmuser/rmuser          # connect to rman catalog as the rmuser
7. create catalog                      # create the catalog
8. connect target /                    # connect to the target database

Complete Closed Database Recovery. System tablespace is missing

In this case complete recovery is performed, only the system tablespace is missing,so the database can be opened without reseting
the redologs.
1.  rman target /
2.  startup mount;
3.  restore database;
4.  recover database;
5.  alter database open;

Complete Open Database Recovery. Non system tablespace is missing,database is up

1.   rman target /


2.   sql 'alter tablespace <tablespace_name> offline immediate';
3.   restore datafile 3;
4.   recover datafile 3;
5.   sql 'alter tablespace <tablespace_name> online';

Complete Open Database Recovery (when the database is initially closed).Non system tablespace is missing

A user datafile is reported missing when trying to start up the database. The datafile can be taken offline and the database started
up. Restore and recovery are performed using Rman. After recovery is performed the datafile can be brought online again.
1.    sqlplus /nolog
2.    connect / as sysdba
3.    startup mount
4.    alter database datafile '<datafile_name>' offline;
5.    alter database open;
6.    exit;
7.    rman target /
8.    restore datafile '<datafile_name>';
9.    recover datafile '<datafile_name>';
10.   sql 'alter tablespace <tablespace_name> online';

Recovery of a Datafile that has no backups (database is up).

If a non system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since
the creation of the missing datafile exist. Since the database is up you can check the tablespace name and put it offline. The option
offline immediate is used to avoid the update of the datafile header.
Pre requisites: All relevant archived logs.
1.    sqlplus '/ as sysdba'
2.    alter tablespace <tablespace_name> offline immediate;
3.    alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf';
4.    exit
5.    rman target /
6.    recover tablespace <tablespace_name>;
7.    sql 'alter tablespace <tablespace_name> online';
If the create datafile command needs to be executed to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/IASDB/newdata01.dbf' as '/user/oradata/u02/IASDB/newdata01.dbf';

Restore and Recovery of a Datafile to a different location. Database is up.

If a non system datafile is missing and its original location not available, restore can be made to a different location and recovery
performed.
Pre requisites: All relevant archived logs, complete cold or hot backup.
1.    Use OS commands to restore the missing or corrupted datafile to the new location, ie:
cp -p /usr/backup/RMAN/user01.dbf /usr/oradata/u02/IASDB/user01.dbf
2.    alter tablespace <tablespace_name> offline immediate;
3.    alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/IASDB/user01.dbf' to
'/user/oradata/u02/IASDB/user01.dbf';
4.    rman target /
5.    recover tablespace <tablespace_name>;
6.    sql 'alter tablespace <tablespace_name> online';

Control File Recovery

Always multiplex your controlfiles. If you lose only one controlfile you can replace it with the one you have in place, and start up the
database. If both controlfiles are missing, the database will crash.
Pre requisites: A backup of your controlfile and all relevant archived logs. When using Rman, always set the configuration parameter
autobackup of controlfile to ON. You will need the dbid to restore the controlfile; get it from the name of the backed-up controlfile. It
is the number following the 'c-' at the start of the name.
1.   rman target /
2.   set dbid <dbid#>
3.   startup nomount;
4.   restore controlfile from autobackup;
5.   alter database mount;
6.   recover database;
7.   alter database open resetlogs;
8.   make a new complete backup, as the database is open in a new incarnation and previous archived log are not relevant.

Incomplete Recovery, Until Time/Sequence/Cancel

Incomplete recovery may be necessary when the database crashes and needs to be recovered, and in the recovery process you find
that an archived log is missing. In this case recovery can only be made until the sequence before the one that is missing.
Another scenario for incomplete recovery occurs when an important object was dropped or incorrect data was committed on it.
In this case recovery needs to be performed until just before the object was dropped.
Pre requisites: A full closed or open database backup and archived logs, plus the time or sequence at which the 'until' recovery needs to be
performed.
1.   If the database is open, shut it down to perform a full restore.
2.   rman target /
3.   startup mount;
4.   restore database;
5.   recover database until sequence 8 thread 1; # you must pass the thread, if a single instance will always be 1.
6.  alter database open resetlogs;
7.  make a new complete backup, as the database is open in a new incarnation and the previous archived logs are no longer
relevant. Alternatively, instead of until sequence, you may use until time, ie: '2012-01-04:01:01:10'.
