RESOLVING ARCHIVE GAP BETWEEN PRIMARY & STANDBY DATABASE
Oracle DBA Technology Explored by 8 Bits Virtual Training
ARCHIVE GAP
An archive gap is a set of archived redo logs that could not be transmitted to the
Standby site from the Primary database. This problem most often occurs when network
connectivity between the Primary and Standby sites becomes unavailable. When the
network is available again, Data Guard resumes redo data transmission from the Primary
to the Standby site. Oracle Data Guard provides two methods for gap resolution:
AUTOMATIC and FAL (Fetch Archive Log).
When archived logs are missing on the Standby database, we can simply ship the
missing logs from the Primary to the Standby database, provided the number of missing
logs is small (e.g. below 15). We then need to register all shipped logs on the
Standby database so that the gap can be resolved.
In this article I will demonstrate how to resolve archive log gaps using the
following methods.
# On Primary database
SYS> select thread#, max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            556

# On Standby database
SYS> select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            551
# Check the Standby alert log for gap messages
$ cd /u01/app/oracle/diag/rdbms/stbycrms/stbycrms/trace
$ tail -f alert_stbycrms.log
Fetching gap sequence in thread 1, gap sequence 552-556
..
... FAL [client]: Failed to request gap sequence
DETECTING GAPS
Oracle Data Guard provides us with a simple view (V$ARCHIVE_GAP) to detect a gap.
# On Standby database
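With the sequences from this example, querying V$ARCHIVE_GAP on the Standby would return something like the following (the view exposes THREAD#, LOW_SEQUENCE# and HIGH_SEQUENCE#):

```sql
-- Run on the Standby database
SYS> select thread#, low_sequence#, high_sequence# from v$archive_gap;

   THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
         1           552            556
```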
The output on the Standby database shows that it is currently missing log files from
sequence# 552 to 556; the Standby database is five logs behind the Primary database.
ORACLE NOTE: Refer to BUG #10072528: V$ARCHIVE_GAP may not detect an archive gap when
the Physical Standby is open read only.
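On the Primary, the file names for the missing sequences can be pulled from V$ARCHIVED_LOG; a query along these lines produces the NAME listing that follows:

```sql
-- Run on the Primary database
SYS> select name from v$archived_log where thread#=1 and sequence# between 552 and 556;
```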
NAME
--------------------------------------------------------------------------------------
/u01/app/oracle/flash_recovery_area/CRMS/archivelog/2016_06_01/o1_mf_1_552_cnvodqq5_.arc
/u01/app/oracle/flash_recovery_area/CRMS/archivelog/2016_06_01/o1_mf_1_553_cnvohcy1_.arc
/u01/app/oracle/flash_recovery_area/CRMS/archivelog/2016_06_01/o1_mf_1_554_cnvokk6z_.arc
/u01/app/oracle/flash_recovery_area/CRMS/archivelog/2016_06_01/o1_mf_1_555_cnvom1l4_.arc
/u01/app/oracle/flash_recovery_area/CRMS/archivelog/2016_06_01/o1_mf_1_556_cnvomxkz_.arc
Copy the above redo log files to the Physical Standby database and register them using the
ALTER DATABASE REGISTER LOGFILE ... SQL statement on the Physical Standby database.
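A sketch of the registration for the first missing sequence; /u03/bkp/ is an assumed staging directory on the Standby server (use whatever path you copied the files to), and the same statement is repeated for sequences 553 to 556:

```sql
-- On the Physical Standby; the path is an assumed staging location
SYS> alter database register logfile '/u03/bkp/o1_mf_1_552_cnvodqq5_.arc';
```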
$ ls /u01/app/oracle/flash_recovery_area/CRMS/archivelog/2016_06_01/
oracle@192.168.222.134's password:
As per the above example, you need to transfer all the missing archive logs to the Standby server.
Database altered.
Database altered.
Database altered.
Database altered.
Database altered.
The recovery process should pick up the registered logs automatically; if it does not,
stop the managed recovery process (MRP) and restart it. That's it!
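Restarting the MRP on the Standby is the standard pair of commands:

```sql
-- On the Standby: stop and restart managed recovery
SYS> alter database recover managed standby database cancel;
SYS> alter database recover managed standby database disconnect from session;
```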
In cases where a Physical Standby database is out of sync with the Primary database
and the required archive logs are missing or corrupt, we normally have to rebuild the
Standby from scratch. If the database size is in terabytes, rebuilding the Standby
database is a tedious job; but we have a solution for this kind of issue.
As a DBA you can use an RMAN incremental backup to sync a Physical Standby with the
Primary database: the command BACKUP INCREMENTAL FROM SCN ... creates a backup on the
Primary database that starts at the Standby database's current SCN, which can then be
used to roll the Standby database forward in time.
Assume a bunch of archive logs were deleted or corrupted on the Primary database
server before they could be transferred to the Standby database server. For this case,
I demonstrate an efficient way to sync the Standby with the Primary (an alternative to
rebuilding the Standby DB)!
DISASTER RECOVERY
11g DATABASE SERVER FOR PRIMARY -> 192.168.222.133 -> DB_UNIQUE_NAME -> CRMS
11g DATABASE SERVER FOR STANDBY -> 192.168.222.134 -> DB_UNIQUE_NAME -> STBYCRMS
# On Primary database
SYS> select thread#, max(sequence#) from v$archived_log group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            653

# On Standby database
SYS> select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            558

Note the difference: on the Standby the last applied SEQUENCE# is 558, but on the Primary it is 653.
SYS> @archive_gap.sql
Note the SCN difference: 4538393 - 3473651 = 1064742 (the Standby is lagging behind).
But I want to know how far behind the Standby database is in terms of hours/days.
For that, I used the SCN_TO_TIMESTAMP function to translate the SCNs to timestamps.
# On Primary database
SYS> select scn_to_timestamp(4538393) from dual;

SCN_TO_TIMESTAMP(4538393)
---------------------------------------
01-JUN-16 11.12.52.000000000 PM

SYS> select scn_to_timestamp(3473651) from dual;

SCN_TO_TIMESTAMP(3473651)
--------------------------------------------------
01-JUN-16 04.32.44.000000000 AM
The Standby database is more than 18 hours behind the Primary database.
NOTE: SCN_TO_TIMESTAMP does not work in OPEN READ ONLY mode or MOUNTED mode.
To sync the Standby with the Primary we need archive logs from SEQUENCE# 559 to
SEQUENCE# 644. But these archives are NOT found at the Primary site; the only choice
is an RMAN incremental backup.
On the Primary, take an incremental backup starting from the last recorded SCN#
(3473651) of the Standby database.
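The incremental backup from the Standby's last SCN would look like this; the FORMAT path under /u03/bkp/ is an assumption matching the staging directory used below:

```sql
-- RMAN, connected to the Primary as target
RMAN> backup incremental from scn 3473651 database format '/u03/bkp/stby_incr_%U.bak';
```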
On the Primary database, create a new Standby control file. We can use either
SQL*Plus or RMAN to create the Standby control file; either method is enough.
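Both variants, sketched; the output file names under /u03/bkp/ are assumptions:

```sql
-- Method 1: SQL*Plus, on the Primary
SYS> alter database create standby controlfile as '/u03/bkp/stby_control.ctl';

-- Method 2: RMAN, connected to the Primary as target
RMAN> backup current controlfile for standby format '/u03/bkp/stby_ctl_%U.bak';
```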
$ scp * oracle@192.168.222.134:/u03/bkp/
oracle@192.168.222.134's password: ******
Note the location of all data files & Control file(s) at the Standby Database Server.
$ cd /u01/app/oracle/flash_recovery_area/stbycrms/
$ mv control02.ctl /tmp/control02.ctl.bkp
$ cd /u01/app/oracle/oradata/stbycrms/
$ mv control01.ctl /tmp/control01.ctl.bkp
NOTE: If you do not want to rename the control files, you can remove these files at
the OS level.
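With the old control files moved aside, restoring the new standby control file and mounting would look something like this (the control-file backup path under /u03/bkp/ is an assumption):

```sql
-- RMAN, connected to the Standby instance
RMAN> startup nomount;
RMAN> restore standby controlfile from '/u03/bkp/stby_control.ctl';
RMAN> alter database mount;
-- Make RMAN aware of the copied incremental backup pieces
RMAN> catalog start with '/u03/bkp/';
```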
connected to target database (not started)
Oracle instance started

Total System Global Area  912306176 bytes
..
....
database mounted
If data files were added to the Primary database during the archive-log-gap window,
they are not included in the incremental backup sets. We need to add the newly
created files to the Standby database. We can find the newly created files using the
CURRENT_SCN of the Standby.
# Execute the following query on the Primary DB (3473651 is the last SCN of the Standby database)
SYS> select file#, name from v$datafile where creation_change# > 3473651;
FILE# NAME
---------- --------------------------------------------
6 /u01/app/oracle/oradata/crms/users03.dbf
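One way to ship the new datafile (file# 6 here) to the Standby, sketched under the assumption that the backup piece is copied into the same /u03/bkp/ staging area:

```sql
-- On the Primary: back up the newly added datafile
RMAN> backup datafile 6 format '/u03/bkp/users03_%U.bak';

-- On the Standby (after copying the piece over): catalog and restore it
RMAN> catalog start with '/u03/bkp/';
RMAN> restore datafile 6;
```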
All changed blocks have been captured in the incremental backup and applied to the
Standby database, thus bringing the Standby database up to date with the Primary
database.
Check the SCNs on the Primary and Standby; they should be close to each other.
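A quick check on both sides:

```sql
-- Run on both the Primary and the Standby; the values should now be close
SYS> select current_scn from v$database;
```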
REFERENCE DOCS:
https://docs.oracle.com/cd/E11882_01/server.112/e41134/rman.htm#SBYDB00759
http://docs.oracle.com/cd/B19306_01/backup.102/b14191/rcmdupdb.htm#sthref955
https://web.stanford.edu/dept/itss/docs/oracle/10gR2/backup.102/b14191/rcmdupdb008.htm
# Perform Recover
RMAN> recover database noredo;
These are the steps I performed to roll the Standby database forward in time. That's it!
AUTOMATIC GAP RESOLUTION
The RFS process compares the sequence number of the archived redo file currently
being received with the sequence number of the previously received archived redo
file; if the current sequence# is greater than the last received sequence# plus one,
there is a gap. An archived redo log file is uniquely identified by its sequence
number and thread number.
Suppose three files are missing, i.e. there is a gap. RFS then automatically requests
the missing redo log sequence#s from the Primary DB via the ARCH-RFS heartbeat ping,
and the archiver of the Primary retransmits the missing archived redo files.
This type of gap resolution uses the service defined in LOG_ARCHIVE_DEST_n on the
Primary database serving this Standby database.
The archiver process of the Primary database polls the Standby databases every minute
(referred to as the heartbeat) to see if there is a gap in the sequence of archived
redo logs. If a gap is detected, the ARCH process sends the missing archived redo log
files to the Standby databases that reported the gap. Once a gap is resolved, the
transport process (ARCH/LGWR) is notified about the resolution of the gap.
FAL GAP RESOLUTION
FAL_CLIENT specifies the destination to which the FAL_SERVER database should send the
requested archived log(s).
In earlier releases, when you set the FAL_CLIENT parameter on the Standby database,
the Primary database (FAL_SERVER) used the Standby database's NET SERVICE NAME to
connect to the Standby DB.
# On Primary database
Setting FAL_SERVER on the Primary DB to the value of the Standby database's NET
SERVICE NAME also helps when you do a switchover.
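A typical configuration for this pair, assuming the net service names match the DB_UNIQUE_NAMEs above (CRMS and STBYCRMS):

```sql
-- On the Standby (STBYCRMS):
SYS> alter system set fal_server='CRMS';
SYS> alter system set fal_client='STBYCRMS';

-- On the Primary (CRMS), so FAL still works after a switchover:
SYS> alter system set fal_server='STBYCRMS';
SYS> alter system set fal_client='CRMS';
```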
Once the log apply services (MRP) detect an archive gap, they send a FAL request to
the FAL_SERVER. Once communication has been established with the Primary database,
the Standby passes the sequence numbers of the archived files causing the archive gap
to be retransmitted by the archiver process of the Primary database; additionally, it
passes the service name defined by the FAL_CLIENT parameter to the Primary ARCH
process.
An ARCH process on the FAL_SERVER tries to pick up the requested sequences from that
database and sends them to the FAL_CLIENT, i.e. the Primary database ARCH process
ships the requested archived logs to the remote archive destination of the
corresponding service name.
# SNIPPET OF STANDBY ALERT.LOG
RFS [54]: Opened log for thread 1 sequence 1076 dbid 1613387466 branch 913081878
Media Recovery Log /u01/app/oracle/flash_recovery_area/STBYCRMS/archivelog/2016_06_04/o1_mf_1_1076_co5qn3sd_.arc
RFS [55]: Opened log for thread 1 sequence 1077 dbid 1613387466 branch 913081878
Media Recovery Log /u01/app/oracle/flash_recovery_area/STBYCRMS/archivelog/2016_06_04/o1_mf_1_1077_co5qn4db_.arc
In order to successfully complete a gap request, the requested archive log
sequence(s) must be available on the FAL_SERVER database.
When you have multiple Physical Standby databases, the FAL mechanism can
automatically retrieve missing archived redo log files from another Physical Standby
database.
Every minute the Primary database polls its Standby databases to see if there
are gaps in the sequence of archived redo log files.
The FAL (Client) requests to transfer archived redo log files automatically.
The FAL (Server) services the FAL requests coming from the FAL Client.
A separate FAL server is created for each incoming FAL client.
FAL has been available since Oracle 9.2.0 for Physical Standby databases and Oracle
10.1.0 for Logical Standby databases. In 11g, if you do not set the FAL_CLIENT
parameter, the Primary database will obtain the service name from the related
LOG_ARCHIVE_DEST_n parameter.
REFERENCE LINKS
http://flylib.com/books/en/1.145.1.66/1/
https://docs.oracle.com/cd/B19306_01/server.102/b14239/log_transport.htm#i1268294