Oracle Dataguard - Handout - SNL
Maximum protection
This protection mode ensures that no data loss will occur if the primary database fails. To
provide this level of protection, the redo data needed to recover each transaction must be
written to both the local online redo log and to the standby redo log on at least one standby
database before the transaction commits. To ensure data loss cannot occur, the primary
database shuts down if any fault prevents it from writing to the standby redo log of at least
one standby database.
Maximum availability
This protection mode provides the highest level of data protection that is possible without
compromising the availability of the primary database. As in maximum protection mode, a
transaction does not commit on the primary database until the redo needed to recover that
transaction is written to both the local online redo log and the standby redo log of at least
one standby database.
Unlike maximum protection mode, the primary database does not shut down if a fault
prevents it from writing its redo stream to a remote standby redo log. Instead, the primary
database operates in maximum performance mode until the fault is corrected, and all gaps
in redo log files are resolved. When all gaps are resolved, the primary database
automatically resumes operating in maximum availability mode. This mode ensures that no
data loss will occur if the primary database fails.
Maximum performance
This protection mode (the default) provides the highest level of data protection that is
possible without affecting the performance of the primary database. This is accomplished by
allowing a transaction to commit as soon as the redo data for the transaction is written to
the local online redo log. The primary database's redo data stream is also written to at least
one standby database, but that redo stream is written asynchronously with respect to the
transactions that create the redo data.
When network links with sufficient bandwidth are used, this mode provides a level of data
protection that approaches that of maximum availability mode with minimal impact on
primary database performance. The maximum protection and maximum availability modes
require that standby redo log files are configured on at least one standby database in the
configuration.
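The protection mode is selected on the primary database with the ALTER DATABASE SET STANDBY DATABASE statement; a minimal sketch (run while the database is mounted):

```sql
-- Run on the primary database while it is mounted.
-- Switching to MAXIMIZE PROTECTION requires at least one synchronized
-- standby, or the primary will not open.
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

-- The other two modes use the same syntax:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
```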
All three protection modes require that specific log transport attributes be specified on the
LOG_ARCHIVE_DEST_n initialization parameter to send redo data to at least one standby
database.
On the primary database, Oracle Data Guard uses archiver processes (ARCn) or the log
writer process (LGWR) to collect transaction redo data and transmit it to standby
destinations. Although we cannot use both the archiver and log writer processes to send
redo data to the same destination, we can choose to use the log writer process
for some destinations, while archiver processes send redo data to other destinations.
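As a sketch of mixing the two transport processes, the following sets one destination to use the log writer and another to use the archiver (the net service names stby1 and stby2 are hypothetical; substitute your own Oracle Net aliases):

```sql
-- Destination 2: the log writer (LGWR) ships redo synchronously --
-- suitable for maximum availability or maximum protection.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stby1 LGWR SYNC AFFIRM';

-- Destination 3: archiver (ARCn) processes ship completed archived
-- log files only -- suitable for maximum performance.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='SERVICE=stby2 ARCH';
```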
Data Guard also uses the fetch archive log (FAL) client and server to send archived redo log
files to standby destinations following a network outage, for automatic gap resolution and
resynchronization. The FAL mechanism and gap resolution are described below.
The FAL mechanism handles the following types of archive gaps and problems:
- When creating a physical or logical standby database, the FAL mechanism can
automatically retrieve any archived log files generated during a hot backup of the primary
database.
- When there are problems with archived log files that have already been received on the
standby database, the FAL mechanism can automatically retrieve archived log files to
resolve any of the following situations:
  - The archived log file is deleted from disk before it is applied to the standby database.
  - The archived log file cannot be applied because of a disk corruption.
  - The archived log file is accidentally replaced by another file (for example, a text file)
that is not an archived redo log file before the redo data has been applied to the standby
database.
- When you have multiple physical standby databases, the FAL mechanism can
automatically retrieve missing archived log files from another physical standby database.
The FAL client and FAL server are configured using the FAL_CLIENT and FAL_SERVER
initialization parameters that are set on the standby database. [Define the FAL_CLIENT and
FAL_SERVER initialization parameters only for physical standby databases in the init
parameter file]
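As a sketch, assuming the hypothetical Oracle Net aliases primary_db (the primary) and standby_db (this standby), the two parameters might be set on the physical standby as:

```sql
-- Run on the physical standby database.
-- FAL_SERVER names the database(s) to fetch missing log files FROM;
-- FAL_CLIENT is the name by which this standby is known to that server.
-- primary_db and standby_db are hypothetical net service names.
ALTER SYSTEM SET FAL_SERVER='primary_db';
ALTER SYSTEM SET FAL_CLIENT='standby_db';
```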
To identify a gap, query the V$ARCHIVE_GAP view on the physical standby database:
SQL> SELECT * FROM V$ARCHIVE_GAP;

   THREAD#  LOW_SEQUENCE#  HIGH_SEQUENCE#
---------- -------------- ---------------
         1              7              10

This output indicates that the physical standby database is currently
missing log files from sequence 7 to sequence 10 for thread 1. After identifying the gap,
issue the following SQL statement on the primary database to locate the archived log files
on your primary database (assuming the local archive destination on the primary database
is LOG_ARCHIVE_DEST_1):
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1
     AND SEQUENCE# BETWEEN 7 AND 10;
NAME
/primary/thread1_dest/arcr_1_7.arc
/primary/thread1_dest/arcr_1_8.arc
/primary/thread1_dest/arcr_1_9.arc
We need to manually copy these log files to the physical standby database and register
them there using the ALTER DATABASE REGISTER LOGFILE statement. For example:
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE '/physical_standby1/thread1_dest/arcr_1_8.arc';
After we register these log files on the physical standby database, we can restart Redo
Apply.
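Redo Apply is restarted on the physical standby with the managed recovery statement; a minimal sketch:

```sql
-- Run on the physical standby database after registering the log files.
-- DISCONNECT FROM SESSION returns control to the session while the
-- managed recovery process (MRP) applies redo in the background.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```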
Note:
The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap
that is currently blocking Redo Apply from continuing. After resolving the gap and starting
Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database
to determine the next gap sequence, if there is one. Repeat this process until there are no
more gaps.
Verification
Monitoring Log File Archival Information
This section describes using views to monitor redo log archival activity for the primary
database.
Step 1 Determine the status of redo log files.
Enter the following query on the primary database to determine the status of all online redo
log files:
SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG;
Step 3 Determine the most recent archived redo log file at each destination.
Enter the following query on the primary database to determine which archived redo log file
was most recently transmitted to each of the archiving destinations:
SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
     FROM V$ARCHIVE_DEST_STATUS
     WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';
DESTINATION         STATUS  ARCHIVED_THREAD#  ARCHIVED_SEQ#
------------------  ------  ----------------  -------------
/private1/prmy/lad  VALID   1                 947
standby1            VALID   1                 947
The most recently written archived log file should be the same for each archive destination
listed. If it is not, a status other than VALID might identify an error encountered during the
archival operation to that destination.
Step 4 Find out if archived redo log files have been received.
You can issue a query at the primary database to find out if an archived log file was not
received at a particular site. Each destination has an ID number associated with it. You can
query the DEST_ID column of the V$ARCHIVE_DEST fixed view on the primary database to
identify each destination's ID number.
Assume the current local destination is 1, and one of the remote standby destination IDs is
2.
To identify which log files are missing at the standby destination, issue the following query:
SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
  2> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1) LOCAL
  3> WHERE LOCAL.SEQUENCE# NOT IN
  4> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
  5> THREAD# = LOCAL.THREAD#);
   THREAD#  SEQUENCE#
---------- ----------
         1         12
         1         13
         1         14
If you anticipate a heavy workload for archiving, you can increase the maximum number of
archiver processes to as many as 30 by setting the initialization parameter
LOG_ARCHIVE_MAX_PROCESSES. This initialization parameter is dynamic and can be altered
by the ALTER SYSTEM command to increase or decrease the maximum
number of archiver processes.
For example:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=20;
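Because the parameter is dynamic, the current value can be checked before changing it; a minimal sketch (SHOW PARAMETER is a SQL*Plus command):

```sql
-- In SQL*Plus, display the current setting:
SHOW PARAMETER LOG_ARCHIVE_MAX_PROCESSES

-- Equivalent query against the dynamic performance views:
SELECT NAME, VALUE FROM V$PARAMETER
WHERE NAME = 'log_archive_max_processes';
```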
1 The managed recovery process (MRP) applies archived redo log files to the physical
standby database, and automatically determines the optimal number of parallel recovery
processes at the time it starts. The number of parallel recovery slaves spawned is based
on the number of CPUs available on the standby server.
2 The logical standby process (LSP) uses parallel execution (Pnnn) processes to apply
archived redo log files to the logical standby database, using SQL interfaces.
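On a logical standby, SQL Apply (the LSP and its Pnnn slaves) is started and stopped with its own statements; a minimal sketch:

```sql
-- Run on the logical standby database.
-- Starts SQL Apply; the LSP coordinates parallel execution (Pnnn)
-- processes that apply the redo as SQL transactions.
ALTER DATABASE START LOGICAL STANDBY APPLY;

-- To stop SQL Apply:
-- ALTER DATABASE STOP LOGICAL STANDBY APPLY;
```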