
Ques. What is log shipping?

Ans. Log shipping is the process of replicating closed transaction log files from the active database copy to the passive database copy.

The Replication service performs log shipping using several components:

1. Log copier: The log copier is responsible for copying closed log files from the active database to the inspector directory on the server hosting the passive database.
   The Replication service continuously monitors the source; as soon as a new log file is closed on the source, the copier copies it to the inspector directory on the target.
2. Log inspector:
   i. The log inspector examines the log files copied into the inspector directory and verifies that each one is not corrupt.
   ii. If a log file is found to be corrupt, the Replication service will request a re-copy of the file.
   iii. Each log file is checksummed for validity.
   iv. If the log file passes these checks, it is moved to the database logs directory.
3. Log replayer:
   i. The log replayer replays logs from the database logs directory into the passive database. It can also batch multiple log files into a single replay operation.
   ii. The Replication service uses an Extensible Storage Engine (ESE) API to inspect and replay the logs (see the sketch after this list).
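
A minimal Python sketch of this copy, inspect, and replay flow is shown below. The directory paths and the verify_checksum/replay_into_database helpers are illustrative stand-ins, not actual Exchange internals:

import shutil
from pathlib import Path

# Illustrative directory layout; real Exchange paths and names differ.
SOURCE_LOG_DIR = Path(r"\\active\logs")       # closed logs on the active copy
INSPECTOR_DIR = Path(r"D:\db01\inspector")    # temporary inspection folder on the target
DATABASE_LOG_DIR = Path(r"D:\db01\logs")      # logs staged for replay

def verify_checksum(log_file: Path) -> bool:
    """Stand-in for the inspector's physical-validity check (Eseutil-style)."""
    return log_file.stat().st_size > 0  # placeholder check only

def replay_into_database(log_file: Path) -> None:
    """Stand-in for the ESE API replay of a log file into the passive database."""
    print(f"replaying {log_file.name} into the passive database")

def ship_log(log_file: Path) -> None:
    # 1. Log copier: copy the closed log file into the inspector directory.
    copied = Path(shutil.copy2(log_file, INSPECTOR_DIR))

    # 2. Log inspector: checksum the copy; request a re-copy if it is corrupt.
    if not verify_checksum(copied):
        copied.unlink()
        copied = Path(shutil.copy2(log_file, INSPECTOR_DIR))  # single re-copy in this sketch

    # 3. Log replayer: move the validated log into the database logs directory and replay it.
    staged = Path(shutil.move(str(copied), DATABASE_LOG_DIR / copied.name))
    replay_into_database(staged)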

Replication Pipeline

The replication pipeline implemented by the Replication service chains these components together: copy, inspect, then replay.

The other directories and services used by the Replication service are described below.

Continuous Replication Block Mode

A process of continuous replication in which log buffers are replicated to the passive mailbox database copies.

What does this mean? It means Exchange 2010 can replicate log buffers before the transaction log file is closed.

Continuous Replication File Mode

A process of continuous replication that replicates closed transaction log files from the
active mailbox database copy to passive mailbox database copies.
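
To make the difference between the two modes concrete, here is a small, hypothetical Python sketch; the ship_block and ship_closed_file callbacks are illustrative placeholders, not Exchange APIs:

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TransactionLog:
    """A toy transaction log used to illustrate when data is shipped to passive copies."""
    ship_block: Callable[[bytes], None]              # block mode: ship each buffer as it is written
    ship_closed_file: Callable[[List[bytes]], None]  # file mode: ship the whole closed log file
    block_mode: bool = False
    buffer: List[bytes] = field(default_factory=list)

    def write(self, record: bytes) -> None:
        self.buffer.append(record)
        if self.block_mode:
            # Block mode (Exchange 2010): log data is replicated to the passive
            # copies before the current transaction log file is closed.
            self.ship_block(record)

    def close_log_file(self) -> None:
        if not self.block_mode:
            # File mode: only complete, closed log files are replicated.
            self.ship_closed_file(self.buffer)
        self.buffer = []

The practical effect is that in block mode a failure of the active copy loses at most the log data still in flight, whereas in file mode everything written since the last closed log file is at risk.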

IgnoredLogs Directory

The IgnoredLogs directory is used to keep valid log files that cannot be replayed for any reason (e.g., the log file is too old, the log file is corrupt, etc.). The IgnoredLogs directory can have the following subdirectories; the later items in this list describe other Replication service components:

1. E00OutofDate This subdirectory holds any old E00.log file that was present on the
   passive copy at the time of failover. An E00.log file is created on the passive copy only if it
   was previously running as the active copy. Event 2013 is logged in the Application event log to
   indicate the failure.
2. InspectionFailed This subdirectory holds log files that have failed inspection. Event 2013
   is logged when a log file fails inspection, and the log file is then moved to the
   InspectionFailed directory. The log inspector uses Eseutil and other methods to verify
   that a log file is physically valid; any exception returned by these checks is treated as a
   failure, and the log file is deemed corrupt.
3. Truncate Deleter The truncate deleter is responsible for deleting log files that have
   been successfully replayed into the passive database. This is especially important after an
   online backup is performed on the active copy, since online backups delete log files that are
   no longer required for recovery of the active database. The truncate deleter makes sure that any
   log files that have not yet been replicated and replayed into the passive copy are not deleted
   by an online backup of the active copy (see the sketch after this list).
4. Incremental Reseeder The incremental reseeder is responsible for ensuring that the
   active and passive database copies have not diverged after a database restore has been
   performed, or after a failover in a cluster continuous replication (CCR) environment.
5. Seeder The seeder is responsible for creating the baseline content of a storage group, which is
   used to start replay processing. The Replication service performs automatic seeding for new
   storage groups.
6. Replay Manager The replay manager is responsible for keeping track of all replica
   instances. It creates and destroys replica instances on demand based on the online status of
   the storage group. The configuration of a replica instance is intended to be static;
   therefore, when a replica instance's configuration is changed, the replica is restarted
   with the updated configuration. In addition, the configuration is not saved during shutdown of
   the Replication service. As a result, each time the Replication service starts it has
   an empty replica instance list. When the Replication service starts, the replay manager
   discovers the storage groups that are currently online to create a "running
   instance" list.

The replay manager periodically runs a "configupdater" thread to scan for newly configured
replica instances. The configupdater thread runs in the Replication service process every 30
seconds. It creates and destroys replica instances based on the current database state (e.g.,
whether the database is online or offline). The configupdater thread uses the following algorithm:

1. Read the instance configuration from Active Directory.
2. Compare the list of configurations found in Active Directory against the running storage
   groups/databases.
3. Produce a list of running instances to stop and a list of configurations to start.
4. Stop the running instances on the stop list.
5. Start the instances on the start list.
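
A minimal, hypothetical Python sketch of one pass of this reconciliation algorithm is shown below, assuming the Active Directory configuration and the running-instance list are both available as simple sets of storage group names (the callback names are placeholders):

import time

def configupdater_pass(read_config_from_ad, get_running_instances,
                       stop_instance, start_instance):
    """One pass of the reconciliation algorithm described above."""
    desired = read_config_from_ad()       # step 1: configurations in Active Directory
    running = get_running_instances()     # step 2: replica instances currently running

    stop_list = running - desired         # step 3: instances that are no longer configured
    start_list = desired - running        #         configurations with no running instance

    for name in stop_list:                # step 4: stop running instances on the stop list
        stop_instance(name)
    for name in start_list:               # step 5: start instances on the start list
        start_instance(name)

def configupdater_loop(**callbacks):
    """The real configupdater thread runs inside the Replication service every 30 seconds."""
    while True:
        configupdater_pass(**callbacks)
        time.sleep(30)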

Exchange 2007 log shipping process

1. The Microsoft Exchange Replication service performs log shipping.
2. When the active Exchange server closes a log file, it updates the LastLogCopyNotified
   variable. The server containing the passive copy of the database watches this
   variable.
3. At this point, the server containing the passive database pulls the log file over a
   Server Message Block (SMB) connection.
4. Once the log file arrives on the target server, it is placed in a temporary folder known as
   the inspector directory.
5. When the copy process is complete, the LastLogCopied variable is updated to keep track
   of which log files have been copied to the passive copy.
6. The Replication service inspects the newly copied log file to make sure it isn't corrupt. This
   prevents the replay of corrupt data into the passive copy of the database.
7. The LastLogInspected variable is then updated.
8. The log file is then replayed into the passive database, and the LastLogReplayed
   variable is updated to show that the log file has been committed to the database's
   passive copy.
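
To recap the variable updates in the steps above, here is a minimal, hypothetical Python sketch of the passive copy's side of the process; the pull_over_smb, inspect, and replay callbacks are illustrative placeholders rather than real Exchange APIs.

from dataclasses import dataclass

@dataclass
class PassiveCopyState:
    """Per-database progress markers described in the steps above."""
    last_log_copied: int = 0      # LastLogCopied
    last_log_inspected: int = 0   # LastLogInspected
    last_log_replayed: int = 0    # LastLogReplayed

def catch_up(state, last_log_copy_notified, pull_over_smb, inspect, replay):
    """Pull, inspect, and replay every generation the active copy has closed."""
    while state.last_log_copied < last_log_copy_notified:
        generation = state.last_log_copied + 1

        log_file = pull_over_smb(generation)        # steps 3-4: copy into the inspector directory
        state.last_log_copied = generation          # step 5: LastLogCopied is updated

        if not inspect(log_file):                   # step 6: reject a corrupt log file
            state.last_log_copied = generation - 1  # force a re-copy (retried forever in this sketch)
            continue
        state.last_log_inspected = generation       # step 7: LastLogInspected is updated

        replay(log_file)                            # step 8: replay into the passive database
        state.last_log_replayed = generation        #         LastLogReplayed is updated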
