nlp MODULE-5 - NOTES
TRANSACTION MANAGEMENT
5.1 Transaction
● A transaction is an action, or series of actions that are being performed by a single
user or application program, which reads or updates the contents of the database.
● A transaction can be defined as a logical unit of work on the database. This may
be an entire program, a piece of a program or a single command (like the SQL
commands such as INSERT or UPDATE) and it may engage in any number of
operations on the database.
● A transaction can be defined as a group of tasks. A single task is the minimum
processing unit which cannot be divided further.
● A transaction is an event which occurs on the database. Generally a transaction
reads a value from the database or writes a value to the database.
● Although a transaction can both read and write on the database, there are some
fundamental differences between these two classes of operations.
● A read operation does not change the image of the database in any way.
● But a write operation, whether performed with the intention of inserting, updating
or deleting data from the database, changes the image of the database.
● That is, we may say that a transaction brings the database from the image which
existed before the transaction occurred to the image which exists after the
transaction occurred.
● Example of a Transaction in DBMS :
✔ A simple example of a transaction is dealing with the bank accounts
of two users, say Karlos and Ray.
✔ A simple transaction moving an amount of Rs. 5000 from Karlos to Ray
engages many low-level jobs.
✔ As the amount of Rs. 5000 gets transferred from Karlos's account to
Ray's account, a series of tasks gets performed behind the
screen.
✔ This very simple and small transaction includes several steps: decrease
Karlos's bank account by 5000:
OpenAccount(Karlos)
OldBal = Karlos.bal
NewBal = OldBal - 5000
Karlos.bal = NewBal
CloseAccount(Karlos)
✔ Simply you can say, the transaction involves many tasks, such as opening
the account of Karlos, reading the old balance, decreasing the specific
amount 5000 from that account, saving new balance to an account of
Karlos and finally closing the transaction session.
✔ For adding amount 5000 in Ray's account, the same sort of tasks needs
to be done:
Transaction Management Module-5
OpenAccount(Ray)
OldBal = Ray.bal
NewBal = OldBal + 5000
Ray.bal = NewBal
CloseAccount(Ray)
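Taken together, the two halves above form one logical unit of work. A minimal Python sketch of the transfer follows; the account names and the dict standing in for the database are illustrative assumptions.

```python
# In-memory stand-in for the database (illustrative balances).
db = {"Karlos": 10000, "Ray": 2000}

def transfer(src, dst, amount):
    old_src = db[src]            # open account + read old balance
    old_dst = db[dst]
    db[src] = old_src - amount   # debit the sender
    db[dst] = old_dst + amount   # credit the receiver

transfer("Karlos", "Ray", 5000)  # Karlos: 5000, Ray: 7000
```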
5.2 ACID Properties
✔ Detailed Explanation :
5.2.1 Atomicity:
● This property ensures that either all the operations of a transaction reflect in
database or none.
OR
● The 'all or nothing' property.
● A transaction is an indivisible entity that is either performed in its entirety or will not
get performed at all.
● This is the responsibility or duty of the recovery subsystem of the DBMS to ensure
atomicity.
● Example:
❖ We have two accounts A and B, each containing Rs 1000/-.
❖ We now start a transaction to transfer Rs 100/- from account A to account B.
Read A;
A = A – 100;
Write A;
Read B;
B = B + 100;
Write B;
❖ The transaction has 6 instructions to extract the amount from A and deposit it to B.
❖ After the transaction, A will show Rs 900/- and B will show Rs 1100/-.
❖ Problem :
o Now, suppose there is a power failure just after instruction 3 (Write A) has
completed.
o What happens now? After the system recovers, the database will show Rs 900/- in A,
but the same Rs 1000/- in B.
o It would seem that Rs 100/- vanished into thin air because of the power failure.
Clearly such a situation is not acceptable.
❖ Solution :
o It is to keep every value calculated by the instructions of the transaction not in
stable storage (hard disk) but in volatile storage (RAM), until the transaction
completes its last instruction.
o When we see that there has not been any error we do something known as a
COMMIT operation.
o Its job is to write every temporarily calculated value from the volatile storage on to
the stable storage.
o In this way, even if power fails at instruction 3, the post recovery image of the
database will show accounts A and B both containing Rs 1000/-, as if the failed
transaction had never occurred.
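The "compute in volatile storage, flush on COMMIT" idea above can be sketched in a few lines; the two dicts standing in for disk and RAM are assumptions of the example.

```python
stable = {"A": 1000, "B": 1000}   # disk image (stable storage)
volatile = {}                      # temporary values in RAM

def read(x):
    return volatile.get(x, stable[x])

def write(x, value):
    volatile[x] = value            # not yet on disk

def commit():
    stable.update(volatile)        # flush every temporary value to disk
    volatile.clear()

write("A", read("A") - 100)
write("B", read("B") + 100)
# a crash before the next line would leave stable == {"A": 1000, "B": 1000}
commit()                           # now the transfer is durable
```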
5.2.2 Consistency:
● To preserve the consistency of database, the execution of transaction should take
place in isolation (that means no other transaction should run concurrently when there
is a transaction already running).
OR
● A transaction must alter the database from one steady state to another steady state.
● It is the responsibility of both the DBMS and the application developers to
ensure consistency.
● The DBMS can ensure consistency by enforcing all the constraints that have
been specified on the database schema, such as integrity and enterprise constraints.
● Whether we execute a particular transaction in isolation or together with other
transactions (i.e. in a multi-programming environment), the transaction should yield
the same expected result.
● To give better performance, every database management system supports the
execution of multiple transactions at the same time, using CPU Time Sharing.
● Concurrently executing transactions may have to deal with the problem of sharable
resources, i.e. resources that multiple transactions are trying to read/write at the same
time.
● For example:
❖ we may have a table or a record on which two transactions are trying to read or
write at the same time.
❖ Careful mechanisms are created in order to prevent mismanagement of these
sharable resources, so that there should not be any change in the way a transaction
performs.
❖ A transaction which deposits Rs 100/- to account A must deposit the same amount
whether it is acting alone or in conjunction with another transaction that may be
trying to deposit or withdraw some amount at the same time.
5.2.3 Isolation
● For every pair of transactions, one transaction should start execution only when the
other has finished execution.
● In case multiple transactions are executing concurrently and trying to access a
sharable resource at the same time, the system should create an ordering in their
execution so that they do not create any anomaly in the value stored at the
sharable resource.
● There are several ways to achieve this and the most popular one is using some kind of
locking mechanism.
● Again, if you recall the Operating Systems concept of semaphores, you should remember
how a semaphore is used by a process to mark a resource busy before starting to use it,
and how it is used to release the resource after the usage is over.
● Other processes intending to access that same resource must wait during this time. Locking
is almost similar.
● It states that a transaction must first lock the data item that it wishes to access, and release
the lock when the accessing is no longer required.
● Once a transaction locks the data item, other transactions wishing to access the same data
item must wait until the lock is released.
5.2.4 Durability:
● Once a transaction completes successfully, the changes it has made into the database
should be permanent even if there is a system failure.
● The recovery-management component of database systems ensures the durability of
transaction.
OR
● The effects of a successfully accomplished transaction are permanently recorded in
the database and must not get lost or vanished due to a subsequent failure.
● So this becomes the responsibility of the recovery sub-system to ensure durability.
● As we have seen in the explanation of the Atomicity property, the transaction, if
completes successfully, is committed.
● Once the COMMIT is done, the changes which the transaction has made to the
database become permanent and cannot be lost.
✔ There are the following six states in which a transaction may exist:
(i) Active: The initial state when the transaction has just started execution.
(ii) Partially Committed: At any given point of time, if the transaction is executing
properly, then it is moving towards its COMMIT POINT. The values generated during
the execution are all stored in volatile storage.
(iii) Failed: If the transaction fails for some reason. The temporary values are no longer
required, and the transaction is set to ROLLBACK. It means that any change made
to the database by this transaction up to the point of the failure must be undone. If
the failed transaction has withdrawn Rs. 100/- from account A, then the
ROLLBACK operation should add Rs 100/- to account A.
(iv) Aborted: When the ROLLBACK operation is over, the database reaches the BFIM
(the Before Image, i.e. the state that existed before the transaction started).
The transaction is now said to have been aborted.
(v) Committed: If no failure occurs then the transaction reaches the COMMIT POINT.
All the temporary values are written to the stable storage and the transaction is said
to have been committed.
(vi) Terminated: Either committed or aborted, the transaction finally reaches this state.
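The six states and their legal transitions can be sketched as a small table; the transition mapping below is distilled from the list above, and the state names are illustrative identifiers.

```python
# Legal transitions of the transaction life cycle.
TRANSITIONS = {
    "active":              {"partially_committed", "failed"},
    "partially_committed": {"committed", "failed"},
    "failed":              {"aborted"},
    "committed":           {"terminated"},
    "aborted":             {"terminated"},
    "terminated":          set(),
}

def step(state, nxt):
    """Move to the next state, rejecting transitions the model does not allow."""
    if nxt not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {nxt}")
    return nxt

# A successful run: active -> partially committed -> committed -> terminated.
s = "active"
for nxt in ("partially_committed", "committed", "terminated"):
    s = step(s, nxt)
```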
● If a faulty transfer program always credits the second amount with one dollar less
than the amount debited from the first account, the DBMS cannot be expected to
detect inconsistencies due to such errors in the user program’s logic.
5.4.2 Isolation:
✔ This property is ensured by guaranteeing that, even though the actions of several
transactions might be interleaved, the net effect is identical to executing all
transactions one after the other in some serial order.
✔ Example:
● If two transactions T1 and T2 are executed concurrently, the net effect is guaranteed
to be equivalent to executing T1 followed by executing T2 or executing T2
followed by executing T1
Database consistency: It is the property that every transaction sees a consistent database
instance. Database consistency follows from transaction atomicity, isolation and
transaction consistency.
● This scheme is based on making copies of the database, called shadow copies, and
assumes that only one transaction is active at a time.
● The scheme also assumes that the database is simply a file on disk.
● A pointer called db-pointer is maintained on disk; it points to the current copy
of the database.
● If the transaction completes, it is committed as follows:
❖ First, the operating system is asked to make sure that all pages of the new
copy of the database have been written out to disk. (Unix systems use the
flush command for this purpose.)
❖ After the operating system has written all the pages to disk, the database
system updates the pointer db-pointer to point to the new copy of the
database; the new copy then becomes the current copy of the database.
❖ The old copy of the database is then deleted.
● Figure below depicts the scheme, showing the database state before and after the
update.
● The transaction is said to have been committed at the point where the updated
db pointer is written to disk.
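The commit sequence above can be sketched roughly as follows. The file names (db.new, db-pointer) are illustrative, and os.replace stands in for the atomic pointer update; this is a sketch of the idea, not a real storage engine.

```python
import os
import tempfile

def shadow_commit(dbdir, new_contents):
    # 1. Write the new copy of the database and force it to disk first.
    new_copy = os.path.join(dbdir, "db.new")
    with open(new_copy, "w") as f:
        f.write(new_contents)
        f.flush()
        os.fsync(f.fileno())
    # 2. Atomically repoint db-pointer at the new copy; this write is
    #    the commit point of the transaction.
    pointer = os.path.join(dbdir, "db-pointer")
    tmp = pointer + ".tmp"
    with open(tmp, "w") as f:
        f.write("db.new")
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, pointer)

d = tempfile.mkdtemp()
shadow_commit(d, "A=900,B=1100")
```

A real implementation would also delete the old copy afterwards, as the notes describe.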
Durability:
✔ The log is also used to ensure durability: if the system crashes before the changes
made by a completed transaction are written to disk, the log is used to remember and
restore these changes when the system restarts.
5.7.2 Schedule:
Classification :
(i) Serial schedule:
✔ Transactions are not interleaved; that is, transactions are executed from start to
finish, one after another.
✔ Serial :
● In Serial schedule, there is no question of sharing a single data item among many
transactions, because not more than a single transaction is executing at any point
of time.
● However, a serial schedule is inefficient in the sense that the transactions suffer
from longer waiting times and response times, as well as low resource utilization.
✔ Concurrent :
● In concurrent schedule, CPU time is shared among two or more transactions in
order to run them concurrently.
● However, this creates the possibility that more than one transaction may need to
access a single data item for read/write purpose and the database could contain
inconsistent value if such accesses are not handled properly.
T1
Read A;
A = A – 100;
Write A;
Read B;
B = B + 100;
Write B;
T2
Read A;
Temp = A * 0.1;
Read C;
C = C + Temp;
Write C;
● T2 is a new transaction which deposits to account C 10% of the amount in account A.
● If we prepare a serial schedule, then either T1 will completely finish before T2 can
begin, or T2 will completely finish before T1 can begin.
● However, if we want to create a concurrent schedule, then some Context Switching
needs to be made, so that some portion of T1 is executed, then some portion of T2,
and so on.
● For example, say we have prepared the following concurrent schedule.
T1                          T2
Read A;
A = A – 100;
Write A;
                            Read A;
                            Temp = A * 0.1;
                            Read C;
                            C = C + Temp;
                            Write C;
Read B;
B = B + 100;
Write B;
● No problem here.
● We have made some Context Switching in this schedule: the first one after executing
the third instruction of T1, and the second one after executing the last statement of T2.
● T1 first deducts Rs 100/- from A and writes the new value of Rs 900/- into A.
● T2 reads the value of A, calculates the value of Temp to be Rs 90/- and adds the value
to C.
● The remaining part of T1 is executed and Rs 100/- is added to B.
● It is clear that a proper Context Switching is very important in order to maintain the
Consistency and Isolation properties of the transactions.
● But let us take another example where a wrong Context Switching can bring about
disaster.
T1                          T2
Read A;
A = A – 100;
                            Read A;
                            Temp = A * 0.1;
                            Read C;
                            C = C + Temp;
                            Write C;
Write A;
Read B;
B = B + 100;
Write B;
● This schedule is wrong, because we have made the switching at the second instruction
of T1.
● The result is very confusing.
● If we consider accounts A and B both containing Rs 1000/- each, then the result of this
schedule should have left Rs 900/- in A, Rs 1100/- in B and add Rs 90 in C (as C should
be increased by 10% of the amount in A).
● But in this wrong schedule, the Context Switching is being performed before the new
value of Rs 900/- has been updated in A.
● T2 reads the old value of A, which is still Rs 1000/-, and deposits Rs 100/- in C.
● C makes an unjust gain of Rs 10/- out of nowhere.
● In the above example, we detected the error simply by examining the schedule and
applying common sense.
● But there must be some well-formed rules regarding how to arrange instructions of the
transactions to create error free concurrent schedules.
● So, we go for the concept of Serializability.
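The two interleavings above can be replayed in a few lines of Python to confirm the outcomes; the starting balances A = B = Rs 1000/- and C = 0 are taken from the example.

```python
def good_schedule():
    A, B, C = 1000, 1000, 0
    A = A - 100        # T1: debit A and Write A
    temp = A * 0.1     # T2 reads the already-written value 900
    C = C + temp       # T2 deposits Rs 90/- in C
    B = B + 100        # remaining part of T1
    return A, B, C

def bad_schedule():
    A, B, C = 1000, 1000, 0
    new_A = A - 100    # T1 computes, but Write A has NOT happened yet
    temp = A * 0.1     # T2 reads the stale value 1000
    C = C + temp       # T2 deposits Rs 100/- in C (unjust gain of Rs 10/-)
    A = new_A          # T1: Write A
    B = B + 100
    return A, B, C
```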
5.8.2 Serializability
✔ A serializable schedule over a set S of committed transactions is a schedule whose effect
on any consistent database instance is guaranteed to be identical to that of some complete
serial schedule over S.
✔ The database instance that results from executing the given schedule is identical to the
database instance that results from executing the transactions in some serial order
● SQL gives application programmers the ability to instruct the DBMS to choose non-
serializable schedules.
1. If two instructions of the two concurrent transactions are both for read operation, then they
are not in conflict, and can be allowed to take place in any order.
2. If one of the instructions wants to perform a read operation and the other instruction wants
to perform a write operation, then they are in conflict, hence their ordering is important. If
the read instruction is performed first, then it reads the old value of the data item, and after
the reading is over, the new value of the data item is written. If the write instruction is
performed first, then it updates the data item with the new value, and the read instruction
reads the newly updated value.
3. If both the instructions are write operations, then they are in conflict; however, they can
be allowed to take place in any order, because the transactions do not read the value updated
by each other. However, the value that persists in the data item after the schedule is over is
the one written by the instruction that performed the last write.
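Rules 1-3 reduce to a simple predicate: two operations of different transactions conflict iff they touch the same data item and at least one of them is a write. A sketch (the tuple layout is an assumption of the example):

```python
def in_conflict(op1, op2):
    """op = (transaction, action, item); action is 'read' or 'write'."""
    t1, action1, item1 = op1
    t2, action2, item2 = op2
    return (t1 != t2 and item1 == item2
            and "write" in (action1, action2))

assert not in_conflict(("T1", "read", "A"), ("T2", "read", "A"))   # rule 1
assert in_conflict(("T1", "read", "A"), ("T2", "write", "A"))      # rule 2
assert in_conflict(("T1", "write", "A"), ("T2", "write", "A"))     # rule 3
```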
✔ It may happen that we want to execute the same set of transactions in a different
schedule on another day.
✔ Keeping in mind these rules, we may sometimes alter parts of one schedule (S1) to create
another schedule (S2) by swapping only the non-conflicting parts of the first schedule.
✔ The conflicting parts cannot be swapped in this way because the ordering of the conflicting
instructions is important and cannot be changed in any other schedule that is derived from
the first.
✔ If these two schedules are made of the same set of transactions, then both S1 and S2 would
yield the same result if the conflict resolution rules are maintained while creating the new
schedule.
✔ In that case the schedule S1 and S2 would be called Conflict Equivalent.
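Conflict equivalence to some serial schedule can be tested with a precedence graph: add an edge Ti -> Tj whenever an operation of Ti conflicts with a later operation of Tj; the schedule is conflict-serializable iff the graph is acyclic. A rough sketch, not a full implementation; a schedule here is an assumed list of (transaction, action, item) tuples.

```python
def conflict_serializable(schedule):
    # Build the precedence graph from conflicting operation pairs.
    graph = {}
    for i, (ti, a1, x1) in enumerate(schedule):
        for tj, a2, x2 in schedule[i + 1:]:
            if ti != tj and x1 == x2 and "write" in (a1, a2):
                graph.setdefault(ti, set()).add(tj)

    # Depth-first search for a cycle.
    def has_cycle(node, stack, done):
        if node in stack:
            return True
        if node in done:
            return False
        stack.add(node)
        if any(has_cycle(v, stack, done) for v in graph.get(node, ())):
            return True
        stack.remove(node)
        done.add(node)
        return False

    done = set()
    return not any(has_cycle(n, set(), done) for n in list(graph))
```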
✔ Two schedules S1 and S2 are said to be View Equivalent if the following three
conditions hold:
1. If in S1, T1 reads the initial value of the data item, then in S2 also, T1 should read the
initial value of that same data item.
2. If in S1, T1 writes a value in the data item which is read by T2, then in S2 also, T1
should write the value in the data item before T2 reads it.
3. If in S1, T1 performs the final write operation on that data item, then in S2
also, T1 should perform the final write operation on that data item.
✔ Except in these three cases, any alteration can be possible while creating S2 by
modifying S1.
✔ Example :
● Suppose A is the number of available copies of the book.
● A transaction that places an order first reads A, checks that it is greater than 0, and then
decrements it.
● Transaction T1 reads A and sees the value 1.
● Transaction T2 reads A and sees the value 1, decrements A to 0, and commits.
● Transaction T1 then tries to decrement A and gets an error, because the check it made
earlier no longer holds.
➢ If T1 tries to read the value of A again, it will get a different result, even though it
has not modified A in the meantime. This is known as an unrepeatable read.
✔ Note that neither transaction reads a salary value before writing it; such a write is
called a blind write.
✔ Consider the following interleaving of the actions of T1 and T2: T2 sets Harry's salary
to $1000, T1 sets Larry's salary to $2000, T2 sets Larry's salary to $1000 and commits,
and finally T1 sets Harry's salary to $2000 and commits.
✔ The result is not identical to the result of either of the two possible serial executions,
and the interleaved schedule is therefore not serializable.
✔ It violates the desired consistency criterion that the two salaries must be equal. This
anomaly is called the Lost Update problem.
✔ If T2 had not yet committed, we could deal with the situation by cascading the abort of
T1 and also aborting T2; this process recursively aborts any transaction that read data
written by T2 and so on.
✔ But T2 has already committed, and so we cannot undo its actions. We say that such a
schedule is Unrecoverable.
✔ Recoverable Schedule: transactions commit only after all transactions whose changes
they read commit.
✔ Avoid Cascading Aborts: if transactions read only the changes of committed transactions,
not only is the schedule recoverable, but aborting a transaction can also be accomplished
without cascading the abort to other transactions.
✔ Another problem arises in undoing the actions of a transaction: suppose that a transaction
T2 overwrites the value of an object A that has been modified by a transaction T1, while
T1 is still in progress, and T1 subsequently aborts.
✔ All of T1’s changes to database objects are undone by restoring the value of any object
that it modified to the value the object had before T1’s changes.
✔ When T1 is aborted and its changes are undone in this manner, T2’s changes are lost even
if T2 decides to commit.
✔ Example:
❖ If A originally had the value 5, then was changed by T1 to 6, and by T2 to 7, if T1
now aborts, the value of A becomes 5 again.
❖ Even if T2 commits, its change to A is inadvertently lost.
5.9.1 Strict Two-Phase Locking (Strict 2PL):
✔ Rule 1: Each transaction must obtain an S (shared) lock on an object before reading,
and an X (exclusive) lock on an object before writing.
✔ Rule 2: All locks held by a transaction are released when the transaction completes.
❖ A transaction that has an exclusive lock can also read the object.
❖ A transaction that requests a lock is suspended until the DBMS is able to grant it the
requested lock.
✔ The DBMS keeps track of the locks it has granted and ensures that if a transaction
holds an exclusive lock on an object, no other transaction holds a shared or exclusive
lock on the same object.
✔ Requests to acquire and release locks can be automatically inserted into transactions
by the DBMS; users need not worry about these details.
✔ In effect, the locking protocol allows only safe interleaving of transactions.
✔ If two transactions access completely independent parts of the database, they
concurrently obtain the locks they need and proceed merrily on their ways.
✔ On the other hand, if two transactions access the same object and one wants to modify
it, their actions are effectively ordered serially: all actions of one of these transactions
are completed before the other transaction can proceed.
Example:
● T1 would obtain an exclusive lock on A first and then read and write A.
● Then T2 would request a lock on A.
● However, this request cannot be granted until T1 releases its exclusive lock on A, and
the DBMS therefore suspends T2.
● T1 now proceeds to obtain an exclusive lock on B, reads and writes B, and finally
commits, at which time its locks are released.
● T2’s lock request is now granted, and it proceeds.
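The lock-compatibility behaviour described above can be sketched as a toy lock table. Suspension is modelled by returning False instead of queueing the request, and the data layout is an assumption of the sketch.

```python
locks = {}   # object -> (mode, set of holding transactions)

def acquire(txn, obj, mode):
    """mode is 'S' or 'X'; returns False when the caller would have to wait."""
    held = locks.get(obj)
    if held is None:
        locks[obj] = (mode, {txn})
        return True
    held_mode, holders = held
    if mode == "S" and held_mode == "S":
        holders.add(txn)               # shared locks are compatible
        return True
    if holders == {txn}:               # re-acquire or upgrade by the sole holder
        locks[obj] = ("X" if mode == "X" else held_mode, holders)
        return True
    return False                       # conflicting lock: caller must wait

def release(txn, obj):
    mode, holders = locks[obj]
    holders.discard(txn)
    if not holders:
        del locks[obj]
```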
5.9.2 Deadlocks
✔ Consider the following example
✔ Transaction T1 sets an exclusive lock on object A, T2 sets an exclusive lock on B, T1
requests an exclusive lock on B and is queued, and T2 requests an exclusive lock on A
and is queued.
✔ Now, T1 is waiting for T2 to release its lock and T2 is waiting for T1 to release its lock.
Such a cycle of transactions waiting for locks to be released is called a deadlock.
✔ Clearly, these two transactions will make no further progress. Worse, they hold locks
that may be required by other transactions.
✔ The DBMS must either prevent or detect (and resolve) such deadlock situations.
✔ Deadlock Prevention
● We can prevent deadlocks by giving each transaction a priority and ensuring that
lower priority transactions are not allowed to wait for higher priority transactions
(or vice versa).
● One way to assign priorities is to give each transaction a timestamp when it starts
up.
● The lower the timestamp, the higher the transaction's priority, that is, the oldest
transaction has the highest priority.
● If a transaction Ti requests a lock and transaction Tj holds a conflicting lock, the
lock manager can use one of the following two policies:
o Wait-die: If Ti has higher priority, it is allowed to wait; otherwise it is
aborted.
o Wound-wait: If Ti has higher priority, abort Tj; otherwise Ti waits.
● In the wait-die scheme, lower priority transactions can never wait for higher
priority transactions.
● In the wound-wait scheme, higher priority transactions never wait for lower
priority transactions. In either case no deadlock cycle can develop.
● Transactions with lower timestamp values have higher priority; this ensures
that the oldest transaction will eventually get all the locks it requires.
● The wait-die scheme is non-preemptive; only a transaction requesting a lock can
be aborted.
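The two policies can be written as small decision functions; lower timestamp means higher priority, as stated above.

```python
def wait_die(ts_requester, ts_holder):
    # An older (higher-priority) requester may wait; a younger one dies.
    return "wait" if ts_requester < ts_holder else "abort requester"

def wound_wait(ts_requester, ts_holder):
    # An older requester wounds (aborts) the younger holder; a younger one waits.
    return "abort holder" if ts_requester < ts_holder else "wait"

assert wait_die(1, 2) == "wait"             # older Ti is allowed to wait
assert wait_die(2, 1) == "abort requester"  # younger Ti dies
assert wound_wait(1, 2) == "abort holder"   # older Ti wounds Tj
assert wound_wait(2, 1) == "wait"           # younger Ti waits
```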
✔ Deadlock Detection
● Deadlocks tend to be rare and typically involve very few transactions.
● This observation suggests that rather than taking measures to prevent deadlocks, it
may be better to detect and resolve deadlocks as they arise.
● In the detection approach, the DBMS must periodically check for deadlocks.
✔ If this request can now be granted, the transaction that made the request is woken
up and given the lock.
✔ Indeed, if there are several requests for a shared lock on the object at the front of the
queue, all of these requests can now be granted together.
✔ Note that if T1 has a shared lock on O, and T2 requests an exclusive lock, T2's request
is queued.
✔ Now, if T3 requests a shared lock, its request enters the queue behind that of T2, even
though the requested lock is compatible with the lock held by T1.
✔ This rule ensures that T2 does not starve, that is, wait indefinitely while a stream of
other transactions acquire shared locks and thereby prevent T2 from getting the
exclusive lock that it is waiting for.
✔ If, indeed, there are few conflicts, and validation can be done efficiently, this approach
should lead to better performance than locking does.
✔ If there are many conflicts, the cost of repeatedly restarting transactions (thereby
wasting the work they've done) will hurt performance significantly.
✔ Each transaction Ti is assigned a timestamp TS(Ti) at the beginning of its validation
phase, and the validation criterion checks whether the timestamp-ordering of
transactions is an equivalent serial order.
✔ For every pair of transactions Ti and Tj such that TS(Ti) < TS(Tj), one of the
following conditions must hold:
1. Ti completes (all three phases) before Tj begins; or
2. Ti completes before Tj starts its Write phase, and Ti does not write any database
object that is read by Tj; or
3. Ti completes its Read phase before Tj completes its Read phase, and Ti does not write
any database object that is either read or written by Tj.
✔ To validate Tj, we must check to see that one of these conditions holds with respect
to each committed transaction Ti such that TS(Ti) < TS(Tj).
✔ Each of these conditions ensures that Tj's modifications are not visible to Ti.
o The first condition allows Tj to see some of Ti's changes, but clearly, they execute
completely in serial order with respect to each other.
o The second condition allows Tj to read objects while Ti is still modifying objects,
but there is no conflict because Tj does not read any object modified by Ti.
Although Tj might overwrite some objects written by Ti, all of Ti's writes
precede all of Tj's writes.
o The third condition allows Ti and Tj to write objects at the same time, and thus
have even more overlap in time than the second condition, but the sets of objects
written by the two transactions cannot overlap. Thus, no RW, WR, or WW
conflicts are possible if any of these three conditions is met.
✔ Checking these validation criteria requires us to maintain lists of objects read and
written by each transaction.
✔ The locking overheads of lock-based approaches are replaced with the overheads
of recording read-lists and write-lists for transactions, checking for conflicts, and
copying changes from the private workspace.
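The three validation conditions can be sketched as set checks over each transaction's read and write sets; the dictionary layout and phase-boundary fields below are assumptions of the sketch.

```python
def validate(ti, tj):
    """Check whether Ti (with TS(Ti) < TS(Tj)) passes one of the three tests.

    Each transaction is an assumed dict with read_set / write_set plus
    timestamps for its phase boundaries: start, read_end, write_start, end.
    """
    if ti["end"] <= tj["start"]:                   # 1: Ti finishes before Tj begins
        return True
    if (ti["end"] <= tj["write_start"]
            and not ti["write_set"] & tj["read_set"]):       # 2: no WR overlap
        return True
    if (ti["read_end"] <= tj["read_end"]
            and not ti["write_set"]
            & (tj["read_set"] | tj["write_set"])):           # 3: no RW/WW overlap
        return True
    return False
```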
Timestamps
● With each transaction Ti in the system, we associate a unique fixed timestamp,
denoted by TS(Ti).
● This timestamp is assigned by the database system before the transaction Ti starts
execution.
● If a transaction Ti has been assigned timestamp TS(Ti), and a new transaction Tj
enters the system, then TS(Ti) < TS(Tj).
Crash Recovery
● Transactions (or units of work) against a database can be interrupted unexpectedly.
● If a failure occurs before all of the changes are completed, committed, and written
to disk, the database is left in an inconsistent and unusable state.
● Crash recovery is the process by which the database is moved back to a consistent and
usable state.
● This is done by rolling back incomplete transactions and completing committed
transactions that were still in memory when the crash occurred (Figure 1).
1. ANALYSIS
During the Analysis phase, the log is scanned forward from the last checkpoint to identify
the transactions that were active and the pages that were dirty at the time of the crash.
2. REDO
During the REDO phase, the log is scanned forward from the earliest update that may need
REDO, and REDO operations are applied until the end of the log is reached.
In addition, information stored by ARIES and in the data pages will allow ARIES to
determine whether the operation to be redone has actually been applied to the database
and hence need not be reapplied. Thus only the necessary REDO operations are applied
during recovery.
3. UNDO
During the UNDO phase, the log is scanned backwards and the operations of
transactions that were active at the time of the crash are undone in reverse order. The
information needed for ARIES to accomplish its recovery procedure includes the log,
the Transaction Table, and the Dirty Page Table.
Before describing this topic, we need to explain some concepts:
1. Log sequence number
Every log record is identified by a log sequence number (LSN), a pointer that indicates
the record's position in the log.
2. Dirty page table
It lists the pages whose updated versions are placed in main memory while the disk
versions are not yet updated.
This table is maintained because it is useful in reducing unnecessary REDO operations.
3. Fuzzy checkpoints
A new type of checkpoint, the fuzzy checkpoint, allows the system to resume processing
new transactions as soon as the checkpoint record has been written to the log, without
waiting for all modified pages to be written to the database on disk.
Media Recovery
● When a database object such as a file or a page is corrupted, a backup copy of it is
brought up-to-date by using the log.
● Media recovery requires a control file, data files (typically restored from backup), and
online and archived redo log files containing changes since the time the data files were
backed up. Media recovery is most often used to recover from media failure, such as
the loss of a file or disk, or a user error, such as the deletion of the contents of a table.
The recovery process proceeds in three phases:
1) Analysis
2) Redo
3) Undo
Undoing – If a transaction crashes, then the recovery manager may undo transactions i.e.
reverse the operations of a transaction.
Deferred update – This technique does not physically update the database on disk until a
transaction has reached its commit point. Before reaching commit, all transaction updates are
recorded in the local transaction workspace. If a transaction fails before reaching its commit
point, it will not have changed the database in any way so UNDO is not needed.
Immediate update – In the immediate update, the database may be updated by some
operations of a transaction before the transaction reaches its commit point.
Shadow Paging is a recovery technique that is used to recover databases. In this recovery
technique, a database is considered to be made up of fixed-size logical units of storage,
referred to as pages. Pages are mapped into physical blocks of storage with the help of a
page table, which has one entry for each logical page of the database.
● Full database backup – In this, the full database, including the data and the metadata
needed to restore the whole database (including full-text catalogs), is backed up at
predefined intervals.
● Differential backup – It stores only the data changes that have occurred since the last
full database backup. When the same data has changed many times since the last full
database backup, a differential backup stores only the most recent version of the changed
data. To restore it, we first need to restore the last full database backup.
● Transaction log backup – In this, all events that have occurred in the database, like a
record of every single statement executed, are backed up. It is the backup of the
transaction log entries and contains all transactions that have happened to the database.
Through this, the database can be recovered to a specific point in time. It is even
possible to perform a backup from the transaction log if the data files are destroyed,
and not even a single committed transaction is lost.