
Department of CE-AI
DBMS: Database Management System
DMS (01CE1302)
Unit No. 6: Transaction Processing and Management
Dr. Madhu Shukla
What is a Transaction?
● English meaning of "transaction": the transfer of an amount or property from one person to another.
Basic Concepts
● Transferring funds from one of a customer's accounts to another of the customer's accounts appears to be a single operation.
● However, it consists of several operations. It is essential that either all of these operations occur or, in case of failure, none occur.
● Think: would it be acceptable if the source account were debited but the destination account were not credited?
● The DBMS must ensure proper execution of transactions despite failures.
● It must also manage concurrent execution of transactions to avoid inconsistency.
● A transaction is a unit of program execution that accesses and possibly updates various data items.
● A program contains statements of the form begin transaction and end transaction.
● All operations of the transaction lie between 'begin' and 'end'.
What It Is?
● A collection of operations that forms a single logical unit of work is called a transaction.

Transaction as a Set of Operations
[Diagram: a transaction shown as a set of operations, Operation-1 through Operation-5.]
Operations in Transactions
● Two operations:
1) read(X): transfers the data item X from the database to a local buffer belonging to the transaction that executed the read operation.
2) write(X): transfers the data item X from the local buffer of the transaction that executed the write back to the database.
Transaction
● A transaction is a unit of program execution that accesses and possibly updates various data items.
● Example: a transaction to transfer $50 from account A to account B. The operations work as a single unit:
1. read(A);
2. A := A - 50;
3. write(A);
4. read(B);
5. B := B + 50;
6. write(B);
Transactions
[Diagram: the transaction shown as a set of operations: read(A); A := A - 50; write(A); read(B); B := B + 50; write(B).]
ACID Properties
A = Atomicity
C = Consistency
I = Isolation
D = Durability
ACID Properties
To ensure integrity of the data, the DBMS should maintain the following (ACID) properties:
● Atomicity: a transaction executes either 0% or 100%.
● Consistency: execution of a transaction in isolation (that is, with no other transaction executing concurrently) preserves the consistency of the database.
● Isolation: each transaction is unaware of other transactions executing concurrently in the system.
● Durability: after a transaction completes successfully, the changes it has made to the database persist, even if there are system failures.
ACID Properties
Atomicity
● Either all operations of the transaction are properly reflected in the database or none are.
● That simply means the transaction occurs either 0% or 100%.
● Example: if person A transfers Rs 5000 to person B, then the amount deducted from A's account must be added to B's account.
ACID Properties
Consistency
● The database must remain in a consistent state after any transaction.
● No transaction should have any adverse effect on the data residing in the database.
● If the database was in a consistent state before the execution of a transaction, it must remain consistent after the execution of the transaction as well.
● The consistency requirement here is that the sum of A and B be unchanged by the execution of the transaction.
Before the transaction: A = 1000, B = 800, so A + B = 1800.
After the transaction: A = 950, B = 850, so A + B = 1800.
ACID Properties
Isolation
● This property ensures that multiple transactions can occur concurrently without leading to inconsistency of the database state.
● Transactions occur independently, without interference.
● Changes made by a particular transaction are not visible to any other transaction until the change has been written to memory or committed.
ACID Properties
Durability
● This property ensures that once the transaction has completed execution, the updates and modifications to the database are written to disk, and they persist even if a system failure occurs.
● These updates become permanent and are stored in non-volatile memory.
● The effects of the transaction, thus, are never lost.
● Ensuring durability is the responsibility of a component called the recovery-management component.
● Log: the log file helps us roll back the operations of a failed transaction.
[Diagram: transaction completed → changes persist in the database; operations recorded in the log file.]
Facts about Transactions
● A transaction can be executed as a single unit.
● If the database operations do not update the database but only retrieve data, this type of transaction is called a read-only transaction.
● A transaction can be successful or unsuccessful.
Successful transaction: committed.
Unsuccessful transaction: aborted.
Example
Transfer $50 from account A to account B.
Ti: begin transaction
      read(A)
      A := A - 50
      write(A)
      read(B)
      B := B + 50
      write(B)
    end transaction
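As a concrete illustration (not from the slides), here is a minimal Python sketch of the same transfer using sqlite3. The table name `account` and its columns are assumptions chosen for the example; the point is that the commit makes the whole unit durable and a failure rolls both updates back, preserving atomicity.

```python
import sqlite3

def transfer(conn: sqlite3.Connection, src: str, dst: str, amount: float) -> None:
    """Move `amount` from account `src` to account `dst` as one atomic unit."""
    try:
        cur = conn.cursor()
        # read(A); A := A - 50; write(A)
        cur.execute("UPDATE account SET balance = balance - ? WHERE name = ?", (amount, src))
        # read(B); B := B + 50; write(B)
        cur.execute("UPDATE account SET balance = balance + ? WHERE name = ?", (amount, dst))
        conn.commit()          # transaction committed: changes persist (durability)
    except Exception:
        conn.rollback()        # failure: neither update is applied (atomicity)
        raise

# Example setup (hypothetical schema) and usage:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 100.0), ("B", 200.0)])
conn.commit()
transfer(conn, "A", "B", 50.0)
print(conn.execute("SELECT name, balance FROM account ORDER BY name").fetchall())
```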
Transaction State Diagram
[Diagram: states Active → Partially Committed → Committed, and Active → Failed → Aborted.]
Transaction State
● Not all transactions complete successfully.
⮚ The ones that fail are known as aborted.
⮚ For an aborted/failed transaction, when its data changes are undone, the transaction is said to be rolled back.
⮚ When a transaction completes successfully and its data is saved, the transaction is said to be committed.
Transaction State
Active:
● This is the initial state.
● A transaction stays in this state while it is executing.
● A transaction enters the active state when the execution process begins. During this state, read or write operations can be performed.
Transaction State
Partially Committed:
● When a transaction executes its final operation, it is said to be in the partially committed state.
● A transaction goes into the partially committed state after the end of the transaction.
● At this point failure is still possible, since the changes may have been made only in main memory; a hardware failure could still occur.
● The DBMS needs to write out enough information to disk so that, in case of a failure, the system can re-create the updates performed by the transaction once the system is brought back up.
● After it has written out all the necessary information, the transaction is committed.
Transaction State
Failed:
● The system discovers that normal execution can no longer proceed.
● A transaction is considered failed when any one of the checks fails or if the transaction is aborted while it is in the active state.
● Once a transaction cannot be completed, any changes it made must be undone by rolling it back.
Committed:
● The transaction enters this state after successful completion of the transaction.
● We cannot abort or roll back a committed transaction.
● Moreover, all of its changes are recorded in the database permanently.
Transaction State
Aborted:
● This is the state after the transaction has been rolled back and the database has been restored to its state prior to the start of the transaction.
● If any of the checks fails and the transaction has reached the failed state, the recovery manager rolls back all its write operations on the database to bring the database back to its original state, i.e. the state prior to the execution of the transaction. Transactions in this state are called aborted.
● The database recovery module can select one of two operations after a transaction aborts:
1) Restart the transaction
2) Kill the transaction
Implementation of Atomicity and Durability (Shadow Copy)
Implementation of Atomicity and Durability
● The recovery-management component of a database system implements the support for atomicity and durability by a variety of schemes.
● First we consider a simple, but extremely inefficient, scheme called the shadow copy scheme.
● E.g. the shadow-database scheme:
  ● All updates are made on a shadow copy of the database.
  ● db_pointer is made to point to the updated shadow copy only after
    ● the transaction reaches partial commit, and
    ● all updated pages have been flushed to disk.
● db_pointer always points to the current consistent copy of the database.
● In case the transaction fails, the old consistent copy pointed to by db_pointer can be used, and the shadow copy can be deleted.
Implementation of Atomicity and Durability
● The shadow-database scheme:
  ● Assumes that only one transaction is active at a time.
  ● Assumes disks do not fail.
  ● Useful for text editors, but
    ● extremely inefficient for large databases (why?)
    ● a variant called shadow paging reduces copying of data, but is still not practical for large databases.
  ● Does not handle concurrent transactions.
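A minimal sketch of the pointer-swap idea, using ordinary files and JSON for the "database" (these choices, the db_pointer file name, and the helper names are assumptions for illustration, not how a real DBMS stores pages):

```python
import json, os, tempfile

DB_POINTER = "db_pointer"      # hypothetical file whose contents name the current copy

def read_db() -> dict:
    """Follow db_pointer to the current consistent copy and load it."""
    with open(DB_POINTER) as f:
        current = f.read().strip()
    with open(current) as f:
        return json.load(f)

def run_transaction(update) -> None:
    """Apply `update` to a shadow copy, flush it, then swing db_pointer atomically."""
    db = read_db()
    update(db)                                  # all updates go to the shadow copy
    shadow = tempfile.NamedTemporaryFile("w", delete=False, dir=".", suffix=".copy")
    json.dump(db, shadow)
    shadow.flush(); os.fsync(shadow.fileno())   # updated pages flushed to disk
    shadow.close()
    tmp_ptr = DB_POINTER + ".tmp"
    with open(tmp_ptr, "w") as f:
        f.write(shadow.name); f.flush(); os.fsync(f.fileno())
    os.replace(tmp_ptr, DB_POINTER)             # commit point: pointer now names the new copy

def transfer_50(db):
    db["A"] -= 50
    db["B"] += 50

# Setup (hypothetical): an initial copy and a pointer to it.
with open("copy0.json", "w") as f:
    json.dump({"A": 100, "B": 200}, f)
with open(DB_POINTER, "w") as f:
    f.write("copy0.json")

run_transaction(transfer_50)
print(read_db())
```

If the process crashes before the final os.replace, db_pointer still names the old consistent copy, so the half-finished shadow copy is simply ignored.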
Concurrent Execution
● Usually the system allows multiple transactions to run concurrently.
● This concurrency causes several complications.
● There are two good reasons to allow concurrency:
1) Improved throughput and resource utilization
2) Reduced waiting time
Concurrent Execution
1) Improved throughput and resource utilization
● Concurrency leads to better transaction throughput: one transaction can be using the CPU while another is reading from or writing to the disk.
● Throughput: the number of transactions executed in a given amount of time.
● Correspondingly, processor and disk utilization also increase.
● E.g. with CPU multitasking: more utilization, less idle time.
Concurrent Execution
2) Reduced waiting time
● Reduced average response time for transactions: short transactions need not wait behind long ones.
● Average response time: the average time for a transaction to be completed after it has been submitted.
Concurrency control schemes:
Mechanisms to achieve isolation, i.e., to control the interaction among the concurrent transactions in order to prevent them from destroying the consistency of the database.
Schedule
● A schedule is the chronological (sequential) order in which instructions are executed in a system.
● A schedule for a set of transactions must consist of all the instructions of those transactions and must preserve the order in which the instructions appear in each individual transaction.
● A schedule is required in a database because when some transactions execute in parallel, they may affect each other's results.
Schedule
● Schedule: a sequence of instructions that specifies the chronological order in which instructions of concurrent transactions are executed.
  ● A schedule for a set of transactions must consist of all instructions of those transactions.
  ● It must preserve the order in which the instructions appear in each individual transaction.
● A transaction that successfully completes its execution will have a commit instruction as its last statement.
  ● By default a transaction is assumed to execute a commit instruction as its last step.
● A transaction that fails to successfully complete its execution will have an abort instruction as its last statement.
Schedule 1
● Let T1 transfer $50 from A to B, and T2 transfer 10% of the balance from A to B.
● A serial schedule in which T1 is followed by T2.
Try it for A = 100, B = 200.
Before execution: A + B = 300.
After execution: A + B = ?
Schedule 2
● A serial schedule in which T2 is followed by T1.
Try it for A = 100, B = 200.
Before execution: A + B = 300.
After execution: A + B = ?
Schedule 3
● Let T1 and T2 be the transactions defined previously. The following schedule is not a serial schedule, but it is equivalent to Schedule 1.
Try it for A = 100, B = 200.
Before execution: A + B = 300.
After execution: A + B = ?

Schedule 4
● In Schedules 1, 2 and 3, the sum A + B is preserved.
● This concurrent schedule does not preserve the value of (A + B).
Try it for A = 100, B = 200.
Before execution: A + B = 300.
After execution: A + B = ?
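Worked out here for reference (not given on the slides), the two serial orders of Schedules 1 and 2 can be checked with a small Python sketch; both leave A + B = 300, which is exactly what the concurrent interleaving of Schedule 4 fails to do.

```python
def t1(a, b):
    """T1: transfer $50 from A to B."""
    a = a - 50
    b = b + 50
    return a, b

def t2(a, b):
    """T2: transfer 10% of A's balance from A to B."""
    tmp = a * 0.1
    a = a - tmp
    b = b + tmp
    return a, b

a, b = t1(100, 200)        # Schedule 1: T1 then T2
a, b = t2(a, b)
print("Schedule 1:", a, b, a + b)   # 45.0 255.0 300.0

a, b = t2(100, 200)        # Schedule 2: T2 then T1
a, b = t1(a, b)
print("Schedule 2:", a, b, a + b)   # 40.0 260.0 300.0
```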
Serializability
● Basic assumption: each transaction preserves database consistency.
● Thus serial execution of a set of transactions preserves database consistency.
● A (possibly concurrent) schedule is serializable if it is equivalent to a serial schedule. Different forms of schedule equivalence give rise to the notions of:
1. conflict serializability
2. view serializability
Simplified view of transactions:
● We ignore operations other than read and write instructions.
● We assume that transactions may perform arbitrary computations on data in local buffers in between reads and writes.
● Our simplified schedules consist of only read and write instructions.
Serializability
● Serial execution
● Non-serial execution
Issues with concurrent execution:
1) Lost update
2) Dirty read
3) Unrepeatable read
Serializability
Issues with concurrent execution:
1) Lost update
The update of one transaction is overwritten by another transaction.
2) Dirty read
Reading of a non-existent value: if T1 updates A, which is then read by T2, and T1 then aborts, T2 will have read a value of A that never existed.
3) Unrepeatable read
If a transaction reads a data item that is then altered by another transaction which commits, and the first transaction reads the item again, it will find a different value.
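A minimal illustration (not from the slides) of the lost-update anomaly: two interleaved read-modify-write sequences on one data item, written out sequentially so the overwrite is easy to see. The values are arbitrary.

```python
# Shared data item A, initially 100.
A = 100

# Interleaving: both transactions read A before either writes it back.
t1_local = A                # T1: read(A)
t2_local = A                # T2: read(A)

t1_local = t1_local - 50    # T1: A := A - 50
t2_local = t2_local * 0.9   # T2: A := A - 10% of A

A = t1_local                # T1: write(A)  -> A = 50
A = t2_local                # T2: write(A)  -> A = 90; T1's update is overwritten

print(A)   # 90: the deduction of 50 made by T1 has been lost
```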
Serializability
Serialization of concurrent transactions:
● The process of managing the execution of a set of transactions in such a way that their concurrent execution produces the same end result as if they were run serially.
Conflict Serializability
Conflict Serializability
● Let us consider a schedule S in which there are two consecutive instructions, Ii and Ij, of transactions Ti and Tj, respectively (i ≠ j).
● Ii and Ij conflict if and only if there exists some item Q accessed by both Ii and Ij, and at least one of these instructions wrote Q.
1) Ii = read(Q), Ij = read(Q): Ii and Ij do not conflict.
2) Ii = read(Q), Ij = write(Q): they conflict.
3) Ii = write(Q), Ij = read(Q): they conflict.
4) Ii = write(Q), Ij = write(Q): they conflict.
● If the two instructions access different data items, they do not conflict.
Conflict Serializability
● Intuitively, a conflict between Ii and Ij forces a (logical) temporal order between them. If Ii and Ij are consecutive in a schedule and they do not conflict, their results would remain the same even if they had been interchanged in the schedule.
● If a schedule S can be transformed into a schedule S′ by a series of swaps of non-conflicting instructions, we say that S and S′ are conflict equivalent.
● We say that a schedule S is conflict serializable if it is conflict equivalent to a serial schedule.
Conflict Serializability
● Schedule 3 can be transformed into Schedule 6, a serial schedule where T2 follows T1, by a series of swaps of non-conflicting instructions.
● Therefore Schedule 3 is conflict serializable.
[Figure: Schedule 3 and Schedule 6.]
● Since the write(A) instruction of T2 in Schedule 3 (Figure 1) does not conflict with the read(B) instruction of T1, we can swap these instructions to generate an equivalent schedule, Schedule 6 (Figure 2). Regardless of the initial system state, Schedules 3 and 6 both produce the same final system state.
● We continue to swap non-conflicting instructions:
  ● Swap the read(B) instruction of T1 with the read(A) instruction of T2.
  ● Swap the write(B) instruction of T1 with the write(A) instruction of T2.
  ● Swap the write(B) instruction of T1 with the read(A) instruction of T2.
Conflict Serializability
● Example of a schedule that is not conflict serializable:
[Figure: a schedule of T3 and T4.]
● We are unable to swap instructions in the above schedule to obtain either the serial schedule <T3, T4> or the serial schedule <T4, T3>.
View Serializability
[Diagram: the view-serializable (V.S.) schedules contain the conflict-serializable (C.S.) schedules; both are consistent, while schedules outside them may be non-consistent.]
View Serializability
● Let S and S′ be two schedules with the same set of transactions. S and S′ are view equivalent if the following three conditions are met:
1) For each data item Q, if transaction Ti reads the initial value of Q in schedule S, then transaction Ti must, in schedule S′, also read the initial value of Q.
2) For each data item Q, if transaction Ti executes read(Q) in schedule S, and that value was produced by transaction Tj (if any), then transaction Ti must in schedule S′ also read the value of Q that was produced by transaction Tj.
3) For each data item Q, the transaction (if any) that performs the final write(Q) operation in schedule S must perform the final write(Q) operation in schedule S′.
● As can be seen, view equivalence is based purely on reads and writes alone.
View Serializability
● A schedule S is view serializable if it is view equivalent to a serial schedule.
● Every conflict-serializable schedule is also view serializable.
● Every view-serializable schedule that is not conflict serializable has blind writes.
Recoverability
• If a transaction Ti fails, for whatever reason, we need to undo the effects of this transaction to ensure the atomicity property of the transaction.
• In a system that allows concurrent execution, it is also necessary to ensure that any transaction Tj that is dependent on Ti (i.e. Tj has read data written by Ti) is also aborted.
• To achieve this, we need to place restrictions on the types of schedules permitted in the system.
• We now address the issue of which schedules are acceptable from the viewpoint of recovery from transaction failure.
Recoverability
● We need to address the effect of transaction failures on concurrently running transactions.
● Recoverable schedule: consider the schedule below, in which T9 is a transaction that performs only one instruction, read(A).
● Suppose that the system allows T9 to commit immediately after executing the read(A) instruction.
● Since T9 has read the value of data item A written by T8, we must abort T9 to ensure transaction atomicity.
● However, T9 has already committed and cannot be aborted.
● The schedule given here, with the commit happening immediately after the read(A) operation, is an example of a non-recoverable schedule.
Recoverability
● Recoverable schedule: if a transaction Tj reads a data item previously written by a transaction Ti, then the commit operation of Ti must appear before the commit operation of Tj.
[Figure: schedule of T8 and T9.]
● If T8 should abort, T9 would have read (and possibly shown to the user) an inconsistent database state. Hence the database must ensure that schedules are recoverable.
Recoverability
● Cascading rollback: a single transaction failure leads to a series of transaction rollbacks. Consider the following schedule, where none of the transactions has yet committed (so the schedule is recoverable).
[Figure: schedule of T10, T11 and T12.]
● If T10 fails, T11 and T12 must also be rolled back.
Recoverability
● Cascading rollback can lead to the undoing of a significant amount of work.
● Cascadeless schedules: schedules in which cascading rollbacks cannot occur; for each pair of transactions Ti and Tj such that Tj reads a data item previously written by Ti, the commit operation of Ti appears before the read operation of Tj.
● Every cascadeless schedule is also recoverable.
● It is desirable to restrict the schedules to those that are cascadeless.
● If, in a schedule, a transaction is not allowed to read a data item until the last transaction that has written it has committed or aborted, then such a schedule is called a cascadeless schedule.
● Schedules must be conflict or view serializable, and recoverable, for the sake of database consistency, and preferably cascadeless.
Implementation of Isolation
● Let us take an example: a transaction acquires a lock on the entire database before it starts and releases the lock after it has committed.
● While a transaction holds the lock, no other transaction is allowed to acquire it, and all must therefore wait for the lock to be released.
● As a result of this locking policy, only one transaction can execute at a time.
● A concurrency control scheme such as this one leads to poor performance, i.e. it provides a poor degree of concurrency.
Implementation of Isolation
● The goal of a concurrency control scheme is to provide a high degree of concurrency, while ensuring that all schedules that can be generated are conflict or view serializable and are cascadeless.
● Concurrency-control schemes trade off the amount of concurrency they allow against the amount of overhead they incur.
● Some schemes allow only conflict-serializable schedules to be generated, while others allow view-serializable schedules that are not conflict-serializable.
Transaction Definition in SQL
● The data-manipulation language must include a construct for specifying the set of actions that constitute a transaction.
● In SQL, a transaction begins implicitly.
● A transaction in SQL ends by:
  ● COMMIT WORK, which commits the current transaction and begins a new one.
  ● ROLLBACK WORK, which causes the current transaction to abort.
Testing for Serializability
● Consider some schedule of a set of transactions T1, T2, ..., Tn.
● Precedence graph: a directed graph where the vertices are the transactions (names).
● We draw an arc from Ti to Tj if the two transactions conflict, and Ti accessed the data item on which the conflict arose earlier.
● We may label the arc by the item that was accessed.
Testing for Serializability
[Figure: example precedence graphs with edges T1 → T2 and T2 → T1.]
Testing for Serializability
[Example schedule over transactions T1 to T5 with the operations read(X), read(Y), read(Z), read(V), read(W), read(W), read(Y), write(Y), write(Z), read(U), read(Y), write(Y), read(Z), write(Z), read(U), write(U), used to construct its precedence graph.]
Test for Conflict Serializability
● A schedule is conflict serializable if and only if its precedence graph is acyclic.
● Cycle-detection algorithms exist which take order n² time, where n is the number of vertices in the graph (i.e. the number of transactions).
● If the precedence graph is acyclic, the serializability order can be obtained by a topological sort of the graph. This is a linear order consistent with the partial order of the graph.
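The test above can be sketched in a few lines of Python. The schedule representation (a list of (transaction, operation, item) steps) and the function names are my own; the logic is the standard conflict-edge construction followed by a topological sort.

```python
from collections import defaultdict, deque

def precedence_graph(schedule):
    """schedule: list of (txn, op, item), op in {'r', 'w'}, in chronological order.
    Adds an edge Ti -> Tj for each conflicting pair in which Ti's operation comes first."""
    edges = defaultdict(set)
    txns = {t for t, _, _ in schedule}
    for i, (ti, op_i, q_i) in enumerate(schedule):
        for tj, op_j, q_j in schedule[i + 1:]:
            if ti != tj and q_i == q_j and 'w' in (op_i, op_j):
                edges[ti].add(tj)
    return txns, edges

def serial_order(txns, edges):
    """Topological sort; returns a serial order if the graph is acyclic, else None."""
    indeg = {t: 0 for t in txns}
    for t in edges:
        for u in edges[t]:
            indeg[u] += 1
    queue = deque(t for t in txns if indeg[t] == 0)
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for u in edges[t]:
            indeg[u] -= 1
            if indeg[u] == 0:
                queue.append(u)
    return order if len(order) == len(txns) else None   # None => cycle => not conflict serializable

# An illustrative interleaving of the two transfer transactions (not one of the numbered schedules):
s = [("T1", "r", "A"), ("T2", "r", "A"), ("T1", "w", "A"), ("T2", "w", "A"),
     ("T1", "r", "B"), ("T2", "r", "B"), ("T1", "w", "B"), ("T2", "w", "B")]
print(serial_order(*precedence_graph(s)))   # None: the precedence graph has a cycle
```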
Test for View Serializability
● The precedence-graph test for conflict serializability must be modified to apply as a test for view serializability.
● The problem of checking whether a schedule is view serializable is in the class of NP-complete problems. Thus the existence of an efficient algorithm is unlikely.
● However, practical algorithms that just check some sufficient conditions for view serializability can still be used.
Concurrency Control Schemes
[Board sketch: many transactions T1, T2, T3, ..., T100 run concurrently; conflicting operations such as R(A), W(A), W(A) on the same data item must be made mutually exclusive.]
Concurrency Control
⮚ Concurrency control is the procedure for managing simultaneous operations without having them conflict with each other.
⮚ Concurrency control is used to address conflicts that occur in a multi-user system where users have access to perform READ and WRITE operations on the database.
⮚ Example: in a concurrent execution environment, if T1 conflicts with T2 over a data item A, then the concurrency control mechanism decides whether T1 or T2 should get A and whether the other transaction is rolled back or waits.
Why Use Concurrency Control
⮚ To resolve read-write and write-write conflicts.
⮚ To apply isolation through mutual exclusion between conflicting transactions.
⮚ Mutual exclusion means that while one transaction is accessing a data item, no other transaction can modify that data item.
⮚ The system needs to control the interaction among the concurrent transactions. This control is achieved using concurrency-control schemes.
⮚ Concurrency control helps to ensure serializability.
Lock-Based Protocol
⮚ A lock is a mechanism to control concurrent access to a data item.
⮚ Data items can be locked in two modes:
1. Exclusive (X) mode: the data item can be both read and written. An X-lock is requested using the lock-X instruction.
2. Shared (S) mode: the data item can only be read. An S-lock is requested using the lock-S instruction.
⮚ Lock requests are made to the concurrency-control manager. A transaction can proceed only after its request is granted.
Lock-Based Protocol
⮚ Lock-compatibility matrix:
        S   X
    S   ✔   ×
    X   ×   ×
⮚ A transaction may be granted a lock on an item if the requested lock is compatible with locks already held on the item by other transactions.
⮚ Any number of transactions can hold shared locks on an item, but if any transaction holds an exclusive lock on the item, no other transaction may hold any lock on the item.
⮚ If a lock cannot be granted, the requesting transaction is made to wait until all incompatible locks held by other transactions have been released. The lock is then granted.
Lock-Based Protocol
⮚ Shared mode is compatible with shared mode, but not with exclusive mode. At any time, several shared-mode locks can be held simultaneously (by different transactions) on a particular data item.
⮚ A subsequent exclusive-mode lock request has to wait until the currently held shared-mode locks are released.
⮚ A transaction requests a shared lock on data item Q by executing the lock-S(Q) instruction. Similarly, a transaction requests an exclusive lock through the lock-X(Q) instruction.
⮚ A transaction can unlock a data item Q by the unlock(Q) instruction.
⮚ To access a data item, transaction Ti must first lock that item.
⮚ If the data item is already locked by another transaction in an incompatible mode, the concurrency-control manager will not grant the lock until all incompatible locks held by other transactions have been released.
⮚ Thus, Ti is made to wait until all incompatible locks held by other transactions have been released.
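A minimal sketch of the grant decision, assuming only the S and X modes of the matrix above; the names COMPATIBLE and can_grant are mine, not part of the slides.

```python
COMPATIBLE = {            # lock-compatibility matrix: (held mode, requested mode)
    ("S", "S"): True,
    ("S", "X"): False,
    ("X", "S"): False,
    ("X", "X"): False,
}

def can_grant(requested_mode: str, held: list) -> bool:
    """held: list of (txn, mode) locks currently granted on the data item."""
    return all(COMPATIBLE[(mode, requested_mode)] for _, mode in held)

# Example: T1 holds lock-S(Q); T2's lock-S(Q) is granted, T3's lock-X(Q) must wait.
held_on_Q = [("T1", "S")]
print(can_grant("S", held_on_Q))   # True
print(can_grant("X", held_on_Q))   # False -> the requester waits until the S locks are released
```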
Lock-Based Protocol (Table 1): Transactions T1 and T2
T1:  lock-X(B);
     read(B);
     B := B - 50;
     write(B);
     unlock(B);
     lock-X(A);
     read(A);
     A := A + 50;
     write(A);
     unlock(A);

T2:  lock-S(A);
     read(A);
     unlock(A);
     lock-S(B);
     read(B);
     unlock(B);
     display(A + B);
Lock-Based Protocol (Table 2): Schedule 1
T1: lock-X(B)                [concurrency-control manager: grant-X(B, T1)]
T1: read(B)
T1: B := B - 50
T1: write(B)
T1: unlock(B)
T2: lock-S(A)                [grant-S(A, T2)]
T2: read(A)
T2: unlock(A)
T2: lock-S(B)                [grant-S(B, T2)]
T2: read(B)
T2: unlock(B)
T2: display(A + B)
T1: lock-X(A)                [grant-X(A, T1)]
T1: read(A)
T1: A := A + 50
T1: write(A)
T1: unlock(A)
Lock-Based Protocol
⮚ Suppose that the values of accounts A and B are $100 and $200, respectively. If these two transactions are executed serially, either in the order T1, T2 or the order T2, T1, then transaction T2 will display the value $300.
⮚ If, however, these transactions are executed concurrently, then Schedule 1 (Table 2) is possible. In this case, transaction T2 displays $250, which is incorrect. The reason for this mistake is that transaction T1 unlocked data item B too early, as a result of which T2 saw an inconsistent state.
⮚ Suppose now that unlocking is delayed to the end of the transaction. Transaction T3 corresponds to T1 with unlocking delayed (Table 3). Transaction T4 corresponds to T2 with unlocking delayed (Table 4).
⮚ You should verify that the sequence of reads and writes in Schedule 1, which led to an incorrect total of $250 being displayed, is no longer possible with T3 and T4.
Lock-Based Protocol (Table 3): Transaction T3
T3:  lock-X(B);
     read(B);
     B := B - 50;
     write(B);
     lock-X(A);
     read(A);
     A := A + 50;
     write(A);
     unlock(B);
     unlock(A);
Lock-Based Protocol (Table 4): Transaction T4
T4:  lock-S(A);
     read(A);
     lock-S(B);
     read(B);
     display(A + B);
     unlock(A);
     unlock(B);
Pitfalls of Lock-Based Protocols
⮚ Consider the partial schedule (Schedule 2):
T3: lock-X(B)
T3: read(B)
T3: B := B - 50
T3: write(B)
T4: lock-S(A)
T4: read(A)
T4: lock-S(B)        [must wait for T3]
T3: lock-X(A)        [must wait for T4]
⮚ Neither T3 nor T4 can make progress: executing lock-S(B) causes T4 to wait for T3 to release its lock on B, while executing lock-X(A) causes T3 to wait for T4 to release its lock on A.
⮚ Such a situation is called a deadlock.
⮚ To handle a deadlock, one of T3 or T4 must be rolled back and its locks released.
⮚ The potential for deadlock exists in most locking protocols. Deadlocks are a necessary evil.
Pitfalls of Lock-Based Protocols
⮚ Starvation is also possible if the concurrency-control manager is badly designed. For example:
  ⮚ A transaction may be waiting for an X-lock on an item while a sequence of other transactions request and are granted an S-lock on the same item.
  ⮚ The same transaction is repeatedly rolled back due to deadlocks.
⮚ The concurrency-control manager can be designed to prevent starvation.
⮚ We can avoid starvation of transactions by granting locks in the following manner:
⮚ When a transaction Ti requests a lock on a data item Q in a particular mode M, the concurrency-control manager grants the lock provided that:
1. There is no other transaction holding a lock on Q in a mode that conflicts with M.
2. There is no other transaction that is waiting for a lock on Q and that made its lock request before Ti.
⮚ Thus, a lock request will never get blocked by a lock request that is made later.
Two-Phase Commit Protocol
Running example: transfer money from account A to account B.
Two-Phase Commit Protocol
● The two-phase commit protocol ensures that all participants perform the same action (either to commit or to roll back a transaction).
● It is designed to ensure that either all databases are updated or none of them is, so the databases remain synchronized.
● Coordinator: the component that coordinates with all the participants.
● Cohorts: each individual node except the coordinator is a participant.
Two-Phase Commit Protocol
● As the protocol name suggests, it involves two phases:
1) Commit-request (prepare) phase
2) Commit/abort phase
Two-Phase Commit Protocol
[Figures: message flow between the coordinator and the cohorts during the prepare phase and the commit/abort phase.]
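To make the message flow concrete, here is a minimal, hypothetical Python sketch of a coordinator driving the two phases. The Cohort class and its prepare/commit/rollback interface are assumptions for illustration, not part of the slides or of any real library.

```python
class Cohort:
    """Toy participant; `ok` controls its vote in the prepare phase (hypothetical)."""
    def __init__(self, name, ok=True):
        self.name, self.ok = name, ok
    def prepare(self):  return self.ok      # vote YES/NO in the commit-request phase
    def commit(self):   print(self.name, "commit")
    def rollback(self): print(self.name, "rollback")

def two_phase_commit(cohorts):
    log = []
    # Phase 1: commit-request (prepare) phase: the coordinator collects every cohort's vote.
    if all(c.prepare() for c in cohorts):
        log.append("global-commit")          # decision logged, then Phase 2: commit everywhere
        for c in cohorts:
            c.commit()
    else:
        log.append("global-abort")           # any NO vote: Phase 2 rolls back everywhere
        for c in cohorts:
            c.rollback()
    return log

print(two_phase_commit([Cohort("bank-A"), Cohort("bank-B")]))             # both commit
print(two_phase_commit([Cohort("bank-A"), Cohort("bank-B", ok=False)]))   # both roll back
```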
The Two-Phase Locking Protocol
Phase 1 (growing): locks are acquired.
Phase 2 (shrinking): locks are released.
[Board example: T1 executes lock-X(A), read(A), write(A), unlock(A); only after the unlock does T2's lock-S(A), read(A) proceed.]
The Two-Phase Locking Protocol
⮚ This is a protocol which ensures conflict-serializable schedules.
⮚ Phase 1: growing phase
  ⮚ A transaction may obtain locks.
  ⮚ A transaction may not release locks.
⮚ Phase 2: shrinking phase
  ⮚ A transaction may release locks.
  ⮚ A transaction may not obtain locks.
⮚ The protocol assures serializability. It can be proved that the transactions can be serialized in the order of their lock points (i.e. the point where a transaction acquires its final lock).
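A minimal sketch of the growing/shrinking discipline (class and method names are my own): once a transaction releases any lock, it may no longer obtain new ones.

```python
class TwoPhaseTxn:
    """Enforces 2PL locally: after the first unlock, no new lock may be obtained."""
    def __init__(self, name):
        self.name = name
        self.held = set()
        self.shrinking = False      # becomes True at the first unlock (past the lock point)

    def lock(self, item, mode):
        if self.shrinking:
            raise RuntimeError(f"{self.name}: 2PL violation - lock({item}) in shrinking phase")
        self.held.add((item, mode))  # growing phase: acquire locks only

    def unlock(self, item):
        self.shrinking = True        # the shrinking phase begins
        self.held = {(i, m) for (i, m) in self.held if i != item}

t = TwoPhaseTxn("T3")
t.lock("B", "X"); t.lock("A", "X")   # growing phase
t.unlock("B")                        # shrinking phase starts
try:
    t.lock("C", "S")                 # violates two-phase locking
except RuntimeError as e:
    print(e)
```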
The Two-Phase Locking Protocol
⮚ For example, transactions T3 and T4 are two-phase. On the other hand, transactions T1 and T2 are not two-phase.
T3: lock-X(B); read(B); B := B - 50; write(B); lock-X(A); read(A); A := A + 50; write(A); unlock(B); unlock(A);
T4: lock-S(A); read(A); lock-S(B); read(B); display(A + B); unlock(A); unlock(B);
T1: lock-X(B); read(B); B := B - 50; write(B); unlock(B); lock-X(A); read(A); A := A + 50; write(A); unlock(A);
T2: lock-S(A); read(A); unlock(A); lock-S(B); read(B); unlock(B); display(A + B);
The Two-Phase Locking Protocol
⮚ We can show that the two-phase locking protocol ensures conflict serializability.
⮚ Consider any transaction: the point in the schedule where the transaction has obtained its final lock (the end of its growing phase) is called its lock point.
⮚ Two-phase locking does not ensure freedom from deadlock.
The Two-Phase Locking Protocol
[Example schedule over T5, T6 and T7 with the operations lock-X(A), read(A), lock-S(B), read(B), write(A), unlock(A), lock-X(A), read(A), write(A), unlock(A), lock-S(A), read(A).]
The Two-Phase Locking Protocol
⮚ Cascading rollback is possible under two-phase locking. To avoid this, follow a modified protocol called strict two-phase locking: here a transaction must hold all its exclusive locks till it commits/aborts.
⮚ Rigorous two-phase locking is even stricter: here all locks are held till commit/abort. In this protocol transactions can be serialized in the order in which they commit.
⮚ Most database systems implement either strict or rigorous two-phase locking.
The Two-Phase Locking Protocol
⮚ Consider the following two transactions, for which we have shown only some of the significant read and write operations:
T8: read(a1); read(a2); ...; read(an); write(a1).
T9: read(a1); read(a2); display(a1 + a2).
The Two-Phase Locking Protocol
⮚ If we employ the two-phase locking protocol, then T8 must lock a1 in exclusive mode. Therefore, any concurrent execution of both transactions amounts to a serial execution.
⮚ Notice, however, that T8 needs an exclusive lock on a1 only at the end of its execution, when it writes a1.
⮚ Thus, if T8 could initially lock a1 in shared mode, and then later change the lock to exclusive mode, we could get more concurrency, since T8 and T9 could access a1 and a2 simultaneously.
⮚ This observation leads us to a refinement of the basic two-phase locking protocol, in which lock conversions are allowed.
The Two-Phase Locking Protocol
⮚ We shall provide a mechanism for upgrading a shared lock to an exclusive lock, and downgrading an exclusive lock to a shared lock.
⮚ We denote conversion from shared to exclusive mode by upgrade, and from exclusive to shared mode by downgrade. Lock conversion cannot be allowed arbitrarily.
⮚ Rather, upgrading can take place only in the growing phase, whereas downgrading can take place only in the shrinking phase.
Lock Conversions
⮚ Two-phase locking with lock conversions:
⮚ First phase:
  ▪ can acquire a lock-S on an item
  ▪ can acquire a lock-X on an item
  ▪ can convert a lock-S to a lock-X (upgrade)
⮚ Second phase:
  ▪ can release a lock-S
  ▪ can release a lock-X
  ▪ can convert a lock-X to a lock-S (downgrade)
⮚ This protocol ensures serializability, but it still relies on the programmer to insert the various locking instructions.
Implementation of Locking
⮚ A lock manager can be implemented as a separate process to which transactions send lock and unlock requests.
⮚ The lock manager replies to a lock request by sending a lock grant message (or a message asking the transaction to roll back, in the case of a deadlock).
⮚ The requesting transaction waits until its request is answered.
⮚ The lock manager maintains a data structure called a lock table to record granted locks and pending requests.
⮚ The lock table is usually implemented as an in-memory hash table indexed on the name of the data item being locked.
Lock Table
[Figure: lock table; black rectangles indicate granted locks, white ones indicate waiting requests.]
Lock Table
⮚ The lock table also records the type of lock granted or requested.
⮚ A new request is added to the end of the queue of requests for the data item, and granted if it is compatible with all earlier locks.
⮚ Unlock requests result in the request being deleted, and later requests are checked to see if they can now be granted.
⮚ If a transaction aborts, all waiting or granted requests of the transaction are deleted.
  ▪ The lock manager may keep a list of locks held by each transaction, to implement this efficiently.
Lock Table
• The lock manager processes requests this way:
⮚ When a lock request message arrives, it adds a record to the end of the linked list for the data item, if the linked list is present. Otherwise it creates a new linked list containing only the record for the request.
⮚ It always grants the first lock request on a data item. But if the transaction requests a lock on an item on which a lock has already been granted, the lock manager grants the request only if it is compatible with all earlier requests, and all earlier requests have been granted already. Otherwise the request has to wait.
Lock Table
⮚ When the lock manager receives an unlock message from a transaction, it deletes the record for that data item in the linked list corresponding to that transaction. It then tests the record that follows, if any, as described above, to see if that request can now be granted. If it can, the lock manager grants that request, and processes the record following it, if any, similarly, and so on.
⮚ If a transaction aborts, the lock manager deletes any waiting requests made by the transaction. Once the database system has taken appropriate actions to undo the transaction, it releases all locks held by the aborted transaction.
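A toy lock table along the lines described above, handling only S and X modes; the list-based representation and the class name are my own, and a real lock manager would of course also block the requester and handle aborts.

```python
from collections import defaultdict

COMPAT = {("S", "S")}    # the only compatible pair of lock modes

class LockTable:
    """Per data item: a FIFO list of [txn, mode, granted] records, as described above."""
    def __init__(self):
        self.table = defaultdict(list)

    def request(self, txn, item, mode):
        queue = self.table[item]
        # Grant only if compatible with all earlier requests, all of which are already granted.
        grant = all(g and (m, mode) in COMPAT for _, m, g in queue)
        queue.append([txn, mode, grant])
        return grant                      # False means the transaction must wait

    def unlock(self, txn, item):
        queue = self.table[item]
        queue[:] = [r for r in queue if r[0] != txn]
        # Re-examine requests in FIFO order; stop at the first one that cannot be granted.
        for i, rec in enumerate(queue):
            if rec[2]:
                continue
            earlier = queue[:i]
            if all(e[2] and (e[1], rec[1]) in COMPAT for e in earlier):
                rec[2] = True
            else:
                break

lt = LockTable()
print(lt.request("T1", "Q", "S"))   # True: the first request on a data item is always granted
print(lt.request("T2", "Q", "X"))   # False: X conflicts with T1's S lock, so T2 waits
lt.unlock("T1", "Q")                # T2's request can now be granted
print(lt.table["Q"])
```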
Graph-Based Protocols
⮚ Graph-based protocols are an alternative to two-phase locking.
⮚ Impose a partial ordering → on the set D = {d1, d2, ..., dh} of all data items.
  ▪ If di → dj, then any transaction accessing both di and dj must access di before accessing dj.
  ▪ This implies that the set D may now be viewed as a directed acyclic graph, called a database graph.
⮚ The tree protocol is a simple kind of graph protocol.
⮚ In the tree protocol, the only lock instruction allowed is lock-X.
Tree Protocol
Rules for the tree protocol:
1. Only exclusive locks (X) are allowed.
2. The first lock by Ti may be on any data item. Subsequently, a data item Q can be locked by Ti only if the parent of Q is currently locked by Ti.
3. Data items may be unlocked at any time.
4. A data item that has been locked and unlocked by Ti cannot subsequently be relocked by Ti.
Graph-Based Protocols
⮚ The tree protocol ensures conflict serializability as well as freedom from deadlock.
⮚ Advantages:
  ⮚ Unlocking may occur earlier in the tree-locking protocol than in the two-phase locking protocol.
  ⮚ Shorter waiting times, and an increase in concurrency.
  ⮚ Since the protocol is deadlock-free, no rollbacks are required.
Disadvantages of the tree protocol?
Graph-Based Protocols
⮚ Drawbacks:
  ⮚ The protocol does not guarantee recoverability or cascade freedom.
  ⮚ Transactions may have to lock data items that they do not access.
    ⮚ For example, a transaction that needs to access data items A and J in the database graph of the figure must lock not only A and J, but also data items B, D, and H.
  ⮚ Increased locking overhead, and additional waiting time.
  ⮚ Potential decrease in concurrency.
⮚ Schedules not possible under two-phase locking are possible under the tree protocol, and vice versa.
Multiple Granularity Locking Scheme
[Figure: granularity hierarchy Database → Area → Files → Records.]

Multiple Granularity
● Granular: small particles; different data items have different sizes.
● Fine granularity: smallest items; coarse granularity: larger items.
Multiple Granularity
⮚ Granularity: the size of the data items chosen as the unit of protection by a concurrency control protocol.
⮚ Allow data items to be of various sizes and define a hierarchy of data granularities, where the small granularities are nested within larger ones.
⮚ This can be represented graphically as a tree (but don't confuse it with the tree-locking protocol).
⮚ When a transaction locks a node in the tree explicitly, it implicitly locks all the node's descendants in the same mode.
⮚ Granularity of locking (the level in the tree where locking is done):
1. Fine granularity (lower in the tree): high concurrency, high locking overhead.
2. Coarse granularity (higher in the tree): low locking overhead, low concurrency.
Multiple Granularity
[Figure: example granularity hierarchy with a database, files, pages, records, and fields.]
Multiple Granularity
⮚ For example, if a transaction locks a page, Page2, all its records (Record1 and Record2) as well as all their fields (Field1 and Field2) are also locked. If another transaction requests an incompatible lock on the same node, the DBMS clearly knows that the lock cannot be granted.
⮚ If another transaction requests a lock on any of the descendants of the locked node, the DBMS checks the hierarchical path from the root to the requested node to determine if any of its ancestors are locked before deciding whether to grant the lock.
⮚ Thus, if the request is for an exclusive lock on record Record1, the DBMS checks its parent (Page2), its grandparent (File2), and the database itself to determine if any of them are locked. When it finds that Page2 is already locked, it denies the request.
Multiple Granularity
⮚ Additionally, a transaction may request a lock on a node when a descendant of the node is already locked.
⮚ For example, if a lock is requested on File2, the DBMS checks every page in the file, every record in those pages, and every field in those records to determine if any of them are locked.
⮚ To reduce the searching involved in locating locks on descendants, the DBMS can use another specialized locking strategy called multiple-granularity locking. This strategy uses a new type of lock called an intention lock (Gray et al., 1975).
⮚ When any node is locked, an intention lock is placed on all the ancestors of the node. Thus, if some descendant of File2 (in our example, Page2) is locked and a request is made for a lock on File2, the presence of an intention lock on File2 indicates that some descendant of that node is already locked.
Intention Lock Modes
⮚ In addition to the S and X lock modes, there are three additional lock modes with multiple granularity:
1. Intention-shared (IS): indicates explicit locking at a lower level of the tree, but only with shared locks.
2. Intention-exclusive (IX): indicates explicit locking at a lower level with exclusive or shared locks.
3. Shared and intention-exclusive (SIX): the subtree rooted at that node is locked explicitly in shared mode, and explicit locking is being done at a lower level with exclusive-mode locks.
⮚ Intention locks allow a higher-level node to be locked in S or X mode without having to check all descendant nodes.
Intention Lock Modes
[Diagram: the lock modes IS, IX, S, SIX and X.]
Compatibility Matrix with Intention Lock Modes
The compatibility matrix for all lock modes is:

        IS   IX   S    SIX  X
  IS    ✔    ✔    ✔    ✔    ×
  IX    ✔    ✔    ×    ×    ×
  S     ✔    ×    ✔    ×    ×
  SIX   ✔    ×    ×    ×    ×
  X     ×    ×    ×    ×    ×
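The same matrix, encoded as a Python dictionary with a small check function (the names are mine); this is just a direct transcription of the table above.

```python
# COMPATIBLE[(held, requested)] == True  <=>  the two modes can coexist on a node
COMPATIBLE = {
    ("IS", "IS"): True,  ("IS", "IX"): True,  ("IS", "S"): True,  ("IS", "SIX"): True,  ("IS", "X"): False,
    ("IX", "IS"): True,  ("IX", "IX"): True,  ("IX", "S"): False, ("IX", "SIX"): False, ("IX", "X"): False,
    ("S",  "IS"): True,  ("S",  "IX"): False, ("S",  "S"): True,  ("S",  "SIX"): False, ("S",  "X"): False,
    ("SIX","IS"): True,  ("SIX","IX"): False, ("SIX","S"): False, ("SIX","SIX"): False, ("SIX","X"): False,
    ("X",  "IS"): False, ("X",  "IX"): False, ("X",  "S"): False, ("X",  "SIX"): False, ("X",  "X"): False,
}

def compatible(held_modes, requested):
    """Can `requested` be granted on a node given the modes already held by other transactions?"""
    return all(COMPATIBLE[(h, requested)] for h in held_modes)

print(compatible({"IS", "IX"}, "S"))   # False: S conflicts with IX
print(compatible({"IS"}, "SIX"))       # True: SIX is compatible with IS
```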
Rules for the Multiple Granularity Locking Scheme
⮚ Transaction Ti can lock a node Q using the following rules:
1. The lock compatibility matrix must be observed.
2. The root of the tree must be locked first, and may be locked in any mode.
3. A node Q can be locked by Ti in S or IS mode only if the parent of Q is currently locked by Ti in either IX or IS mode.
4. A node Q can be locked by Ti in X, SIX, or IX mode only if the parent of Q is currently locked by Ti in either IX or SIX mode.
5. Ti can lock a node only if it has not previously unlocked any node (that is, Ti is two-phase).
6. Ti can unlock a node Q only if none of the children of Q are currently locked by Ti.
Multiple Granularity Locking Scheme
⮚ Observe that locks are acquired in root-to-leaf order, whereas they are released in leaf-to-root order.
[Diagram: to lock Q in S or IS mode, the parent A must be held in IX or IS mode; to lock Q in X, SIX, or IX mode, the parent A must be held in IX or SIX mode.]
Timestamp-Based Protocols
[Board example: customers Dhara (9:00 PM, token 15), Milan (9:10 PM, token 18) and Akansha (9:30 PM, token 24) are served in token order; the token plays the role of a timestamp, e.g. 210019082020.]
Timestamp-Based Protocols
⮚ Each transaction is issued a timestamp when it enters the system. If an older transaction Ti has timestamp TS(Ti), a younger transaction Tj is assigned a timestamp TS(Tj) such that TS(Ti) < TS(Tj).
⮚ The protocol manages concurrent execution such that the timestamps determine the serializability order.
⮚ There are two simple methods for implementing this scheme:
1. Use the value of the system clock as the timestamp; that is, a transaction's timestamp is equal to the value of the clock when the transaction enters the system.
2. Use a logical counter that is incremented after a new timestamp has been assigned; that is, a transaction's timestamp is equal to the value of the counter when the transaction enters the system.
Timestamp-Based Protocols
⮚ The protocol manages concurrent execution such that the timestamps determine the serializability order.
⮚ In order to assure such behavior, the protocol maintains, for each data item Q, two timestamp values:
• W-timestamp(Q) is the largest timestamp of any transaction that executed write(Q) successfully.
• R-timestamp(Q) is the largest timestamp of any transaction that executed read(Q) successfully.
[Board example: for Q, R-timestamp(Q) = 10 and W-timestamp(Q) = 12, with transactions T1 = 5, T2 = 10, T3 = 12.]
Timestamp-Based Protocols
⮚ The timestamp-ordering protocol ensures that any conflicting read and write operations are executed in timestamp order.
⮚ Suppose a transaction Ti issues a read(Q):
1. If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was already overwritten. Hence, the read operation is rejected, and Ti is rolled back.
2. If TS(Ti) ≥ W-timestamp(Q), then the read operation is executed, and R-timestamp(Q) is set to max(R-timestamp(Q), TS(Ti)).
Timestamp-Based Protocols
⮚ Suppose that transaction Ti issues write(Q):
1. If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is producing was needed previously, and the system assumed that that value would never be produced. Hence, the write operation is rejected, and Ti is rolled back.
2. If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q. Hence, this write operation is rejected, and Ti is rolled back.
3. Otherwise, the write operation is executed, and W-timestamp(Q) is set to TS(Ti).
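A sketch of these read/write rules in Python (the DataItem class and function names are assumptions; rollback is signalled by raising an exception):

```python
class DataItem:
    def __init__(self, value=0):
        self.value = value
        self.r_ts = 0        # R-timestamp(Q): largest TS of a successful read
        self.w_ts = 0        # W-timestamp(Q): largest TS of a successful write

def to_read(ts, q):
    """Timestamp-ordering read rule; returns the value or raises to signal rollback."""
    if ts < q.w_ts:
        raise RuntimeError("rollback: reading a value that was already overwritten")
    q.r_ts = max(q.r_ts, ts)
    return q.value

def to_write(ts, q, value):
    """Timestamp-ordering write rule."""
    if ts < q.r_ts:
        raise RuntimeError("rollback: a later transaction has already read Q")
    if ts < q.w_ts:
        # (Under Thomas' write rule this obsolete write would simply be ignored instead.)
        raise RuntimeError("rollback: attempting to write an obsolete value of Q")
    q.value, q.w_ts = value, ts

q = DataItem(100)
print(to_read(10, q))      # transaction with TS = 10 reads Q; R-timestamp(Q) becomes 10
to_write(12, q, 90)        # transaction with TS = 12 writes Q; W-timestamp(Q) becomes 12
try:
    to_read(11, q)         # TS 11 < W-timestamp 12: rejected, transaction rolled back
except RuntimeError as e:
    print(e)
```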
Timestamp-Based Protocols
[Board example: with TS(Ti) = 11 and TS(Tx) = 10, conflicting write(Q) operations are ordered by timestamp.]
Thomas' Write Rule
Thomas' Write Rule
⮚ This is a modified version of the timestamp-ordering protocol in which obsolete write operations may be ignored under certain circumstances.
⮚ When Ti attempts to write data item Q, if TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q.
  ▪ Rather than rolling back Ti as the timestamp-ordering protocol would have done, this write operation can be ignored.
⮚ Otherwise this protocol is the same as the timestamp-ordering protocol.
⮚ Thomas' write rule allows greater potential concurrency.
  ▪ It allows some view-serializable schedules that are not conflict-serializable.
Thomas' Write Rule
⮚ Thomas' write rule makes use of view serializability by, in effect, deleting obsolete write operations from the transactions that issue them. This modification of transactions makes it possible to generate serializable schedules that would not be possible under the other protocols presented.
⮚ For example, the schedule of T16 and T17 is not conflict serializable and, thus, is not possible under any of two-phase locking, the tree protocol, or the timestamp-ordering protocol.
⮚ Under Thomas' write rule, the write(Q) operation of T16 would be ignored. The result is a schedule that is view equivalent to the serial schedule <T16, T17>.
[Board example over T1, T2 and T3: read(Q) followed by three write(Q) operations.]
Deadlock
⮚ A system is in a deadlock state if there exists a set of transactions such that every transaction in the set is waiting for another transaction in the set.
⮚ Example: there exists a set of waiting transactions {T0, T1, ..., Tn} such that T0 is waiting for a data item that T1 holds, T1 is waiting for a data item that T2 holds, ..., Tn−1 is waiting for a data item that Tn holds, and Tn is waiting for a data item that T0 holds.
⮚ None of the transactions can make progress in such a situation.
⮚ The only remedy to this undesirable situation is for the system to invoke some drastic action, such as rolling back some of the transactions involved in the deadlock.
Deadlock
⮚ Rollback of a transaction may be partial: that is, a transaction may be rolled back to the point where it obtained a lock whose release resolves the deadlock.
⮚ There are two principal methods for dealing with the deadlock problem:
  ⮚ We can use a deadlock prevention protocol to ensure that the system will never enter a deadlock state.
  ⮚ Alternatively, we can allow the system to enter a deadlock state, and then try to recover by using a deadlock detection and deadlock recovery scheme.
Deadlock Handling
Consider the following two transactions:
T1: write(X)        T2: write(Y)
    write(Y)            write(X)
Schedule with deadlock:
T1: lock-X(X)
T1: write(X)
T2: lock-X(Y)
T2: write(Y)
T2: wait for lock-X(X)
T1: wait for lock-X(Y)
Deadlock Handling
⮚ The system is deadlocked if there is a set of transactions such that every transaction in the set is waiting for another transaction in the set.
⮚ Deadlock prevention protocols ensure that the system will never enter a deadlock state. Some prevention strategies:
  ⮚ Require that each transaction locks all its data items before it begins execution (predeclaration).
  ⮚ Impose a partial ordering on all data items and require that a transaction can lock data items only in the order specified by the partial order (graph-based protocol).
More Deadlock Prevention Strategies
⮚ The following schemes use transaction timestamps for the sake of deadlock prevention alone.
⮚ Wait-die scheme (non-preemptive):
  ⮚ An older transaction may wait for a younger one to release a data item. Younger transactions never wait for older ones; they are rolled back instead.
  ⮚ A transaction may die several times before acquiring the needed data item.
⮚ The wait-die scheme is a non-preemptive technique. When transaction Ti requests a data item currently held by Tj, Ti is allowed to wait only if it has a timestamp smaller than that of Tj (that is, Ti is older than Tj). Otherwise, Ti is rolled back (dies).
⮚ For example, suppose that transactions T22, T23, and T24 have timestamps 5, 10, and 15, respectively. If T22 requests a data item held by T23, then T22 will wait. If T24 requests a data item held by T23, then T24 will be rolled back.
More Deadlock Prevention Strategies
⮚ Wound-wait scheme (preemptive):
  • An older transaction wounds (forces rollback of) a younger transaction instead of waiting for it. Younger transactions may wait for older ones.
  • There may be fewer rollbacks than in the wait-die scheme.
⮚ The wound-wait scheme is a preemptive technique. It is a counterpart to the wait-die scheme. When transaction Ti requests a data item currently held by Tj, Ti is allowed to wait only if it has a timestamp larger than that of Tj (that is, Ti is younger than Tj). Otherwise, Tj is rolled back (Tj is wounded by Ti).
⮚ Returning to our example, with transactions T22, T23 and T24: if T22 requests a data item held by T23, then the data item will be preempted from T23, and T23 will be rolled back. If T24 requests a data item held by T23, then T24 will wait.
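The two decision rules can be written as small functions; this sketch (function names are mine) uses the slide's example timestamps 5, 10 and 15 for T22, T23 and T24.

```python
def wait_die(ts_requester, ts_holder):
    """Wait-die: an older requester (smaller timestamp) may wait; a younger one dies."""
    return "wait" if ts_requester < ts_holder else "rollback requester"

def wound_wait(ts_requester, ts_holder):
    """Wound-wait: an older requester wounds (rolls back) the holder; a younger one waits."""
    return "rollback holder" if ts_requester < ts_holder else "wait"

print(wait_die(5, 10))     # T22 requests an item held by T23 -> "wait"
print(wait_die(15, 10))    # T24 requests an item held by T23 -> "rollback requester"
print(wound_wait(5, 10))   # T22 requests an item held by T23 -> "rollback holder" (T23 wounded)
print(wound_wait(15, 10))  # T24 requests an item held by T23 -> "wait"
```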
More Deadlock Prevention Strategies
⮚ Whenever the system rolls back transactions, it is important to ensure that there is no starvation, that is, no transaction gets rolled back repeatedly and is never allowed to make progress.
⮚ Both the wound-wait and the wait-die schemes avoid starvation: at any time, there is a transaction with the smallest timestamp. This transaction cannot be required to roll back in either scheme.
⮚ Since timestamps always increase, and since transactions are not assigned new timestamps when they are rolled back, a transaction that is rolled back repeatedly will eventually have the smallest timestamp, at which point it will not be rolled back again.
⮚ In both the wait-die and the wound-wait schemes, a rolled-back transaction is restarted with its original timestamp. Older transactions thus have precedence over newer ones, and starvation is hence avoided.
More Deadlock Prevention Strategies
⮚ Timeout-based schemes:
1. A transaction waits for a lock only for a specified amount of time. After that, the wait times out and the transaction is rolled back.
2. Thus deadlocks are not possible.
3. Simple to implement; but starvation is possible. It is also difficult to determine a good value for the timeout interval.
Deadlock Detection
⮚ Deadlocks can be described by a wait-for graph, which consists of a pair G = (V, E):
1. V is a set of vertices (all the transactions in the system).
2. E is a set of edges; each element is an ordered pair Ti → Tj.
⮚ If Ti → Tj is in E, then there is a directed edge from Ti to Tj, implying that Ti is waiting for Tj to release a data item.
⮚ When Ti requests a data item currently being held by Tj, the edge Ti → Tj is inserted in the wait-for graph. This edge is removed only when Tj is no longer holding a data item needed by Ti.
⮚ The system is in a deadlock state if and only if the wait-for graph has a cycle. A deadlock-detection algorithm must be invoked periodically to look for cycles.
Deadlock Detection
[Figure: a wait-for graph without a cycle and a wait-for graph with a cycle.]
Deadlock Recovery
⮚ When a deadlock is detected (selection of a victim):
  ▪ Some transaction will have to be rolled back (made a victim) to break the deadlock. Select as victim the transaction that will incur the minimum cost.
  ▪ Unfortunately, the term minimum cost is not a precise one. Many factors may determine the cost of a rollback, including:
a. How long the transaction has computed, and how much longer the transaction will compute before it completes its designated task.
b. How many data items the transaction has used.
c. How many more data items the transaction needs for it to complete.
d. How many transactions will be involved in the rollback.
Deadlock Recovery
⮚ Rollback: determine how far to roll back the transaction.
  ▪ Total rollback: abort the transaction and then restart it.
  ▪ It is more effective to roll back the transaction only as far as necessary to break the deadlock.
  ▪ Such partial rollback requires the system to maintain additional information about the state of all the running transactions.
  ▪ Specifically, the sequence of lock requests/grants and updates performed by the transaction needs to be recorded.
  ▪ The deadlock-detection mechanism should decide which locks the selected transaction needs to release in order to break the deadlock. The selected transaction must be rolled back to the point where it obtained the first of these locks, undoing all actions it took after that point.
⮚ Starvation happens if the same transaction is always chosen as the victim. Include the number of rollbacks in the cost factor to avoid starvation.
Correctness of the Timestamp-Ordering Protocol
⮚ The timestamp-ordering protocol guarantees serializability, since all the arcs in the precedence graph are of the form:
    (transaction with smaller timestamp) → (transaction with larger timestamp)
⮚ Thus, there will be no cycles in the precedence graph.
⮚ The timestamp-ordering protocol ensures freedom from deadlock, as no transaction ever waits.
⮚ But the schedule may not be cascade-free, and may not even be recoverable.
Recoverability and Cascade Freedom
⮚ Problem with the timestamp-ordering protocol:
  ▪ Suppose Ti aborts, but Tj has read a data item written by Ti.
  ▪ Then Tj must abort; if Tj had been allowed to commit earlier, the schedule is not recoverable.
  ▪ Further, any transaction that has read a data item written by Tj must abort.
  ▪ This can lead to cascading rollback, that is, a chain of rollbacks.
⮚ Solution 1:
  ▪ A transaction is structured such that its writes are all performed at the end of its processing.
  ▪ All writes of a transaction form an atomic action; no transaction may execute while a transaction is being written.
  ▪ A transaction that aborts is restarted with a new timestamp.
⮚ Solution 2: a limited form of locking: wait for data to be committed before reading it.
⮚ Solution 3: use commit dependencies to ensure recoverability.
Validation-Based Protocol
Execution of transaction Ti is done in three phases:
1. Read phase: transaction Ti writes only to temporary local variables.
2. Validation phase: transaction Ti performs a "validation test" to determine whether the local variables can be written to the database without violating serializability.
3. Write phase: if Ti is validated, the updates are applied to the database; otherwise Ti is rolled back.
⮚ The three phases of concurrently executing transactions can be interleaved, but each transaction must go through the three phases in that order.
  ▪ Assume for simplicity that the validation and write phases occur together, atomically and serially, i.e., only one transaction executes validation/write at a time.
⮚ This is also called optimistic concurrency control, since the transaction executes fully in the hope that all will go well during validation.
Validation-Based Protocol
⮚ Each transaction Ti has three timestamps:
  ▪ Start(Ti): the time when Ti started its execution.
  ▪ Validation(Ti): the time when Ti entered its validation phase.
  ▪ Finish(Ti): the time when Ti finished its write phase.
⮚ The serializability order is determined by the timestamp given at validation time, to increase concurrency.
  ▪ Thus TS(Ti) is given the value of Validation(Ti).
⮚ This protocol is useful and gives a greater degree of concurrency if the probability of conflicts is low:
  ▪ because the serializability order is not pre-decided, and
  ▪ relatively few transactions will have to be rolled back.
Validation Test for Transaction Tj
⮚ If for all Ti with TS(Ti) < TS(Tj) one of the following conditions holds:
  ▪ Finish(Ti) < Start(Tj), or
  ▪ Start(Tj) < Finish(Ti) < Validation(Tj), and the set of data items written by Ti does not intersect the set of data items read by Tj,
then validation succeeds and Tj can be committed. Otherwise, validation fails and Tj is aborted.
⮚ Justification: either the first condition is satisfied, and there is no overlapped execution, or the second condition is satisfied.
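A sketch of the validation test as a function over hypothetical transaction records (the dictionary representation and the field names are my own):

```python
def validate(tj, earlier):
    """tj, and each Ti in `earlier` (all with TS(Ti) < TS(Tj)), are dicts with keys
    'start', 'validation', 'finish', 'read_set', 'write_set'."""
    for ti in earlier:
        if ti["finish"] < tj["start"]:
            continue                        # condition 1: no overlapped execution
        if (tj["start"] < ti["finish"] < tj["validation"]
                and not (ti["write_set"] & tj["read_set"])):
            continue                        # condition 2: Ti's writes do not touch Tj's reads
        return False                        # validation fails: Tj must be aborted
    return True                             # validation succeeds: Tj may commit

# Loosely modelled on the T14/T15 schedule below: T14 is read-only and validates first.
t14 = {"start": 1, "validation": 5, "finish": 5, "read_set": {"A", "B"}, "write_set": set()}
t15 = {"start": 2, "validation": 6, "finish": 8, "read_set": {"A", "B"}, "write_set": {"A", "B"}}
print(validate(t15, [t14]))   # True: T14 writes nothing, so T15 validates and commits
```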
Schedule Produced by Validation
Example of a schedule produced using validation:
T14: read(B)
T15: read(B); B := B - 50; read(A); A := A + 50
T14: read(A); (validate); display(A + B)
T15: (validate); write(B); write(A)
Multiversion Schemes
⮚ Multiversion schemes keep old versions of data items to increase concurrency:
1. Multiversion timestamp ordering
2. Multiversion two-phase locking
⮚ Each successful write results in the creation of a new version of the data item written.
⮚ Timestamps are used to label versions.
⮚ When a read(Q) operation is issued, select an appropriate version of Q based on the timestamp of the transaction, and return the value of the selected version.
⮚ Reads never have to wait, as an appropriate version is returned immediately.
Multiversion Timestamp Ordering
⮚ Each data item Q has a sequence of versions <Q1, Q2, ..., Qm>. Each version Qk contains three data fields:
  ▪ Content: the value of version Qk.
  ▪ W-timestamp(Qk): timestamp of the transaction that created (wrote) version Qk.
  ▪ R-timestamp(Qk): largest timestamp of a transaction that successfully read version Qk.
⮚ When a transaction Ti creates a new version Qk of Q, Qk's W-timestamp and R-timestamp are initialized to TS(Ti).
⮚ The R-timestamp of Qk is updated whenever a transaction Tj reads Qk and TS(Tj) > R-timestamp(Qk).
Multiversion Timestamp Ordering
⮚ Suppose that transaction Ti issues a read(Q) or write(Q) operation. Let Qk denote the version of Q whose write timestamp is the largest write timestamp less than or equal to TS(Ti).
1. If transaction Ti issues a read(Q), then the value returned is the content of version Qk.
2. If transaction Ti issues a write(Q):
  ⮚ if TS(Ti) < R-timestamp(Qk), then transaction Ti is rolled back;
  ⮚ if TS(Ti) = W-timestamp(Qk), the contents of Qk are overwritten;
  ⮚ otherwise, a new version of Q is created.
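A sketch of version selection and of these rules; the MVItem representation (a list of versions sorted by W-timestamp) and the method names are my own.

```python
import bisect

class MVItem:
    """Versions of Q kept as a list of [w_ts, r_ts, content], sorted by W-timestamp."""
    def __init__(self, initial):
        self.versions = [[0, 0, initial]]    # version written by a hypothetical initial transaction

    def _version_for(self, ts):
        """Qk: the version with the largest W-timestamp <= TS(Ti)."""
        i = bisect.bisect_right([v[0] for v in self.versions], ts) - 1
        return self.versions[i]

    def read(self, ts):
        qk = self._version_for(ts)
        qk[1] = max(qk[1], ts)               # update R-timestamp(Qk); reads never wait
        return qk[2]

    def write(self, ts, value):
        qk = self._version_for(ts)
        if ts < qk[1]:
            raise RuntimeError("rollback: a later transaction already read Qk")
        if ts == qk[0]:
            qk[2] = value                    # overwrite the contents of Qk
        else:
            bisect.insort(self.versions, [ts, ts, value])   # create a new version

q = MVItem(100)
print(q.read(5))      # reads the initial version
q.write(10, 90)       # creates a version with W-timestamp 10
print(q.read(7))      # still sees 100: old readers are served from the old version
print(q.read(12))     # sees 90
```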
Multiversion Two-Phase Locking
⮚ Differentiates between read-only transactions and update transactions.
⮚ Update transactions acquire read and write locks, and hold all locks up to the end of the transaction. That is, update transactions follow rigorous two-phase locking.
  • Each successful write results in the creation of a new version of the data item written.
  • Each version of a data item has a single timestamp whose value is obtained from a counter ts-counter that is incremented during commit processing.
⮚ Read-only transactions are assigned a timestamp by reading the current value of ts-counter before they start execution; they follow the multiversion timestamp-ordering protocol for performing reads.
Multiversion Two-Phase Locking
⮚ When an update transaction wants to read a data item:
  ▪ it obtains a shared lock on it, and reads the latest version.
⮚ When it wants to write an item:
  ▪ it obtains an X lock on the item; it then creates a new version of the item and sets this version's timestamp to ∞.
⮚ When update transaction Ti completes, commit processing occurs:
  ▪ Ti sets the timestamp on the versions it has created to ts-counter + 1;
  ▪ Ti increments ts-counter by 1.
MVCC: Implementation Issues
⮚ Creation of multiple versions increases storage overhead:
  ▪ extra tuples;
  ▪ extra space in each tuple for storing version information.
⮚ Versions can, however, be garbage collected:
  ▪ e.g. if Q has two versions Q5 and Q9, and the oldest active transaction has a timestamp > 9, then Q5 will never be required again.
Thanks
Prof. Dhara Joshi
