DBMS_UNIT 5_(2022-2023)


UNIT 5

TRANSACTIONS
Introduction to Transaction Processing
A transaction is a logical unit of work that represents real-world events of an organization or
enterprise, whereas concurrency control is the management of concurrent transaction execution.
Transaction processing systems execute database transactions against large databases with
hundreds of concurrent users; examples include railway and airline reservation systems, banking
systems, credit card processing, stock market monitoring, and supermarket inventory and
checkout systems.

Single-User versus Multiuser Systems


One criterion for classifying a database system is according to the number of users who can use
the system concurrently. A DBMS is single-user if at most one user at a time can use the system,
and it is multiuser if many users can use the system—and hence access the
database—concurrently. Single-user DBMSs are mostly restricted to personal computer systems;
most other DBMSs are multiuser. For example, an airline reservations system is used by hundreds
of travel agents and reservation clerks concurrently. Database systems used in banks, insurance
agencies, stock exchanges, supermarkets, and many other applications are multiuser systems. In
these systems, hundreds or thousands of users are typically operating on the database by
submitting transactions concurrently to the system.
Multiple users can access databases—and use computer systems—simultaneously because of the
concept of multiprogramming, which allows the operating system of the computer to execute
multiple programs—or processes—at the same time. A single central processing unit (CPU) can
only execute at most one process at a time. However, multiprogramming operating systems
execute some commands from one process, then suspend that process and execute some
commands from the next process, and so on.

A process is resumed at the point where it was suspended whenever it gets its turn to use the CPU
again. Hence, concurrent execution of processes is actually interleaved, as illustrated in Figure
21.1, which shows two processes, A and B, executing concurrently in an interleaved fashion.
Interleaving keeps the CPU busy when a process requires an input or output (I/O) operation, such
as reading a block from disk. The CPU is switched to execute another process rather than
remaining idle during I/O time. Interleaving also prevents a long process from delaying other
processes. If the computer system has multiple hardware processors (CPUs), parallel processing
of multiple processes is possible, as illustrated by processes C and D in Figure 21.1. Concurrency
Control deals with interleaved execution of more than one transaction.
Transactions, Database Items, Read and Write Operations, and DBMS Buffers: -
A transaction is an executing program that forms a logical unit of database processing. A
transaction includes one or more database access operations—these can include insertion,
deletion, modification, or retrieval operations. One way of specifying the transaction boundaries is
by specifying explicit begin transaction and end transaction statements in an application
program; in this case, all database access operations between the two are considered as forming
one transaction. A single application program may contain more than one transaction if it contains
several transaction boundaries. If the database operations in a transaction do not update the
database but only retrieve data, the transaction is called a read-only transaction; otherwise it is
known as a read-write transaction.
The database model that is used to present transaction processing concepts is quite simple when
compared to the data models such as the relational model or the object model. A database is
basically represented as a collection of named data items. The size of a data item is called its
granularity. A data item can be a database record, but it can also be a larger unit such as a whole
disk block, or even a smaller unit such as an individual field (attribute) value of some record in the
database. Using this simplified database model, the basic database access operations that a
transaction can include are as follows:
• read_item(X). Reads a database item named X into a program variable. To simplify our
notation, we assume that the program variable is also named X.
• write_item(X). Writes the value of program variable X into the database item named X.
The basic unit of data transfer from disk to main memory is one block. Executing a read_item(X)
command includes the following steps:
1. Find the address of the disk block that contains item X.
2. Copy that disk block into a buffer in main memory (if that disk block is not already in some
main memory buffer).
3. Copy item X from the buffer to the program variable named X.

Executing a write_item(X) command includes the following steps:


1. Find the address of the disk block that contains item X.
2. Copy that disk block into a buffer in main memory (if that disk block is not already in some
main memory buffer).
3. Copy item X from the program variable named X into its correct location in the buffer.
4. Store the updated block from the buffer back to disk (either immediately or at some later point
in time).
It is step 4 that actually updates the database on disk. In some cases the buffer is not immediately
stored to disk, in case additional changes are to be made to the buffer. Usually, the decision about
when to store a modified disk block whose contents are in a main memory buffer is handled by
the recovery manager of the DBMS in cooperation with the underlying operating system. The
DBMS will maintain in the database cache a number of data buffers in main memory. Each buffer
typically holds the contents of one database disk block, which contains some of the database items
being processed. When these buffers are all occupied, and additional database disk blocks must be
copied into memory, some buffer replacement policy is used to choose which of the current
buffers is to be replaced. If the chosen buffer has been modified, it must be written back to disk
before it is reused.
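To make these steps concrete, here is a minimal Python sketch (not real DBMS code) of read_item
and write_item working through a small buffer pool; the two-block "disk", the buffer capacity,
and the simplistic replacement policy are all illustrative assumptions:

BUFFER_CAPACITY = 2
disk = {0: {"X": 5}, 1: {"Y": 10}}   # block number -> items stored in that block
buffers = {}                          # block number -> in-memory copy of the block
dirty = set()                         # blocks modified since they were loaded

def block_of(item):
    return 0 if item == "X" else 1    # step 1: find the disk block containing the item

def load_block(blk):
    if blk not in buffers:                        # step 2: copy the block into a buffer
        if len(buffers) >= BUFFER_CAPACITY:       # all buffers occupied:
            victim = next(iter(buffers))          # pick one (simplistic policy)
            if victim in dirty:                   # modified buffers must be written
                disk[victim] = buffers[victim]    # back to disk before reuse
                dirty.discard(victim)
            del buffers[victim]
        buffers[blk] = dict(disk[blk])
    return buffers[blk]

def read_item(item):
    return load_block(block_of(item))[item]       # step 3: buffer -> program variable

def write_item(item, value):
    load_block(block_of(item))[item] = value      # step 3: program variable -> buffer
    dirty.add(block_of(item))                     # step 4 (write to disk) may happen later

X = read_item("X")
write_item("X", X + 1)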
A transaction includes read_item and write_item operations to access and update the database.
Figure 21.2 shows examples of two very simple transactions. The read-set of a transaction is the
set of all items that the transaction reads, and the write-set is the set of all items that the
transaction writes. For example, the read-set of T1 in Figure 21.2 is {X, Y} and its write-set is also
{X, Y}.

Concurrency control and recovery mechanisms are mainly concerned with the database
commands in a transaction. Transactions submitted by the various users may execute
concurrently and may access and update the same database items. If this concurrent execution is
uncontrolled, it may lead to problems, such as an inconsistent database.

Why Concurrency Control Is Needed


If transactions are executed serially, i.e., sequentially with no overlap in time, no transaction
concurrency exists. However, if concurrent transactions with interleaved operations are allowed
in an uncontrolled manner, unexpected and undesirable results may occur. The following are the
types of problems we may encounter when such transactions run concurrently.
The Lost Update Problem: - The lost update problem occurs when two transactions that access the
same database items interleave their operations in a way that makes the value of some database
item incorrect. If two transactions T1 and T2 read a record and then update it, the effect of the
first update is overwritten by the second update.
Example: -

Here,
• At time t2, Transaction-X reads A's value.
• At time t3, Transaction-Y reads A's value.
• At time t4, Transaction-X writes A's value on the basis of the value seen at time t2.
• At time t5, Transaction-Y writes A's value on the basis of the value seen at time t3.
• So, at time t5, the update of Transaction-X is lost because Transaction-Y overwrites it without
looking at its current value.
• Such a problem is known as the Lost Update Problem, as the update made by one transaction is
lost here.
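A tiny Python sketch of this interleaving, using illustrative values (A starts at 100,
Transaction-X subtracts 50, Transaction-Y adds 20; the amounts are assumptions):

A = 100
t1_A = A          # t2: Transaction-X reads A
t2_A = A          # t3: Transaction-Y reads A
A = t1_A - 50     # t4: Transaction-X writes A using the value read at t2
A = t2_A + 20     # t5: Transaction-Y writes A, overwriting X's update unseen
print(A)          # 120 -- a serial execution would have produced 70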

The Temporary Update (or Dirty Read) Problem: - A dirty read occurs when one transaction
updates a database item and then the transaction fails for some reason. The updated item is
accessed by another transaction before it is changed back to its original value. That is, a
transaction T1 updates a record which is read by T2; if T1 aborts, T2 holds values which have
never formed part of the stable database.
Example: -
Here,
• At time t2, Transaction-Y writes A's value.
• At time t3, Transaction-X reads A's value.
• At time t4, Transaction-Y rolls back, changing A's value back to that of prior to t1.
• So, Transaction-X now holds a value which never became part of the stable database.
• Such a problem is known as the Dirty Read Problem, as one transaction reads a dirty value
which has not been committed.

The Incorrect Summary Problem: - If one transaction is calculating an aggregate summary function
on a number of database items while other transactions are updating some of these items, the
aggregate function may calculate some values before they are updated and others after they are
updated. For example, a transaction T1 reads a record and then does some other processing, during
which a transaction T2 updates the record; when T1 reads the record again, the new value is
inconsistent with the previous value.
Example: -
Suppose two transactions operate on three accounts.

• Transaction-X is computing the sum of all balances while Transaction-Y is transferring an
amount of 50 from Account-1 to Account-3.
• Here, Transaction-X produces a result of 550, which is incorrect. If we write this result to
the database, the database will be in an inconsistent state, because the actual sum is 600.
Transaction-X has seen an inconsistent state of the database.
The Unrepeatable Read Problem: -
Another problem that may occur is called unrepeatable read, where a transaction T reads the
same item twice and the item is changed by another transaction T′ between the two reads.
Hence, T receives different values for its two reads of the same item.

Why Recovery is Needed: -


Whenever a transaction is submitted to a DBMS for execution, the system is responsible for
making sure that either all the operations in the transaction are completed successfully and their
effect is recorded permanently in the database, or that the transaction does not have any effect on
the database or any other transactions. In the first case, the transaction is said to be committed,
whereas in the second case, the transaction is aborted. The DBMS must not permit some
operations of a transaction T to be applied to the database while other operations of T are not,
because the whole transaction is a logical unit of database processing. If a transaction fails after
executing some of its operations but before executing all of them, the operations already executed
must be undone and have no lasting effect.

Types of Failures. Failures are generally classified as transaction, system, and media failures.
There are several possible reasons for a transaction to fail in the middle of execution:
1. A computer failure (system crash). A hardware, software, or network error occurs in the
computer system during transaction execution. Hardware crashes are usually media
failures—for example, main memory failure.
2. A transaction or system error. Some operation in the transaction may cause it to fail, such as
integer overflow or division by zero. Transaction failure may also occur because of erroneous
parameter values or because of a logical programming error. Additionally, the user may
interrupt the transaction during its execution.
3. Local errors or exception conditions detected by the transaction. During transaction
execution, certain conditions may occur that necessitate cancellation of the transaction. For
example, data for the transaction may not be found. An exception condition, such as
insufficient account balance in a banking database, may cause a transaction, such as a fund
withdrawal, to be canceled. This exception could be programmed in the transaction itself, and
in such a case would not be considered as a transaction failure.
4. Concurrency control enforcement. The concurrency control method may decide to abort a
transaction because it violates serializability, or it may abort one or more transactions to
resolve a state of deadlock among several transactions. Transactions aborted because of
serializability violations or deadlocks are typically restarted automatically at a later time.
5. Disk failure. Some disk blocks may lose their data because of a read or write
malfunction or because of a disk read/write head crash. This may happen during a read or a
write operation of the transaction.
6. Physical problems and catastrophes. This refers to an endless list of problems that includes
power or air-conditioning failure, fire, theft, sabotage, overwriting disks or tapes by mistake,
and mounting of a wrong tape by the operator.
Failures of types 1, 2, 3, and 4 are more common than those of types 5 or 6. Whenever a failure of
type 1 through 4 occurs, the system must keep sufficient information to quickly recover from the
failure. Disk failure or other catastrophic failures of type 5 or 6 do not happen frequently; if they
do occur, recovery is a major task.
The concept of transaction is fundamental to many techniques for concurrency control and
recovery from failures.
Transaction and System Concepts: -
Transaction States and Additional Operations
A transaction is an atomic unit of work that should either be completed in its entirety or not done
at all. For recovery purposes, the system needs to keep track of when each transaction starts,
terminates, and commits or aborts. Therefore, the recovery manager of the DBMS needs to keep
track of the following operations:
• BEGIN_TRANSACTION. This marks the beginning of transaction execution.
• READ or WRITE. These specify read or write operations on the database items that are
executed as part of a transaction.
• END_TRANSACTION. This specifies that READ and WRITE transaction operations have ended
and marks the end of transaction execution. However, at this point it may be necessary to check
whether the changes introduced by the transaction can be permanently applied to the database
(committed) or whether the transaction has to be aborted because it violates serializability or
for some other reason.
• COMMIT_TRANSACTION. This signals a successful end of the transaction so that any changes
(updates) executed by the transaction can be safely committed to the database and will not be
undone.
• ROLLBACK (or ABORT). This signals that the transaction has ended unsuccessfully, so that
any changes or effects that the transaction may have applied to the database must be undone.

A state transition diagram illustrates how a transaction moves
through its execution states. A transaction goes into an active state immediately after it starts
execution, where it can execute its READ and WRITE operations. When the transaction ends, it
moves to the partially committed state. At this point, some recovery protocols need to ensure
that a system failure will not result in an inability to record the changes of the transaction
permanently. Once this check is successful, the transaction is said to have reached its commit
point and enters the committed state. When a transaction is committed, it has concluded its
execution successfully and all its changes must be recorded permanently in the database, even if a
system failure occurs.
However, a transaction can go to the failed state if one of the checks fails or if the transaction is
aborted during its active state. The transaction may then have to be rolled back to undo the effect
of its WRITE operations on the database. The terminated state corresponds to the transaction
leaving the system. The transaction information that is maintained in system tables while the
transaction has been running is removed when the transaction terminates. Failed or aborted
transactions may be restarted later—either automatically or after being resubmitted by the
user—as brand new transactions.
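As a rough illustration, the legal state transitions described above can be sketched as a small
table-driven state machine in Python; the state names follow the text, and the transition table
is one reading of it, not a prescribed implementation:

TRANSITIONS = {
    "active":              {"partially committed", "failed"},
    "partially committed": {"committed", "failed"},
    "failed":              {"terminated"},   # reached after rollback completes
    "committed":           {"terminated"},
    "terminated":          set(),
}

def move(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

s = "active"
s = move(s, "partially committed")   # END_TRANSACTION reached
s = move(s, "committed")             # recovery checks succeeded (commit point)
s = move(s, "terminated")            # transaction leaves the system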

The System Log


To be able to recover from failures that affect transactions, the system maintains a log to keep
track of all transaction operations that affect the values of database items, as well as other
transaction information that may be needed to permit recovery from failures. The log is a
sequential, append-only file that is kept on disk, so it is not affected by any type of failure except
for disk or catastrophic failure. Typically, one (or more) main memory buffers hold the last part of
the log file, so that log entries are first added to the main memory buffer. When the log buffer is
filled, or when certain other conditions occur, the log buffer is appended to the end of the log file
on disk. In addition, the log file from disk is periodically backed up to archival storage (tape) to
guard against catastrophic failures. The following are the types of entries—called log
records—that are written to the log file and the corresponding action for each log record. In these
entries, T refers to a unique transaction-id that is generated automatically by the system for each
transaction and that is used to identify each transaction:
1. [start_transaction, T]. Indicates that transaction T has started execution.
2. [write_item, T, X, old_value, new_value]. Indicates that transaction T has changed the value of
database item X from old_value to new_value.
3. [read_item, T, X]. Indicates that transaction T has read the value of database item X.
4. [commit, T]. Indicates that transaction T has completed successfully, and affirms that its effect
can be committed (recorded permanently) to the database.
5. [abort, T]. Indicates that transaction T has been aborted.

Protocols for recovery that avoid cascading rollbacks (which include nearly all practical
protocols) do not require that READ operations be written to the system log. Additionally, some
recovery protocols use simpler WRITE log entries that include only one of new_value and
old_value instead of including both.
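As an illustration, here is a small Python sketch of an append-only log buffer that holds
records of the forms listed above and flushes them to a log file on disk; the flush threshold,
the file name, and the tuple representation are arbitrary assumptions:

log_buffer = []                       # main memory buffer holding the tail of the log

def write_log(record):
    log_buffer.append(record)         # log entries are first added to the buffer
    if len(log_buffer) >= 4:          # flush condition (threshold is an assumption)
        flush_log()

def flush_log():
    with open("dbms.log", "a") as f:  # append the buffer to the log file on disk
        for rec in log_buffer:
            f.write(repr(rec) + "\n")
    log_buffer.clear()

write_log(("start_transaction", "T1"))
write_log(("write_item", "T1", "X", 5, 6))   # old_value = 5, new_value = 6
write_log(("commit", "T1"))
flush_log()                                  # force the commit record to disk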

Notice that we are assuming that all permanent changes to the database occur within transactions,
so the notion of recovery from a transaction failure amounts to either undoing or redoing
transaction operations individually from the log. If the system crashes, we can recover to a
consistent database state by examining the log. Because the log contains a record of every WRITE
operation that changes the value of some database item, it is possible to undo the effect of these
WRITE operations of a transaction T by tracing backward through the log and resetting all items
changed by a WRITE operation of T to their old_values. Redo of an operation may also be
necessary if a transaction has its updates recorded in the log but a failure occurs before the system
can be sure that all these new_values have been written to the actual database on disk from the
main memory buffers.

Commit Point of a Transaction: -


A transaction T reaches its commit point when all its operations that access the database have
been executed successfully and the effect of all the transaction operations on the database has
been recorded in the log. Beyond the commit point, the transaction is said to be committed, and its
effect must be permanently recorded in the database. The transaction then writes a commit
record [commit, T] into the log. If a system failure occurs, we can search back in the log for all
transactions T that have written a [start_transaction, T] record into the log but have not written
their [commit, T] record yet; these transactions may have to be rolled back to undo their effect on
the database during the recovery process. Transactions that have written their commit record in
the log must also have recorded all their WRITE operations in the log, so their effect on the
database can be redone from the log records.
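The backward search described above can be sketched as follows in Python, with purely
illustrative log contents (the tuple format is an assumption); T1 has committed, so its write is
kept, while T2 never committed and its write is undone:

log = [
    ("start_transaction", "T1"),
    ("write_item", "T1", "X", 5, 6),
    ("start_transaction", "T2"),
    ("write_item", "T2", "Y", 10, 12),
    ("commit", "T1"),
]                                    # T2 has no commit record

committed = {rec[1] for rec in log if rec[0] == "commit"}
db = {"X": 6, "Y": 12}               # state on disk at the time of the crash
for rec in reversed(log):            # trace backward through the log
    if rec[0] == "write_item" and rec[1] not in committed:
        _, txn, item, old_value, _ = rec
        db[item] = old_value         # undo: reset the item to its old_value
print(db)                            # {'X': 6, 'Y': 10}: T2's write undone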

Desirable Properties of Transactions: -


Transactions should possess several properties, often called the ACID properties; they should be
enforced by the concurrency control and recovery methods of the DBMS. The following are the
ACID properties:
▪ Atomicity. A transaction is an atomic unit of processing; it should either be performed in its
entirety or not performed at all.
▪ Consistency preservation. A transaction should be consistency preserving, meaning that if it
is completely executed from beginning to end without interference from other transactions, it
should take the database from one consistent state to another.
▪ Isolation. A transaction should appear as though it is being executed in isolation from other
transactions, even though many transactions are executing concurrently. That is, the execution
of a transaction should not be interfered with by any other transactions executing concurrently.
▪ Durability or permanency. The changes applied to the database by a committed transaction
must persist in the database. These changes must not be lost because of any failure.
The atomicity property requires that we execute a transaction to completion. It is the
responsibility of the transaction recovery subsystem of a DBMS to ensure atomicity. If a
transaction fails to complete for some reason, such as a system crash in the midst of transaction
execution, the recovery technique must undo any effects of the transaction on the database. On the
other hand, write operations of a committed transaction must be eventually written to disk.
The integrity constraints are maintained so that the database is consistent before and after
the transaction. The execution of a transaction will leave a database in either its prior stable state
or a new stable state. The consistency property of the database states that every transaction
sees a consistent database instance; a transaction transforms the database from one consistent
state to another. For example, consider a transaction T that transfers Rs 100 from account A to
account B, where A initially contains Rs 600 and B contains Rs 300. T consists of two parts,
T1 and T2:

T1              T2
Read(A)         Read(B)
A := A - 100    B := B + 100
Write(A)        Write(B)

The total amount must be the same before and after the transaction.
Total before T occurs = 600 + 300 = 900
Total after T occurs = 500 + 400 = 900
Therefore, the database is consistent. If T1 completes but T2 fails, an inconsistency will occur.

The isolation property is enforced by the concurrency control subsystem of the DBMS. If
every transaction does not make its updates (write operations) visible to other transactions until
it is committed, one form of isolation is enforced that solves the temporary update problem and
eliminates cascading rollbacks but does not eliminate all other problems. There have been
attempts to define the level of isolation of a transaction. A transaction is said to have level 0
(zero) isolation if it does not overwrite the dirty reads of higher-level transactions. Level 1 (one)
isolation has no lost updates, and level 2 isolation has no lost updates and no dirty reads. Finally,
level 3 isolation (also called true isolation) has, in addition to level 2 properties, repeatable reads.

And last, the durability property is the responsibility of the recovery subsystem of the DBMS.

SERIALIZABILITY
When multiple transactions are running concurrently, there is a possibility that the database
may be left in an inconsistent state. Serializability is a concept that helps us to check
which schedules are Serializable. A Serializable schedule is one that always leaves the
database in a consistent state.

What is a Serializable schedule: -


A Serializable schedule always leaves the database in a consistent state. A serial schedule is
always a Serializable schedule, because in a serial schedule a transaction only starts when the
other transaction has finished execution. A serial schedule doesn't allow concurrency: only one
transaction executes at a time, and the next starts when the currently running transaction has
finished. There are two types of schedules, Serial and Non-Serial. A serial schedule doesn't
support concurrent execution of transactions, while a non-serial schedule does. A non-serial
schedule may leave the database in an inconsistent state, so we need to check these non-serial
schedules for Serializability.
Types of Serializability: -
There are two types of Serializability.
• Conflict Serializability
• View Serializability

• Conflict Serializability: -
Conflict Serializability is one of the type of Serializability, which can be used to check whether
a non-serial schedule is conflict serializable or not.

What is Conflict Serializability: -


A schedule is called conflict serializable if we can convert it into a serial schedule by
swapping its non-conflicting operations.

Conflicting operations: -
Two operations are said to be in conflict if they satisfy all of the following three conditions:
• The operations belong to different transactions.
• The operations work on the same data item.
• At least one of the operations is a write operation.
Some examples to understand this
Example 1: -
Operation W(X) of transaction T1 and operation R(X) of transaction T2 are conflicting
operations, because they satisfy all three conditions mentioned above: they belong to
different transactions, they work on the same data item X, and one of the operations is a
write operation.
Example 2:
Similarly Operations W(X) of T1 and W(X) of T2 are conflicting operations.
Example 3:
Operations W(X) of T1 and W(Y) of T2 are non-conflicting operations, because the two
write operations do not work on the same data item, so they fail the
second condition.
Example 4:
Similarly, R(X) of T1 and R(X) of T2 are non-conflicting operations because neither of them is
a write operation.
Example 5:
Similarly, W(X) of T1 and R(X) of T1 are non-conflicting operations because both
operations belong to the same transaction T1.

Conflict Equivalent Schedules


Two schedules are said to be conflict equivalent if one schedule can be converted into the
other by swapping non-conflicting operations.
Example of Conflict Serializability: -
Let's consider this schedule:
T1        T2
-----     ------
R(A)
R(B)
          R(A)
          R(B)
          W(B)
W(A)
To convert this schedule into a serial schedule we would have to swap the R(A) operation of
transaction T2 with the W(A) operation of transaction T1. However, we cannot swap these two
operations because they are conflicting operations, so this given schedule is not
Conflict Serializable.
Let's take another example:
T1        T2
-----     ------
R(A)
          R(A)
          R(B)
          W(B)
R(B)
W(A)
Let’s swap non-conflicting operations:

After swapping R(A) of T1 and R(A) of T2 we get:

T1        T2
-----     ------
          R(A)
R(A)
          R(B)
          W(B)
R(B)
W(A)

After swapping R(A) of T1 and R(B) of T2 we get:

T1        T2
-----     ------
          R(A)
          R(B)
R(A)
          W(B)
R(B)
W(A)

After swapping R(A) of T1 and W(B) of T2 we get:

T1        T2
-----     ------
          R(A)
          R(B)
          W(B)
R(A)
R(B)
W(A)

We finally get a serial schedule (T2 followed by T1) after swapping all the non-conflicting
operations, so we can say that the given schedule is Conflict Serializable.
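The swapping argument above is equivalent to the standard precedence-graph test: draw an edge
Ti → Tj whenever an operation of Ti conflicts with a later operation of Tj, and check for
cycles. Here is a Python sketch of that test applied to both example schedules; representing a
schedule as a list of (transaction, action, item) tuples is our assumption:

def conflict_serializable(schedule):
    # build precedence edges: Ti -> Tj for each pair of conflicting
    # operations where Ti's operation comes first
    edges = set()
    for i, (ti, ai, xi) in enumerate(schedule):
        for tj, aj, xj in schedule[i + 1:]:
            if ti != tj and xi == xj and "W" in (ai, aj):
                edges.add((ti, tj))
    # conflict serializable iff the precedence graph is acyclic
    visited, on_stack = set(), set()
    def cyclic(t):
        visited.add(t)
        on_stack.add(t)
        for u, v in edges:
            if u == t and (v in on_stack or (v not in visited and cyclic(v))):
                return True
        on_stack.discard(t)
        return False
    txns = {t for t, _, _ in schedule}
    return not any(cyclic(t) for t in txns if t not in visited)

# first example above: cycle T1 -> T2 -> T1, so not conflict serializable
s1 = [("T1", "R", "A"), ("T1", "R", "B"), ("T2", "R", "A"),
      ("T2", "R", "B"), ("T2", "W", "B"), ("T1", "W", "A")]
# second example above: only T2 -> T1 edges, equivalent to serial T2 then T1
s2 = [("T1", "R", "A"), ("T2", "R", "A"), ("T2", "R", "B"),
      ("T2", "W", "B"), ("T1", "R", "B"), ("T1", "W", "A")]
print(conflict_serializable(s1), conflict_serializable(s2))   # False True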

• View Serializability: -
View Serializability is a process to find out whether a given schedule is view Serializable or
not. To check whether a given schedule is view Serializable, we need to check whether it is
View Equivalent to its serial schedule. Let's take an example to understand what this means.
Given Schedule: -
T1        T2
-----     ------
R(X)
W(X)
          R(X)
          W(X)
R(Y)
W(Y)
          R(Y)
          W(Y)

Serial Schedule of the above given schedule: -


As we know, in a serial schedule a transaction only starts when the currently running
transaction has finished. So the serial schedule of the above given schedule looks like this:
T1        T2
-----     ------
R(X)
W(X)
R(Y)
W(Y)
          R(X)
          W(X)
          R(Y)
          W(Y)

If we can prove that the given schedule is View Equivalent to its serial schedule, then the
given schedule is called view Serializable.

Why we need View Serializability: -


We know that a serial schedule never leaves the database in an inconsistent state, because
there are no concurrent transaction executions. However, a non-serial schedule can leave the
database in an inconsistent state, because multiple transactions run concurrently. By
checking that a given non-serial schedule is view serializable, we make sure that it is a
consistent schedule.
You may be wondering: instead of checking whether a non-serial schedule is serializable,
can't we use serial schedules all the time? The answer is no, because concurrent execution of
transactions utilizes system resources more fully and is considerably faster than serial
schedules.

View Equivalent: -
Two schedules S1 and S2 are said to be view equivalent if they satisfy all of the following
conditions:
• Initial Read: -
Initial read of each data item in the transactions must match in both schedules. For example,
if transaction T1 reads a data item X before transaction T2 in schedule S1, then in schedule
S2, T1 should read X before T2.
Read vs Initial Read: You may be confused by the term initial read. Here, initial read
means the first read operation on a data item; for example, a data item X can be read
multiple times in a schedule, but the first read operation on X is called the initial read.
This will become clearer in the example below.

• Final Write: -
Final write operations on each data item must match in both schedules. For example, if a
data item X is last written by transaction T1 in schedule S1, then in S2 the last write
operation on X should also be performed by T1.

• Update Read: -
If in schedule S1 transaction T2 reads a data item updated by T1, then in schedule S2, T2
should also read the value of that item after T1's write operation on it. For example, if in
schedule S1, T2 performs a read operation on X after the write operation on X by T1, then in
S2, T2 should read X after T1 performs its write on X.
View Serializable Example: -

Taking the given schedule above as S1 and its serial schedule as S2, let's check the three
conditions of view Serializability:


• Initial Read: - In schedule S1, transaction T1 first reads the data item X. In S2 also,
transaction T1 first reads the data item X. Let's check for Y. In schedule S1, transaction T1
first reads the data item Y. In S2 also, the first read operation on Y is performed by T1. We
checked for both data items X and Y, and the initial read condition is satisfied in S1 and S2.
• Final Write: - In schedule S1, the final write operation on X is done by transaction T2. In S2
also transaction T2 performs the final write on X. Let’s check for Y. In schedule S1, the final
write operation on Y is done by transaction T2. In schedule S2, final write on Y is done by
T2. We checked for both data items X & Y and the final write condition is satisfied in S1 &
S2.
• Update Read: - In S1, transaction T2 reads the value of X, written by T1. In S2, the same
transaction T2 reads the X after it is written by T1. In S1, transaction T2 reads the value of
Y, written by T1. In S2, the same transaction T2 reads the value of Y after it is updated by
T1. The update read condition is also satisfied for both the schedules.
Result: All three conditions that check whether the two schedules are view equivalent are
satisfied in this example, which means S1 and S2 are view equivalent. Also, since S2 is the
serial schedule of S1, we can say that the schedule S1 is a view Serializable schedule.
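A small Python sketch of this check: it collects the initial-read, final-write, and update-read
information for each schedule and compares them. Schedules as lists of (transaction, action,
item) tuples are an assumed representation, not part of the original text:

def view_info(schedule):
    initial_read, final_write, update_reads = {}, {}, set()
    last_writer = {}
    for txn, action, item in schedule:
        if action == "R":
            initial_read.setdefault(item, txn)        # first read of each item
            if item in last_writer:                   # read of an updated value
                update_reads.add((txn, item, last_writer[item]))
        else:                                          # "W"
            final_write[item] = txn                    # survives only if it is last
            last_writer[item] = txn
    return initial_read, final_write, update_reads

def view_equivalent(s1, s2):
    return view_info(s1) == view_info(s2)

s1 = [("T1","R","X"), ("T1","W","X"), ("T2","R","X"), ("T2","W","X"),
      ("T1","R","Y"), ("T1","W","Y"), ("T2","R","Y"), ("T2","W","Y")]
s2 = [("T1","R","X"), ("T1","W","X"), ("T1","R","Y"), ("T1","W","Y"),
      ("T2","R","X"), ("T2","W","X"), ("T2","R","Y"), ("T2","W","Y")]
print(view_equivalent(s1, s2))   # True, so s1 is view serializable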

TRANSACTIONS AS SQL STATEMENTS


There are many more details, and the newer standards have more commands for transaction
processing. The basic definition of an SQL transaction is similar to our already defined concept of a
transaction. That is, it is a logical unit of work and is guaranteed to be atomic. A single SQL
statement is always considered to be atomic—either it completes execution without an error or it
fails and leaves the database unchanged.
With SQL, there is no explicit Begin_Transaction statement. Transaction initiation is done
implicitly when particular SQL statements are encountered. However, every transaction must
have an explicit end statement, which is either a COMMIT or a ROLLBACK. Every transaction has
certain characteristics attributed to it. These characteristics are specified by a SET TRANSACTION
statement in SQL. The characteristics are the access mode, the diagnostic area size, and the
isolation level.
The access mode can be specified as READ ONLY or READ WRITE. The default is READ WRITE,
unless the isolation level of READ UNCOMMITTED is specified (see below), in which case READ
ONLY is assumed. A mode of READ WRITE allows select, update, insert, delete, and create
commands to be executed. A mode of READ ONLY, as the name implies, is simply for data retrieval.
The diagnostic area size option, DIAGNOSTIC SIZE n, specifies an integer value n, which indicates
the number of conditions that can be held simultaneously in the diagnostic area. These conditions
supply feedback information (errors or exceptions) to the user or program on the n most recently
executed SQL statements.
The isolation level option is specified using the statement ISOLATION LEVEL <isolation>, where
the value for <isolation> can be READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, or
SERIALIZABLE. The default isolation level is SERIALIZABLE, although some systems use READ
COMMITTED as their default. The use of the term SERIALIZABLE here is based on not allowing
violations that cause dirty read, unrepeatable read, and phantoms, and it is thus not identical
to the way serializability was defined earlier.
If a transaction executes at a lower isolation level than SERIALIZABLE, then one or more of the
following three violations may occur:
1. Dirty read. A transaction T1 may read the update of a transaction T2, which has not yet
committed. If T2 fails and is aborted, then T1 would have read a value that does not exist and is
incorrect.
2. Nonrepeatable read. A transaction T1 may read a given value from a table. If another
transaction T2 later updates that value and T1 reads that value again, T1 will see a different
value.
3. Phantoms. A transaction T1 may read a set of rows from a table, perhaps based on some
condition specified in the SQL WHERE-clause. Now suppose that a transaction T2 inserts a new
row that also satisfies the WHERE-clause condition used in T1 into the table used by T1. If
the query in T1 is repeated, then T1 will see a phantom, a row that previously did not exist.

Table 21.1 summarizes the possible violations for the different isolation levels. An entry of Yes
indicates that a violation is possible and an entry of No indicates that it is not possible. READ
UNCOMMITTED is the most forgiving, and SERIALIZABLE is the most restrictive in that it avoids
all three of the problems mentioned above:

Isolation Level       Dirty Read   Nonrepeatable Read   Phantom
READ UNCOMMITTED      Yes          Yes                  Yes
READ COMMITTED        No           Yes                  Yes
REPEATABLE READ       No           No                   Yes
SERIALIZABLE          No           No                   No

A sample SQL transaction might look like the following:
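A sketch of such a transaction, written as embedded SQL and matching the description below; the
EMPLOYEE table, its columns, and the literal values are illustrative assumptions in the style of
the usual textbook schema:

EXEC SQL WHENEVER SQLERROR GOTO UNDO;
EXEC SQL SET TRANSACTION
    READ WRITE
    DIAGNOSTIC SIZE 5
    ISOLATION LEVEL SERIALIZABLE;
EXEC SQL INSERT INTO EMPLOYEE (Fname, Lname, Ssn, Dno, Salary)
    VALUES ('Robert', 'Smith', '991004321', 2, 35000);
EXEC SQL UPDATE EMPLOYEE SET Salary = Salary * 1.1 WHERE Dno = 2;
EXEC SQL COMMIT;
GOTO THE_END;
UNDO: EXEC SQL ROLLBACK;
THE_END: ... ;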


The above transaction consists of first inserting a new row in the EMPLOYEE table and then
updating the salary of all employees who work in department 2. If an error occurs on any of the
SQL statements, the entire transaction is rolled back. This implies that any updated salary (by this
transaction) would be restored to its previous value and that the newly inserted row would be
removed.
SQL provides a number of transaction-oriented features. The DBA or database programmers can
take advantage of these options to try improving transaction performance by relaxing
serializability if that is acceptable for their applications.

Introduction to Concurrency Control


One of the fundamental properties of a transaction is isolation. When several transactions
execute concurrently in the database, however, the isolation property may no longer be
preserved. Under concurrency control, multiple transactions can execute simultaneously, which
may affect their results, so it is important to manage the order in which conflicting
operations execute.
Problems of concurrency control: -
Several problems can occur when concurrent transactions are executed in an uncontrolled
manner. These are the same four problems described in detail earlier under Why Concurrency
Control Is Needed:
• The Lost Update Problem: two transactions read the same item and then update it, so the
effect of the first update is overwritten by the second.
• The Temporary Update (Dirty Read) Problem: a transaction reads a value written by another
transaction that subsequently fails and rolls back, so the value read never became part of
the stable database.
• The Incorrect Summary Problem: an aggregate function computed while other transactions are
updating the underlying items sees some values before they are updated and others after.
• The Unrepeatable Read Problem: a transaction reads the same item twice and receives
different values because another transaction changed the item between the two reads.

CONCURRENCY CONTROL PROTOCOL


Concurrency control protocols ensure atomicity, isolation, and Serializability of concurrent
transactions. The concurrency control protocol can be divided into three categories:
• Lock based protocol
• Time-stamp protocol
• Validation based protocol

INTRODUCTION TO LOCK-BASED PROTOCOLS


In this type of protocol, a transaction cannot read or write data until it acquires an
appropriate lock on it. A lock is a mechanism which is important in concurrency control: it is a
data variable associated with a data item, and it controls concurrent access to that item. The
lock signifies which operations can be performed on the data item. Locks help synchronize access
to the database items by concurrent transactions. All lock requests are made to the
concurrency-control manager; transactions proceed only once the lock request is granted.
For example, in traffic there are signals which indicate stop and go. When one signal allows
traffic to pass, the other signals are locked. Similarly, in database transactions, only one
transaction may hold a conflicting lock at a time while the other transactions are locked out.
If locking is not done properly, it may result in inconsistent and corrupt data. Locking manages
the order between the conflicting pairs among transactions at the time of execution.
There are two types of lock:
• Shared lock.
• Exclusive lock.
• Shared Lock (S): - A Shared Lock (S) is also known as a read-only lock. As the name suggests,
it can be shared between transactions, because while holding this lock a transaction does not
have permission to update data on the data item. An S-lock is requested using the lock-S
instruction.

• Exclusive Lock (X): - With an Exclusive Lock, a data item can be both read and written. This
lock is exclusive and cannot be held simultaneously with any other lock on the same data item.
An X-lock is requested using the lock-X instruction.

Lock Compatibility Matrix: -


Lock Compatibility Matrix controls whether multiple transactions can acquire locks on the same
resource at the same time:

            Shared    Exclusive
Shared      True      False
Exclusive   False     False

If a resource is already locked by another transaction, then a new lock request can be
granted only if the mode of the requested lock is compatible with the mode of the existing lock.
Any number of transactions can hold shared locks on an item, but if any transaction holds an
exclusive lock on an item, no other transaction may hold any lock on that item.
Upgrade / Downgrade locks: - A transaction that holds a lock on an item A is allowed under
certain conditions to change the lock from one state to another.
Upgrade: - S(A) can be upgraded to X(A) if Ti is the only transaction holding the S-lock on
element A.
Downgrade: - We may downgrade X(A) to S(A) when we no longer need to write on data-item A. As
we were already holding the X-lock on A, we need not check any conditions.
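A minimal Python sketch of a lock table enforcing this compatibility matrix together with the
upgrade rule; blocking, queuing of waiters, and downgrades are omitted for brevity, so this is
an illustration rather than a full lock manager:

locks = {}   # item -> (mode, set of holding transactions), mode "S" or "X"

def lock(txn, item, mode):
    if item not in locks:
        locks[item] = (mode, {txn})                # free item: grant immediately
        return True
    held_mode, holders = locks[item]
    if mode == "S" and held_mode == "S":           # S is compatible with S
        holders.add(txn)
        return True
    if mode == "X" and holders == {txn}:           # upgrade: txn must be the
        locks[item] = ("X", {txn})                 # sole holder of the lock
        return True
    return False                                   # incompatible: caller must wait

assert lock("T1", "A", "S")
assert lock("T2", "A", "S")        # any number of S-locks may coexist
assert not lock("T1", "A", "X")    # upgrade denied: T2 also holds S(A)
assert lock("T3", "B", "X")
assert not lock("T1", "B", "S")    # no lock coexists with an X-lock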
Applying simple locking, we may not always produce Serializable results; it may also lead to
deadlock or inconsistency.
Problem with Simple Locking: -
Consider the partial schedule:
      T1              T2
1     lock-X(B)
2     read(B)
3     B := B - 50
4     write(B)
5                     lock-S(A)
6                     read(A)
7                     lock-S(B)
8     lock-X(A)
9     ……              ……
Deadlock: - Deadlock refers to a situation where two or more processes are each waiting for the
other to release a resource, or where more than two processes are waiting for resources in a
circular chain. Consider the execution above: T1 holds an exclusive lock on B, and T2 holds a
shared lock on A. In statement 7, T2 requests a lock on B, while in statement 8, T1 requests a
lock on A. As you may notice, this creates a deadlock, as neither transaction can proceed with
its execution.
Starvation: - Starvation is also possible if the concurrency control manager is badly designed.
Starvation is the situation where a transaction must wait for an indefinite period to acquire a
lock. The following are the reasons for starvation:
• The waiting scheme for locked items is not properly managed.
• There is a resource leak.
• The same transaction is selected as a victim repeatedly.
For example, a transaction may be waiting for an X-lock on an item while a sequence of other
transactions request and are granted an S-lock on the same item. This can be avoided if the
concurrency control manager is properly designed.

Two Phase Locking (2PL) Protocol: -


There are two types of locks available, Shared S(a) and Exclusive X(a). Implementing this
lock system without any restrictions gives us the simple lock-based protocol (or binary
locking), but it has its own disadvantages: it does not guarantee Serializability. Schedules may
follow the locking rules, yet a non-Serializable schedule may result. To guarantee
Serializability, we must follow an additional protocol concerning the positioning of locking and
unlocking operations in every transaction. This is where the concept of Two-Phase Locking (2-PL)
comes into the picture; 2-PL ensures Serializability.
The Two-Phase Locking protocol is also known as the 2PL protocol. In this type of locking
protocol, a transaction may not acquire any new lock after it has released one of its locks.
The protocol divides the execution phase of a transaction into three parts:
• In the first phase, when the transaction begins to execute, it acquires the locks it needs.
• The second part is where the transaction holds all its locks. When the transaction releases
its first lock, the third phase starts.
• In this third phase, the transaction cannot demand any new locks. Instead, it only releases
the acquired locks.

The Two-Phase Locking protocol requires each transaction to make its lock and unlock requests
in two phases:
• Growing Phase: - In this phase the transaction may obtain locks but may not release any locks.
• Shrinking Phase: - In this phase, the transaction may release locks but may not obtain any
new lock.
It is true that the 2PL protocol offers Serializability. However, it does not ensure that
deadlocks do not happen. If lock conversion is allowed, the following rules apply, as in the
example below:
• Upgrading a lock (from S(a) to X(a)) is allowed only in the growing phase.
• Downgrading a lock (from X(a) to S(a)) must be done in the shrinking phase.
Example: -
       T1              T2
1      LOCK – S(A)
2                      LOCK – S(A)
3      LOCK – X(B)
4      ------          ------
5      UNLOCK(A)
6                      LOCK – X(C)
7      UNLOCK(B)
8                      UNLOCK(A)
9                      UNLOCK(C)
10     ------          ------
The following shows how locking and unlocking work with 2-PL.
Transaction T1:
• Growing phase: from step 1-3
• Shrinking phase: from step 5-7
• Lock point: at 3

Transaction T2:
• Growing phase: from step 2-6
• Shrinking phase: from step 8-9
• Lock point: at 6
The lock point is the point at which the growing phase ends, i.e., when the transaction takes
the final lock it needs to carry out its work.
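The two-phase rule itself can be sketched in a few lines of Python: once a transaction performs
its first unlock, it enters the shrinking phase and any further lock request is rejected. The
lock table and deadlock handling are deliberately omitted; this only tracks the phases:

class TwoPhaseTxn:
    def __init__(self, name):
        self.name = name
        self.shrinking = False        # False while still in the growing phase
        self.held = set()

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError(f"{self.name}: 2PL violation (lock after unlock)")
        self.held.add(item)           # the last lock acquired is the lock point

    def unlock(self, item):
        self.shrinking = True         # the first unlock starts the shrinking phase
        self.held.discard(item)

t1 = TwoPhaseTxn("T1")
t1.lock("A")
t1.lock("B")        # growing phase
t1.unlock("A")      # shrinking phase begins
t1.lock("C")        # raises RuntimeError: T1 has already released a lock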

2-PL ensures Serializability, but there are still some drawbacks of 2-PL:
• Cascading rollback is possible under 2-PL.
• Deadlocks and starvation are possible.

Cascading Rollback is possible under 2-PL: -


Consider a schedule in which T2 and T3 each perform a dirty read of a value written by T1 (at
lines 8 and 12 of the schedule, respectively). When T1 fails, we have to roll back T2 and T3 as
well. Hence cascading rollbacks are possible in 2-PL.

Deadlock in 2-PL: -
Consider this simple example with two transactions T1 and T2.
Schedule: lock-X1(A) lock-X2(B) lock-X1(B) lock-X2(A)
Here lock-Xi denotes an exclusive lock request by transaction Ti. T1 holds A and requests B,
while T2 holds B and requests A; drawing the wait-for graph, you can detect the loop. So
deadlock is also possible in 2-PL.

Two-phase locking may also limit the amount of concurrency that occurs in a schedule,
because a transaction may not be able to release an item after it has finished using it. This
is the price of the protocols and other restrictions we place on the schedule to ensure
Serializability, deadlock freedom, and other properties. The 2-PL described so far is called
Basic 2PL. To sum it up, it ensures Conflict Serializability but does not prevent Cascading
Rollback and Deadlock. There are three other variations of 2PL: Strict 2PL, Conservative 2PL,
and Rigorous 2PL.

Strict Two-Phase Locking Method: - Strict two-phase locking is almost similar to 2PL. The
difference is that Strict-2PL never releases an exclusive (X) lock before the transaction
commits or aborts: it holds all its exclusive locks until the commit point and releases them in
one go when the transaction is over.

Rigorous 2PL: - This requires that, in addition to the locking being two-phase, all
Exclusive (X) and Shared (S) locks held by the transaction not be released until after the
transaction commits.

Conservative 2PL: -
Conservative 2-PL requires a transaction to lock all the items it will access before the
transaction begins execution, by predeclaring its read-set and write-set. Conservative 2-PL is
deadlock free, but it does not ensure strict schedules. It is difficult to use in practice
because of the need to predeclare the read-set and the write-set, which is not possible in many
situations. In practice, the most popular variation of 2-PL is Strict 2-PL.
A Venn diagram of these classes shows the schedules which are rigorous and strict within the
universe of schedules which can be produced under 2-PL. As the diagram suggests, and as can also
be logically concluded, if a schedule is Rigorous then it is Strict: we put a restriction on a
schedule which makes it Strict, and adding another restriction to the list makes it Rigorous.

Graph Based Concurrency Control Protocol: -


Graph-Based Protocol is a lock-based concurrency control mechanism that ensures Serializability.
Graph-based protocols are an alternative to the two-phase locking protocol. Let two transactions
T1 and T2 be defined as:

T1:               T2:
Read(A)           Read(A)
A := A - 50       Temp := A * 0.1
Write(A)          A := A - Temp
Read(B)           Write(A)
B := B + 50       Read(B)
Write(B)          B := B + Temp
                  Write(B)
Impose a partial ordering → on the set D = {d1, d2, d3, …, dn} of all data items:
• If di → dj, then any transaction accessing both di and dj must access di before accessing dj.
• This implies that the set D may now be viewed as a directed acyclic graph (DAG), called a
database graph.
The tree-based protocol is a simple implementation of the graph-based protocol.

Tree Based Protocol: -


The rules for the tree-based protocol are as follows. Each transaction Ti can lock a data item
at most once, and must observe the following rules:
• Only exclusive locks are allowed.
• The first lock by Ti may be on any data item.
• Subsequently, a data item Q can be locked by Ti only if the parent of Q is currently locked
by Ti.
• Data items may be unlocked at any time.
• A data item that has been locked and unlocked by Ti cannot subsequently be relocked by Ti.
Example: -
Let us consider a database D={A,B,C,D,E,F,G,H,I,J}.

Database Graph: -

The following four transactions follow the tree protocol on the preceding database graph:
• T1: lock – x(B); lock – x(E); lock – x(D); unlock – x(B); unlock – x(E); lock – x(G); unlock – x(D);
unlock – x(G).
• T2: lock – x(D); lock – x(H); unlock – x(D); unlock – x(H).
• T3: lock – x(B); lock – x(E); unlock – x(E); unlock – x(B).
• T4: lock – x(H); lock – x(J); unlock – x(H); unlock – x(J).

Serializable Schedule under the tree protocol


T1                T2                T3                T4
lock – x(B)
                  lock – x(D)
                  lock – x(H)
                  unlock – x(D)
lock – x(E)
lock – x(D)
unlock – x(B)
unlock – x(E)
                                    lock – x(B)
                                    lock – x(E)
                  unlock – x(H)
lock – x(G)
unlock – x(D)
                                                      lock – x(H)
                                                      lock – x(J)
                                                      unlock – x(H)
                                                      unlock – x(J)
                                    unlock – x(E)
                                    unlock – x(B)
unlock – x(G)

Advantages: -
• The tree protocol ensures conflict Serializability.
• The protocol is deadlock free, so no rollbacks are required.
• Unlocking may occur earlier in the tree locking protocol than in the two-phase locking
protocol, which means shorter waiting times and an increase in concurrency.

Disadvantages: -
• In the tree locking protocol, a transaction may have to lock data items that it does not
access, which means increased locking overhead and additional waiting time.
• There is a potential decrease in concurrency.

Introduction to Recovery Protocols


Database systems, like any other computer systems, are subject to failures, but the data stored
in them must be available as and when required. When a database fails, it must possess the
facilities for fast recovery. It must also provide atomicity, i.e., either transactions are
completed successfully and committed (the effect is recorded permanently in the database) or
the transaction has no effect on the database.
There are both automatic and non-automatic ways of backing up data and recovering from
failure situations. The techniques used to recover data lost due to system crashes,
transaction errors, viruses, catastrophic failures, incorrect command execution, etc. are
called database recovery techniques. To prevent data loss, recovery techniques based on
deferred update and immediate update, or on backing up data, can be used.
Recovery techniques are heavily dependent upon the existence of a special file known as
a system log. It contains information about the start and end of each transaction and any
updates which occur in the transaction. The log keeps track of all transaction operations that
affect the values of database items. This information is needed to recover from transaction
failure.
The log is kept on disk. The following entries are recorded in it:
• start_transaction(T): This log entry records that transaction T starts execution.
• read_item(T, X): This log entry records that transaction T reads the value of database item
X.
• write_item(T, X, old_value, new_value): This log entry records that transaction T changes
the value of the database item X from old_value to new_value. The old value is sometimes
known as the before-image of X, and the new value is known as the after-image of X.
• commit(T): This log entry records that transaction T has completed all accesses to the
database successfully and its effect can be committed (recorded permanently) to the
database.
• abort(T): This records that transaction T has been aborted.
• checkpoint: A checkpoint is a mechanism whereby all the previous log records are removed
from the system and stored permanently on a storage disk. A checkpoint declares a point
before which the DBMS was in a consistent state and all transactions were committed.
A transaction T reaches its commit point when all its operations that access the database have
been executed successfully, i.e., the transaction has reached the point at which it will
not abort (terminate without completing). Once committed, the transaction is permanently
recorded in the database. Commitment always involves writing a commit entry to the log and
writing the log to disk. At the time of a system crash, the log is searched backward for all
transactions T that have written a start_transaction(T) entry into the log but have not written
a commit(T) entry yet; these transactions may have to be rolled back to undo their effect on
the database during the recovery process.
• Undoing – If a transaction crashes, then the recovery manager may undo the transaction, i.e.,
reverse its operations. This involves examining the log for each write_item(T, X, old_value,
new_value) entry of the transaction and setting the value of item X in the database back to
old_value.
There are two major techniques for recovery from non-catastrophic transaction failures:
deferred updates and immediate updates.
• Deferred update – This technique does not physically update the database on disk until a
transaction has reached its commit point. Before reaching commit, all transaction updates are
recorded in the local transaction workspace. If a transaction fails before reaching its
commit point, it will not have changed the database in any way, so UNDO is not needed. It may
be necessary to REDO the effect of the operations that are recorded in the local
transaction workspace, because their effect may not yet have been written to the database.
Hence, deferred update is also known as the NO-UNDO/REDO algorithm.
• Immediate update – In the immediate update, the database may be updated by some
operations of a transaction before the transaction reaches its commit point. However, these
operations are recorded in a log on disk before they are applied to the database, making
recovery still possible. If a transaction fails to reach its commit point, the effect of its
operation must be undone i.e. the transaction must be rolled back hence we require both
undo and redo. This technique is known as undo/redo algorithm.
• Caching/Buffering – In this, one or more disk pages that include the data items to be updated
are cached into main memory buffers and then updated in memory before being written
back to disk. A collection of in-memory buffers, called the DBMS cache, is kept under the
control of the DBMS for holding these buffers. A directory is used to keep track of which
database items are in the buffers. A dirty bit is associated with each buffer: it is 0 if the
buffer has not been modified, and 1 if it has.
• Shadow paging – This provides atomicity and durability. A directory with n entries is
constructed, where the ith entry points to the ith database page on disk. When a
transaction begins executing, the current directory is copied into a shadow directory. When
a page is to be modified, a shadow page is allocated in which the changes are made, and when
it is ready to become durable, all pages that referred to the original are updated to refer
to the new replacement page.
Some of the backup techniques are as follows:
• Full database backup – The full database, including the data and the metadata needed to
restore the whole database (such as full-text catalogs), is backed up on a predefined
schedule.
• Differential backup – This stores only the data changes that have occurred since the last
full database backup. When the same data has changed many times since the last full database
backup, a differential backup stores the most recent version of the changed data. To restore
from it, we first need to restore the full database backup.
• Transaction log backup – This backs up all events that have occurred in the database, as a
record of every single statement executed. It is a backup of the transaction log entries and
contains all the transactions that have happened to the database. Through it, the database
can be recovered to a specific point in time. It is even possible to take a backup of the
transaction log after the data files are destroyed, so that not a single committed
transaction is lost. (A rough sketch of the restore order follows this list.)
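The restore order implied by these three backup types can be sketched roughly as follows; the backups-as-dictionaries model and the timestamped log entries are hypothetical simplifications of a real backup format:

```python
def restore_to_point_in_time(full_backup, differentials, log_entries, target_time):
    db = dict(full_backup)                 # 1. restore the full backup first
    if differentials:
        db.update(differentials[-1])       # 2. apply only the most recent differential
    for ts, item, value in log_entries:    # 3. replay transaction log entries taken
        if ts <= target_time:              #    after the differential, up to the
            db[item] = value               #    chosen point in time
    return db
```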
1. Deferred Update:
It is a technique for maintaining the transaction log files of the DBMS, also called the
NO-UNDO/REDO technique. It is used to recover from transaction failures that occur due to
power, memory, or OS failures. When a transaction executes, its updates are not applied to the
database immediately; they are first recorded in the log file, and the changes are applied only
once commit is done. This is the "redoing" process. If a rollback occurs, none of the changes
are applied to the database, and the changes recorded in the log file are discarded. If commit
is done before the system crashes, then after the system restarts the changes recorded in the
log file are applied to the database.
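A minimal sketch of the deferred-update (NO-UNDO/REDO) idea, assuming the same tuple-based log as earlier and plain dictionaries for the workspace and the database:

```python
def run_deferred(tid, updates, log, database):
    """Execute a transaction: updates touch only the workspace and the log."""
    workspace = {}
    for item, new_value in updates:
        workspace[item] = new_value                   # local workspace only
        log.append(("write_item", tid, item, new_value))
    log.append(("commit", tid))                       # commit point reached
    for item, new_value in workspace.items():
        database[item] = new_value                    # apply (redo) after commit

def recover_deferred(log, database):
    """After a crash: REDO committed transactions. No UNDO is ever needed,
    because uncommitted transactions never touched the database."""
    committed = {r[1] for r in log if r[0] == "commit"}
    for record in log:
        if record[0] == "write_item" and record[1] in committed:
            _, _, item, new_value = record
            database[item] = new_value                # redo is idempotent
```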
2. Immediate Update:
It is a technique for maintaining the transaction log files of the DBMS, also called the
UNDO/REDO technique. It is used to recover from transaction failures that occur due to power,
memory, or OS failures. When a transaction executes, its updates are applied directly to the
database, and a log file containing both the old and the new values is also maintained. Once
commit is done, all the changes are stored permanently in the database. If a rollback occurs,
the old values are restored in the database and all the changes made by the transaction are
discarded; this is the "undoing" process. If commit is done before the system crashes, then
after the system restarts the changes remain stored permanently in the database.
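A minimal sketch of immediate-update (UNDO/REDO) recovery under the same assumptions; each write_item record carries both the old and the new value, as described above:

```python
def recover_immediate(log, database):
    committed = {r[1] for r in log if r[0] == "commit"}
    # UNDO: scan backward, restoring old values written by uncommitted transactions.
    for record in reversed(log):
        if record[0] == "write_item" and record[1] not in committed:
            _, _, item, old_value, _new = record
            database[item] = old_value
    # REDO: scan forward, reapplying new values written by committed transactions.
    for record in log:
        if record[0] == "write_item" and record[1] in committed:
            _, _, item, _old, new_value = record
            database[item] = new_value

log = [
    ("start_transaction", "T1"),
    ("write_item", "T1", "x", 5, 10),
    ("commit", "T1"),
    ("start_transaction", "T2"),
    ("write_item", "T2", "x", 10, 99),   # applied to the database, then crash
]
database = {"x": 99}
recover_immediate(log, database)
print(database)                           # {'x': 10}: T2 undone, T1 preserved
```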
Difference between Deferred Update and Immediate Update:
1. In deferred update, the changes are not applied immediately to the database. In immediate
update, the changes are applied directly to the database.
2. In deferred update, the log file contains all the changes that are to be applied to the
database. In immediate update, the log file contains both the old and the new values.
3. In deferred update, once rollback is done all the records of the log file are discarded and
no changes are applied to the database. In immediate update, once rollback is done the old
values are restored into the database using the records of the log file.
4. Concepts of buffering and caching are used in the deferred update method. The concept of
shadow paging is used in the immediate update method.
5. The major disadvantage of deferred update is that it requires a lot of time for recovery in
case of a system failure. The major disadvantage of immediate update is that there are frequent
I/O operations while the transaction is active.
Introduction to Shadow Paging
Shadow Paging is a recovery technique that is used to recover the database. In this technique,
the database is considered to be made up of fixed-size logical units of storage called pages.
The pages are mapped into physical blocks of storage with the help of a page table, which has
one entry for each logical page of the database. This method uses two page tables, named the
current page table and the shadow page table.
The entries in the current page table point to the most recent database pages on disk. The
shadow page table is created when the transaction starts, by copying the current page table; it
is then saved on disk, while the current page table is used for the transaction. Entries in the
current page table may change during execution, but the shadow page table is never changed.
After the transaction commits, the two tables become identical again.
This technique is also known as out-of-place updating.
To understand the concept, consider an example in which two write operations are performed, on
pages 3 and 5. Before the write operation on page 3 starts, the current page table points to the
old page 3. When the write operation starts, the following steps are performed:
1. First, a search is made for an available free block among the disk blocks.
2. After a free block is found, page 3 is copied to it; the copy is referred to as page 3 (new).
3. The current page table now points to page 3 (new) on disk, but the shadow page table still
points to the old page 3, because the shadow table is never modified.
4. The changes are propagated to page 3 (new), which is pointed to by the current page table.
COMMIT Operation:
To commit a transaction, the following steps are performed:
1. All the modifications made by the transaction that are still in the buffers are transferred
to the physical database.
2. The current page table is output to disk.
3. The disk address of the current page table is written to the fixed location in stable
storage that holds the address of the shadow page table, overwriting the address of the old
shadow page table. With this single write, the current page table becomes the shadow page table
and the transaction is committed.
Failure:
If the system crashes during the execution of a transaction but before the commit operation, it
is sufficient to free the modified database pages and discard the current page table; the state
of the database before the transaction executed is recovered by reinstalling the shadow page
table.
If the system crashes after the final write of the commit operation, the propagation of the
transaction's changes is unaffected: the changes are preserved, and there is no need to perform
a redo operation.
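A minimal sketch of the write, commit, and crash behavior just described, assuming Python dictionaries stand in for disk blocks and page tables, and that overwriting the single root pointer is atomic:

```python
disk = {"blk0": "page3-old", "blk1": "page5-old"}   # physical blocks on disk
shadow_table = {3: "blk0", 5: "blk1"}                # saved on disk, never modified

def begin_transaction(shadow):
    return dict(shadow)                               # copy shadow -> current page table

def write_page(current, page_no, new_contents):
    free_block = f"blk{len(disk)}"                    # 1. find an available free block
    disk[free_block] = new_contents                   # 2. write the new page there
    current[page_no] = free_block                     # 3. only the current table moves

current_table = begin_transaction(shadow_table)
write_page(current_table, 3, "page3-new")

# COMMIT: after flushing modified buffers (already "on disk" here), the address
# of the current page table overwrites the fixed root location; the current
# table thereby becomes the new shadow table in one atomic step.
root_pointer = current_table

# CRASH before commit: discard current_table; the root still leads to
# shadow_table, so reinstalling it recovers the pre-transaction state.
print(disk[shadow_table[3]])   # page3-old  (state seen after a pre-commit crash)
print(disk[root_pointer[3]])   # page3-new  (state after a successful commit)
```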
Advantages:
• This method requires fewer disk accesses to perform its operations.
• Recovery from a crash is inexpensive and quite fast.
• There is no need for undo and redo operations.
Disadvantages:
• Because updates relocate pages on disk, it is difficult to keep related database pages close
together on disk.
• During the commit operation, the old blocks that the shadow page table pointed to have to be
returned to the collection of free blocks; otherwise they become inaccessible garbage.
• Committing a single transaction requires writing multiple blocks, which decreases execution
speed.
• It is difficult to extend this technique to allow multiple transactions to execute
concurrently.
IMPORTANT QUESTIONS: -
1. Explain two multiversion techniques for concurrency control.
2. Discuss the problems of deadlocks and starvation.
3. Discuss two phase locking protocol and strict two phase locking protocols.
4. Explain these terms: conflict-serializable schedule, view-serializable schedule, and the
Shadow Paging technique.
5. Explain different types of locks that can be applied in brief.
6. Discuss the following terminology in Transaction support in SQL:
i) Access mode
ii) Isolation level
iii) Dirty read
iv) Non-repeatable read
7. Explain anomalies due to interleaved transactions. Write lock based concurrency control.
8. Write Recovery Techniques Based on Immediate Update operations.
9. Outline how Serializability is guaranteed by Two-Phase Locking.
10. Discuss how schedules are Characterized based on Recoverability & Serializability.
11. Discuss the UNDO and REDO operations and the recovery techniques that use each in
transactions.
12. Discuss the actions taken by the read_item and write_item operations on a database.
13. Explain anomalies due to interleaved transactions.
14. Write about any two Recovery Protocols
15. Discuss various Recovery Techniques Based on Immediate Update and Shadow Paging.
16. Explain the Transaction Support provided by SQL.
17. What type of lock is needed for insert and delete operations?
18. Discuss serializability under two-phase locking.