DBMS Q Bank

UNIT 1
Compare traditional file processing system and database management system.

| Feature | File System | Database Management System (DBMS) |
|---|---|---|
| Data consistency | Low | High |
| Data security | Low | High |
| Handling | Easy | Complex |
| Backup and recovery | Not efficient | Efficient |
| Operations | Storing, retrieving, and searching data manually | Storing, retrieving, and manipulating data using SQL queries |
| Purpose | To store a collection of raw data files on the hard disk | To create and manage databases |

What is DBMS? List out applications of DBMS

A database management system (DBMS) is essentially a computerized data-keeping system.

Applications:
Student information
Customer information
Stock information
Account information
Sales records

Draw and explain three level architecture of DBMS.

1. External level:
It is also called the view level, because several users can view their desired data at this level; the data is fetched internally from the database with the help of the conceptual- and internal-level mappings.

2. Conceptual level:
It is also called the logical level. The whole design of the database, such as the relationships among data and the schema of the data, is described at this level.

3. Internal level:
This level is also known as the physical level. It describes how the data is actually stored on the storage devices and is responsible for allocating space to the data. It is the lowest level of the architecture.

Who is DBA? Discuss the role of database administrator (DBA).

A Database Administrator (DBA) is the person responsible for controlling, maintaining, coordinating, and operating the database management system. Managing, securing, and taking care of the database system is their prime responsibility.

Role and Duties of Database Administrator (DBA) :

Decides hardware –
The DBA decides on economical hardware based on its cost, performance, and efficiency, and on what best suits the organisation. Hardware is the interface between the end users and the database.

Manages data integrity and security –
Data integrity needs to be checked and managed accurately, as it protects data from unauthorized use. The DBA keeps an eye on the relationships within the data to maintain data integrity.

Database design –
The DBA is responsible and accountable for the logical design, physical design, external model design, and integrity and security control.

Database implementation –
The DBA implements the DBMS and checks database loading at the time of implementation.

Query processing performance –
The DBA enhances query processing by improving its speed, performance, and accuracy.

Tuning Database Performance –
If users cannot get data speedily and accurately, the organization may lose business. By tuning SQL commands, the DBA can enhance the performance of the database.

Explain different levels of data abstraction

Internal Level/Schema

The internal schema defines the physical storage structure of the database. It is a very low-level representation of the entire database and contains multiple occurrences of multiple types of internal record. In ANSI terms, it is also called the "stored record".

Conceptual Schema/Level

The conceptual schema describes the structure of the whole database for the community of users. This schema hides information about the physical storage structures and focuses on describing data types, entities, relationships, etc.

This logical level sits between the user level and the physical storage view. There is only a single conceptual view of a single database.

External Schema/Level

An external schema describes the part of the database in which a specific user is interested. It hides the unrelated details of the database from the user. There may be any number ("n") of external views for a database.

Each external view is defined using an external schema, which consists of definitions of the various types of external record of that specific view.

An external view is the content of the database as it is seen by one specific user. For example, a user from the sales department will see only sales-related data, as in the sketch below.
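In SQL, an external view is typically realized with CREATE VIEW. A minimal sketch, assuming a hypothetical employee table with the columns shown:

-- The sales user's external view exposes only sales-related columns:
CREATE VIEW sales_view AS
SELECT emp_id, emp_name, region, total_sales
FROM employee
WHERE department = 'Sales';

-- The user then queries the view like an ordinary table:
SELECT emp_name, total_sales FROM sales_view;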

Explain the DBMS languages with examples: DDL, DML, and DCL.

1. DDL (Data Definition Language):

DDL (Data Definition Language) consists of the SQL commands used to define the database schema. It deals with descriptions of the database schema and is used to create and modify the structure of database objects. DDL is a set of SQL commands used to create, modify, and delete database structures, but not data. These commands are normally not used by a general user, who should access the database via an application.

List of DDL commands:

CREATE: This command is used to create the database or its objects (such as tables, indexes, functions, views, stored procedures, and triggers).

DROP: This command is used to delete objects from the database.

ALTER: This is used to alter the structure of the database.


TRUNCATE: This is used to remove all records from a table; all the space allocated for the records is also removed.

COMMENT: This is used to add comments to the data dictionary.

RENAME: This is used to rename an object existing in the database.
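A minimal sketch of these DDL commands in sequence. The student table and its columns are illustrative assumptions, and the exact syntax of COMMENT and RENAME varies slightly across systems (PostgreSQL-style shown):

CREATE TABLE student (rollno INT, stuname VARCHAR(30));  -- create an object
ALTER TABLE student ADD city VARCHAR(20);                -- alter its structure
TRUNCATE TABLE student;                                  -- remove all rows, keep the table
COMMENT ON TABLE student IS 'Q-bank demo table';         -- add a data-dictionary comment
ALTER TABLE student RENAME TO learner;                   -- rename the object
DROP TABLE learner;                                      -- delete the object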

2. DML(Data Manipulation Language):

The SQL commands that deal with the manipulation of data present in the database belong to DML (Data Manipulation Language), and this includes most of the SQL statements. (In some classifications, the DCL statements that control access to data are grouped together with the DML statements.)

List of DML commands:

INSERT: It is used to insert data into a table.

UPDATE: It is used to update existing data within a table.

DELETE : It is used to delete records from a database table.

LOCK: Controls table concurrency.

CALL: Calls a PL/SQL or Java subprogram.

EXPLAIN PLAN: Describes the access path to the data.
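A minimal sketch of the core DML commands, assuming the same illustrative student(rollno, stuname, city) table as in the DDL sketch:

INSERT INTO student (rollno, stuname, city) VALUES (1, 'Asha', 'Chennai');  -- add a row
UPDATE student SET city = 'Mumbai' WHERE rollno = 1;                        -- change existing data
SELECT rollno, stuname FROM student WHERE city = 'Mumbai';                  -- retrieve data
DELETE FROM student WHERE rollno = 1;                                       -- remove the row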

3. DCL (Data Control Language):

DCL includes commands such as GRANT and REVOKE which mainly deal with
the rights, permissions, and other controls of the database system.

List of DCL commands:

GRANT: This command gives users access privileges to the database.

REVOKE: This command withdraws the user’s access privileges given by
using the GRANT command.
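A minimal DCL sketch; the user name exam_user and the student table are illustrative assumptions:

GRANT SELECT, INSERT ON student TO exam_user;   -- give access privileges
REVOKE INSERT ON student FROM exam_user;        -- withdraw part of those privileges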

Enlist Application of DBMS.


Some of the applications of DBMS:

Airline reservation system: To store and manage flight schedules, passenger


information, and reservation details.

Banking system: To store customer account information, transactions, and


loans.

Education: To store student records, course information, and grades.

Hospital management: To store patient records, medical history, and billing


information.

Inventory management: To track inventory levels, orders, and shipments.

Manufacturing: To track production orders, materials, and inventory.

Online shopping: To store product information, customer orders, and payment


details.

Social media: To store user profiles, posts, and comments.

Telephone company: To store customer records, call logs, and billing


information.

Weather forecasting: To store weather data, historical records, and models.

Explain Pros And Cons of DBMS.

| Pros | Cons |
|---|---|
| Improved data security and integrity | High cost |
| Efficient data sharing and collaboration | Complex to set up and manage |
| Robust data backup and recovery | Can be a single point of failure |
| Scalable to meet growing needs | Vulnerable to hacking and data breaches |
| Supports complex queries and reporting | Requires specialized skills to use |
| Can be used to create different types of databases | |

Explain Data Independence.

Data Independence

Data independence can be explained using the three-schema architecture.

Data independence refers to the ability to modify the schema at one level of the database system without altering the schema at the next higher level.

1. Logical Data Independence

Logical data independence is the ability to change the conceptual schema without having to change the external schemas.

Logical data independence separates the external level from the conceptual view.

2. Physical Data Independence

Physical data independence is the capacity to change the internal schema without having to change the conceptual schema.

Physical data independence separates the conceptual level from the internal level.

Explain Difference Between DDL And DML.

| Feature | DDL | DML |
|---|---|---|
| Definition | Data Definition Language | Data Manipulation Language |
| Purpose | Defines the structure of the database | Manipulates the data in the database |
| Statements | CREATE, ALTER, DROP, RENAME | INSERT, UPDATE, DELETE, SELECT |
| Effect | Changes the schema of the database | Changes the data in the database |
| Rollback | Not possible | Possible |

Here is a more detailed explanation of each:

DDL (Data Definition Language): DDL is used to define the structure of the database. This includes creating tables, defining the columns in those tables, and specifying the constraints on those columns. DDL statements are generally not reversible, meaning that they cannot be rolled back.

DML (Data Manipulation Language): DML is used to manipulate the data in the database. This includes inserting new data, updating existing data, deleting data, and retrieving data. DML statements can be rolled back, meaning that they can be undone if they are executed incorrectly.

UNIT 2
List various mapping cardinalities of E-R diagram.

1. One-to-One (1:1):

Each instance of one entity is associated with exactly one instance of


another entity.

Example: A person has exactly one passport, and a passport belongs to


only one person.

2. One-to-Many (1:N or 1:∗):

Each instance of one entity is associated with multiple instances of another


entity.

Example: A university department has multiple students, but each student is


associated with only one department.

3. Many-to-One (N:1 or ∗:1):

Multiple instances of one entity are associated with exactly one instance of
another entity.

Example: Multiple employees work in one department, but each employee


belongs to only one department.

4. Many-to-Many (N:N or ∗:∗):

Multiple instances of one entity are associated with multiple instances of


another entity.

Often implemented using an intermediary or junction table.

Example: Students enroll in multiple courses, and each course has multiple
enrolled students.

What are constraints in DBMS ? explain with proper example.

Constraints in a Database Management System (DBMS) are rules and


conditions applied to the data in a database to ensure data integrity,
consistency, and accuracy. They help maintain the quality and reliability of the
data stored in a database. Here are some common types of constraints with examples (a combined SQL sketch follows the list):

1. Primary Key Constraint:

Ensures uniqueness and identifies each record uniquely.

Example: In a "Students" table, the "StudentID" column can be a primary


key to ensure each student has a unique identifier.

2. Unique Constraint:

Enforces uniqueness but doesn't necessarily identify records.

Example: A "Username" column in a "User" table should have a unique


constraint to ensure no two users have the same username.

3. Foreign Key Constraint:

Ensures referential integrity by linking a column to a primary key in another


table.

Example: In an "Orders" table, a "CustomerID" column can be a foreign key


linked to the "Customer" table's "CustomerID" primary key to ensure that
orders are associated with valid customers.

4. Check Constraint:

Defines a condition that must be true for data to be inserted or updated.

Example: A "Birthdate" column in a "Person" table can have a check


constraint to ensure that the date is not in the future.

5. Default Constraint:

Specifies a default value for a column if no value is provided during


insertion.

Example: A "Status" column in an "Employee" table can have a default
constraint set to 'Active' so that new employee records are automatically
marked as active.

6. Not Null Constraint:

Ensures that a column cannot have NULL values.

Example: An "Email" column in a "Customer" table can have a not null


constraint to ensure every customer has an email address on record.

7. Composite Constraint:

Involves multiple columns to define uniqueness or other conditions.

Example: In a "Book" table, a composite unique constraint on "ISBN" and


"Edition" ensures that no two books with the same ISBN and edition can
exist.

8. Check Constraint with Subquery:

Uses a subquery to validate data against values in another table.

Example: A "ManagerID" column in an "Employee" table can have a check


constraint with a subquery to ensure that only valid manager IDs from the
same table are allowed.

9. Domain Constraint:

Defines acceptable values within a predefined domain or range.

Example: A "Rating" column in a "Movie" table can have a domain


constraint that limits values to a range of 1 to 5 for user ratings.

10. Inheritance Constraint:

Enforces rules and relationships within an object-oriented database model,


such as superclass and subclass relationships.

Example: In an object-oriented database for a zoo, there could be an


inheritance constraint ensuring that specific attributes and methods are
inherited from a superclass "Animal" to subclasses like "Mammal" and
"Bird."

Define E-R Diagram. Discuss generalization in E-R Diagram.

An E-R diagram (Entity-Relationship Diagram) is a graphical representation of
the entities and relationships of a database. It is used to model the data of a
database in a way that is easy to understand and maintain.

Generalization is a technique used in E-R diagrams to group entities that share


common properties into a higher-level entity. The common properties are
represented by the attributes of the higher-level entity. The entities that are
grouped together are called subclasses, and the higher-level entity is called the
superclass.

Here are some of the benefits of using generalization in E-R diagrams:

It improves the flexibility of the database design.

It makes it easier to add new entities to the database.

It makes it easier to modify existing entities.

It reduces the redundancy of data in the database.

It improves the performance of queries on the database.

Differentiate strong entity set and weak entity set. Demonstrate the concept of both
using real-time example using E-R diagram

| Feature | Strong Entity Set | Weak Entity Set |
|---|---|---|
| Primary key | Has its own primary key | Has no primary key of its own; uses a partial key (discriminator) together with the owner's key |
| Existence | Exists independently | Exists only as a dependent of a strong entity |
| Representation | Represented by a single rectangle | Represented by a double rectangle |
| Relationship | Participates in ordinary (non-identifying) relationships | Related to its owner strong entity by an identifying relationship |

Explain specialization in E-R diagram

In specialization, an entity is divided into sub-entities based on their


characteristics. It is a top-down approach where higher level entity is
specialized into two or more lower level entities.

Example,

The EMPLOYEE entity in an employee management system can be specialized into DEVELOPER, TESTER, etc. In this case, common attributes like E_NAME and E_SAL become part of the higher-level entity (EMPLOYEE), and specialized attributes like TES_TYPE become part of the specialized entity (TESTER).

Define: Primary Key, Foreign key, NOT NULL constraints and referential integrity
(Foreign Key) constraint

Primary Key : A table typically has a column or combination of columns that


contain values that uniquely identify each row in the table. This column, or
columns, is called the primary key (PK) of the table and enforces the entity
integrity of the table. Because primary key constraints guarantee unique data,
they are frequently defined on an identity column.

Foreign key: A foreign key (FK) is a column or combination of columns that is used to establish and enforce a link between the data in two tables and to control the data that can be stored in the foreign-key table. In a foreign-key reference, a link is created between two tables when the column or columns that hold the primary-key value for one table are referenced by a column or columns in another table; that column becomes a foreign key in the second table.

NOT NULL: The NOT NULL constraint ensures that a column cannot store NULL values. By default, a column can hold NULLs, so NOT NULL must be specified explicitly, typically along with the CREATE TABLE statement. In general, constraints are rules that limit the type of data that can be stored in a particular column of a table.

Referential Integrity: Although the main purpose of a foreign key constraint is to


control the data that can be stored in the foreign key table, it also controls
changes to data in the primary key table. For example, if the row for a
salesperson is deleted from the Sales.SalesPerson table, and the salesperson's
ID is used for sales orders in the Sales.SalesOrderHeader table, the relational
integrity between the two tables is broken; the deleted salesperson's sales
orders are orphaned in the SalesOrderHeader table without a link to the data in
the SalesPerson table.

Explain Network model and Object Oriented model in brief


Network Model

An extension of the hierarchical model that can represent many-to-many relationships among the database records.

Represented in the form of a graph.

Allows 1:1, 1:M, and M:N relationships among the entities or members.

Object-Oriented Model

A database system that can work with complex data objects.

Objects mirror those used in object-oriented programming languages.

Everything is an object in object-oriented programming.

Objects have different properties and methods.

Works in concert with an object-oriented programming language to facilitate the


storage and retrieval of object-oriented data.

OOD databases support object data persistence, which means that objects can
be stored and retrieved from the database even when the program that created

them is not running.

Draw E-R diagram for bank management system

Draw E-R diagram for Hospital management system

List the steps in proper sequence in order to convert an ER diagram into tables
Step 1 − Conversion of strong entities
Step 2 − Conversion of weak entity
Step 3 − Conversion of one-to-one relationship
Step 4 − Conversion of one-to-many relationship
Step 5 − Conversion of many-many relationship
Step 6 − Conversion of multivalued attributes
Step 7 − Conversion of n-ary relationship
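As a small SQL illustration of steps 1 and 4, assuming a DEPARTMENT entity and a STUDENT entity linked by a one-to-many relationship:

-- Step 1: each strong entity becomes a table:
CREATE TABLE department (dept_id INT PRIMARY KEY, dept_name VARCHAR(30));

-- Step 4: a one-to-many relationship becomes a foreign key on the "many" side:
CREATE TABLE student (
    rollno  INT PRIMARY KEY,
    stuname VARCHAR(30),
    dept_id INT REFERENCES department(dept_id)
);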

UNIT 3
Explain keys(Super Key,Candidate Key,Primary Key,Foregin Key,Alternate Key)
The different types of keys in a relational database, in brief:

Super key: A super key is a set of one or more attributes that can uniquely
identify each row in a table.

Candidate key: A candidate key is a super key that does not contain any
redundant attributes.

Primary key: A primary key is a candidate key that is chosen to uniquely


identify each row in a table.

Foreign key: A foreign key is an attribute in one table that refers to the primary
key of another table.

Alternate key: An alternate key is a candidate key that is not chosen to be the
primary key.

Here is a table summarizing the differences between these keys:

| Key | Definition |
|---|---|
| Super key | A set of one or more attributes that can uniquely identify each row in a table. |
| Candidate key | A super key that does not contain any redundant attributes. |
| Primary key | A candidate key that is chosen to uniquely identify each row in a table. |
| Foreign key | An attribute in one table that refers to the primary key of another table. |
| Alternate key | A candidate key that is not chosen to be the primary key. |

List the relational algebra operators. Discuss any one such algebra operator with
suitable example.
Relational Algebra Operators:
The fundamental operations of relational algebra are Selection (σ), Projection (∏), Union (∪), Set Difference (−), Cartesian Product (×), and Rename (ρ); derived operations include Intersection, Join, and Division. One of these operations, Projection, is discussed with an example below.
Projection Operation:

Projection is an operation in relational algebra that allows us to select specific
attributes (columns) from a table while eliminating the rest. It is denoted by the
symbol ∏ and is used to produce a new table with only the specified attributes from
the original table.

Notation: ∏ A1, A2, …, An (r)


Example:
Consider a table named "CUSTOMER" with attributes NAME, STREET, and CITY.

| NAME | STREET | CITY |
|---|---|---|
| Jones | Main | Harrison |
| Smith | North | Rye |
| Hays | Main | Harrison |
| Curry | North | Rye |
| Johnson | Alma | Brooklyn |

Input:
To retrieve only the "NAME" and "CITY" attributes from the "CUSTOMER" table, we
can use the projection operation as follows:

∏ NAME, CITY (CUSTOMER)


Output:
The result will be a new table with only the selected attributes:

| NAME | CITY |
|---|---|
| Jones | Harrison |
| Smith | Rye |
| Hays | Harrison |
| Curry | Rye |
| Johnson | Brooklyn |

In this example, the projection operation (denoted by ∏) was used to create a new
table containing only the "NAME" and "CITY" attributes from the original
"CUSTOMER" table, effectively eliminating the "STREET" attribute. This operation
is valuable for extracting specific information and simplifying query results.
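In SQL, this projection corresponds to a SELECT with DISTINCT (relational-algebra projection removes duplicate rows, which a plain SQL SELECT does not):

SELECT DISTINCT name, city
FROM customer;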

Explain all types of join in detail with example

Inner Join: Returns records that have matching values in both tables.

Left Join: Returns all records from the left table, and the matched records from the right table.

Right Join: Returns all records from the right table, and the matched records from the left table.

Full Outer Join: Returns all records when there is a match in either the left or the right table.
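A minimal sketch of the four joins, assuming employee and department tables joined on a dept_id column (FULL OUTER JOIN is not supported by every system, e.g. older MySQL versions):

SELECT e.emp_name, d.dept_name FROM employee e INNER JOIN department d ON e.dept_id = d.dept_id;
SELECT e.emp_name, d.dept_name FROM employee e LEFT JOIN department d ON e.dept_id = d.dept_id;
SELECT e.emp_name, d.dept_name FROM employee e RIGHT JOIN department d ON e.dept_id = d.dept_id;
SELECT e.emp_name, d.dept_name FROM employee e FULL OUTER JOIN department d ON e.dept_id = d.dept_id;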

What is the difference between Open source and Commercial DBMS.

| Feature | Open Source DBMS | Commercial DBMS |
|---|---|---|
| Cost | Free or low-cost | Paid |
| Licensing | Open-source licenses allow users to modify and redistribute the software | Commercial licenses restrict the use of the software |
| Support | Community-driven support or paid support from third-party vendors | Paid support from the vendor |
| Features | May lack some features that are available in commercial DBMS | May have more features than open source DBMS |
| Security | May be less secure than commercial DBMS | May be more secure than open source DBMS |
| Scalability | May be less scalable than commercial DBMS | May be more scalable than open source DBMS |
| Flexibility | May be less flexible than commercial DBMS | May be more flexible than open source DBMS |

Explain the working of Cartesian product Operation and the Division Operation with
an appropriate example
Cartesian Product Operation:
The Cartesian Product in DBMS is an operation used to combine columns from two
relations. It is not typically meaningful on its own but becomes meaningful when
followed by other operations. This operation is also known as a Cross Product or
Cross Join.
Example - Cartesian Product:
Consider two relations, A and B:
Relation A:

| Column1 | Column2 |
|---|---|
| 1 | 1 |
| 1 | 1 |

Relation B:

| Column1 | Column2 |
|---|---|
| 1 | 1 |
To find all rows from the Cartesian product of A and B where "Column2" has the
value '1', you can use the selection operation (σ):
σ Column2 = '1' (A × B)
Output:

| A.Column1 | A.Column2 | B.Column1 | B.Column2 |
|---|---|---|---|
| 1 | 1 | 1 | 1 |
| 1 | 1 | 1 | 1 |

In this example, the Cartesian product (A × B) is first formed, combining the columns of both relations, and then a selection operation is applied to keep the rows where "Column2" equals '1'.
Division Operation:
The division operation in DBMS is used when you want to find entities that interact
with all entities in a set of different types of entities. It is typically required for queries
containing the keyword 'all.'
Examples:

1. List suppliers who supply all 'Red' Parts (supply schema):


This query involves the division operation to find suppliers who supply all 'Red'
parts.

2. Retrieve the names of employees who work on all the projects that 'John
Smith' works (company schema):
In this case, the division operation helps find employees who are involved in all
the same projects as 'John Smith.'

The division operation is useful for complex queries involving relationships between
entities and finding entities that meet specific criteria across different types of
entities.
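SQL has no division operator, so query 1 is usually encoded as a double NOT EXISTS: "there is no Red part that this supplier does not supply". A sketch assuming tables part(pno, colour) and supplies(sno, pno):

SELECT DISTINCT s.sno
FROM supplies s
WHERE NOT EXISTS (
    SELECT 1 FROM part p
    WHERE p.colour = 'Red'
      AND NOT EXISTS (
          SELECT 1 FROM supplies s2
          WHERE s2.sno = s.sno AND s2.pno = p.pno));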

UNIT 4
What is functional dependency? Explain its types in detail
Functional dependencies are constraints that describe the relationship between two sets of attributes, where one set can determine the value of the other. They are written X → Y, meaning the set of attributes X accurately determines the value of Y.
There are several types of functional dependencies:

1. Trivial Functional Dependency:

This occurs when the dependent set is always a subset of the determinant.

In other words, if X → Y and Y is a subset of X, it is called a trivial functional


dependency.

2. Non-Trivial Functional Dependency:

This type of dependency exists when the dependent set is not a subset of
the determinant.

In simpler terms, if X → Y and Y is not a subset of X, it is classified as a


non-trivial functional dependency.

3. Multivalued Functional Dependency:

In cases where the entities within the dependent set are not interdependent,
a multivalued functional dependency occurs.

For example, if a → {b, c} and there is no functional dependency between b


and c, it is termed as a multivalued functional dependency.

4. Transitive Functional Dependency:

A transitive functional dependency arises when the dependent set indirectly


relies on the determinant.

If a → b and b → c, the axiom of transitivity dictates that a → c. This is


known as a transitive functional dependency.

Compute the closure of the following set F of functional dependencies for relation schema r(A, B, C, D, E): A → BC, CD → E, B → D, E → A. List the candidate keys for R.
To compute the closure of the set of functional dependencies F and determine the
candidate keys for relation schema R (A, B, C, D, E), we can follow these steps:
Step 1: Start with the given functional dependencies F.
F = {A → BC, CD → E, B → D, E → A}
Step 2: Compute the closure of each attribute with respect to F.

Closure of A (A+):
Start with {A}. A → BC adds B and C; B → D adds D; CD → E adds E.
So A+ = {A, B, C, D, E}.

Closure of B (B+):
Start with {B}. B → D adds D; no other dependency applies (CD → E needs C).
So B+ = {B, D}.

Closure of C (C+):
Start with {C}. No dependency applies (CD → E also needs D).
So C+ = {C}.

Closure of D (D+):
Start with {D}. No dependency applies.
So D+ = {D}.

Closure of E (E+):
Start with {E}. E → A adds A; A → BC adds B and C; B → D adds D.
So E+ = {E, A, B, C, D}.

Since B, C, and D are not keys on their own, also check the pairs that could determine everything:
Closure of BC (BC+): B → D adds D; CD → E adds E; E → A adds A. So BC+ = {A, B, C, D, E}.
Closure of CD (CD+): CD → E adds E; E → A adds A; A → BC adds B. So CD+ = {A, B, C, D, E}.

Step 3: Determine the candidate keys for R.
The candidate keys are the minimal sets of attributes whose closure contains all attributes of R (A, B, C, D, E). A+ and E+ contain all attributes, so A and E are candidate keys. BC+ and CD+ also contain all attributes, and neither is reducible (B+, C+, and D+ are proper subsets of R), so BC and CD are candidate keys as well.
Therefore, the candidate keys for R are A, E, BC, and CD.

What is meant by normalization? Write its need. List and discuss various
normalization forms

Normalization is the process of organizing data in a database. This includes


creating tables and establishing relationships between those tables according to
rules designed both to protect the data and to make the database more flexible by
eliminating redundancy and inconsistent dependency.
The need for normalization:

1. To minimize redundancy.

2. To ensure only related data is stored in each table.

Forms of Normalization:

1. First Normal Form (1 NF)

2. Second Normal Form (2 NF)

3. Third Normal Form (3 NF)

4. Boyce-Codd Normal Form (BCNF, also called 3.5NF)

5. Fourth Normal Form (4NF)

6. Fifth Normal Form (5NF)

7. Sixth Normal Form (6NF)

Explain Armstrong's axioms

Armstrong's Axioms are a set of inference rules developed by William W. Armstrong in 1974. They are used to infer all the functional dependencies on a relational database.

The three primary axioms are:
Reflexivity: if Y ⊆ X, then X → Y.
Augmentation: if X → Y, then XZ → YZ for any set of attributes Z.
Transitivity: if X → Y and Y → Z, then X → Z.

Armstrong's Axioms are sound (applied to a set F, they generate only functional dependencies in its closure F+) and complete (repeated application generates all of F+).

State true or false: Any relation schema that satisfies BCNF also satisfies 3NF

True

BCNF is a stricter condition than 3NF: every relation schema in BCNF also satisfies 3NF, but a 3NF schema need not satisfy BCNF, since 3NF permits certain dependencies involving prime attributes that BCNF forbids.

UNIT 5
Explain steps of query processing with the help of a neat diagram.

Query processing is the activity of extracting data from the database. It takes several steps to fetch the data:

1. Parsing and translation: The query is checked for correct syntax and semantics and translated into an internal form, typically a relational-algebra expression.

2. Optimization: The optimizer generates equivalent evaluation plans, estimates their costs, and chooses the cheapest one.

3. Evaluation: The query-execution engine executes the chosen evaluation plan and returns the result.

Explain linear search and binary search algorithm for selection operation.
Linear Search:

1. Sequential Search: Linear search is a sequential search algorithm that scans


elements one by one.

2. Operation: It checks each element from the beginning until the desired element
is found or the list is exhausted.

3. Time Complexity: O(n) in the worst case, where n is the number of elements.

4. Use Case: Suitable for small lists or unsorted data.

Binary Search:

1. Divide and Conquer: Binary search divides the list into two halves and
eliminates one half based on comparison with the target value.

2. Operation: It compares the target value with the middle element and proceeds
to the left or right half accordingly.

3. Time Complexity: O(log n) in the worst case, where n is the number of


elements. Efficient for large, sorted lists.

4. Use Case: Ideal for sorted data where quick searches are needed.

Evaluation Expression Process in Query Optimization.

1. Parsing: The query is parsed to identify its components, such as tables,


conditions, and requested data.

2. Expression Tree: An expression tree is constructed to represent the query's


logical structure.

3. Cost Estimation: The optimizer estimates the cost of various execution plans
for the query.

4. Plan Generation: Different query execution plans are generated, considering


indexes, joins, and other operations.

5. Plan Selection: The optimizer selects the most cost-effective execution plan
based on the estimated costs.

6. Plan Execution: The chosen plan is executed to retrieve and deliver the
requested data.

7. Monitoring: Query performance is monitored during execution, and


adjustments may be made if necessary.

8. Result Delivery: The query results are delivered to the user or application.

Explain External Sort Merge Algorithm with example.

The External Sort-Merge Algorithm is a method for sorting large datasets that do not
fit entirely in memory (RAM). It divides the dataset into smaller chunks, sorts each
chunk in memory, and then merges these sorted chunks together to produce the
final sorted dataset. Here's an explanation with an example:
Original Dataset: [32, 15, 7, 10, 23, 42, 5, 17]

We want to sort this dataset using the External Sort-Merge Algorithm with a chunk
size of 3 (so each chunk can fit three numbers).
Step 1 - Divide into Chunks:

Chunk 1: [32, 15, 7]

Chunk 2: [10, 23, 42]

Chunk 3: [5, 17]

Step 2 - Sort Each Chunk:

Chunk 1: [7, 15, 32]

Chunk 2: [10, 23, 42]

Chunk 3: [5, 17]

Step 3 - Merge Sorted Chunks:

Initialize a priority queue with the smallest (first) element of each sorted chunk: {7, 10, 5}.

Pop 5 (Chunk 3), push 17 → queue {7, 10, 17}; output: [5]

Pop 7 (Chunk 1), push 15 → queue {15, 10, 17}; output: [5, 7]

Pop 10 (Chunk 2), push 23 → queue {15, 23, 17}; output: [5, 7, 10]

Pop 15 (Chunk 1), push 32 → queue {32, 23, 17}; output: [5, 7, 10, 15]

Pop 17 (Chunk 3, now exhausted) → queue {32, 23}; output: [5, 7, 10, 15, 17]

Pop 23 (Chunk 2), push 42 → queue {32, 42}; output: [5, 7, 10, 15, 17, 23]

Pop 32 (Chunk 1, now exhausted) → queue {42}; output: [5, 7, 10, 15, 17, 23, 32]

Pop 42 (Chunk 2, now exhausted) → queue empty; output: [5, 7, 10, 15, 17, 23, 32, 42]

The final sorted dataset is: [5, 7, 10, 15, 17, 23, 32, 42].

UNIT 6
Explain B-tree with Example.

B-Tree:

A B-Tree is a specialized m-way tree widely used for disk access. A B-Tree
of order m can have at most m-1 keys and m children. One of the main
reasons for using a B-Tree is its capability to store a large number of keys
in a single node and large key values while keeping the height of the tree
relatively small.

A B-Tree of order m contains all the properties of an M-way tree. Additionally, it has
the following properties:

Every node in a B-Tree contains at most m children.

Every node in a B-Tree except the root node and the leaf nodes contains at
least m/2 children.

The root node must have at least 2 children.

All leaf nodes must be at the same level.

Example of a B-tree of order 3 (each node holds at most two keys and three children):

         [ 10 | 20 ]
        /     |     \
   [5 8]  [15 18]  [25 30]

Keys less than 10 go to the left child, keys between 10 and 20 to the middle child, and keys greater than 20 to the right child; all leaf nodes are at the same level.

Define Static and Dynamic Hashing


Static Hashing:

1. Fixed Buckets: Static hashing uses a fixed number of buckets or bins to store
data.

2. No Overflow Handling: When a bucket becomes full, it cannot accommodate


more data, and overflow handling can be complex.

3. Limited Flexibility: It lacks flexibility in adapting to changes in data volume.

Dynamic Hashing:

1. Variable Buckets: Dynamic hashing allows the number of buckets to change


as needed based on the data volume.

2. Overflow Handling: It can dynamically allocate new buckets when needed,


making overflow handling more efficient.

3. Adaptable: Dynamic hashing is adaptable to changing data sizes and reduces


the risk of performance degradation.

Explain Indices in DBMS.


Indexing refers to a data structure technique that is used for quickly retrieving
entries from database files using some attributes that have been indexed. In
database systems, indexing is comparable to indexing in books. The indexing
attributes are used to define the indexing.
OR
Indexing is a technique for improving database performance by reducing the
number of disk accesses necessary when a query is run. An index is a form of data
structure. It’s used to swiftly identify and access data and information present in a
database table.
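In SQL, an index is created explicitly, and most systems build a B-tree structure by default. A sketch with assumed table and column names (DROP INDEX syntax differs across systems):

CREATE INDEX idx_emp_name ON employee (emp_name);  -- build the index once
SELECT * FROM employee WHERE emp_name = 'Asha';    -- can now use the index instead of a full scan
DROP INDEX idx_emp_name;                           -- remove it when no longer useful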

Define sparse index

A sparse index stores index entries for only some of the search-key values, typically one entry per data block. It needs less space and maintenance overhead than a dense index, but locating a record requires finding the index entry with the largest search-key value not exceeding the target and then scanning sequentially from there.

UNIT 7
List and discuss ACID properties of transactions.
ACID Properties of Transactions:

1. Atomicity:

Ensures that a transaction is treated as a single, indivisible unit.

Either all changes in a transaction are committed, or none are.

Example: A fund transfer is either completed entirely or not at all, even in


case of system failures.

2. Consistency:

Guarantees that a transaction brings the database from one consistent


state to another.

All database constraints and rules are maintained during and after the
transaction.

Example: If an order reduces the inventory of an item, the inventory count


cannot go negative.

3. Isolation:

Ensures that concurrent transactions do not interfere with each other.

Transactions appear to be executed sequentially, even if they run


concurrently.

Example: Two users booking seats on a flight simultaneously should not


result in overlapping seat allocations.

4. Durability:

Ensures that once a transaction is committed, its changes are permanent


and survive system failures.

Data changes are stored safely and are not lost, even in the event of a
crash.

Example: After a successful payment transaction, the record of the payment
is not lost and remains in the database.

ACID properties are crucial for maintaining data integrity and reliability in database
systems, especially in scenarios involving critical transactions.
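Atomicity in practice: the fund-transfer example written as one SQL transaction, a sketch assuming an account(acc_no, balance) table:

BEGIN;                                                        -- start the transaction
UPDATE account SET balance = balance - 100 WHERE acc_no = 1;  -- debit
UPDATE account SET balance = balance + 100 WHERE acc_no = 2;  -- credit
COMMIT;                                                       -- both updates become permanent together
-- On any error before COMMIT, ROLLBACK undoes both updates,
-- so money is never "half transferred".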

Define transaction. Explain various states of transaction with suitable diagram


Definition of transaction

A transaction in a database management system (DBMS) is a logical unit of work


that consists of one or more SQL statements. A transaction must be completed
successfully or rolled back completely. Transactions are used to ensure the integrity
and consistency of data in a database.
States of a transaction
A transaction can be in one of the following states:

Active: The transaction is in the process of being executed.

Partially committed: The transaction has executed some of its SQL


statements, but it has not yet completed all of them.

Committed: The transaction has successfully completed all of its SQL


statements and the changes have been made permanent to the database.

Aborted: The transaction has failed and all of the changes made by the
transaction have been rolled back.

Diagram of transaction states

        +--------+
        | Active |----------------+
        +--------+                |
             |                    |
             v                    v
  +----------------------+   +---------+
  | Partially committed  |-->| Aborted |
  +----------------------+   +---------+
             |
             v
        +-----------+
        | Committed |
        +-----------+

Transitions between transaction states
A transaction can transition between the following states:

Active to partially committed: When a transaction finishes executing its final SQL statement, it transitions from the active state to the partially committed state.

Partially committed to committed: When a transaction successfully


completes all of its SQL statements, it transitions from the partially committed
state to the committed state.

Partially committed to aborted: If a transaction fails, it transitions from the


partially committed state to the aborted state.

Active to aborted: If a transaction fails while it is in the active state, it


transitions to the aborted state.

State differences between conflict serializability and view serializability


The following table summarizes the key differences between conflict serializability and view serializability:

| Property | Conflict serializability | View serializability |
|---|---|---|
| Strength | Stronger | Weaker |
| Definition | Ensures that the final state of the database is the same as if the transactions had been executed serially in some order. | Ensures that the read and write operations of each transaction are equivalent to those of some serial schedule. |
| Examples | Schedules where no conflicts occur. | Schedules where conflicts occur but the read and write operations of each transaction are equivalent to those of some serial schedule. |
| Use cases | Suitable for applications where it is important to maintain the integrity of the database, such as financial systems. | Suitable for applications where it is acceptable to have some anomalies in the database, such as social media applications. |

Explain the concurrency problem. How does the strict two phase locking protocol
solve three problems concurrency? Explain with example

Concurrency problems arise in multi-user systems when multiple transactions try to
access and modify shared data concurrently. The three main concurrency problems
are:

1. Lost Updates: When multiple transactions try to update the same data
simultaneously, one update might overwrite the changes made by another
transaction, leading to lost information.

2. Uncommitted Data: A transaction reads data that is being modified by another


transaction that hasn't yet committed its changes. This can lead to inconsistent
or incorrect results if the first transaction relies on uncommitted data.

3. Inconsistent Retrievals: A transaction reads the same data multiple times


during its execution, but due to other transactions modifying the data in
between, it gets different or inconsistent values in each read, leading to
incorrect outcomes.

The Strict Two-Phase Locking Protocol helps to address these issues by enforcing
strict rules on how transactions acquire and release locks on data.

For example, consider a banking system with two transactions:

Transaction A transfers $100 from Account 1 to Account 2.

Transaction B deposits $50 into Account 1.

Without proper control, if both transactions access Account 1 concurrently, issues


like lost updates or inconsistent retrievals may occur. However, by using Strict Two-
Phase Locking:

Transaction A requests a lock on Account 1 before it starts its transfer operation.


It holds the lock until it completes the transaction (commit or abort).

Transaction B requests a lock on Account 1 before it starts its deposit operation.


It is put on hold until Transaction A releases the lock.

Once Transaction A completes and releases the lock, Transaction B acquires


the lock, ensuring that it operates on consistent and up-to-date data.

This protocol ensures that conflicting operations don't overlap, thereby preventing
lost updates, uncommitted data, and inconsistent retrievals by allowing only one
transaction to modify shared data at a time, ensuring data integrity and consistency.

Write differences between shared lock and exclusive lock
Shared Lock:

1. Purpose: Used for allowing multiple transactions to read a resource


simultaneously without conflicting with each other.

2. Concurrency: Permits multiple shared locks to be held on the same resource


simultaneously by different transactions.

3. Access Restriction: Allows concurrent read access but prohibits write access
by transactions holding shared locks.

4. Conflict: Doesn’t conflict with other shared locks but conflicts with exclusive
locks.

Exclusive Lock:

1. Purpose: Used to ensure exclusive access to a resource for a single


transaction for writing or modifying data.

2. Concurrency: Only one exclusive lock can be held on a resource at a time,


preventing concurrent access by other transactions.

3. Access Restriction: Prohibits both read and write access by other


transactions, ensuring exclusive write access by the transaction holding the
lock.

4. Conflict: Conflicts with both shared and exclusive locks, preventing any other
transactions from accessing the locked resource until released.
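Many SQL dialects expose the two lock modes directly at row level; a PostgreSQL-style sketch, reusing the assumed account table:

-- Shared lock: other transactions may also read the row, but writers must wait:
SELECT balance FROM account WHERE acc_no = 1 FOR SHARE;

-- Exclusive lock: no other transaction may lock or modify the row until commit:
SELECT balance FROM account WHERE acc_no = 1 FOR UPDATE;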

Explain timestamp based protocols in detail.


Timestamp-based protocols are concurrency control protocols that use
timestamps to order transactions. The transaction with the earliest timestamp is
allowed to proceed first, and the transaction with the latest timestamp is allowed to
proceed last. This helps to prevent lost updates and dirty reads.
Benefits of timestamp-based protocols:

Simple and efficient: Timestamp-based protocols are relatively simple and


efficient to implement and use.

Effective in preventing lost updates and dirty reads: Timestamp-based protocols are effective in preventing lost updates and dirty reads, even in high-contention environments.

Scalable: Timestamp-based protocols are scalable, meaning that they can be


used to manage a large number of transactions.

Challenges of timestamp-based protocols:

Validation overhead: Timestamp-based protocols require some overhead to


validate timestamps and check for conflicts.

Sensitivity to clock skew: Timestamp-based protocols can be sensitive to


clock skew, which can occur when the clocks of different nodes in a distributed
system are not synchronized.

Example of a timestamp-based protocol:


Optimistic concurrency control (OCC) is a timestamp-based protocol that allows
transactions to proceed without acquiring locks. Instead, transactions are validated
after they have been executed. If a transaction conflicts with another transaction, it
is aborted.

What is locking? Explain Two phase locking and its types


Locking

Locking is a concurrency control mechanism used in database management


systems (DBMS) to prevent two or more transactions from accessing and modifying
the same data simultaneously. This ensures that data remains consistent and
accurate, even when multiple transactions are accessing the database concurrently.
Two-phase locking (2PL)
2PL is a widely used locking protocol that enforces strict rules on how transactions
acquire and release locks. It ensures that a transaction acquires all locks it needs
before it starts modifying data and releases all locks it holds before committing or
aborting.
Types of 2PL locks:
2PL employs two main types of locks:

Shared lock (S lock): Allows multiple transactions to read the same data
concurrently.

Exclusive lock (X lock): Grants exclusive access to data, allowing only one
transaction to read or write the data.

2PL protocol:

2PL follows a strict two-phase approach:

1. Growing phase: During this phase, the transaction acquires all locks it needs
for data modification. It cannot release any locks until the growing phase is
complete.

2. Shrinking phase: Once the transaction completes its data modifications, it


enters the shrinking phase. It releases all locks it holds, ensuring that other
transactions can access the data.

Benefits of 2PL:
2PL offers several benefits:

Prevents lost updates: Ensures that concurrent modifications to the same


data don't overwrite each other.

Prevents dirty reads: Ensures that a transaction doesn't read data that has
been modified but not yet committed by another transaction.

Note on deadlocks: Deadlocks occur when two transactions wait for each other to release locks. Basic 2PL does not by itself prevent deadlocks, so deadlock prevention or detection schemes are typically used alongside it.

Conclusion:
2PL is a robust and effective concurrency control mechanism that plays a crucial
role in maintaining data integrity and consistency in database systems.

Explain data storage hierarchy with a neat diagram.

The storage hierarchy, from fastest and most expensive to slowest and cheapest, is: registers and cache, main memory, flash storage (SSD), magnetic disk, optical disk, and magnetic tape. Primary storage (cache, main memory) is volatile; secondary storage (flash, magnetic disk) and tertiary storage (optical disk, tape) are non-volatile.

Explain Schedule and its types.


In database management systems (DBMS), a schedule refers to the sequence in
which operations from multiple transactions are executed. It represents the order in
which the database system processes and completes the actions requested by
different users or applications.
Types of Schedules

Schedules can be categorized into two main types:

1. Serial Schedules: In a serial schedule, the transactions are executed one at a


time, ensuring that each transaction completes before the next one begins. This
prevents any conflicts or interleaving of operations, ensuring data consistency
and eliminating concurrency issues.

2. Non-Serial Schedules: In a non-serial schedule, the operations from multiple


transactions are interleaved, allowing multiple transactions to execute
concurrently. This can improve performance but introduces the risk of
concurrency problems, such as lost updates, dirty reads, and deadlocks.

Further Classification of Non-Serial Schedules


Non-serial schedules can be further classified into two types based on their ability to
maintain consistency:

1. Serializable Schedules: A serializable schedule is a non-serial schedule that


preserves the effects of a serial execution, ensuring that the final state of the
database is the same as if the transactions had been executed serially. This
maintains data consistency and prevents concurrency anomalies.

2. Non-Serializable Schedules: A non-serial schedule that does not preserve the


effects of a serial execution is considered non-serializable. This can lead to
concurrency anomalies, such as lost updates and dirty reads.

Explain Two Phase Commit Protocol in detail.


Two-phase commit (2PC) is a distributed transaction management protocol that
ensures the atomicity of transactions across multiple nodes in a distributed system.
It guarantees that either all participating nodes successfully commit or all nodes roll
back the transaction, preventing any inconsistencies in the distributed data.
The 2PC protocol involves two phases:

1. Prepare phase: The coordinator sends a "prepare" message to all participating


nodes, instructing them to prepare for committing the transaction. Each node
checks its local data and resources to ensure they are ready to commit. If all
nodes respond with a "yes" message, the protocol proceeds to the commit
phase.

2. Commit phase: The coordinator sends a "commit" message to all participating
nodes, instructing them to commit the transaction. Each node applies the
transaction's changes to its local data and resources and sends an
acknowledgment message to the coordinator. Once all acknowledgments are
received, the transaction is considered fully committed.

Benefits of 2PC:

Ensures atomicity of distributed transactions: Guarantees that either all


participating nodes commit the transaction or all nodes roll back, preventing any
inconsistencies in the distributed data.

Improves data integrity and consistency: Maintains the integrity of data


across multiple nodes, ensuring that changes made by a transaction are either
applied fully or not at all.

Robustness against failures: Can handle failures at any node during the
commit process, ensuring that the transaction is either fully committed or rolled
back consistently across all nodes.

Applications of 2PC:

Distributed banking systems: Ensures consistent updates to account


balances across multiple branches.

E-commerce transactions: Ensures consistent updates to order details and


payment information across multiple systems.

Airline reservation systems: Ensures consistent updates to flight bookings


and passenger information across multiple databases.

Conclusion:
2PC is a widely used and reliable protocol for managing distributed transactions,
ensuring data integrity and consistency across multiple nodes in a distributed
system. Its two-phase approach provides a structured and predictable way to
commit or roll back transactions, preventing inconsistencies and maintaining the
integrity of distributed data

UNIT 8
Explain cryptography techniques to secure data. 4

Cryptography is a crucial aspect of data security, employing mathematical
techniques to transform information into an unreadable format, making it
inaccessible to unauthorized parties. By encrypting data, we safeguard sensitive
information from unauthorized access, theft, or modification.
Here's a simplified explanation of cryptography techniques for data security:

1. Symmetric-key cryptography: This method utilizes a single secret key shared


between the sender and receiver for both encryption and decryption. Widely
used algorithms include AES (Advanced Encryption Standard) and DES (Data
Encryption Standard).

2. Asymmetric-key cryptography (Public-key cryptography): This method


employs two keys, a public key and a private key. The public key is freely
available for encryption, while the private key is kept confidential and used for
decryption. Common algorithms include RSA (Rivest-Shamir-Adleman) and
Elliptic Curve Cryptography (ECC).

3. Hashing: Hashing functions generate a unique fixed-length output, known as a


hash value or message digest, from an input message. This output serves as a
fingerprint of the data, ensuring its integrity and authenticity. Popular hashing
algorithms include SHA-256 (Secure Hash Algorithm 256) and MD5 (Message-
Digest Algorithm 5).

4. Digital signatures: Digital signatures provide a mechanism for verifying the


identity of the sender of a message and the integrity of the message itself. They
are created using asymmetric-key cryptography and allow the receiver to
confirm that the message has not been tampered with and was indeed sent by
the claimed sender.

5. Data encryption algorithms: Data encryption algorithms convert plain text into
ciphertext, an unintelligible form that can only be decrypted with the appropriate
key. This protects data from unauthorized access and ensures its confidentiality.

Explain SQL Injection 3


SQL Injection is a cybersecurity attack targeting databases through malicious SQL
code inserted into input fields of an application that interacts with a database.

1. Injection Point: Attackers exploit vulnerable input fields (e.g., login forms,
search bars) by inserting malicious SQL commands.

2. Malicious SQL Code: Attackers insert SQL statements (queries) within input
fields to manipulate the database or retrieve sensitive information.

3. Goal: Gain unauthorized access, manipulate, or extract data from the database
by altering SQL queries.

4. Impact: Can lead to data theft, unauthorized access, data corruption, or even
complete control of the database.

5. Prevention: Measures include using parameterized queries, input validation,


and sanitization to prevent the execution of malicious SQL commands from user
inputs.
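What an injection looks like at the SQL level, assuming a login query that the application builds by concatenating user input into the SQL string:

-- Intended query for a normal login:
SELECT * FROM users WHERE username = 'alice' AND password = 'secret';

-- If the attacker types  alice' --  into the username field, the comment
-- marker cuts off the password check entirely:
SELECT * FROM users WHERE username = 'alice' --' AND password = '...';

A parameterized query avoids this because the input is passed as a value and never spliced into the SQL text.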

Explain DAC and MAC models in detail. 3

DAC (Discretionary Access Control) and MAC (Mandatory Access Control) are
models used in computer security to manage access to resources. Here are simple
points explaining both:
DAC (Discretionary Access Control):

1. Controlled by Users: Access control is at the discretion of the resource owner


or users. Owners can determine who has access to resources and what level of
access they possess.

2. Flexibility: Users can grant or revoke access permissions to resources they


own. It allows for flexibility in managing access based on user-defined rules.

3. Example: File systems that allow users to set permissions (read, write,
execute) for files and folders based on their discretion use DAC.

MAC (Mandatory Access Control):

1. Controlled by System Policies: Access control is based on system-defined


policies rather than user discretion. Access is determined by labels, clearances,
and security classifications.

2. Rigid Control: Access decisions are based on predefined rules set by the
system or security administrator. Users cannot change or override these rules.

3. Example: Military or government systems that enforce strict access control


based on security clearances and labels use MAC. Users have access based

on their clearance level, and they can't modify these access permissions
themselves.

Difference Between: (1) Security / Integrity (2) Authentication / Authorization 6

(1) Security vs. Integrity:

| Aspect | Security | Integrity |
|---|---|---|
| Definition | Focuses on protecting systems, data, and resources against unauthorized access, breaches, and various threats; includes safeguarding the confidentiality, availability, and integrity of information. | Pertains to maintaining the accuracy, consistency, and trustworthiness of data over its lifecycle; ensures data remains unchanged and reliable, guarding against unauthorized alterations or corruption. |
| Main Objective | Protects the confidentiality, availability, and integrity of information assets from various threats and unauthorized access. | Ensures data remains accurate, consistent, and reliable throughout its lifecycle, safeguarding against unauthorized alterations, deletions, or corruption. |
| Key Concerns | Protects against unauthorized access, breaches, malware, cyber attacks, data leaks, and other security threats. | Ensures data accuracy, consistency, and reliability by preventing unauthorized changes, alterations, or corruption. |

(2) Authentication vs. Authorization:

| Aspect | Authentication | Authorization |
|---|---|---|
| Definition | The process of verifying the identity of a user or entity, confirming whether the user/entity is who they claim to be. | Determining the access rights or permissions granted to authenticated users or entities, specifying what actions or resources they can access based on their identity. |
| Objective | Confirms the identity of users/entities attempting to access a system or service. | Specifies the level of access or permissions granted to authenticated users/entities based on established policies and their identity. |
| Focus | Verifying the legitimacy of users or entities through credentials (passwords, biometrics, security tokens, etc.). | Defining and enforcing access-control policies to regulate what resources or actions users/entities can access or perform. |
| Purpose | Ensures that only legitimate and authorized users gain access to systems or services. | Controls and manages access privileges, determining what resources or actions users/entities are allowed to access or perform. |
| Example | Verifying a user's identity through a password, fingerprint scan, or two-factor authentication. | Assigning specific permissions (read, write, execute) to users based on their roles or policies. |

List different Encryption Schemes and explain them.

1. Symmetric Encryption:

Explanation: Uses a single key for both encryption and decryption.

How it Works: The sender and receiver share the same secret key. Data is encrypted using this key and decrypted using the same key.

Example: AES (Advanced Encryption Standard), DES (Data Encryption Standard).

2. Asymmetric Encryption (Public-Key Encryption):

Explanation: Involves a pair of keys - public and private - for encryption and decryption.

How it Works: The public key encrypts data while the private key decrypts it. Information encrypted with the public key can only be decrypted with the corresponding private key.

Example: RSA (Rivest-Shamir-Adleman), ECC (Elliptic Curve Cryptography).

3. Hash Functions:

Explanation: Converts data into a fixed-size string of characters, known as a hash or digest. Strictly speaking this is a one-way transformation rather than encryption, since a hash cannot be decrypted.

How it Works: Irreversibly transforms input data into a unique hash value. The same input always produces the same output.

Example: SHA-256 (Secure Hash Algorithm), MD5 (Message Digest Algorithm, now considered insecure).

4. Hybrid Encryption:

Explanation: Combines symmetric and asymmetric encryption for better security and efficiency.

How it Works: Uses symmetric encryption for encrypting the data and asymmetric encryption for sharing the symmetric key securely.

Example: SSL/TLS (Secure Sockets Layer / Transport Layer Security).

5. Quantum Encryption:

Explanation: Utilizes principles of quantum mechanics to secure communication.

How it Works: Relies on the properties of quantum physics (such as entanglement and superposition) to secure data transmission, offering resistance against hacking attempts.

Example: Quantum Key Distribution (QKD).

Each encryption scheme serves a different purpose and offers a different level of security; the choice depends on requirements such as performance, security needs, and the nature of the data being protected.
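As a concrete illustration of symmetric encryption inside a DBMS, Oracle's built-in DBMS_CRYPTO package can encrypt data with AES directly from PL/SQL. The sketch below is illustrative only: the hard-coded demo key must never be used in practice, and the DBA has to grant EXECUTE on DBMS_CRYPTO before it will run.

DECLARE
  -- Demo key only: 16 bytes = AES-128. Real keys must be generated and stored securely.
  v_key    RAW(16)  := UTL_RAW.CAST_TO_RAW('0123456789abcdef');
  v_plain  RAW(200) := UTL_RAW.CAST_TO_RAW('account no 12345');
  v_cipher RAW(256);
BEGIN
  -- AES-128 in CBC mode with PKCS#5 padding; the same key decrypts (symmetric)
  v_cipher := DBMS_CRYPTO.ENCRYPT(
                src => v_plain,
                typ => DBMS_CRYPTO.ENCRYPT_AES128
                     + DBMS_CRYPTO.CHAIN_CBC
                     + DBMS_CRYPTO.PAD_PKCS5,
                key => v_key);
  DBMS_OUTPUT.PUT_LINE('Ciphertext (hex): ' || RAWTOHEX(v_cipher));
END;
/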

UNIT 9
Write SQL statements (queries) for the following tables:
T1(rollno, stuname, age, city, branchcode)
T2(branchcode, branchname)

1. Retrieve students details whose branchcode is 5.

2. Find an average age of all students.

3. Add new branch in T2 table.

4. Display rollno, stuname and age of students whose city is Chennai.

5. Change age of student to 20 whose rollno is 1.

6. Delete student details whose age is 18.

7. Retrieve branch information in descending order.

1. Select * from T1 where branchcode=5;

2. Select avg(age) from T1;

3. Insert into T2 values (8, 'Mechanical'); -- sample branchcode and branchname

4. Select rollno, stuname, age from T1 where city='Chennai';

5. Update T1 set age=20 where rollno=1;

6. Delete from T1 where age=18;

7. Select * from T2 order by branchcode desc;

List and explain aggregation functions with suitable example.
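Assume a sample Employee table whose salary column holds the six values below; the original illustration did not survive, so this data is a reconstruction chosen to match every figure quoted in the answer.

| salary |
| ------ |
| 40     |
| 60     |
| 60     |
| 70     |
| 80     |
| NULL   |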

Count():

Count(*): Returns the total number of records, i.e., 6.

Count(salary): Returns the number of non-null values in the "salary" column, i.e., 5.

Count(Distinct salary): Returns the number of distinct non-null values in the "salary" column, i.e., 4.

Sum():

Sum(salary): Sum of all non-null values in the "salary" column, i.e., 310.

Sum(Distinct salary): Sum of all distinct non-null values in the "salary" column, i.e., 250.

Avg():

Avg(salary) = Sum(salary) / Count(salary) = 310 / 5 = 62.

Avg(Distinct salary) = Sum(Distinct salary) / Count(Distinct salary) = 250 / 4 = 62.5.

Min() and Max():

Min(salary): Minimum value in the "salary" column (excluding NULL), i.e., 40.

Max(salary): Maximum value in the "salary" column, i.e., 80.

Describe GRANT and REVOKE commands


Grant:
The GRANT (privilege) statement grants privileges on the database as a whole or on individual tables, views, sequences or procedures.

Revoke:
The REVOKE (privilege) statement withdraws privileges previously granted, on the database as a whole or on individual tables, views, sequences or procedures.
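A minimal sketch, reusing the T1 table from the earlier query question and an illustrative user named clerk:

-- Allow clerk to read and insert student rows
GRANT SELECT, INSERT ON T1 TO clerk;

-- Later, withdraw only the insert privilege
REVOKE INSERT ON T1 FROM clerk;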

Describe ROLLBACK and COMMIT commands


ROLLBACK: The ROLLBACK statement lets a user undo all the changes made in the current transaction since the last COMMIT.
COMMIT: The COMMIT statement lets a user save the changes made in the current transaction. These changes then become permanent.
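For example, reusing the T1 table from the earlier query question:

UPDATE T1 SET age = 21 WHERE rollno = 1;
ROLLBACK;   -- undoes the update; the student's age is unchanged

UPDATE T1 SET age = 21 WHERE rollno = 1;
COMMIT;     -- the update is now permanent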

UNIT 10
Write short note on cursor.

Cursors are database objects used to traverse the results of a SELECT SQL query.

A cursor is a temporary work area created in system memory when a SELECT SQL statement is executed. This temporary work area is used to store the data retrieved from the database and to manipulate it.

A cursor points to a certain location within a record set and allows the operator to move forward (and sometimes backward, depending upon the cursor type).

We can process only one record at a time.

The set of rows the cursor holds is called the active set (active data set).

Cursors are often criticized for their high overhead.

There are two types of cursors in PL/SQL:

Implicit cursors:
• These are created by default by Oracle itself when DML statements like INSERT, UPDATE, and DELETE are executed.
• They are also created when a SELECT statement that returns just one row is executed.
• We cannot use implicit cursors for user-defined work.

Explicit cursors:
• Explicit cursors are user-defined cursors written by the developer.
• They can be created when a SELECT statement that returns more than one row is executed.
• Even though the cursor stores multiple records, only one record can be processed at a time, which is called the current row.
• When you fetch a row, the current row position moves to the next row.
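A minimal explicit-cursor sketch, reusing the T1 student table from the earlier query question (the loop prints one row at a time, illustrating the current row):

DECLARE
  -- Explicit cursor over the student table
  CURSOR c_students IS
    SELECT rollno, stuname FROM T1;
  v_rollno T1.rollno%TYPE;
  v_name   T1.stuname%TYPE;
BEGIN
  OPEN c_students;
  LOOP
    FETCH c_students INTO v_rollno, v_name;  -- advances the current row
    EXIT WHEN c_students%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(v_rollno || ': ' || v_name);
  END LOOP;
  CLOSE c_students;
END;
/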

Explain Trigger with proper example


A trigger in DBMS is a special type of stored procedure that is automatically
executed when a specific event occurs on a database table. Triggers can be used to
enforce business rules, maintain data integrity, and automate certain actions within
a database.
Here is a short and simple explanation of triggers in DBMS with an example:

Example:
Consider a database table for a bank that stores customer account information. The
bank has a business rule that requires all accounts to have a minimum balance of
$100. To enforce this business rule, the bank can create a trigger that is executed
whenever a withdrawal is made from an account. The trigger will check the account
balance after the withdrawal and raise an error if the balance falls below $100.
Here is a pseudocode example of the trigger:
Trigger: account_balance_check

Event: withdrawal

Condition: account_balance < 100

Action: raise error

This trigger will ensure that no account balance ever falls below $100.
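One way to realize this rule in Oracle PL/SQL is sketched below; the accounts table, its balance column, and the error code are illustrative assumptions, since the example above is pseudocode.

CREATE OR REPLACE TRIGGER account_balance_check
BEFORE UPDATE OF balance ON accounts  -- fires before a withdrawal changes the balance
FOR EACH ROW
BEGIN
  IF :NEW.balance < 100 THEN
    -- Abort the statement; the withdrawal is rolled back
    RAISE_APPLICATION_ERROR(-20001,
      'Withdrawal denied: balance cannot fall below $100');
  END IF;
END;
/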

Triggers can be used to implement a variety of other business rules, such as:

Preventing users from deleting important data

Auditing changes to data

Automatically updating related data when a change is made to a table

Triggers can also be used to automate tasks, such as sending email notifications or
generating reports.
Overall, triggers are a powerful tool that can be used to improve the integrity and
functionality of database systems.

Explain stored procedures with proper example.


A stored procedure in a database management system (DBMS) is a pre-compiled
set of SQL statements that can be executed as a single unit. Stored procedures are
typically used to perform complex database operations, such as inserting, updating,
deleting, and querying data. They can also be used to implement business logic and
enforce data integrity.
Here is a short and simple explanation of stored procedures with an example:

Example:
Consider a database for a bank that stores customer account information. The bank
has a stored procedure that is used to open new accounts. The stored procedure
takes the customer's name, address, and initial deposit amount as parameters and
inserts a new record into the customer account table. The stored procedure also
verifies that the initial deposit amount is greater than zero and that the customer's
address is valid.
Here is a pseudocode example of the stored procedure:
Stored procedure: open_account

Parameters: customer_name, customer_address, initial_deposit_amount

Conditions: initial_deposit_amount > 0 and customer_address is valid

Action: insert a new record into the customer account table

This stored procedure encapsulates the logic for opening a new account into a
single unit. This makes it easier to maintain and reuse the logic.
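A simplified PL/SQL version is sketched below; the customer_account table, its columns, and the error codes are illustrative assumptions, and address validation is reduced to a NULL check.

CREATE OR REPLACE PROCEDURE open_account (
  p_name    IN VARCHAR2,
  p_address IN VARCHAR2,
  p_deposit IN NUMBER
) AS
BEGIN
  -- Enforce the business rules from the pseudocode
  IF p_deposit <= 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Initial deposit must be greater than zero');
  END IF;
  IF p_address IS NULL THEN
    RAISE_APPLICATION_ERROR(-20002, 'Customer address is not valid');
  END IF;

  INSERT INTO customer_account (customer_name, customer_address, balance)
  VALUES (p_name, p_address, p_deposit);
END;
/

It could then be invoked as, for example: EXEC open_account('Asha Patel', '12 MG Road, Chennai', 500); (a sample call with made-up values).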
Stored procedures can be used to implement a variety of other operations, such as:

Transferring money between accounts

Generating reports

Performing data validation

Enforcing business rules

Write PL/SQL code to print the sum of the even numbers between 1 and 100.

Here is PL/SQL code that prints the sum of the even numbers between 1 and 100:

DECLARE
  even_sum NUMBER := 0;                -- accumulator for the running total
BEGIN
  -- PL/SQL FOR loops always step by 1, so generate even numbers as i * 2
  FOR i IN 1..50 LOOP
    even_sum := even_sum + (i * 2);    -- adds 2, 4, ..., 100
  END LOOP;

  DBMS_OUTPUT.PUT_LINE('The sum of the even numbers between 1 and 100 is ' || even_sum);
END;
/

This code declares a variable called even_sum to store the running total. Because PL/SQL FOR loops always step by 1, the loop counter i runs from 1 to 50 and i * 2 produces each even number from 2 to 100, which is added to even_sum. Finally, the DBMS_OUTPUT.PUT_LINE() procedure prints the result, 2550, to the console.

To execute this code, save it as sum_even_numbers.sql and run:

sqlplus user/password @sum_even_numbers.sql

(Enable SET SERVEROUTPUT ON first so that DBMS_OUTPUT lines are displayed.) This will execute the PL/SQL block and print the sum of the even numbers between 1 and 100.

