Adbms Question Paper 1-1
Q2) Draw the E-R diagram for the following system and explain the notation and relationships. XYZ Hospital is a multispeciality hospital that includes a number of departments, rooms, doctors, nurses, compounders and other working staff. Patients having different kinds of ailments come to the hospital to get a checkup done by the concerned doctors. If required, they are admitted to the hospital and discharged after treatment. The aim of this case study is to design and develop a database for the hospital to maintain the records of the various departments, rooms and doctors in the hospital. It also maintains records of the regular patients, the patients admitted to the hospital, the checkups of patients done by the doctors, the patients that have been operated upon and the patients discharged from the hospital. [10]
-->
What is meant by a lock? Explain the two-phase locking protocol for concurrency control with an example. [10]
1. Shared lock:
o A shared lock allows the data item to be read by the transaction; several transactions may hold a shared lock on the same item at the same time, but none of them may write it.
2. Exclusive lock:
o Under an exclusive lock, the data item can be both read and written by the transaction.
o The lock is exclusive: while one transaction holds it, no other transaction can read or modify the same data item, so multiple transactions cannot modify the same data simultaneously.
Two-phase locking (2PL)
o The two-phase locking protocol divides the execution of a transaction into phases with respect to locking.
o When the execution of the transaction starts, it requests the locks it requires.
o The transaction then acquires all the locks it needs. The shrinking phase starts as soon as the transaction releases its first lock.
o After that point, the transaction cannot demand any new locks; it can only release the locks it has already acquired.
Growing phase: In the growing phase, the transaction may acquire new locks on data items, but it cannot release any lock.
Shrinking phase: In the shrinking phase, the transaction may release its locks, but it cannot acquire any new lock.
Example:
The following example shows how locking and unlocking work with 2PL for a transaction T1.
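A minimal Python sketch of the two-phase discipline for a transaction, assuming a simple in-memory lock set (the Transaction class and item names are invented for illustration, not a real DBMS lock manager):

```python
# Hypothetical sketch of the two-phase locking discipline (not DBMS code).
class Transaction:
    def __init__(self, name):
        self.name = name
        self.locks = set()
        self.shrinking = False  # becomes True after the first unlock

    def lock(self, item):
        # Growing phase: new locks may be acquired only before any release.
        if self.shrinking:
            raise RuntimeError("2PL violation: cannot lock after first unlock")
        self.locks.add(item)

    def unlock(self, item):
        # Shrinking phase: releasing the first lock ends the growing phase.
        self.shrinking = True
        self.locks.discard(item)

# Transaction T1 follows 2PL: all locks are acquired, then all released.
t1 = Transaction("T1")
t1.lock("A")    # growing phase
t1.lock("B")    # growing phase
t1.unlock("A")  # lock point passed; shrinking phase begins
t1.unlock("B")
```

Any attempt by T1 to lock a new item after its first unlock would raise an error, which is exactly the constraint that makes 2PL schedules serializable.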
1. Internal Level
o The internal level has an internal schema which describes the physical storage structure
of the database.
o The internal schema is also known as a physical schema.
o It uses the physical data model and defines how the data will be stored in a
block.
o The physical level is used to describe complex low-level data structures in detail.
The internal level is generally concerned with activities such as storage space allocation, record placement and access paths, and data compression and encryption techniques.
2. Conceptual Level
o The conceptual schema describes the design of a database at the conceptual level.
Conceptual level is also known as logical level.
o The conceptual schema describes the structure of the whole database.
o The conceptual level describes what data are to be stored in the database and also
describes what relationship exists among those data.
o In the conceptual level, internal details such as an implementation of the data structure
are hidden.
o Programmers and database administrators work at this level.
3. External Level
o At the external level, a database contains several schemas that are sometimes called
subschemas. A subschema is used to describe a different view of the database.
o An external schema is also known as view schema.
o Each view schema describes the part of the database that a particular user group is
interested in and hides the remaining database from that user group.
o The view schema describes the end user interaction with database systems.
In many instances, a user may use a mobile device to log in to a corporate
database server and work with the data there, depending on the specific requirements of
the mobile application. In other cases, the user can upload data collected at the
remote location to the company database, or download it and work with it on the mobile
device. The connection between the corporate and mobile databases is frequently
intermittent: a link is established only occasionally and only for a brief period of
time.
OR
-->1 Database and Need for DBMS: A database is a collection of data, usually
stored in electronic form. A database is typically designed so that it is easy to store and
access information.
A good database is crucial to any company or organisation. This is because the
database stores all the pertinent details about the company such as employee records,
transactional records, salary details etc.
Accuracy
A database is highly accurate because it enforces built-in constraints, checks and so on. This
means that the information available in a database is correct in most cases.
Security of data
Databases have various methods to ensure security of data. There are user logins
required before accessing a database and various access specifiers. These allow only
authorised users to access the database.
Data integrity
This is ensured in databases by using various constraints for data. Data integrity in
databases makes sure that the data is accurate and consistent in a database.
Characteristics of DBMS
Some well-known characteristics are present in the DBMS (Database Management
System). These are explained below.
2. Self-describing nature
o A DBMS (Database Management System) contains not only the data itself but also
metadata.
o Here the term metadata means data about data.
o For example, in a school database, the total number of rows and a table's name are
examples of metadata.
o The self-describing nature means that the database describes its own content and
structure: because all the data is stored in a structured format, the system can report
this information automatically.
2) Persistence: An OODBMS provides mechanisms for persisting objects directly into the database,
preserving their state between different program executions. Objects can be stored, retrieved,
updated, and deleted from the database, enabling seamless integration between the application
and the data storage layer.
3) Complex Data Types: An OODBMS supports complex data types, such as arrays, lists, sets, and
even other objects, within an object attribute. This allows for modeling and storing data
structures in a more natural and flexible manner.
4) Relationships and Associations: An OODBMS allows for defining and managing relationships
between objects. It supports various types of associations, including one-to-one, one-to-many,
and many-to-many relationships. This enables the representation of complex relationships and
associations between different objects in the database.
5) Inheritance and Polymorphism: An OODBMS supports inheritance, where objects can inherit
properties and behaviors from parent objects or classes. This promotes code reuse and
modularity. Polymorphism allows objects to be treated as instances of their parent classes,
providing flexibility and extensibility.
6) Query Language: An OODBMS provides a query language for retrieving and manipulating objects
stored in the database. The query language is typically based on object-oriented concepts and
syntax, allowing for expressive and powerful queries that can navigate object relationships and
perform complex operations.
9) Extensibility and Scalability: An OODBMS provides a framework for extending the data model
and adding new classes, attributes, and methods as application requirements evolve. It also
supports scalability, allowing the database to handle increasing data volumes and user loads
without sacrificing performance or functionality.
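The persistence and inheritance features above can be sketched in Python using the standard-library shelve module as a stand-in for an object store; a real OODBMS offers far richer transaction and query support, and the class names here are invented for illustration:

```python
# Hedged sketch: object persistence in the spirit of an OODBMS, using the
# pickle-backed shelve module from the standard library.
import os
import shelve
import tempfile

class Person:
    def __init__(self, name):
        self.name = name
    def describe(self):
        return f"Person {self.name}"

class Student(Person):           # inheritance: Student reuses Person
    def __init__(self, name, roll):
        super().__init__(name)
        self.roll = roll
    def describe(self):          # polymorphism: overrides parent behaviour
        return f"Student {self.name} (roll {self.roll})"

path = os.path.join(tempfile.mkdtemp(), "objects")
with shelve.open(path) as db:
    db["s1"] = Student("Meera", 42)   # object stored with its full state

with shelve.open(path) as db:         # state survives between "runs"
    obj = db["s1"]
    print(obj.describe())  # Student Meera (roll 42)
```

The stored object comes back as a `Student`, with its inherited attributes and overridden method intact, which is the seamless application/storage integration the feature list describes.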
Q4) a) Write about log-based recovery techniques with an example. [5]
--> Log-Based Recovery
o The log is a sequence of records. The log of each transaction is maintained in stable
storage so that if any failure occurs, the transaction can be recovered from it.
o If any operation is performed on the database, then it will be recorded in the log.
o But the process of storing the log records must be completed before the actual
transaction is applied to the database (write-ahead logging).
Let's assume there is a transaction to modify the City of a student. The following logs
are written for this transaction.
1. <Tn, Start>
2. <Tn, City, old_value, new_value>
3. <Tn, Commit>
When the system recovers after a failure:
1. If the log contains both the records <Ti, Start> and <Ti, Commit>, then
transaction Ti needs to be redone.
2. If the log contains the record <Ti, Start> but contains neither <Ti, Commit> nor
<Ti, Abort>, then transaction Ti needs to be undone.
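A minimal Python sketch of this redo/undo decision, assuming log records are simple (transaction, action) tuples; the record format is an assumption for illustration:

```python
# Sketch of the redo/undo classification performed during recovery.
def classify(log):
    started, committed, aborted = set(), set(), set()
    for txn, action in log:
        if action == "start":
            started.add(txn)
        elif action == "commit":
            committed.add(txn)
        elif action == "abort":
            aborted.add(txn)
    redo = started & committed             # start and commit both present
    undo = started - committed - aborted   # start but neither commit nor abort
    return redo, undo

log = [("T1", "start"), ("T1", "commit"), ("T2", "start")]
print(classify(log))  # ({'T1'}, {'T2'})
```

T1 is redone because both its start and commit records survived in stable storage; T2 is undone because the failure struck before it committed or aborted.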
Checkpoint
o A checkpoint is a mechanism by which all the previous log records are removed from the
system and the corresponding updates are stored permanently on disk.
o A checkpoint is like a bookmark. During the execution of transactions, such
checkpoints are marked, and log records are written for each step of the
transactions.
o When a checkpoint is reached, the transactions recorded so far are applied to the
database, and the log up to that point can be removed. The log file is then updated
with the steps of subsequent transactions until the next checkpoint, and so on.
o The checkpoint is used to declare a point before which the DBMS was in the consistent
state, and all transactions were committed.
b) Write short note on Grant and revoking privilege with example. [5]
-->Data Control Language (DCL) commands are used to control access to the data
stored in the database. Grant and Revoke belong to this category of commands.
DCL is a component of SQL.
1. Grant :
The SQL Grant command is used to provide privileges on database
objects to a user. This command also allows users to grant permissions to
other users.
Syntax:
grant privilege_name on object_name
to {user_name | public | role_name}
Here privilege_name is the permission to be granted, object_name is the
name of the database object, user_name is the user to whom access should be
given, and public is used to grant access to all users.
2. Revoke :
The Revoke command withdraws privileges on database objects from a user, if any
were granted. It performs the opposite operation to the Grant command. When a
privilege is revoked from a particular user U, the privileges granted to other
users by user U are also revoked.
Syntax:
revoke privilege_name on object_name
from {user_name | public | role_name}
Example:
grant insert, select on accounts to Ram
By the above command, the user Ram is granted permissions on the accounts
database object: he can query or insert into accounts.
revoke insert, select on accounts from Ram
By the above command, the user Ram's permissions to query or insert on the
accounts database object are removed.
Differences between the Grant and Revoke commands:
Grant: assigns access rights to users. Revoke: withdraws the access rights of users.
OR
Protecting data against loss, corruption, disasters (human-caused or natural), and other
problems is one of the IT organizations' top priorities. To avoid this loss, implementing
an efficient and effective set of backup operations can be difficult.
The term backup has become synonymous with data protection over the past several
decades and may be accomplished via several methods. Backup software applications
reduce the complexity of performing backup and recovery operations. Backing up data
is only one part of a disaster protection plan and may not provide the level of data and
disaster recovery capabilities desired without careful design and testing.
You can manually perform the backup by copying the data to a different location or
automatically using a backup program. Each backup program has its approach in
executing the backup.
There are four most common backup types implemented and generally used in most of
these programs, such as:
1. Full backup
2. Incremental backup
3. Differential backup
4. Mirror backup
A type of backup defines how data is copied from source to destination and lays the
data repository model's grounds or how the back-up is stored and structured.
There are some types of backup that are better in certain locations. If we perform cloud
backup, then incremental backups are generally a better backup type because they
consume fewer resources. We might start with a full backup in the cloud and then shift
to incremental backups. Mirror backup, though, is typically more of an on-premises
approach and often involves disks.
Full backups
The most basic and complete type of backup operation is a full backup. As the name
implies, this backup type makes a copy of all data to a storage device, such as a disk or
tape. The primary advantage of performing a full backup during every operation is that
a complete copy of all data is available with a single media set.
A full backup takes the shortest time to restore data, a metric known as the recovery
time objective. However, the disadvantages are that it takes longer to perform than
other backup types and requires more storage space.
Thus, full backups are typically run only periodically. Data centers with a small amount of
data may choose to run a full backup daily or even more often in some cases. Typically,
backup operations employ a full backup in combination with either incremental or
differential backups.
Incremental backups
An incremental backup operation will result in copying only the data that has changed
since the last backup operation of any type. An organization typically uses the modified
timestamp on files and compares them to the last backup timestamp.
Backup applications track and record the date and time that backup operations occur to
track files modified since these operations. Because an incremental backup will only
copy data since the last backup of any type, an organization may run it as often as
desired, with only the most recent changes stored.
The benefit of an incremental backup is that it copies a smaller amount of data than a
full. Thus, these operations will have a faster backup speed and require fewer media to
store the backup.
Differential backups
A differential backup operation is similar to an incremental the first time it is performed,
in that it will copy all data changed from the previous backup. However, each time it is
run afterward, it will continue to copy all data changed since the previous full backup.
Therefore, it will store more backed up data than an incremental on subsequent
operations, although typically far less than a full backup.
Differential backups require more space and time to complete than incremental
backups, although less than full backups. From these three primary types of backup, it is
possible to develop an approach for comprehensive data protection. An organization
often uses one of the following backup settings:
o Full daily
o Full weekly + differential daily
o Full weekly + incremental daily
Full backup daily requires the most amount of space and will also take the most amount
of time. However, more total copies of data are available, and fewer media pieces are
required to perform a restore operation. As a result, implementing this backup policy
has a higher tolerance to disasters and provides the least time to restore since any data
required will be located on at most one backup set.
A full backup weekly coupled with running incremental backups daily will deliver the
shortest backup time during weekdays and use the least storage space. However, fewer
copies of data are available, and the restore time is the longest, since an organization
may need to use six sets of media to recover the necessary information.
A weekly full backup with daily differential backups delivers results in between the other
alternatives; that is, more backup media sets are required to restore than with a daily full
policy, although less than with a daily incremental policy. Also, restoring time is less than
using daily incremental backups and more than daily full backups. To restore data from
a particular day, at most two media sets are required, diminishing the time needed to
recover and the potential for problems with an unreadable backup set.
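The file-selection rule that distinguishes incremental from differential backups can be sketched in Python; the file names and timestamps below are invented, and timestamps are plain numbers rather than real dates:

```python
# Sketch of backup file selection based on modified timestamps.
def incremental(files, last_any_backup):
    # Incremental: copy files changed since the last backup of ANY type.
    return [f for f, mtime in files.items() if mtime > last_any_backup]

def differential(files, last_full_backup):
    # Differential: copy files changed since the last FULL backup.
    return [f for f, mtime in files.items() if mtime > last_full_backup]

files = {"a.txt": 5, "b.txt": 12, "c.txt": 20}
last_full, last_incr = 10, 15   # full backup at t=10, incremental at t=15
print(incremental(files, last_incr))   # ['c.txt']
print(differential(files, last_full))  # ['b.txt', 'c.txt']
```

The differential set keeps growing between full backups (here it already includes b.txt and c.txt), while each incremental captures only the changes since the previous backup of any type, which is why incrementals are smaller but need more media sets to restore.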
Mirror backups
A mirror backup is comparable to a full backup. This backup type creates an exact copy
of the source data set, but only the latest data version is stored in the backup repository
with no track of different versions of the files. All the different backed up files are stored
separately like they are in the source.
One of the benefits of mirror backup is a fast data recovery time. It's also easy to access
individual backed up files.
One of the main drawbacks, though, is the amount of storage space required. With that
extra storage, organizations should be wary of cost increases and maintenance needs. If
there's a problem in the source data set, such as corruption or deletion, the mirror
backup experiences the same. As a result, it is not advisable to rely on mirror backups
for all data protection needs; other backup types should be kept for the data as well.
One specific kind of mirror, disk mirroring, is also known as RAID 1. This process
replicates data to two or more disks. Disk mirroring is a strong option for data that
needs high availability because of its quick recovery time. It's also helpful for disaster
recovery because of its immediate failover capability. Disk mirroring requires at least two
physical drives. If one hard drive fails, an organization can use the mirror copy. While
disk mirroring offers comprehensive data protection, it requires a lot of storage capacity.
Smart backups
Smart backup is a backup type that combines the full, differential and incremental
backup types with cleanup operations to efficiently manage the backup settings and the
free disk space in the destination. The Smart backup type starts with a full backup.
The advantage is that we don't need to worry about how many backups to keep to
fit on the destination drive, or which backup version to clean up or merge, as the
backup software (here, Backup4all) will take care of that.
--> Mandatory Access Control (MAC) is a security model used in Database Management
Systems (DBMS) to enforce data access restrictions based on predefined security policies. In
MAC, access control decisions are made by the system itself, and users have limited control
over granting or revoking access permissions.
Here are some key points about Mandatory Access Control in DBMS:
Security Levels and Labels: MAC assigns security levels or labels to both users and data objects.
Security levels are typically defined using a hierarchical structure, such as a classification
scheme, where each level represents a different level of sensitivity or confidentiality. Data
objects are labeled with their respective security levels.
Access Control Policies: MAC enforces access control policies based on security levels. These
policies define the rules and restrictions for accessing data objects. For example, a policy may
state that a user with a "Top Secret" security level can only read or modify data objects labeled
with the same or lower security level.
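A minimal Python sketch of such a policy check (the "no read up" rule), assuming a simple four-level classification scheme; the level names and ordering are an assumption for illustration:

```python
# Sketch of a MAC read check: a user may read an object only if the user's
# clearance dominates (is at least as high as) the object's security label.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(user_clearance, object_label):
    # Permit reading objects at the same or a lower security level only.
    return LEVELS[user_clearance] >= LEVELS[object_label]

print(can_read("Top Secret", "Secret"))    # True  (read down is allowed)
print(can_read("Confidential", "Secret"))  # False (read up is denied)
```

The key point is that this check lives in the system, not with the user: no user action can grant a "Confidential" user access to a "Secret" object.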
Data Classification: MAC requires the classification of data objects into specific security levels
based on their sensitivity or importance. This classification is typically done during the design or
setup phase, where administrators assign appropriate labels to data objects.
Need-to-Know Principle: MAC follows the principle of "need-to-know," meaning that users are
granted access only to the data objects they need to perform their authorized tasks. Access to
higher-level data objects is restricted to users with appropriate security clearances.
Centralized Control: In MAC, access control decisions are centralized and enforced by the DBMS
itself. The system administrator or security administrator defines the access control policies,
sets the security levels, and manages user privileges. Users have limited control over modifying
access permissions, as these decisions are primarily determined by the system.
Strong Data Isolation: MAC ensures strong data isolation by strictly controlling the flow of
information between different security levels. It prevents unauthorized users from accessing or
modifying data objects with higher security levels, even if they have legitimate access to lower-
level data objects.
Enhanced Security: MAC provides a higher level of security compared to other access control
models, such as Discretionary Access Control (DAC). It reduces the risk of data breaches, insider
threats, and unauthorized access, as access permissions are determined by system policies
rather than individual user discretion.
Compliance and Auditability: MAC helps organizations comply with security regulations and
standards by providing a structured approach to data access control. It also enables
comprehensive audit trails, allowing for monitoring and tracking of access activities for
compliance, forensics, and security analysis purposes.
Q5) Explain inter-query and intra-query parallelism in detail with an example. [10]
--> Inter-query parallelism:
Inter-query parallelism executes multiple independent queries in parallel, each on a
different processor, which increases transaction throughput.
For example: if there are 6 queries and each query takes 3 seconds to evaluate,
the total time taken to complete the evaluation sequentially is 18 seconds. With
inter-query parallelism (one processor per query), the task completes in only 3 seconds.
Intra-query parallelism:
Intra-query parallelism refers to the execution of a single query in parallel on
multiple CPUs, using, for example, a shared-nothing parallel architecture. Two
types of approaches are used:
First approach –
In this approach, each CPU can execute the duplicate task against some data portion.
Second approach –
In this approach, the task can be divided into different sectors with each CPU
executing a distinct subtask.
For example: if we have 6 queries, each taking 3 seconds to complete the
evaluation process, the total time to complete the evaluation sequentially is 18
seconds. But we can finish the task in only 3 seconds using intra-query
evaluation, as each query is divided into sub-queries that run in parallel.
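A toy Python sketch of inter-query parallelism using a thread pool from the standard library; run_query is an invented stand-in for real query evaluation, and a real DBMS would dispatch each query to its own processor:

```python
# Toy sketch: independent "queries" evaluated concurrently on a thread pool.
from concurrent.futures import ThreadPoolExecutor

def run_query(q):
    # Stand-in for evaluating one query against the database.
    return f"result of {q}"

queries = [f"Q{i}" for i in range(1, 7)]   # six independent queries
with ThreadPoolExecutor(max_workers=6) as pool:
    # All six queries run concurrently; map preserves the input order.
    results = list(pool.map(run_query, queries))
print(results)
```

With six workers, the six queries run side by side, mirroring the 18-seconds-to-3-seconds example above; intra-query parallelism would instead split a single run_query call into sub-tasks.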
OR
Explain the difference between homogeneous and heterogeneous databases with examples. [10]
Homogeneous Database: all sites use the same DBMS software and the same underlying
data model; for example, every site in the distributed system runs Oracle.
Heterogeneous Database: different sites may run different DBMS products with
different data models; for example, one site runs Oracle while another runs MySQL.
Differences:
Aspect: Homogeneous Database / Heterogeneous Database
Data Model: uses a unified data model / may have varying data models
Query Language: uses the same query language / may have different query languages
Vendor Dependency: dependent on a single vendor / not tied to a specific vendor