
Q2) Draw the E-R diagram for the following system and explain the notation and relationships. XYZ hospital is a multispeciality hospital that includes a number of departments, rooms, doctors, nurses, compounders and other working staff. Patients having different kinds of ailments come to the hospital to get a checkup done by the concerned doctors. If required, they are admitted to the hospital and discharged after treatment. The aim of this case study is to design and develop a database for the hospital to maintain the records of the various departments, rooms and doctors in the hospital. It also maintains records of the regular patients, patients admitted to the hospital, the checkups of patients done by the doctors, the patients that have been operated upon, and patients discharged from the hospital. [10]

What is meant by a lock? Explain the two-phase locking protocol for concurrency control with an example. [10]

--> Lock-Based Protocol


In this type of protocol, a transaction cannot read or write a data item until it acquires an appropriate lock on it. There are two types of locks:

1. Shared lock:

o It is also known as a read-only lock. Under a shared lock, the data item can only be read by the transaction.
o It can be shared among transactions, because a transaction holding a shared lock cannot update the data item.

2. Exclusive lock:

o Under an exclusive lock, the data item can be both read and written by the transaction.
o The lock is exclusive: only one transaction can hold it at a time, so multiple transactions cannot modify the same data item simultaneously.

Two-phase locking (2PL)

o The two-phase locking protocol divides the execution of a transaction into three parts.
o In the first part, when the execution of the transaction starts, it seeks permission for the locks it requires.
o In the second part, the transaction acquires all the locks. The third part starts as soon as the transaction releases its first lock.
o In the third part, the transaction cannot demand any new locks; it only releases the locks already acquired.

There are two phases of 2PL:

Growing phase: In the growing phase, a transaction may acquire new locks on data items, but none can be released.

Shrinking phase: In the shrinking phase, existing locks held by the transaction may be released, but no new locks can be acquired.

If lock conversion is allowed, then the following rules apply:
1. Upgrading a lock (from S(a) to X(a)) is allowed only in the growing phase.
2. Downgrading a lock (from X(a) to S(a)) must be done in the shrinking phase.

Example:

The following shows how unlocking and locking work with 2PL for two transactions T1 and T2 (the steps refer to the order of lock and unlock operations in their schedule).

Transaction T1:

o Growing phase: from step 1-3
o Shrinking phase: from step 5-7
o Lock point: at 3

Transaction T2:

o Growing phase: from step 2-6
o Shrinking phase: from step 8-9
o Lock point: at 6
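The growing/shrinking discipline above can be sketched in Python. This is a toy, single-threaded illustration: the Transaction class and its method names are made up for the sketch and are not from any real DBMS.

```python
# Toy sketch of the 2PL rule: once a transaction releases any lock
# (the shrinking phase begins), it may not acquire new locks.

class Transaction:
    def __init__(self, name):
        self.name = name
        self.locks = set()
        self.shrinking = False  # becomes True after the first release

    def acquire(self, item):
        if self.shrinking:
            raise RuntimeError(f"{self.name}: 2PL violation - "
                               "no new locks after the first release")
        self.locks.add(item)

    def release(self, item):
        self.shrinking = True   # the lock point has been passed
        self.locks.discard(item)

t1 = Transaction("T1")
t1.acquire("A")        # growing phase
t1.acquire("B")        # lock point reached after the last acquire
t1.release("A")        # shrinking phase begins
try:
    t1.acquire("C")    # illegal under 2PL
except RuntimeError as e:
    print(e)           # T1: 2PL violation - no new locks after the first release
```

Enforcing this rule on every transaction guarantees conflict-serializable schedules, which is why 2PL is the classic concurrency-control protocol.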
Q3) a) Describe the database three-schema architecture. [5]

--> Three schema Architecture


o The three-schema architecture is also called the ANSI/SPARC architecture or three-level architecture.
o This framework is used to describe the structure of a specific database system.
o The three-schema architecture is also used to separate the user applications from the physical database.
o The three-schema architecture contains three levels. It breaks the database down into three different categories.

The three-schema architecture consists of the external, conceptual and internal levels, connected by mappings:

o Mapping is used to transform requests and responses between the levels of the architecture.
o Mapping adds processing time, so it is less suitable for a small DBMS.
o In external/conceptual mapping, the DBMS transforms a request from the external level into a request on the conceptual schema.
o In conceptual/internal mapping, the DBMS transforms a request from the conceptual level to the internal level.
Objectives of Three-schema Architecture
The main objective of the three-level architecture is to enable multiple users to access the same data with a personalized view while storing the underlying data only once. Thus it separates the users' views from the physical structure of the database. This separation is desirable for the following reasons:

o Different users need different views of the same data.
o The way in which a particular user needs to view the data may change over time.
o The users of the database should not have to worry about the physical implementation and internal workings of the database, such as data compression and encryption techniques, hashing, optimization of the internal structures, etc.
o All users should be able to access the same data according to their requirements.
o The DBA should be able to change the conceptual structure of the database without affecting the users' views.
o The internal structure of the database should be unaffected by changes to the physical aspects of storage.

1. Internal Level

o The internal level has an internal schema which describes the physical storage structure of the database.
o The internal schema is also known as the physical schema.
o It uses the physical data model and defines how the data will be stored in a block.
o The physical level is used to describe complex low-level data structures in detail.

The internal level is generally concerned with the following activities:

o Storage space allocation.
For example: B-trees, hashing, etc.
o Access paths.
For example: specification of primary and secondary keys, indexes, pointers and sequencing.
o Data compression and encryption techniques.
o Optimization of internal structures.
o Representation of stored fields.

2. Conceptual Level

o The conceptual schema describes the design of a database at the conceptual level.
Conceptual level is also known as logical level.
o The conceptual schema describes the structure of the whole database.
o The conceptual level describes what data are to be stored in the database and also
describes what relationship exists among those data.
o At the conceptual level, internal details such as the implementation of the data structures are hidden.
o Programmers and database administrators work at this level.

3. External Level
o At the external level, a database contains several schemas, sometimes called subschemas. A subschema describes a particular view of the database.
o An external schema is also known as a view schema.
o Each view schema describes the part of the database that a particular user group is interested in and hides the remaining database from that group.
o The view schema describes the end-user interaction with the database system.

b) Write short note on mobile database. [5]

--> Mobile Database:

A mobile database is a database that can be accessed over a mobile (wireless) network from a mobile computing device, with a wireless connection between the client and the server. In the modern world, mobile cloud computing is expanding quickly and has enormous potential for the database industry. Mobile databases work with a variety of devices, including those powered by iOS and Android. Couchbase Lite and ObjectBox are popular examples of mobile databases.

A mobile database environment has the following components:

o A corporate database server and DBMS, for storing the corporate data and providing the corporate applications.
o A remote database and server, for storing the mobile data and providing the mobile applications.
o A two-way communication link between the mobile DBMS and the corporate DBMS.

Features of Mobile Database:

o Mobile databases serve the growing number of people who use laptops, smartphones, and PDAs on the go.
o A cache is kept to prevent frequent transactions from being lost due to connection failure.
o Mobile databases are physically separate from the main database server.
o Mobile databases are hosted on mobile devices.
o Mobile databases can communicate with other mobile clients or with a centralized database server from distant locations.
o Because connections are unreliable or sometimes nonexistent, a mobile database lets mobile users operate without a wireless connection (disconnected operation).
o A mobile database is used to analyze and manage information on mobile devices.

A mobile database setup consists of three parties, described below:

o Fixed Hosts:
With the aid of database servers, they handle transactions and manage data.
o Mobile Units:
These are portable computers that move within a geographical area (cell) and connect to base stations through the cell towers covering that area.
o Base Stations:
These are two-way radios installed in fixed places that allow communication between the fixed hosts and the mobile units.

In many instances, depending on the specific requirements of the mobile application, a user may log in to a corporate database server from a mobile device and work with data there. In other cases, the user can upload data collected at a remote location to the company database, or download it and work with it on the mobile device. The interaction between the corporate and mobile databases is frequently intermittent, with a link established only for brief periods of time.

Additional functionalities of a mobile DBMS include the following capabilities:

o It should communicate with the centralized, primary database through different modes.
o Data should be replicated between mobile devices and the centralized DBMS server.
o It should capture data from the internet.
o Mobile devices should be capable of dealing with that data.
o Mobile devices must be able to analyze the data.
o It must support building personalized and customized applications.

Limitations:

o Wireless bandwidth is restricted.
o It is very difficult to make such a database theft-proof.
o Mobile devices run on limited battery power.
o Wireless communication speed suffers in mobile databases.
o It is less secure than a centralized database.

OR

a) What is the need for a database? Write the characteristics of DBMS. [5]

--> Database and Need for DBMS: A database is a collection of data, usually stored in electronic form. A database is typically designed so that it is easy to store and access information.
A good database is crucial to any company or organisation, because the database stores all the pertinent details about the company, such as employee records, transactional records, salary details, etc.

The various reasons a database is important are −

Manages large amounts of data

A database stores and manages a large amount of data on a daily basis. This would not be practical with a tool such as a spreadsheet, which simply does not work at that scale.

Accurate
A database is highly accurate, as it enforces all sorts of built-in constraints, checks, etc. This means that the information available in a database is correct in most cases.

Easy to update data


In a database, it is easy to update data using the various Data Manipulation Languages (DML) available, such as SQL.

Security of data
Databases have various methods to ensure security of data. There are user logins
required before accessing a database and various access specifiers. These allow only
authorised users to access the database.

Data integrity
This is ensured in databases by using various constraints for data. Data integrity in
databases makes sure that the data is accurate and consistent in a database.

Easy to search data

It is very easy to access and search data in a database. This is done using Data Query Languages (DQL), which allow searching for any data in the database and performing computations on it.

Characteristics of DBMS
Some well-known characteristics are present in the DBMS (Database Management
System). These are explained below.

1. Real-World Entity

o One of the most important and easily understandable characteristics of a DBMS (Database Management System) is that it models reality. The DBMS is developed in such a way that it can manage huge business organizations and store their business data securely.
o The database can store information such as the cost of vegetables, milk, bread, etc. In a DBMS, the entities correspond to real-world entities.
o For example, if we want to create a student database, we need entities that represent real-world students and store their data.
o The most commonly used properties in a student database are name, age, gender, roll number, etc.

2. Self-Describing Nature

o In a DBMS, the database contains not only the data itself but also metadata.
o The term metadata means data about data.
o For example, in a school database, the total number of rows and the table's name are examples of metadata.
o Self-describing means the database describes itself: all the data, and the information needed to interpret it, are stored in a structured format.

3. Concurrent Access without Anomalies

o Concurrent access without anomalies means that multiple users can access the database and fetch information at the same time without any problem.
o For a better understanding, take a bank example. Sonu gives his ATM card to his sister Archita and tells her to withdraw 5000 from the ATM. At the same time, Sonu transfers 2000 rupees to his brother Monu. Both operations are performed successfully. Initially, Sonu had 10000 rupees in his bank account. After both transactions, i.e., the transfer and the withdrawal, when Sonu checks his bank balance, it shows 3000 rupees. This error-free update of the bank balance is possible because of the concurrency feature of the database.
o Thus, concurrency control is a great feature of the database.
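The bank scenario above can be sketched with a simple lock in Python. This is a toy illustration of how serialising concurrent updates avoids a lost update; the variable and function names are made up, and a real DBMS uses far more sophisticated concurrency control.

```python
import threading

# Two concurrent debits on the same account: a lock makes each
# read-modify-write atomic, so 10000 - 5000 - 2000 = 3000 is guaranteed.
balance = 10000
lock = threading.Lock()

def debit(amount):
    global balance
    with lock:                      # only one "transaction" updates at a time
        current = balance           # read
        balance = current - amount  # write back

w = threading.Thread(target=debit, args=(5000,))  # Archita withdraws 5000
t = threading.Thread(target=debit, args=(2000,))  # transfer 2000 to Monu
w.start(); t.start()
w.join(); t.join()
print(balance)  # 3000
```

Without the lock, both threads could read 10000 before either writes back, and one debit would be silently lost, which is exactly the anomaly the DBMS prevents.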

4. Stores Any Kind of Structured Data

o The database has the ability to store data in a structured format.
o On most websites, only student-database examples are given for easier understanding, but the important fact is that a database can store a practically unlimited amount of data.
o A DBMS can store any type of data that exists in the real world, and this data is stored in a structured way. This is another very important characteristic of a DBMS.

5. Ease of Access (The DBMS Queries)

o Before DBMSs came to the market, the file and folder system was used to store data.
o Searching for a student's name was a very difficult task at that time, because every search operation in a file and folder system is done manually. With a DBMS, it is very easy to access the database.
o In a DBMS, we can search any kind of stored data with a simple search query, which is much faster than manual searching.
o A DBMS supports the CRUD operations (Create, Read, Update & Delete), with which we can implement all types of queries on the database.
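As a minimal sketch of the CRUD operations above, here is an illustration using Python's built-in sqlite3 module; the table and data are made up for this example.

```python
import sqlite3

# Create, Read, Update, Delete against an in-memory SQLite database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE student (roll INTEGER PRIMARY KEY, name TEXT)")

con.execute("INSERT INTO student VALUES (1, 'Asha')")             # Create
con.execute("UPDATE student SET name = 'Asha K' WHERE roll = 1")  # Update
row = con.execute("SELECT name FROM student WHERE roll = 1").fetchone()  # Read
print(row[0])  # Asha K
con.execute("DELETE FROM student WHERE roll = 1")                 # Delete
```

Each query replaces what would have been a manual scan through files and folders; the DBMS resolves it through its indexes and storage structures.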

6. SQL and No-SQL Databases


o There are two types of databases (not DBMS): SQL and No-SQL.
o The SQL databases store the data in the form of Tables, i.e., rows and columns. The No-
SQL databases can store data in any form other than a table. For instance: the very
popular MongoDB stores the data in the form of JSON (JavaScript Object Notation).
o The availability of SQL and No-SQL databases allows us to choose the method of storing
the data as well.
o There should not be any debate between SQL and No-SQL databases. The one that we
require for a particular project is better for that project, while the other might be better
for some other use.
o This is a characteristic of DBMS because DBMS allows us to perform operations on both
kinds of databases. So, we can run queries and operations on SQL as well as No-SQL
databases.
7. ACID Properties

o The DBMS follows certain properties to maintain consistency in the database. These properties are usually termed the ACID properties.
o We have already touched on some of these properties, but it is important to mention the ACID properties as a whole.
o ACID stands for Atomicity, Consistency, Isolation, and Durability.
o Atomicity means a transaction should be either 0% or 100% completed, and consistency means that a change in data should be reflected everywhere in the database.
o Isolation means that multiple transactions can occur independently, without interference from other transactions.
o Durability means that the changes made by a successful atomic transaction, i.e., a transaction that has been 100% completed, must persist in the database.
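Atomicity can be sketched with sqlite3: if a transfer fails partway, a rollback restores the database to its state before the transaction, so the transfer is 0% rather than partially applied. The account numbers and amounts here are made up for illustration.

```python
import sqlite3

# Simulate a transfer that crashes after the debit but before the credit;
# rollback undoes the partial work, demonstrating atomicity.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO account VALUES (?, ?)", [(1, 10000), (2, 0)])
con.commit()

try:
    con.execute("UPDATE account SET balance = balance - 2000 WHERE id = 1")
    raise RuntimeError("crash before the credit step")  # simulated failure
    # the credit never runs:
    # con.execute("UPDATE account SET balance = balance + 2000 WHERE id = 2")
except RuntimeError:
    con.rollback()  # the transaction is 0% completed

print(con.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0])  # 10000
```

Had the rollback not happened, account 1 would have lost 2000 rupees without account 2 gaining them, leaving the database inconsistent.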

b) Write the characteristics of OODMS. [5]

-->1) Object-Orientation: OODMS follows the principles of object-oriented programming, where data is represented as objects that encapsulate both data and behavior. Objects are instances of classes, and they can inherit properties and methods from their parent classes. This allows for modeling complex real-world entities and relationships in a natural and intuitive way.

2)Persistence: OODMS provides mechanisms for persisting objects directly into the database,
preserving their state between different program executions. Objects can be stored, retrieved,
updated, and deleted from the database, enabling seamless integration between the application
and the data storage layer.
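Object persistence can be illustrated in miniature with Python's shelve module, which pickles objects to disk. The class and file names here are made up, and a real OODMS provides far richer persistence (queries, transactions, relationships) than this sketch.

```python
import os
import shelve
import tempfile

# A toy "persistent object": store an instance, reopen the store as a
# later program run would, and get the object back with its state.
class Student:
    def __init__(self, roll, name):
        self.roll, self.name = roll, name

path = os.path.join(tempfile.mkdtemp(), "students")

with shelve.open(path) as db:
    db["1"] = Student(1, "Ravi")   # persist the object's state

with shelve.open(path) as db:      # reopen, as a later execution would
    s = db["1"]                    # the object comes back intact
    print(s.roll, s.name)          # 1 Ravi
```

The point of an OODMS is that this store/retrieve cycle is transparent: the application works with objects directly, without manually flattening them into rows.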

3)Complex Data Types: OODMS supports complex data types, such as arrays, lists, sets, and
even other objects, within an object attribute. This allows for modeling and storing data
structures in a more natural and flexible manner.

4)Relationships and Associations: OODMS allows for defining and managing relationships
between objects. It supports various types of associations, including one-to-one, one-to-many,
and many-to-many relationships. This enables the representation of complex relationships and
associations between different objects in the database.
5)Inheritance and Polymorphism: OODMS supports inheritance, where objects can inherit
properties and behaviors from parent objects or classes. This promotes code reuse and
modularity. Polymorphism allows objects to be treated as instances of their parent classes,
providing flexibility and extensibility.

6)Query Language: OODMS provides a query language for retrieving and manipulating objects
stored in the database. The query language is typically based on object-oriented concepts and
syntax, allowing for expressive and powerful queries that can navigate object relationships and
perform complex operations.

7) Concurrency Control and Transaction Management: OODMS incorporates concurrency control mechanisms to handle concurrent access to the database by multiple users or processes. It ensures data integrity and consistency by managing locks, handling conflicts, and enforcing transaction isolation and atomicity.

8) Performance Optimization: OODMS includes optimization techniques to improve query performance and minimize the impact on system resources. These techniques may involve indexing, caching, query optimization, and other strategies specific to object-oriented data management.

9)Extensibility and Scalability: OODMS provides a framework for extending the data model
and adding new classes, attributes, and methods as application requirements evolve. It also
supports scalability, allowing the database to handle increasing data volumes and user loads
without sacrificing performance or functionality.

10) Integration with Programming Languages: OODMS is designed to integrate seamlessly with object-oriented programming languages, enabling easy interaction between the application code and the database. This allows developers to work with objects in a consistent and efficient manner.

Overall, OODMS combines the benefits of object-oriented programming and database management, providing a powerful and flexible approach to storing, retrieving, and managing data in complex software systems.

Q4) a) Write the log based recovery techniques with example? [5]
--> Log-Based Recovery

o The log is a sequence of records. The log of each transaction is maintained in stable storage so that if any failure occurs, the database can be recovered from it.
o If any operation is performed on the database, it is recorded in the log.
o The process of storing the log records must be completed before the actual change is applied to the database.

Let's assume there is a transaction to modify the City of a student. The following logs are written for this transaction:

o When the transaction is initiated, it writes a 'start' log:
<Tn, Start>
o When the transaction modifies the City from 'Noida' to 'Bangalore', another log is written to the file:
<Tn, City, 'Noida', 'Bangalore'>
o When the transaction is finished, it writes another log to indicate the end of the transaction:
<Tn, Commit>

There are two approaches to modifying the database:

1. Deferred database modification:

o The deferred modification technique is used when the transaction does not modify the database until it has committed.
o In this method, all the logs are created and stored in stable storage, and the database is updated only when the transaction commits.

2. Immediate database modification:

o The immediate modification technique is used when database modification occurs while the transaction is still active.
o In this technique, the database may be modified immediately after every operation, before the transaction commits.

Recovery using Log Records
When the system has crashed, it consults the log to find which transactions need to be undone and which need to be redone:

1. If the log contains both the records <Ti, Start> and <Ti, Commit>, then transaction Ti needs to be redone.
2. If the log contains the record <Ti, Start> but contains neither <Ti, Commit> nor <Ti, Abort>, then transaction Ti needs to be undone.

Checkpoint
o A checkpoint is a mechanism where all the previous log records are flushed from the system and stored permanently on disk.
o A checkpoint is like a bookmark. During the execution of transactions, checkpoints are marked, and as each transaction executes, log records are created for its steps.
o When execution reaches a checkpoint, the logged updates are written into the database, and the log file up to that point can be removed. The log file is then filled with the steps of new transactions until the next checkpoint, and so on.
o A checkpoint declares a point before which the DBMS was in a consistent state and all transactions were committed.

Recovery using Checkpoint

Suppose transactions T1 to T4 execute around a checkpoint and the system fails during T4. A recovery system recovers the database from this failure in the following manner:

o The recovery system reads the log files from the end to the start, i.e., from T4 back to T1.
o The recovery system maintains two lists: a redo-list and an undo-list.
o A transaction is put into the redo-list if the recovery system sees a log with both <Tn, Start> and <Tn, Commit>, or with just <Tn, Commit>. All transactions in the redo-list are redone, and their logs are saved.
o For example: in the log file, transactions T2 and T3 have both <Tn, Start> and <Tn, Commit>. Transaction T1 has only <Tn, Commit> in the log file, because it started before the checkpoint and committed after it. Hence T1, T2 and T3 are put into the redo-list.
o A transaction is put into the undo-list if the recovery system sees a log with <Tn, Start> but finds no commit or abort record. All transactions in the undo-list are undone, and their logs are removed.
o For example: transaction T4 has only <Tn, Start>, so T4 is put into the undo-list, since this transaction is not yet complete and failed midway.
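The redo/undo classification above can be sketched as a scan over a list of log records. The log contents mirror the T1-T4 example, and the tuple format is a simplification of the <T, Start>/<T, Commit> notation used in the text.

```python
# Classify transactions into redo and undo lists from a simplified log.
log = [("T1", "Commit"),                    # committed after the checkpoint
       ("T2", "Start"), ("T2", "Commit"),
       ("T3", "Start"), ("T3", "Commit"),
       ("T4", "Start")]                     # no Commit/Abort: incomplete

started = {t for t, op in log if op == "Start"}
finished = {t for t, op in log if op in ("Commit", "Abort")}

redo, undo = [], []
for t in sorted(started | finished):
    if t in finished:
        redo.append(t)   # has a Commit (Start may lie before the checkpoint)
    else:
        undo.append(t)   # Start but no Commit/Abort

print(redo)  # ['T1', 'T2', 'T3']
print(undo)  # ['T4']
```

A real recovery manager would then replay the after-images of the redo-list and restore the before-images of the undo-list; this sketch only shows the classification step.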

b) Write short note on Grant and revoking privilege with example. [5]

--> Data Control Language (DCL) lets users control access to the data stored in the database through specific commands. GRANT and REVOKE belong to this category of commands. DCL is a component of SQL.
1. Grant:
The SQL GRANT command is used to provide privileges on database objects to a user. This command also allows a user to grant permissions to other users.
Syntax:
grant privilege_name on object_name
to {user_name | public | role_name}
Here privilege_name is the permission to be granted, object_name is the name of the database object, user_name is the user to whom access should be given, and public grants access to all users.
2. Revoke:
The REVOKE command withdraws a user's privileges on database objects, if any were granted. It does the opposite of the GRANT command. When a privilege is revoked from a particular user U, the privileges granted to all other users by user U are also revoked.
Syntax:
revoke privilege_name on object_name
from {user_name | public | role_name}
Example:
grant insert, select on accounts to Ram
With the above command, user Ram is granted permissions on the accounts database object: he can query or insert into accounts.
revoke insert, select on accounts from Ram
With the above command, user Ram's permissions to query or insert on the accounts database object are removed.
Differences between Grant and Revoke commands:

S.NO | Grant | Revoke
1 | This DCL command grants permissions to the user on the database objects. | This DCL command removes permissions, if any were granted, to the users on database objects.
2 | It assigns access rights to users. | It revokes the access rights of users.
3 | For each user you need to specify the permissions. | If access for one user is removed, all the permissions provided by that user to others will be removed.
4 | When access is decentralized, granting permissions is easy. | With decentralized access, removing the granted permissions is difficult.

OR

What is a database backup? Explain the types of backup. [5]


--> A backup is a copy of data stored in a separate location, such as the cloud. Backing up is an important process that everyone should do to have a fail-safe for when the inevitable happens. The principle is to make copies of particular data so that those copies can be used to restore the information if a failure occurs. A data-loss event can occur due to deletion, corruption, theft, viruses, etc.

Protecting data against loss, corruption, disasters (human-caused or natural), and other
problems is one of the IT organizations' top priorities. To avoid this loss, implementing
an efficient and effective set of backup operations can be difficult.

The term backup has become synonymous with data protection over the past several decades and may be accomplished via several methods. Backup software applications reduce the complexity of performing backup and recovery operations. Backing up data is only one part of a disaster protection plan, and may not provide the desired level of data and disaster recovery capability without careful design and testing.

You can manually perform the backup by copying the data to a different location or
automatically using a backup program. Each backup program has its approach in
executing the backup.

There are four most common backup types implemented and generally used in most of
these programs, such as:

1. Full backup
2. Incremental backup
3. Differential backup
4. Mirror backup

A type of backup defines how data is copied from source to destination and lays the
data repository model's grounds or how the back-up is stored and structured.

There are some types of backup that are better in certain locations. If we perform cloud
backup, then incremental backups are generally a better backup type because they
consume fewer resources. We might start with a full backup in the cloud and then shift
to incremental backups. Mirror backup, though, is typically more of an on-premises
approach and often involves disks.
Full backups
The most basic and complete type of backup operation is a full backup. As the name
implies, this backup type makes a copy of all data to a storage device, such as a disk or
tape. The primary advantage of performing a full backup during every operation is that
a complete copy of all data is available with a single media set.

It takes the shortest time to restore data, a metric known as the recovery time objective (RTO). However, the disadvantages are that a full backup takes longer to perform than other types and requires more storage space.

Thus, full backups are typically run only periodically. Data centers with a small amount of
data may choose to run a full backup daily or even more often in some cases. Typically,
backup operations employ a full backup in combination with either incremental or
differential backups.

Incremental backups
An incremental backup operation will result in copying only the data that has changed
since the last backup operation of any type. An organization typically uses the modified
timestamp on files and compares them to the last backup timestamp.

Backup applications track and record the date and time that backup operations occur to
track files modified since these operations. Because an incremental backup will only
copy data since the last backup of any type, an organization may run it as often as
desired, with only the most recent changes stored.

The benefit of an incremental backup is that it copies a smaller amount of data than a full backup. Thus, these operations complete faster and require less media to store the backup.
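As a sketch, an incremental copy can be driven by comparing file modification times against the time of the last backup. The paths, function name, and single-directory scope are illustrative assumptions, not how any particular backup tool works.

```python
import os
import shutil
import time

# Copy only the files in src modified since last_backup_time; a full
# backup corresponds to calling this with last_backup_time = 0.
def incremental_backup(src, dst, last_backup_time):
    copied = []
    os.makedirs(dst, exist_ok=True)
    for name in os.listdir(src):
        path = os.path.join(src, name)
        if os.path.isfile(path) and os.path.getmtime(path) > last_backup_time:
            shutil.copy2(path, os.path.join(dst, name))  # preserves mtime
            copied.append(name)
    return sorted(copied)
```

After each run, the caller records the current time and passes it as last_backup_time on the next run, so unchanged files are skipped, which is exactly why incremental backups are fast and small.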

Differential backups
A differential backup operation is similar to an incremental the first time it is performed,
in that it will copy all data changed from the previous backup. However, each time it is
run afterward, it will continue to copy all data changed since the previous full backup.
Therefore, it will store more backed up data than an incremental on subsequent
operations, although typically far less than a full backup.

Differential backups require more space and time to complete than incremental
backups, although less than full backups. From these three primary types of backup, it is
possible to develop an approach for comprehensive data protection. An organization
often uses one of the following backup settings:

o Full daily
o Full weekly + differential daily
o Full weekly + incremental daily

Full backup daily requires the most amount of space and will also take the most amount
of time. However, more total copies of data are available, and fewer media pieces are
required to perform a restore operation. As a result, implementing this backup policy
has a higher tolerance to disasters and provides the least time to restore since any data
required will be located on at most one backup set.

A full backup weekly coupled with daily incremental backups delivers the shortest backup time during weekdays and uses the least storage space. However, fewer copies of the data are available, and restore time is the longest, since an organization may need up to six sets of media to recover the necessary information.

A weekly full backup with daily differential backups delivers results in between the other alternatives: more backup media sets are required to restore than with a daily full policy, although fewer than with a daily incremental policy. Also, restore time is less than with daily incremental backups and more than with daily full backups. To restore the data from a particular day, at most two media sets are required, diminishing the time needed to recover and the potential for problems with an unreadable backup set.

Mirror backups
A mirror backup is comparable to a full backup. This backup type creates an exact copy
of the source data set, but only the latest data version is stored in the backup repository,
with no record of different versions of the files. All backed-up files are stored
separately, just as they are in the source.

One of the benefits of mirror backup is a fast data recovery time. It's also easy to access
individual backed up files.

One of the main drawbacks, though, is the amount of storage space required. With that
extra storage, organizations should be wary of cost increases and maintenance needs. If
there is a problem in the source data set, such as corruption or deletion, the mirror
backup experiences the same problem. As a result, it is best not to rely on mirror backups
for all data protection needs; other backup types should be kept for the data as well.
One specific kind of mirror, disk mirroring, is also known as RAID 1. This process
replicates data to two or more disks. Disk mirroring is a strong option for data that
needs high availability because of its quick recovery time. It's also helpful for disaster
recovery because of its immediate failover capability. Disk mirroring requires at least two
physical drives. If one hard drive fails, an organization can use the mirror copy. While
disk mirroring offers comprehensive data protection, it requires a lot of storage capacity.
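The RAID 1 behavior described above, every write replicated to each disk, with failover reads, can be sketched in a few lines. The `MirroredStore` class and its in-memory "drives" are purely illustrative stand-ins for physical disks:

```python
# Hypothetical sketch of the idea behind disk mirroring (RAID 1):
# every write goes to two independent copies, so if one copy is lost
# the other can still serve reads. Dicts stand in for physical drives.

class MirroredStore:
    def __init__(self):
        self.drives = [{}, {}]          # two in-memory "drives"

    def write(self, key, value):
        for drive in self.drives:       # replicate to every mirror
            drive[key] = value

    def read(self, key):
        for drive in self.drives:       # fail over to a surviving copy
            if key in drive:
                return drive[key]
        raise KeyError(key)

    def fail_drive(self, index):
        self.drives[index] = {}         # simulate a drive failure

store = MirroredStore()
store.write("block-7", b"payload")
store.fail_drive(0)                     # lose one drive...
print(store.read("block-7"))            # ...data survives on the mirror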

Smart backups
Smart backup is a backup type that combines the full, differential and incremental
backup types with cleanup operations to efficiently manage the backup set and the
free disk space in the destination. A smart backup starts with a full backup.

The advantage is that there is no need to worry about how many backups will fit on the
destination drive, or which backup version to clean or merge, as the backup software
(Backup4all, for example) takes care of that automatically.

b) Write a short note on Mandatory Access Control. [5]

--> Mandatory Access Control (MAC) is a security model used in Database Management
Systems (DBMS) to enforce data access restrictions based on predefined security policies. In
MAC, access control decisions are made by the system itself, and users have limited control
over granting or revoking access permissions.

Here are some key points about Mandatory Access Control in DBMS:

Security Levels and Labels: MAC assigns security levels or labels to both users and data objects.
Security levels are typically defined using a hierarchical structure, such as a classification
scheme, where each level represents a different level of sensitivity or confidentiality. Data
objects are labeled with their respective security levels.

Access Control Policies: MAC enforces access control policies based on security levels. These
policies define the rules and restrictions for accessing data objects. For example, a policy may
state that a user with a "Top Secret" security level can only read or modify data objects labeled
with the same or lower security level.
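The "same or lower security level" rule just described can be sketched as a dominance check on an ordered set of labels. The level names and their numeric ordering below are illustrative assumptions; real MAC systems (e.g., Bell-LaPadula implementations) define their own lattices:

```python
# Hypothetical sketch of the MAC read policy described above: a user may
# read a data object only when the user's clearance dominates (is at
# least) the object's label. Level names and ordering are illustrative.

LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(user_clearance, object_label):
    """'Read down' rule: clearance must be >= the object's level."""
    return LEVELS[user_clearance] >= LEVELS[object_label]

print(can_read("Top Secret", "Secret"))        # True: same or lower level
print(can_read("Confidential", "Top Secret"))  # False: object is higher
```

The key point the sketch shows is that the decision depends only on the two labels, not on the individual user's discretion, which is what distinguishes MAC from DAC.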
Data Classification: MAC requires the classification of data objects into specific security levels
based on their sensitivity or importance. This classification is typically done during the design or
setup phase, where administrators assign appropriate labels to data objects.

Need-to-Know Principle: MAC follows the principle of "need-to-know," meaning that users are
granted access only to the data objects they need to perform their authorized tasks. Access to
higher-level data objects is restricted to users with appropriate security clearances.

Centralized Control: In MAC, access control decisions are centralized and enforced by the DBMS
itself. The system administrator or security administrator defines the access control policies,
sets the security levels, and manages user privileges. Users have limited control over modifying
access permissions, as these decisions are primarily determined by the system.

Strong Data Isolation: MAC ensures strong data isolation by strictly controlling the flow of
information between different security levels. It prevents unauthorized users from accessing or
modifying data objects with higher security levels, even if they have legitimate access to lower-
level data objects.

Enhanced Security: MAC provides a higher level of security compared to other access control
models, such as Discretionary Access Control (DAC). It reduces the risk of data breaches, insider
threats, and unauthorized access, as access permissions are determined by system policies
rather than individual user discretion.

Compliance and Auditability: MAC helps organizations comply with security regulations and
standards by providing a structured approach to data access control. It also enables
comprehensive audit trails, allowing for monitoring and tracking of access activities for
compliance, forensics, and security analysis purposes.

Mandatory Access Control is commonly used in high-security environments, such as
government agencies, military organizations, and financial institutions, where protecting
sensitive data and preventing unauthorized access is of utmost importance. By enforcing strict
access control policies based on security levels, MAC helps mitigate security risks and ensures
the confidentiality, integrity, and availability of critical data.

Q5) Explain Inter-query and Intra-query parallelism in detail with an example. [10]

--> Parallelism allows a DBMS to execute multiple queries, or parts of a single query,
concurrently by decomposing the work into pieces that run in parallel. This can be
achieved with a shared-nothing architecture. Parallelism also speeds up query execution
as more resources, such as processors and disks, are made available. We can achieve
parallelism in a query by the following methods:
1. I/O parallelism
2. Intra-query parallelism
3. Inter-query parallelism
4. Intra-operation parallelism
5. Inter-operation parallelism
Inter-query parallelism:
In inter-query parallelism, each CPU executes multiple transactions; this is called
parallel transaction processing. A DBMS uses transaction dispatching to carry out
inter-query parallelism, and other methods, such as efficient lock management, can
also be used. Without parallelism, each query runs sequentially, which slows down
long-running queries. The DBMS must therefore keep track of the locks held by
different transactions running on different processors. Inter-query parallelism on a
shared-disk architecture performs best when the transactions executing in parallel
do not access the same data. It is also the easiest form of parallelism to implement
in a DBMS, and it increases transaction throughput.

For example: suppose there are 6 queries, each taking 3 seconds to evaluate. Executed
sequentially, the total evaluation time is 18 seconds; with inter-query parallelism,
the queries run on separate processors and the whole task completes in about 3 seconds.

 However, inter-query parallelism is difficult to achieve every time.
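The 18-seconds-versus-3-seconds figure can be demonstrated with independent "queries" run on a worker pool. The `run_query` function is a made-up stand-in for query evaluation (durations are shortened to 0.1 s so the sketch runs quickly):

```python
# Hypothetical sketch of inter-query parallelism: six independent
# "queries" run concurrently on separate workers instead of one after
# another. time.sleep() stands in for the evaluation work.
import time
from concurrent.futures import ThreadPoolExecutor

def run_query(qid, duration=0.1):
    time.sleep(duration)              # stand-in for query evaluation
    return f"result of query {qid}"

start = time.time()
with ThreadPoolExecutor(max_workers=6) as pool:   # one worker per query
    results = list(pool.map(run_query, range(6)))
elapsed = time.time() - start

# Serially this would take ~0.6 s; in parallel it takes ~0.1 s,
# mirroring the 18 s vs 3 s figures in the example above.
print(len(results), elapsed < 0.5)
```

Because the queries are independent, no coordination between workers is needed here; in a real DBMS, shared data would bring the lock-management concerns described above back into play.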

Intra-query parallelism:
Intra-query parallelism refers to the execution of a single query as parallel processes on
different CPUs, using a shared-nothing parallel architecture. It uses two types of
approaches:
 First approach –
In this approach, each CPU executes the same task against a different portion of the data.
 Second approach –
In this approach, the task is divided into different subtasks, with each CPU executing a
distinct subtask.
For example: if a query is divided into 6 sub-queries, each taking 3 seconds to evaluate,
sequential evaluation takes 18 seconds, but intra-query evaluation completes the task in
about 3 seconds because the sub-queries run in parallel.
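The first approach, the same task applied to different data portions, can be sketched with a partitioned aggregate. The four-way round-robin partitioning and the SUM query are illustrative choices, not a real DBMS interface:

```python
# Hypothetical sketch of intra-query parallelism (first approach): one
# query, a SUM aggregate, is split so each worker runs the same task on
# a different partition of the data, and partial results are combined.
from concurrent.futures import ThreadPoolExecutor

data = list(range(1, 101))                 # stand-in for a table column

def partial_sum(chunk):
    return sum(chunk)                      # same task, different portion

chunks = [data[i::4] for i in range(4)]    # partition across 4 "CPUs"
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)   # 5050, identical to running the query on one CPU
```

The final combine step (summing the partial sums) is what distinguishes intra-query parallelism from inter-query parallelism: the workers cooperate on one answer rather than producing six independent ones.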

OR

Explain the difference between homogeneous and heterogeneous databases with examples. [10]

| Aspect | Homogeneous Database | Heterogeneous Database |
|---|---|---|
| Definition | Database environment where all systems are of the same type or vendor | Database environment where multiple systems of different types or vendors are used |
| Architecture | Share a common architecture | Systems may have different architectures |
| Data Model | Use a unified data model | May have varying data models |
| Integration | Seamless integration and data sharing | Integration requires additional tools or middleware |
| Query Language | Use the same query language | May have different query languages |
| Vendor Dependency | Dependent on a single vendor | Not tied to a specific vendor |
| Data Replication | Replication across systems is straightforward | Data replication may require transformation and mapping |
| Administration | Similar administrative tasks across systems | Different administrative tasks for each system |
| Performance | Homogeneous systems offer optimized performance | Performance may vary due to the heterogeneity |
| Example | All databases are Oracle Databases | Combination of Oracle, SQL Server, and MongoDB databases |

Here are examples of a homogeneous database and a heterogeneous database:

Homogeneous Database:

Let's consider an example where a company uses a homogeneous database environment
consisting of multiple instances of Oracle Database. All the databases in the environment are of
the same type (Oracle) and share a common architecture, data model, and query language.
Data replication and integration between these Oracle Databases are straightforward, as they
have native compatibility and can seamlessly communicate with each other. Administrators can
perform similar administrative tasks across the databases, such as backup, recovery, and user
management. This homogeneous setup allows for easy data sharing, consistent operations, and
optimized performance within the Oracle Database ecosystem.

Heterogeneous Database:

Now, let's look at a heterogeneous database scenario. Suppose a multinational organization
operates different departments, and each department uses a different database system for
specific purposes. The sales department uses Microsoft SQL Server, the marketing department
relies on MongoDB, and the finance department employs Oracle Database. In this case, the
organization has a heterogeneous database environment since multiple database systems from
different vendors (Microsoft, MongoDB, Oracle) are in use. Data integration and sharing
between these systems require additional tools or middleware to bridge the gaps between
different architectures, data models, and query languages. For example, an Extract, Transform,
Load (ETL) process or middleware can be used to extract data from one database, transform it
into a compatible format, and load it into another database. Managing administrative tasks for
each database system may differ, as administrators need to work with the specific tools and
interfaces provided by each vendor. Performance may vary across the heterogeneous
environment due to the differences in optimization techniques and underlying technologies
used by the different database systems.
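The ETL bridging step mentioned above, extract from one system, transform into the target's format, load into the other, can be sketched end to end. SQLite stands in for both vendors' databases here, and the table names, column names, and dollars-to-cents transform are all illustrative assumptions:

```python
# Hypothetical ETL sketch for a heterogeneous environment: data is
# extracted from one database, transformed into the schema the target
# expects, and loaded into another. SQLite stands in for both systems.
import sqlite3

source = sqlite3.connect(":memory:")   # stands in for, e.g., SQL Server
target = sqlite3.connect(":memory:")   # stands in for, e.g., Oracle
source.execute("CREATE TABLE sales (amount_usd REAL)")
source.executemany("INSERT INTO sales VALUES (?)", [(10.0,), (25.5,)])
target.execute("CREATE TABLE finance_sales (amount_cents INTEGER)")

# Extract from the source system...
rows = source.execute("SELECT amount_usd FROM sales").fetchall()
# ...transform into the format the target expects...
transformed = [(int(round(usd * 100)),) for (usd,) in rows]
# ...and load into the target system.
target.executemany("INSERT INTO finance_sales VALUES (?)", transformed)

total = target.execute(
    "SELECT SUM(amount_cents) FROM finance_sales").fetchone()[0]
print(total)   # -> 3550
```

In production this role is played by dedicated ETL tools or middleware, but the three phases, and the fact that the transform step exists only because the two schemas differ, are exactly as sketched.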

In summary, a homogeneous database environment consists of multiple databases of the same
type, such as Oracle Database, while a heterogeneous database environment involves the use
of different database systems from various vendors, such as Microsoft SQL Server, MongoDB,
and Oracle Database. The examples demonstrate how data integration, administrative tasks,
and performance considerations differ between homogeneous and heterogeneous database
setups.
