
Q1.

Design a database for a university’s examination division, which


conducts examinations and issues hall tickets to students appearing for the
TEE. It also issues grade cards to successful candidates. It maintains
data about (a) Programs, (b) Study Centers (SC), (c) Regional Centers (RC),
(d) Students, and (e) Courses within a program.
The hall ticket should have the student name, SC code, RC code, student id,
program code, course code, and date of examination.
The grade card should have the following attributes: student id, student
name, program code, course code, and grade in each course.

(a) Draw the EER (extended ER) diagram for the above problem showing
all entities, attributes and relationships. Also identify the multivalued and
derived attributes.
Answer : -
(b) Draw the appropriate tables and relationships among the tables for
the above diagram and normalize the tables up to 3NF.
Answer : -

Table Name : regional_center

Field Name | Data Type | Constraint | Description
rc_code | varchar(10) | Primary Key | Uniquely identifies each Regional Center in this table
rc_name | varchar(100) | - | Name of the Regional Center
rc_address | varchar(200) | - | Address of the Regional Center
rc_phone | bigint | - | Contact number of the Regional Center
no_of_study_center | int | - | Number of Study Centers present under the Regional Center

Table Name : study_center

Field Name | Data Type | Constraint | Description
sc_code | varchar(10) | Primary Key | Uniquely identifies each Study Center in this table
sc_name | varchar(100) | - | Name of the Study Center
sc_address | varchar(200) | - | Address of the Study Center
sc_phone | bigint | - | Contact number of the Study Center
rc_code | varchar(10) | Foreign Key | Regional Center identification code

Table Name : program

Field Name | Data Type | Constraint | Description
p_code | varchar(10) | Primary Key | Uniquely identifies each Program in this table
p_name | varchar(100) | - | Name of the Program
total_fees | int | - | Total fees required for the Program

Table Name : offer

Field Name | Data Type | Constraint | Description
rc_code | varchar(10) | Foreign Key | Regional Center identification code
p_code | varchar(10) | Foreign Key | Program identification code

Table Name : teach

Field Name | Data Type | Constraint | Description
sc_code | varchar(10) | Foreign Key | Study Center identification code
p_code | varchar(10) | Foreign Key | Program identification code

Table Name : course

Field Name | Data Type | Constraint | Description
c_code | varchar(10) | Primary Key | Uniquely identifies each Course/Subject in this table
c_name | varchar(100) | - | Name of the Course
points | int | - | Points of the Course

Table Name : contents

Field Name | Data Type | Constraint | Description
p_code | varchar(10) | Foreign Key | Program identification code
c_code | varchar(10) | Foreign Key | Course/Subject identification code

Table Name : student

Field Name | Data Type | Constraint | Description
s_code | varchar(10) | Primary Key | Uniquely identifies each Student in this table
s_name | varchar(100) | - | Name of the Student
s_address | varchar(200) | - | Present address of the Student
s_phone | bigint | - | Contact number of the Student
sc_code | varchar(10) | Foreign Key | Study Center identification code
p_code | varchar(10) | Foreign Key | Program identification code

Table Name : part_time

Field Name | Data Type | Constraint | Description
s_code | varchar(10) | Foreign Key | Student identification code
no_of_classes | int | - | Number of classes offered, based on the Program taken by the student
class_duration | int | - | Duration of each class

Table Name : full_time

Field Name | Data Type | Constraint | Description
s_code | varchar(10) | Foreign Key | Student identification code
attendance | int | - | Attendance of each Student

Table Name : exam_schedule


Field Name | Data Type | Constraint | Description
c_code | varchar(10) | Foreign Key | Course/Subject identification code
exam_date | varchar(10) | - | Examination date
exam_time | varchar(10) | - | Examination time

Table Name : hall_ticket

Field Name | Data Type | Constraint | Description
s_code | varchar(10) | Foreign Key | Student identification code
c_code | varchar(10) | Foreign Key | Course/Subject identification code
ec_code | varchar(10) | - | Examination Center identification code

Table Name : grade_card

Field Name | Data Type | Constraint | Description
s_code | varchar(10) | Foreign Key | Student identification code
c_code | varchar(10) | Foreign Key | Course/Subject identification code
assignment_marks | int | - | Assignment marks obtained by a student for a particular subject
theory_marks | int | - | Theory marks obtained by a student for a particular subject
practical_marks | int | - | Practical marks obtained by a student for a particular subject
status | varchar(15) | - | Indicates pass or fail status of a student for a particular subject
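To make the keys explicit, a minimal DDL sketch for two of these tables is given below. It only illustrates the design above; the constraint style and the composite primary key on grade_card are assumptions, not part of the original answer.

CREATE TABLE student (
    s_code    varchar(10)  PRIMARY KEY,
    s_name    varchar(100),
    s_address varchar(200),
    s_phone   bigint,
    sc_code   varchar(10)  REFERENCES study_center(sc_code),  -- study center of the student
    p_code    varchar(10)  REFERENCES program(p_code)         -- program the student is enrolled in
);

CREATE TABLE grade_card (
    s_code           varchar(10) REFERENCES student(s_code),
    c_code           varchar(10) REFERENCES course(c_code),
    assignment_marks int,
    theory_marks     int,
    practical_marks  int,
    status           varchar(15),
    PRIMARY KEY (s_code, c_code)  -- assumed: one grade row per student per course
);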

(c) Include generalization and aggregation features in the diagram and


draw their tabular representations and explain.
Answer : -

Generalization

Generalization is a bottom-up approach in which two or more lower-level entities are
combined to form a higher-level entity when they have some attributes in common.

In the above EER diagram, the part_time and full_time entities have some common


attributes - s_code, s_name, s_address, s_phone, p_code and sc_code - and can
therefore be generalized into the higher-level student entity.

Aggregation

In aggregation, the relationship between two entities is itself treated as a single entity. In


other words, a relationship together with its participating entities is aggregated into a
higher-level entity.
Tabular Representations - Do it Yourself
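As a starting point for the generalization part, a minimal DDL sketch of its tabular representation is given below: the common attributes stay in the student table (the supertype), and each subtype table holds only its specific attributes plus the shared key. The foreign-key-as-primary-key pattern shown is an assumption about how the hierarchy is mapped, not the only possible mapping.

CREATE TABLE part_time (
    s_code         varchar(10) PRIMARY KEY REFERENCES student(s_code),  -- subtype reuses the supertype key
    no_of_classes  int,
    class_duration int
);

CREATE TABLE full_time (
    s_code     varchar(10) PRIMARY KEY REFERENCES student(s_code),
    attendance int
);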

(d) Identify weak entity sets in the above diagram if any. Show how will
you convert a weak entity set to a strong entity set? What is the need of
such task?
Answer : - A weak entity set is an entity set that does not contain sufficient attributes to
uniquely identify its entities. In other words, a primary key does not exist for a weak
entity set.
In the above EER diagram, exam_schedule is represented as a weak entity, because no
primary key is present in this table.
exam_schedule (c_code, exam_date, exam_time)

To convert the weak entity exam_schedule into a strong entity, we need to


add an exam_code attribute, which uniquely identifies each row (record) in the table. This
is needed so that other tables can reference an examination directly through its own primary key.
exam_schedule (exam_code, c_code, exam_date, exam_time)
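A DDL sketch of the strengthened entity (the varchar type chosen for exam_code is an assumption):

CREATE TABLE exam_schedule (
    exam_code varchar(10) PRIMARY KEY,                -- added attribute; makes the entity strong
    c_code    varchar(10) REFERENCES course(c_code),
    exam_date varchar(10),
    exam_time varchar(10)
);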

(e) Identify multivalued dependency in the above diagram if any. Justify.


Answer : - A multivalued dependency occurs when two attributes in a table are
independent of each other but both depend on a third attribute.

A multivalued dependency involves at least two attributes that are dependent on a third
attribute, which is why it always requires at least three attributes.

In the above EER diagram, a multivalued dependency exists in the exam_schedule entity.


exam_schedule (c_code, exam_date, exam_time)
Here, in the exam_schedule entity, the exam_date and exam_time attributes are independent of
each other but both depend on the c_code attribute.
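If this dependency is taken at face value (c_code ->> exam_date and c_code ->> exam_time), the standard way to remove it and reach 4NF is to split the entity into two tables; a sketch, with illustrative table names:

CREATE TABLE course_exam_date (
    c_code    varchar(10) REFERENCES course(c_code),
    exam_date varchar(10),
    PRIMARY KEY (c_code, exam_date)
);

CREATE TABLE course_exam_time (
    c_code    varchar(10) REFERENCES course(c_code),
    exam_time varchar(10),
    PRIMARY KEY (c_code, exam_time)
);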

(f) Create an XML schema for the grade card to be issued by the division
having details : student id, programme code, course id, grade, consumer
name, assignments marks, TEE marks and grade.
Answer : -

GradeCard.xsd

<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.questionsolves.com"
xmlns="http://www.questionsolves.com"
elementFormDefault="qualified">
<xs:element name="GradeCard">
<xs:complexType>
<xs:sequence>
<xs:element name="StudentId" type="xs:integer"/>
<xs:element name="StudentName" type="xs:string"/>
<xs:element name="ProgrammeCode" type="xs:string"/>
<xs:element name="Course" maxOccurs="unbounded"/>
<xs:complexType>
<xs:sequence>
<xs:element name="CourseCode" type="xs:string"/>
<xs:element name="AssignmentMarks" type="xs:integer"/>
<xs:element name="TEEMarks" type="xs:integer"/>
<xs:element name="Grade" type="xs:string"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>

GradeCard.xml

<?xml version="1.0"?>
<GradeCard
xmlns="http://www.questionsolves.com"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.questionsolves.com
GradeCard.xsd">
<StudentId>105508022</StudentId>
<StudentName>Debabrata Panchadhyay</StudentName>
<ProgrammeCode>MCA</ProgrammeCode>
<Course>
<CourseCode>MCS-042</CourseCode>
<AssignmentMarks>90</AssignmentMarks>
<TEEMarks>68</TEEMarks>
<Grade>B</Grade>
</Course>
<Course>
<CourseCode>MCS-043</CourseCode>
<AssignmentMarks>85</AssignmentMarks>
<TEEMarks>42</TEEMarks>
<Grade>D</Grade>
</Course>
</GradeCard>

Q2. What is the fundamental difference between XML document and


relational database? How is XML data stored in RDBMS? Explain.
Answer : -

Difference between XML Document and Relational Database

 XML is self-describing
A given document contains not only the data, but also the necessary metadata. As a
result, an XML document can be searched or updated without requiring a static
definition of the schema. Relational models, on the other hand, require more static
schema definitions. All the rows of a table must have the same schema.
 XML is hierarchical
A given document represents not only base information, but also information about
the relationship of data items to each other in the form of the hierarchy. Relational
models require all relationship information to be expressed either by primary key
or foreign key relationships or by representing that information in other relations.
 XML is sequence-oriented
For an XML document, the order in which data items are specified is assumed to be
the order of the data in the document. There is often no other way to specify order
within the document. For relational data, the order of the rows is not guaranteed
unless you specify an ORDER BY clause on one or more columns.

Other factors might influence your decision about which model to use. Some of those
factors are :

 Whether maximum flexibility of the data is needed


Relational tables are fairly rigid. For example, normalizing one table into many or
denormalizing many tables into one can be very difficult. If the data design changes
often, representing it as XML data is a better choice.
 Whether the data components have meaning outside a hierarchy
Data might be inherently hierarchical in nature, but the child components do not
need the parents to provide value. For example, a purchase order might contain
part numbers. The purchase orders with the part numbers might be best
represented as XML documents. However, each part number has a part description
associated with it. It might be better to include the part descriptions in a relational
table, because the relationship between the part numbers and the part descriptions
is logically independent of the purchase orders in which the part numbers are used.
 Whether data attributes apply to all data, or to only a small subset of the data
Some sets of data have a large number of possible attributes, but only a small
number of those attributes apply to any particular data value. For example, in a
retail catalog, there are many possible data attributes, such as size, color, weight,
material, style, weave, power requirements, or fuel requirements. For any given
item in the catalog, only a subset of those attributes is relevant: power
requirements are meaningful for a table saw, but not for a coat. This type of data is
difficult to represent and search with a relational model, but relatively easy to
represent and search with an XML model.
 Whether the data needs to be updated often
Currently, you can update XML data in an XML column only by replacing full
documents. If you need to frequently update small fragments of very large
documents for a large number of rows, it can be more efficient to store the data in
non-XML columns. If, however, you are updating small documents and only a few
documents at a time, storing as XML can be efficient as well.
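One common way of storing XML data in an RDBMS is a native XML column, as mentioned above; a minimal SQL Server style sketch (the table and column names are illustrative, not part of the question):

CREATE TABLE grade_card_doc (
    s_code   varchar(10) PRIMARY KEY,
    card_xml XML          -- the whole grade card document is kept in one XML-typed column
);

INSERT INTO grade_card_doc (s_code, card_xml)
VALUES ('105508022',
        '<GradeCard><StudentId>105508022</StudentId><ProgrammeCode>MCA</ProgrammeCode></GradeCard>');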

Convert Table Data into XML File

Consider the following tables present in the SQL Server database -


department(dept_id, dept_name)
employee(emp_id, emp_name, salary, dept_id)

Query
SELECT * FROM department INNER JOIN employee ON
employee.dept_id=department.dept_id FOR XML AUTO, ELEMENTS;

Output :
<department>
<dept_id>102</dept_id>
<dept_name>HTML and Graphics</dept_name>
<employee>
<emp_id>E-0005</emp_id>
<emp_name>Subhadip Giri</emp_name>
<salary>45000</salary>
<dept_id>102</dept_id>
</employee>
</department>
<department>
<dept_id>105</dept_id>
<dept_name>DOT NET</dept_name>
<employee>
<emp_id>E-0020</emp_id>
<emp_name>Amit Das</emp_name>
<salary>65000</salary>
<dept_id>105</dept_id>
</employee>
<employee>
<emp_id>E-0085</emp_id>
<emp_name>Sohini Das</emp_name>
<salary>40000</salary>
<dept_id>105</dept_id>
</employee>
</department>

Restore department Table Data from XML File

--Setup a variable to take the file data


DECLARE @filedata XML

--Import the file contents into the variable


SELECT @filedata=BulkColumn FROM
OPENROWSET(BULK 'D:\Test\Department.xml',SINGLE_BLOB) AS X

--Insert the xml data into our department table (dept_id, dept_name)
INSERT INTO department(dept_id, dept_name)
SELECT

--"data" is our xml content alias


data.value('dept_id[1]','int') AS dept_id,
data.value('dept_name[1]','varchar(20)') AS dept_name

--This is the xpath to the individual records we want to extract


FROM @filedata.nodes('/department') AS X(data);

Restore employee Table Data from XML File

--Setup a variable to take the file data


DECLARE @filedata XML

--Import the file contents into the variable


SELECT @filedata=BulkColumn FROM
OPENROWSET(BULK 'D:\Test\Department.xml',SINGLE_BLOB)
AS X
--Insert the xml data into our employee table (emp_id, emp_name, salary, dept_id)
INSERT INTO employee(emp_id, emp_name, salary, dept_id)
SELECT

--"data" is our xml content alias


data.value('emp_id[1]','varchar(10)') AS emp_id,
data.value('emp_name[1]','varchar(20)') AS emp_name,
data.value('salary[1]','int') AS salary,
data.value('dept_id[1]','int') AS dept_id

--This is the xpath to the individual records we want to extract


FROM @filedata.nodes('/department/employee') AS X(data);

Q3. Write an algorithm that checks whether the concurrent


transactions are in deadlock or not?
Answer : - Solve it Yourself

Q4. What are views? What is their significance in DBMS? How are views
created in SQL? Explain the concept with the help of an example
pertaining to the design of University’s examination system (refer to Q1)
Answer : -

 In SQL, a view is a virtual table based on the result-set of an SQL statement.


 A view contains rows and columns, just like a real table. The fields in a view are
fields from one or more real tables in the database.
 A View can either have all the rows of a table or specific rows based on a certain
condition.

There are many advantages of using views :


 Security - Each user can be given permission to access the database only through a
small set of views that contain the specific data the user is authorized to see, thus
restricting the user's access to stored data.
 Query Simplicity - A view can draw data from several different tables and present
it as a single table, turning multi-table queries into single-table queries against the
view.
 Consistency - A view can present a consistent, unchanged image of the structure of
the database, even if the underlying source tables are split, restructured, or
renamed.
 Data Integrity - If data is accessed and entered through a view, the DBMS can
automatically check the data to ensure that it meets the specified integrity
constraints.

Syntax :

CREATE or REPLACE VIEW view_name AS SELECT column_name(s) FROM table_name(s)


WHERE condition;

A view can be dropped using a DROP statement as :


DROP VIEW view_name;

Example :
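As a sketch pertaining to the examination system of Q1 (the choice of columns is an assumption), a view can combine the student and grade_card tables so that a complete grade card can be queried as a single virtual table:

CREATE OR REPLACE VIEW student_grade_card AS
SELECT s.s_code, s.s_name, s.p_code, g.c_code,
       g.assignment_marks, g.theory_marks, g.practical_marks, g.status
FROM student s
JOIN grade_card g ON g.s_code = s.s_code;

The examination division (or the grade-card printing application) can then simply run, for example:

SELECT * FROM student_grade_card WHERE s_code = '105508022';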

Q5. Discuss the algorithm and the related cost of performing Selection
operation.
Answer : - The selection operation can be performed in a number of ways. Let us discuss
the algorithms and the related cost of performing selection operations.

 File Scan - Based on the filter condition, the file is traversed and the records are
fetched in this method. The following are the two basic files scan algorithms for
selection operation :
1. Linear search - This algorithm scans each file block and tests all records to
see whether they satisfy the selection condition.

Cost of searching records satisfying a condition = number of blocks that


contain tuples of the relation = BR
2. Binary search - This method of selection is applicable only when the records
are sorted on the search-key value and we have an equality condition, i.e.,
this method is not suitable for range operations or other kinds of conditions.

Cost = [log2 BR] + [(average number of tuples with the same search-key value) /
(blocking factor of the relation, i.e. the number of tuples in a block)]

 Index Scan - This method of selection uses indexes to traverse the record. The
search key should be indexed and should be used in the filter condition for an Index
scan to work.
1. Primary Key index with equality - This search retrieves a single record that
satisfies the corresponding equality condition. For example, SELECT * FROM
student WHERE enrolment_no = 105508022;

Cost = Height traversed in index to locate the block pointer + 1 (block of the
primary key is transferred for access).

2. Primary key index with comparison - In this method, comparison operators
like <, <=, >, and >= are used to retrieve more than one record from the file.
The comparison operator is applied to the primary key, on which the index is
also created.

For example, SELECT * FROM student WHERE enrolment_no >=


105508022; Since the index is on the primary key, records are already sorted
on the key. So the engine seeks the first record satisfying
enrolment_no = 105508022 and then reads the subsequent blocks whose records
satisfy the >= condition.

Cost = Number of block transfer to locate the value in index + Transferring all
the blocks of data satisfying that condition.

3. Equality on clustering index to retrieve multiple records - This is similar


to Primary Key index with equality, but here the index is not on any key
columns. Hence it can have more than one match and return multiple records.

For example, SELECT * FROM student WHERE student_name = ‘Amit’; There


may be any number of students with the name "Amit". They will have
different enrolment_no values but the same name. Hence the query retrieves multiple
records stored in different memory blocks.

4. Comparison on the search key of a secondary index - This is the same as a


secondary index with equality, but with a comparison operator. (In a secondary
index, the index is created on an attribute that is neither the primary key nor
the attribute on which the file is ordered.)
For example, SELECT * FROM student WHERE age >=
18; where enrolment_no is the primary key, student_name is an indexed column
and age is the comparison column / search-key column. Here the comparison
works similarly to a primary key index with comparison.

 Conjunction - This is the reference to the conditions in the WHERE clause


combined with AND operator. For example, SELECT * FROM student WHERE
programme_code = ‘MCA’ AND age >= 18;

Different combinations of indexes can be used for conjunctions – a single


index, a multiple-key (composite) index, or multiple indexes on different
columns, etc.

 Disjunction - This is the reference to the conditions in the WHERE clause combined
with the OR operator. Such a query can also be expressed with the UNION operator.

For example,

SELECT * FROM student WHERE programme_code = ‘MCA’ OR age >= 18;

or

SELECT * FROM student WHERE programme_code = ‘MCA’


UNION
SELECT * FROM student WHERE age >= 18;

 Negation - It refers to a not-equal condition in the WHERE clause. In most cases
the linear search method is used to fetch the records. If an index is present on the
search-key column, then the index is used to search the records.

For example, SELECT * FROM STUDENT WHERE programme_code <> ‘MCA’
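Which of the access paths above is actually available depends on the indexes that exist. A hedged sketch of index definitions matching the examples above (index names are illustrative; the primary-key index on enrolment_no is normally created implicitly with the key itself):

CREATE INDEX idx_student_name     ON student (student_name);         -- equality on a non-key column
CREATE INDEX idx_student_age      ON student (age);                  -- supports the age >= 18 comparison
CREATE INDEX idx_student_prog_age ON student (programme_code, age);  -- composite index for the conjunction query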

Q6. What is a timestamp? What is the use of timestamp protocols in


distributed database? How does timestamp generation take place in
distributed database?
Answer : - In a multiprogramming environment where multiple transactions can be
executed simultaneously, it is highly important to control the concurrency of transactions.
We have concurrency control protocols to ensure atomicity, isolation, and serializability of
concurrent transactions. Concurrency control protocols can be broadly divided into two
categories −
 Lock based protocols
 Time stamp based protocols

Timestamp Based Protocols

The most commonly used concurrency protocol is the timestamp based protocol. This
protocol uses either system time or logical counter as a timestamp.

Lock-based protocols manage the order between the conflicting pairs among transactions
at the time of execution, whereas timestamp-based protocols start working as soon as a
transaction is created.

Timestamp Concurrency Control Algorithms

Timestamp-based concurrency control algorithms use a transaction’s timestamp to


coordinate concurrent access to a data item to ensure serializability. A timestamp is a
unique identifier given by DBMS to a transaction that represents the transaction’s start
time.

These algorithms ensure that transactions commit in the order dictated by their
timestamps. An older transaction should commit before a younger transaction, since the
older transaction enters the system before the younger one.

Timestamp based ordering follows three rules to enforce serializability −

 Access Rule - When two transactions try to access the same data item
simultaneously, for conflicting operations, priority is given to the older transaction.
This causes the younger transaction to wait for the older transaction to commit
first.
 Late Transaction Rule - If a younger transaction has written a data item, then an
older transaction is not allowed to read or write that data item. This rule prevents
the older transaction from committing after the younger transaction has already
committed.
 Younger Transaction Rule - A younger transaction can read or write a data item
that has already been written by an older transaction.

Optimistic Concurrency Control Algorithm


In systems with low conflict rates, the task of validating every transaction for
serializability may lower performance. In these cases, the test for serializability is
postponed to just before commit. Since the conflict rate is low, the probability of aborting
transactions which are not serializable is also low. This approach is called optimistic
concurrency control technique.

This algorithm uses three rules to enforce serializability -

 Rule 1 - Given two transactions T1 and T2, if T1 is reading the data item which T2 is
writing, then T1’s execution phase cannot overlap with T2’s commit phase. T2 can
commit only after T1 has finished execution.
 Rule 2 - Given two transactions T1 and T2, if T1 is writing the data item that T2 is
reading, then T1’s commit phase cannot overlap with T2’s execution phase. T2 can
start executing only after T1 has already committed.
 Rule 3 - Given two transactions T1 and T2, if T1 is writing the data item which T2 is
also writing, then T1’s commit phase cannot overlap with T2’s commit phase. T2
can start to commit only after T1 has already committed.

Distributed Timestamp Concurrency Control

In a centralized system, timestamp of any transaction is determined by the physical clock


reading. In a distributed system, a single clock reading cannot be used, because clock
readings are not the same globally. Hence, for a distributed system, the timestamp
of a transaction is formed from the ID of the site together with the clock reading (or
logical counter) of that particular site.

For implementing timestamp ordering algorithms, each site has a scheduler that
maintains a separate queue for each transaction manager. During transaction, a
transaction manager sends a lock request to the site’s scheduler. The scheduler puts the
request to the corresponding queue in increasing timestamp order. Requests are
processed from the front of the queues in the order of their timestamps, i.e. the oldest
first.

Distributed Optimistic Concurrency Control Algorithm

The distributed optimistic concurrency control algorithm extends the optimistic


concurrency control algorithm and is governed by the following rules -

 Rule 1 - According to this rule, a transaction must be validated locally at all sites
when it executes. If a transaction is found to be invalid at any site, it is aborted.
Local validation guarantees that the transaction maintains serializability at the sites
where it has been executed. After a transaction passes local validation test, it is
globally validated.
 Rule 2 - According to this rule, after a transaction passes local validation test, it
should be globally validated. Global validation ensures that if two conflicting
transactions run together at more than one site, they should commit in the same
relative order at all the sites they run together. This may require a transaction to
wait for the other conflicting transaction, after validation before commit. This
requirement makes the algorithm less optimistic since a transaction may not be
able to commit as soon as it is validated at a site.

Q7. How are implementations of triggers in Oracle different from the


standard implementations ?
Answer : - Solve it Yourself

Q8. Explain multiple granularities with the help of an example. How is


locking done in such a case ?
Answer : -

 Multiple Granularity can be defined as hierarchically breaking up the database into


blocks which can be locked.
 The Multiple Granularity protocol enhances concurrency and reduces lock
overhead.
 It keeps track of what to lock and how to lock.
 It makes it easy to decide whether to lock or unlock a data item. This type
of hierarchy can be graphically represented as a tree.

For example : Consider a tree which has four levels of nodes.

 The first level or higher level shows the entire database.


 Below it are nodes of type area; the database consists of exactly these areas.
 The area consists of children nodes which are known as files. No file can be present
in more than one area.
 Finally, each file contains child nodes known as records. The file has exactly those
records that are its child nodes. No record is present in more than one file.
 Hence, the levels of the tree starting from the top level are as follows :
1. Database
2. Area
3. File
4. Record

In this example, the highest level shows the entire database. The levels below it are area,
file, and record.

Intention Mode Lock

In the 2-phase locking protocol, shared (S) and exclusive (X) lock modes are used, but
there are three additional lock modes with multiple granularity :

 Intention-shared (IS) - It contains explicit locking at a lower level of the tree but
only with shared locks.
 Intention-Exclusive (IX) - It contains explicit locking at a lower level with exclusive
or shared locks.
 Shared & Intention-Exclusive (SIX) - In this lock, the node is locked in shared
mode, and some node is locked in exclusive mode by the same transaction.
        IS      IX      S       SIX     X
IS      true    true    true    true    false
IX      true    true    false   false   false
S       true    false   true    false   false
SIX     true    false   false   false   false
X       false   false   false   false   false
The multiple-granularity locking protocol uses the intention lock modes to ensure
serializability. It requires that a transaction Ti that attempts to lock a node must follow
these protocols :

 Transaction Ti must follow the lock-compatibility matrix.


 Transaction Ti must lock the root of the tree first, and it can lock it in any mode.
 Transaction Ti can lock a node in S or IS mode only if Ti currently has the parent of
the node locked in either IX or IS mode.
 Transaction Ti can lock a node in X, SIX, or IX mode only if Ti currently has the parent
of the node locked in either IX or SIX mode.
 Transaction Ti can lock a node only if Ti has not previously unlocked any node (i.e.,
Ti is two phase).
 Transaction Ti can unlock a node only if Ti currently has none of the children of the
node locked.

Observe that in multiple-granularity, the locks are acquired in top-down order, and locks
must be released in bottom-up order.

 If transaction T1 reads record Ra2 in file Fa, then transaction T1 needs to lock the
database, area A1 and file Fa in IX mode. Finally, it needs to lock Ra2 in S mode.
 If transaction T2 modifies record Ra9 in file Fa, then it can do so after locking the
database, area A1 and file Fa in IX mode. Finally, it needs to lock the Ra9 in X mode.
 If transaction T3 reads all the records in file Fa, then transaction T3 needs to lock the
database, and area A1 in IS mode. At last, it needs to lock Fa in S mode.
 If transaction T4 reads the entire database, then T4 needs to lock the database in S
mode.

Note that transactions T1, T3 and T4 can access the database concurrently. Transaction T2
can execute concurrently with T1, but not with either T3 or T4.
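Many SQL implementations expose lock granularity directly through lock hints. The following SQL Server style sketch is only illustrative of coarse versus fine granularity (it does not reproduce the full IS/IX/SIX protocol, and the data values are assumed):

BEGIN TRANSACTION;
    -- fine granularity: lock only the rows that are read, and hold the lock until commit
    SELECT * FROM student WITH (ROWLOCK, HOLDLOCK) WHERE s_code = 'S0001';

    -- coarse granularity: take an exclusive lock on the whole table for the update
    UPDATE grade_card WITH (TABLOCKX) SET status = 'PASS' WHERE theory_marks >= 40;
COMMIT TRANSACTION;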
Q9. What are the characteristics of multimedia & mobile databases?
Explain the design challenges of these database?
Answer : -

Multimedia Databases

A Multimedia database (MMDB) is a collection of related multimedia data. The


multimedia data include one or more primary media data types such as text, images,
graphic objects (including drawings, sketches and illustrations), animation sequences,
audio and video. These data types are broadly categorized into three classes:

 Static media (time-independent: image and graphic object).


 Dynamic media (time-dependent: audio, video and animation).
 Dimensional media (3D game and computer aided drafting programs).

Contents of Multimedia Database Management System (MMDBMS)

 Media data - This is the multimedia data that is stored in the database such as
images, videos, audios, animation etc.
 Media format data - Information about the format of the media data after it goes
through the acquisition, processing, and encoding phases.
 Media keyword data - This contains the keyword data related to the media in the
database. For an image the keyword data can be date and time of the image,
description of the image etc.
 Media feature data - Content dependent data such as the distribution of colors,
kinds of texture and different shapes present in data.

Challenges of Multimedia Database

1. Multimedia databases contain data in a wide range of formats, such as .txt (text),


.jpg (images), .avi (video), .mp3 (audio), etc. It is difficult to convert one type of data
format to another.
2. Multimedia databases consume a lot of processing time, as well as bandwidth.
3. The data size of multimedia, such as video, is large; therefore, multimedia data often
require a large amount of storage.

Mobile Databases

A mobile database is data that mobile computing devices (for example, smartphones and
PDAs) store and share over a mobile network, or a database that is actually stored on the
mobile device itself. This could be a list of contacts, price information, distance travelled,
or any other information.
Many applications require the ability to download information from an information
repository and operate on this information even when out of range or disconnected. An
example of this is the contacts and calendar on a phone. In this scenario, a user would
require access to update information from files in the home directories on a server or
customer records from a database. The type of access and workload generated by such
users is different from the traditional workloads seen in today's client-server systems.

Mobile users must be able to work without a network connection due to poor or even non-
existent connections. A cache could be maintained to hold recently accessed data and
transactions so that they are not lost due to connection failure. Users might not require
access to truly live data, only recently modified data, and uploading of changes might be
deferred until the device reconnects.

Challenges of Mobile Database System

 Limited Resources - The CPU power and storage of mobile devices is continuously
increasing. However, they are far behind non-mobile systems such as servers on the
Internet. Due to the size of the database, limited CPU power and storage capacity,
mobile devices need to perform simple operations on local data available in the cache.
Limited storage capacity also makes it difficult to cache entire databases to a mobile
device.
 Power Consumption - The most prominent limitation of mobile device is power.
These devices rely entirely on battery power. Combined with the compact size of
many mobile devices, this often means unusually expensive batteries must be used
to obtain the necessary battery life.
 Disconnection - Weather, terrain, and the range from the nearest signal point can
all interfere with signal reception. Reception in tunnels, some buildings, and rural
areas is still poor. Interaction between a mobile device and a database is directly
affected by the device’s network connectivity. The two solutions approach to this
disconnection challenges are : 1) Prevent disconnections, 2) Cope with
disconnections. For mobile computers, allowing disconnections to happen and
recovering from them is the better solution for asynchronous operation caching
and reconciliation.
 Insufficient Bandwidth - Mobile access is generally slower than direct cable
connections. Using technologies such as GPRS and EDGE, and more recently 3G
networks, bandwidth has increased but is still lower than that of a wired
network. An asymmetry problem also arises because bandwidth in the downstream
direction is often much greater than bandwidth in the upstream direction.
 Limited Storage - Due to mobility and portability, the sizes of memory and hard
drive are smaller than the ones in the wired network. The consequences of this are
less stored/cached/replicated data, fewer installed applications, and more
communication.
Q10. What is association rule mining? Write the Apriori algorithm for finding
frequent item sets. Discuss it with suitable examples.
Answer : - Solve it Yourself

Q11. Draw a simple Use case and a class diagram for a university’s
examination system.
Answer : -
