
Also check QP

A1.2

Information System (IS): An Information System is any combination of information technology and people's activities using that technology to support operations, management, and decision-making.

DBMS interactions fall into 4 main groups:


* Data Definition. Defining new data structures for a database, removing data structures from the database, modifying the structure of existing data.
* Data Maintenance. Inserting new data into existing data structures, updating data in existing data structures, deleting data from existing data structures.
* Data Retrieval. Querying existing data by end-users and extracting data for use by application programs.
* Data Control. Creating and monitoring users of the database, restricting access to data in the database, and monitoring the performance of databases.

A.1.6

● Data concurrency, which ensures that users can access data at the same time
● Data consistency, which ensures that each user sees a consistent view of the data, including
visible changes made by the user's own transactions and committed transactions of other
users

ACID A.1.7

Atomicity states that database modifications must follow an “all or nothing” rule.

Consistency states that only valid data will be written to the database.

Isolation requires that multiple transactions occurring at the same time not impact each other's execution (movie booking example: two users must not both be able to book the same seat).

Durability ensures that any transaction committed to the database will not be lost. Durability
is ensured through the use of database backups and transaction logs that facilitate the
restoration of committed transactions in spite of any subsequent software or hardware
failures.
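The "all or nothing" rule can be sketched with Python's built-in sqlite3 module. The accounts table and the transfer helper below are invented for illustration; the point is that a failure mid-transaction triggers a rollback, so no partial update ever becomes visible.

```python
import sqlite3

# In-memory database with an illustrative accounts table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money between accounts as one all-or-nothing transaction."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        # Simulate a business-rule failure part-way through the transaction.
        row = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                           (src,)).fetchone()
        if row[0] < 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()
        return True
    except Exception:
        conn.rollback()  # undo the partial debit: atomicity
        return False

transfer(conn, "alice", "bob", 500)  # fails and is rolled back
```

After the failed transfer, both balances are exactly as they were before it started.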

Read and discuss A.1.8


A.1.9 Validation and verification

Data validation is the process of ensuring that a program operates on clean, correct and useful data.

● Data type validation;


● Range and constraint validation;
● Code and Cross-reference validation; and
● Structured validation

The simplest kind of data type validation verifies that the individual characters provided through user input are consistent with the expected characters of one or more known primitive data types, as defined in a programming language, data storage and retrieval mechanism, or otherwise.

Simple range and constraint validation may examine user input for consistency with a minimum/maximum range, or consistency with a test for evaluating a sequence of characters, such as one or more tests against regular expressions.
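A minimal sketch of a range check and a regular-expression constraint check in Python; the age limits and the postcode-style pattern are assumed examples, not fixed rules:

```python
import re

def validate_age(value):
    """Range check: the input must be a whole number between 0 and 120
    (illustrative limits)."""
    return value.isdigit() and 0 <= int(value) <= 120

def validate_code(value):
    """Constraint check via a regular expression: two letters, two digits,
    a space, one digit, two letters (an invented 'AB12 3CD' format)."""
    return re.fullmatch(r"[A-Z]{2}\d{2} \d[A-Z]{2}", value) is not None
```

Both functions return True or False, so they slot directly into form-handling code that rejects bad input with an error message.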

Check digits- Used for numerical data. An extra digit is added to a number, calculated from the other digits. The computer repeats this calculation when data are entered and compares the result. For example, the last digit of a book's ISBN is a check digit calculated modulo 10.
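Assuming the notes refer to the 13-digit ISBN, whose final digit is a modulo-10 check digit computed with alternating 1/3 weights, the calculation can be sketched as:

```python
def isbn13_check_digit(first12):
    """Compute the modulo-10 check digit from the first 12 digits of an
    ISBN-13. Digits are weighted 1, 3, 1, 3, ... from the left."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3)
                for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def is_valid_isbn13(isbn):
    """Validate a full ISBN-13, with or without hyphens."""
    digits = isbn.replace("-", "")
    return len(digits) == 13 and isbn13_check_digit(digits[:12]) == int(digits[12])
```

A single mis-keyed digit changes the weighted sum, so the recomputed check digit no longer matches and the entry is rejected.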

Cardinality check- Checks that a record has a valid number of related records.

Control totals: A total calculated over one or more numeric fields that appear in every record. This is a meaningful total, e.g., the total payment across a number of customers.

Data type checks- Checks the data type of the input and gives an error message if the input data does not match the chosen data type.

File existence check- Checks that a file with a specified name exists.

Format or picture check- Checks that the data is in a specified format.

Hash totals- A batch total calculated over one or more numeric fields that appear in every record, where the total itself has no business meaning.
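The distinction between the two totals can be illustrated with a hypothetical batch of customer payment records (field names and values invented for the example):

```python
# Hypothetical batch of customer payment records: (account_number, payment)
batch = [(1001, 250.0), (1042, 99.5), (1077, 10.0)]

# Control total: a meaningful sum - the total payment for the batch.
control_total = sum(payment for _, payment in batch)

# Hash total: the same idea applied to a field whose sum has no business
# meaning (account numbers); it exists only to detect lost or mis-keyed
# records when the batch is re-entered.
hash_total = sum(account for account, _ in batch)
```

If a record is dropped or an account number is mistyped during entry, the recomputed hash total will not match the one recorded with the batch.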

Find the difference between a hash total and a control total, and what referential integrity is

Limit check:- Unlike range checks, data are checked against one limit only (e.g. an upper bound).

Logic check:-Checks that an input does not yield a logical error, e.g., an input value should not
be 0 when it will divide some other number somewhere in a program.

Presence check:- Checks that important data is actually present and has not been missed out.

Range check- Checks that the data lie within a specified range of values.

Spelling and grammar check Looks for spelling and grammatical errors.

Uniqueness check- Checks that each value is unique. This can be applied to several fields (e.g. Address, First Name, Last Name).

Table Look Up Check A table look up check takes the entered data item and compares it to a
valid list of entries that are stored in a database table.
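A few of the checks above can be sketched as small Python helpers; the department lookup table is an invented example standing in for a valid list stored in a database table:

```python
# Assumed lookup table of valid entries (in practice, a database table).
VALID_DEPARTMENTS = {"HR", "IT", "SALES"}

def presence_check(value):
    """Presence check: the field must not be missing or blank."""
    return value is not None and str(value).strip() != ""

def lookup_check(value):
    """Table look-up check: the value must appear in the stored list
    of valid entries."""
    return value in VALID_DEPARTMENTS

def uniqueness_check(values):
    """Uniqueness check: no value may appear more than once."""
    return len(values) == len(set(values))
```

Each helper returns a boolean, so a data-entry form can run them in sequence and report the first check that fails.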

A.2

A.2.1 Define the terms

A "database management system" (DBMS) is a suite of computer software providing the interface between users and a database or databases.

The interactions catered for by most existing DBMSs fall into four main groups:
● Data definition – Defining new data structures for a database, removing data structures from the database, modifying the structure of existing data.
● Update – Inserting, modifying, and deleting data.
● Retrieval – Obtaining information either for end-user queries and reports or for processing by
applications.
● Administration – Registering and monitoring users, enforcing data security, monitoring
performance, maintaining data integrity, dealing with concurrency control, and recovering
information if the system fails.

1. Data dictionary management.


The DBMS stores definitions of the data elements and their relationships (metadata) in a data
dictionary. In turn, all programs that access the data in the database work through the DBMS.
The DBMS uses the data dictionary to look up the required data component structures and
relationships, thus relieving you from having to code such complex relationships in each
program.

2. Data storage management.


The DBMS creates and manages the complex structures required for data storage, thus
relieving you from the difficult task of defining and programming the physical data
characteristics.
3. Data transformation and presentation.
The DBMS transforms entered data to conform to required data structures. The DBMS relieves
you of the chore of making a distinction between the logical data format and the physical data
format. That is, the DBMS formats the physically retrieved data to make it conform to the user’s
logical expectations.

4. Security management.
The DBMS creates a security system that enforces user security and data privacy. Security
rules determine which users can access the database, which data items each user can access,
and which data operations (read, add, delete, or modify) the user can perform.

5. Multi-user access control.


To provide data integrity and data consistency, the DBMS uses sophisticated algorithms to
ensure that multiple users can access the database concurrently without compromising the
integrity of the database.

6. Backup and recovery management.


The DBMS provides backup and data recovery to ensure data safety and integrity. Recovery
management deals with the recovery of the database after a failure, such as a bad sector in the
disk or a power failure. Such capability is critical to preserving the database’s integrity.

7. Data integrity management.

The DBMS promotes and enforces integrity rules, thus minimizing data redundancy and
maximizing data consistency. The data relationships stored in the data dictionary are used to
enforce data integrity.

8. Database access languages and application programming interfaces.


The DBMS provides data access through a query language. A query language is a nonprocedural language: one that lets the user specify what must be done without having to specify how it is to be done. Structured Query Language (SQL) is the de facto query language and data access standard supported by the majority of DBMS vendors.
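The nonprocedural character of SQL shows up even in a tiny example run through Python's sqlite3 module: the query states what rows are wanted and the DBMS decides how to retrieve them (the employee table and its data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Ada", 60000), (2, "Ben", 45000), (3, "Cy", 70000)])

# Declarative: we say WHAT we want (names of well-paid employees, sorted);
# the DBMS works out HOW to scan, filter, and sort.
rows = conn.execute(
    "SELECT name FROM employee WHERE salary > 50000 ORDER BY name"
).fetchall()
```

The same query runs unchanged whether the table holds three rows or three million; only the DBMS's chosen access path differs.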

A.2.3 Describe how a DBMS can be used to promote data security.

Security risks to database systems include, for example:

● Unauthorized or unintended activity or misuse by authorized database users or hackers


● Malware infections
● Overloads
● Physical damage to database servers
● Design flaws and programming bugs in databases and the associated programs and systems, creating various security vulnerabilities
● Data corruption and/or loss caused by the entry of invalid data or commands

Different ways for protection


● Access control
● Auditing-A facility of the DBMS that enables DBAs to track the use of database resources and
authority.
● 2 factor Authentication
● Encryption
● Integrity controls
● Backups- recovery program
● Application security
● Database Security applying Statistical Methods
● Database forensics

https://www.iitianacademy.com/ibdp-computer-science-slhl-ib-style-questions-bank-all-papers/#IBCSPaper2

Database Security applying Statistical Methods

When unauthorized changes are made directly to the database, there should be an algorithm, based on cryptography and other statistical methods, that tracks the events and users and reports them to the owners. These approaches map a large dataset onto a small digital fingerprint which is constantly updated; the stored fingerprints are then matched against freshly computed ones, revealing details about the changes, the user, and more.
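One way to sketch the fingerprint idea, using an ordinary cryptographic hash from the standard library rather than any particular vendor method (the row data is invented):

```python
import hashlib

def fingerprint(rows):
    """Map a dataset to a small digital fingerprint (a SHA-256 digest).
    Rows are sorted first so that equal data always yields the same
    fingerprint regardless of retrieval order."""
    h = hashlib.sha256()
    for row in sorted(rows):
        h.update(repr(row).encode())
    return h.hexdigest()

# A stored baseline fingerprint vs one recomputed after tampering.
baseline = fingerprint([("alice", 100), ("bob", 50)])
tampered = fingerprint([("alice", 100), ("bob", 5000)])
```

Because any change to any row changes the digest, comparing the stored fingerprint with a freshly computed one detects that an unauthorized change happened, although locating which row changed requires keeping finer-grained fingerprints.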

A 2.5

The three schema levels

The structure we use to talk about the format, size, and other details of how we save our data is known as a schema. Data independence - users should not interact directly with the stored data.

First is the external schema

This is essentially a set of unique views, one per user or group of users: particular users can only see certain data, while others can see different data and may also insert data. It can also be seen from a security point of view, where authorization is required to access and edit data.

A good example of this is a website where students and faculty each see certain data.

Second is the conceptual schema

This describes the logical structure of the whole database: what data is stored and the relationships among it. Example: an E-R model, or tables in an RDBMS.

● The physical schema can be changed without changing applications:
● the DBMS must only change the mapping from conceptual to physical.
● This is referred to as physical data independence.

Physical Data Level

The physical schema describes details of how data is stored: files, indices, etc., on the random-access disk system. It also typically describes the record layout of files and the type of files (hash, B-tree, flat). This is where the data is actually stored.

Problem:
● Routines are hardcoded to deal with physical representation.
● Changes to data structures are difficult to make.
● Application code becomes complex since it must deal with details.
● Rapid implementation of new features is very difficult.

A.2.6

A data definition language (DDL) is a computer language used to create and modify the
structure of database objects in a database. These database objects include views, schemas,
tables, indexes, etc.

• CREATE: This command builds a new table and has a predefined syntax. The CREATE statement syntax is CREATE TABLE [table name] ([column definitions]) [table parameters]. Example: CREATE TABLE Employee (EmployeeId INTEGER PRIMARY KEY, FirstName CHAR(50) NULL, LastName CHAR(75) NOT NULL).
• ALTER: An ALTER command modifies an existing database table. This command can add additional columns, drop existing columns, and even change the data type of columns in a database table. The ALTER command syntax is ALTER [object type] [object name] [parameters]. Example: ALTER TABLE Employee ADD DOB DATE.
• DROP: A DROP command deletes a table, index, or view. The DROP statement syntax is DROP [object type] [object name]. Example: DROP TABLE Employee.
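The three commands can be run end-to-end against SQLite via Python's sqlite3 module; the column names from the notes are written without embedded spaces so they are valid SQL identifiers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# CREATE: build the Employee table from the notes.
conn.execute("""CREATE TABLE Employee (
    EmployeeId INTEGER PRIMARY KEY,
    FirstName  CHAR(50),
    LastName   CHAR(75) NOT NULL)""")

# ALTER: add a date-of-birth column to the existing table.
conn.execute("ALTER TABLE Employee ADD DOB DATE")

# Inspect the resulting structure (PRAGMA table_info lists the columns).
columns = [row[1] for row in conn.execute("PRAGMA table_info(Employee)")]

# DROP: delete the table entirely.
conn.execute("DROP TABLE Employee")
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
```

After the ALTER, the table has four columns; after the DROP, the database contains no tables at all.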

A.2.7
A.2.8
Data modelling is a process used to define and analyze data requirements needed to
support the business processes within the scope of corresponding information
systems in organizations.

First, the data requirements are recorded as a conceptual data model: specifications used to discuss the requirements with stakeholders. The conceptual model is then translated into a logical data model, which documents structures that can be implemented in databases. The last step is turning the logical data model into a physical data model, which organizes the data into tables.

A.2.12 Importance of referential integrity in a normalized database

Data integrity:
● Ensures the quality of data within a database
● Is about the actual values that are stored and used in an application's data structures
● Ensures that an application exerts deliberate control over every process that uses your data, to ensure the continued correctness of the information
● Is applied through the careful implementation of several key concepts such as
normalizing data, defining business rules, providing referential integrity and
validating the data
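Referential integrity enforcement can be sketched with SQLite's foreign-key support; the department/employee schema is an invented example, and note that SQLite requires the foreign_keys pragma to be switched on per connection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite: enable FK enforcement
conn.execute("CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE employee (
    id INTEGER PRIMARY KEY,
    dept_id INTEGER REFERENCES department(id))""")
conn.execute("INSERT INTO department VALUES (1, 'IT')")

def insert_employee(conn, emp_id, dept_id):
    """Returns True only if the new row satisfies referential integrity,
    i.e. dept_id actually exists in the department table."""
    try:
        conn.execute("INSERT INTO employee VALUES (?, ?)", (emp_id, dept_id))
        return True
    except sqlite3.IntegrityError:  # FK violation: no such department
        return False
```

The DBMS, not the application, rejects the orphan row, which is exactly the "deliberate control" the bullet points describe.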

Normalization is a method for reducing data redundancy

Data redundancy is the overlapping of repeated information, increasing the size of the database and causing anomalies such as:

● Deletion issues
● Insertion issues
● Update issues

First Normal Form

● Each column should have atomic values
● The headings should be unique
● The values in a column should have a specific format or data type
● The order in which the data is stored should not carry any meaning

Second Normal Form

Must be in 1NF
No partial dependencies

A partial dependency occurs when a column depends on only part of a composite primary key rather than on the whole key, such as a teacher's name depending on the subject id alone rather than on the (student id, subject id) pair.

This can be removed by splitting the table into two tables.
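The split can be sketched in plain Python with invented enrolment data: the teacher column, which depends only on the subject id, moves out into its own subject table:

```python
# Rows keyed by the composite key (student_id, subject_id); the teacher
# depends only on subject_id - a partial dependency.
enrolments = [
    (1, "CS",   "Mr Smith", 85),
    (1, "MATH", "Ms Jones", 90),
    (2, "CS",   "Mr Smith", 70),
]

# 2NF split: the partially dependent column gets its own table keyed by
# subject_id alone; the marks table keeps only the full-key-dependent data.
subjects = {subj: teacher for _, subj, teacher, _ in enrolments}
marks = [(student, subj, mark) for student, subj, _, mark in enrolments]
```

Each teacher's name is now stored once per subject instead of once per enrolment, so renaming a teacher is a single update rather than many.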

Third Normal Form

Follow 2NF

It should not have transitive dependencies

Transitive dependency is when a column depends on another column which isn’t a primary key

Boyce-Codd Normal Form

Must be in 3NF

For any dependency A → B, A should be a superkey. BCNF covers the case left over from 3NF, where a non-key column determines (part of) a candidate key: the dependency runs from a non-key attribute to key attributes rather than the other way round.

Database transaction

A transaction comprises a unit of work performed within a database management system (or similar system) against a database, and treated in a coherent and reliable way independent of other transactions.
