DBMS - Old Q&A For 10 and 5 Marks


Purpose of Database System

In the early days, file-processing systems were used to store data directly on the operating system. Such a system stores permanent records in various files, and it needs different application programs to extract records from, and add records to, the appropriate files. Keeping organizational information in a file-processing system has a number of major disadvantages.
 Data Redundancy and Inconsistency
Files and application programs are created by different programmers over a long period, so the files are likely to have different structures, and the programs may be written in several programming languages. Moreover, the same information may be duplicated in several files. For example, if a student has a double major (say, Music and Dance), the address and telephone number of that student may appear both in a file of student records for the Music department and in a file for the Dance department. This redundancy
leads to higher storage and access cost. In addition, it may lead to data inconsistency; that is, the various copies of the
same data may no longer agree. For example, a changed student address may be reflected in the Music department
records but not elsewhere in the system.
 Difficulty in Accessing Data
Suppose that one of the university clerks needs to find out the names of all students who live within a particular postal-code
area. The clerk asks the data-processing department to generate such a list. Because the designers of the original system
did not anticipate this request, there is no application program on hand to meet it. There is, however, an application program
to generate the list of all students. The university clerk now has two choices: either obtain the list
of all students and extract the needed information manually or ask a programmer to write the necessary application
program. The point here is that conventional file-processing environments do not allow needed data to be retrieved in a
convenient and efficient manner. More responsive data-retrieval systems are required for general use.
 Data Isolation
Because data are scattered in various files, and files may be in different formats, writing new application programs to
retrieve the appropriate data is difficult.
 Integrity Problems
The data values stored in the database must satisfy certain types of consistency constraints. Suppose the university
maintains an account for each department, and records the balance amount in each account. Suppose also that the
university requires that the account balance of a department may never fall below zero. Developers enforce these
constraints in the system by adding appropriate code in the various application programs. However, when new constraints
are added, it is difficult to change the programs to enforce them. The problem is compounded when constraints involve
several data items from different files.
 Atomicity Problems
A computer system, like any other device, is subject to failure. In many applications, it is crucial that, if a failure occurs, the data
be restored to the consistent state that existed prior to the failure. Consider a program to transfer $500 from the account
balance of department A to the account balance of department B. If a system failure occurs during the execution of the
program, it is possible that the $500 was removed from the balance of department A but was not credited to the balance of
department B, resulting in an inconsistent database state. Clearly, it is essential to database consistency that either both the
credit and debit occur, or that neither occur. That is, the funds transfer must be atomic—it must happen in its entirety or not
at all. It is difficult to ensure atomicity in a conventional file-processing system.
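A DBMS addresses this by executing the transfer as a transaction: either both updates commit, or a failure rolls both back. The following minimal sketch uses Python's sqlite3 module purely as an illustration; the account table, department names, and amounts come from the example above, and the simulated failure is of course an assumption:

```python
import sqlite3

# In-memory database with the department-account table from the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (dept TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [("A", 10000), ("B", 2000)])
conn.commit()

def transfer(conn, src, dst, amount, fail=False):
    """Move `amount` from src to dst atomically: both updates happen, or neither."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE account SET balance = balance - ? WHERE dept = ?",
                         (amount, src))
            if fail:  # simulate a system failure between the debit and the credit
                raise RuntimeError("system failure mid-transfer")
            conn.execute("UPDATE account SET balance = balance + ? WHERE dept = ?",
                         (amount, dst))
    except RuntimeError:
        pass  # the rollback has already undone the partial debit

transfer(conn, "A", "B", 500)             # succeeds: A = 9500, B = 2500
transfer(conn, "A", "B", 500, fail=True)  # fails: balances are unchanged
print(conn.execute("SELECT dept, balance FROM account ORDER BY dept").fetchall())
# [('A', 9500), ('B', 2500)]
```

Without the transaction (for example, committing after each UPDATE separately), the simulated failure would leave A debited but B not credited, exactly the inconsistent state described above.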
 Concurrent-access Anomalies
To improve overall system performance and obtain a faster response, many systems allow multiple users to update the data
simultaneously. Indeed, today, the largest Internet retailers may have millions of accesses per day to their data by shoppers.
In such an environment, interaction of concurrent updates is possible and may result in inconsistent data. Consider
department A, with an account balance of $10,000. If two department clerks debit the account balance (by say $500 and
$100, respectively) of department A at almost exactly the same time, the result of the concurrent executions may leave the
budget in an incorrect (or inconsistent) state. Suppose that the programs executing on behalf of each withdrawal read the
old balance, reduce that value by the amount being withdrawn, and write the result back. If the two programs run
concurrently, they may both read the value $10,000, and write back $9500 and $9900, respectively. Depending on which
one writes the value last, the account balance of department A may contain either $9500 or $9900, rather than the correct
value of $9400. To guard against this possibility, the system must maintain some form of supervision. But supervision is
difficult to provide because data may be accessed by many different application programs that have not been coordinated
previously. As another example, suppose a registration program maintains
a count of students registered for a course, in order to enforce limits on the number of students registered. When a student
registers, the program reads the current count for the course, verifies that the count is not already at the limit, adds one to
the count, and stores the count back in the database. Suppose two students register concurrently, with the count at (say) 39.
The two program executions may both read the value 39, and both would then write back 40,
leading to an incorrect increase of only 1, even though two students successfully registered for the course and the count
should be 41. Furthermore, suppose the course registration limit was 40; in the above case both students would be able to
register, leading to a violation of the limit of 40 students.
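The lost-update and registration anomalies above both follow the same read-modify-write pattern. The interleaving can be sketched deterministically in a few lines (the figures are the ones from the example; real systems interleave via threads or separate processes):

```python
balance = 10000

# Interleaved execution: both clerks read before either writes.
read1 = balance          # clerk 1 reads 10000
read2 = balance          # clerk 2 also reads 10000
balance = read1 - 500    # clerk 1 writes back 9500
balance = read2 - 100    # clerk 2 overwrites it with 9900: the $500 debit is lost
print(balance)           # 9900, not the correct 9400

# Serialized execution (what locking or supervision enforces):
balance = 10000
balance = balance - 500  # clerk 1 runs to completion first
balance = balance - 100  # clerk 2 then starts from 9500
print(balance)           # 9400
```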
 Security Problems
Not every user of the database system should be able to access all the data. For example, in a university, payroll personnel
need to see only that part of the database that has financial information. They do not need access to information about
academic records. But, since application programs are added to the file-processing system in an ad hoc manner, enforcing
such security constraints is difficult. These difficulties, among others, prompted the development
of database systems. In what follows, we shall see the concepts and algorithms that enable database systems to solve the
problems with file-processing systems. In most of this book, we use a university organization as a running example of a
typical data-processing application.
1.3 Users of Database
A primary goal of a database system is to retrieve information from and store new information into the database. People who
work with a database can be categorized as database users or database administrators.
Database Users and User Interfaces
There are four different types of database-system users, differentiated by the way they expect to interact with the system.
Different types of user interfaces have been designed for the different types of users.
 Naive users
Naive users are unsophisticated users who interact with the system by invoking one of the application programs that have
been written previously. For example, a clerk in the university who needs to add a new instructor to department A invokes a
program called new hire. This program asks the clerk for the name of the new instructor, her new ID, the name of the
department (that is, A), and the salary. The typical user interface for naive users is a forms interface, where
the user can fill in appropriate fields of the form. Naive users may also simply read reports generated from the database. As
another example, consider a student, who during class registration period, wishes to register for a class by using a Web
interface. Such a user connects to a Web application program that runs at a Web server. The application first verifies the
identity of the user, and allows her to access a form where she enters the desired information. The form
information is sent back to the Web application at the server, which then determines if there is room in the class (by
retrieving information from the database) and if so adds the student information to the class roster in the database.
 Application programmers
These users are computer professionals who write application programs. Application programmers can choose from many
tools to develop user interfaces. Rapid application development (RAD) tools are tools that enable an application programmer
to construct forms and reports with minimal programming effort.
 Sophisticated users
Sophisticated users interact with the system without writing programs. Instead, they form their requests either using a database query language or by using tools such as data analysis software. Analysts who submit queries to explore data in the database fall in this category. Specialized users are sophisticated users who write specialized database applications that do not fit into the traditional data-processing framework. Among these applications are computer-aided design systems, knowledge-base and expert systems, systems that store data with complex data types (for example, graphics data and audio data), and environment-modelling systems.
Database Administrator
One of the main reasons for using a DBMS is to have central control of both the data and the programs that access those
data. A person who has such central control over the system is called a database administrator (DBA). The functions of a
DBA include:

• Schema definition. The DBA creates the original database schema by executing a set of data definition statements in the
DDL.
• Storage structure and access-method definition.
• Schema and physical-organization modification. The DBA carries out changes to the schema and physical organization
to reflect the changing needs of the organization, or to alter the physical organization to improve performance.
• Granting of authorization for data access. By granting different types of authorization, the database administrator can
regulate which parts of the database various users can access. The authorization information is kept in a special system
structure that the database system consults whenever someone attempts to access the data in the system.
• Routine maintenance. Examples of the database administrator’s routine maintenance activities are:
 Periodically backing up the database, either onto tapes or onto remote servers, to prevent loss of data in case of disasters
such as flooding.
 Ensuring that enough free disk space is available for normal operations, and upgrading disk space as required.
 Monitoring jobs running on the database and ensuring that performance is not degraded by very expensive tasks
submitted by some users.

Introduction to Relational Database Management System


The relational model is today the primary data model for commercial data processing applications. It attained its primary
position because of its simplicity, which eases the job of the programmer, compared to earlier data models such as the
network model or the hierarchical model.
A relational database consists of a collection of tables, each of which is assigned a unique name. For example, consider the instructor table of Figure 2.1, which stores information about instructors. The table has four column headers: ID, name, dept name, and salary.

Figure 2.1: Relational Database


Thus, in the relational model the term relation is used to refer to a table, while the term tuple is used to refer to a row.
Similarly, the term attribute refers to a column of a table. The benefits of a database that has been designed according to the
relational model are numerous. Some of them are:
• Data entry, updates and deletions will be efficient.
• Data retrieval, summarization and reporting will also be efficient.
• Since the database follows a well-formulated model, it behaves predictably.
• Since much of the information is stored in the database rather than in the application, the database is somewhat self-
documenting.
• Changes to the database schema are easy to make.
The objective of this chapter is to explain the basic principles behind relational database design and demonstrate how to
apply these principles when designing a database.
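As a concrete illustration of these terms, the instructor relation of Figure 2.1 can be built as a table whose rows are tuples and whose columns are attributes. The sketch below uses Python's sqlite3; the column names follow the figure, while the sample rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One relation = one table; each tuple is a row, each attribute a column.
conn.execute("""CREATE TABLE instructor (
    ID        TEXT PRIMARY KEY,
    name      TEXT,
    dept_name TEXT,
    salary    INTEGER)""")
conn.executemany("INSERT INTO instructor VALUES (?, ?, ?, ?)", [
    ("10101", "Srinivasan", "Comp. Sci.", 65000),  # sample tuples (assumed)
    ("12121", "Wu",         "Finance",    90000),
])
for tup in conn.execute("SELECT * FROM instructor ORDER BY ID"):
    print(tup)  # each printed tuple is one row of the relation
```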
_______________________________________________________________
SQL Data Control Language
SQL has an authorization sublanguage, Data Control Language that includes statements to grant privileges to and revoke
privileges from users. A privilege is an action, such as creating, executing, reading, updating, or deleting, that a user is
permitted to perform on database objects.
In standard SQL, the creator of a schema is given all privileges on all the objects (tables, views, roles, applications) in it, and
can pass those privileges on to others. Ordinarily, only the creator of the schema can modify the schema itself (adding
tables, columns, and so on). The statement for granting privileges has the following form:
GRANT {ALL PRIVILEGES | privilege-list }
ON { object-name }
TO {PUBLIC | user-list | role-list } [ WITH GRANT OPTION ];
The possible privileges for base tables are SELECT, DELETE, INSERT, UPDATE, or REFERENCES(col-name). If a table is named in the ON clause, then ALL PRIVILEGES includes all of these operations. If a view is named in the ON clause, and the view was constructed in such a way that it is updatable, the SELECT, DELETE, INSERT, and UPDATE privileges can be granted on that view. For views that are not updatable, only SELECT can be granted. The UPDATE privilege can be made more restrictive by specifying a column list in parentheses after the word UPDATE, restricting the user to updating only certain columns, as in:
GRANT UPDATE ON Student(major) TO U101;
The REFERENCES privilege is applied to columns that may be used as foreign keys. This privilege allows the user to refer to those columns in creating foreign key integrity constraints. For example, to allow a user who can update the Enroll table to be able to reference stuId in the Student table in order to match its values for the Enroll table, we might write:
GRANT REFERENCES (stuId) ON Student TO U101;
The user list in the TO clause can include a single user, several users, or all users (the public). The optional WITH GRANT OPTION clause gives the newly authorized user(s) permission to pass the same privileges to others. For example, we could write:
GRANT SELECT, INSERT, UPDATE ON Student TO U101, U102, U103 WITH GRANT OPTION;
Users U101, U102, and U103 would then be permitted to write SQL SELECT, INSERT, and UPDATE statements for the Student table, and to pass that permission on to other users. Because of the ability of users with the grant option to authorize other users, the system must keep track of authorizations using a grant diagram, also called an authorization graph. Figure 7.6 shows an authorization graph.

Figure 7.6: Authorization Graph

Here, the DBA, who (we assume) is the creator of the schema, gave a specific privilege (for example, to use SELECT on the Student table) WITH GRANT OPTION to users U1, U2, and U3. We will use a double arrowhead to mean granting with grant option, and a single arrowhead to mean without it. A solid outline for a node will mean that the node has received the grant option, and a dashed outline will mean it has not. U1 passed along the privilege to U21 and U22, both without the grant option. U2 also passed the privilege to U22, this time with the grant option, and U22 passed the privilege to U31, without the grant option. U3 authorized U23 and U24, both without the grant option. Note that if we give a different privilege to one of these users, we will need a new node to represent the new privilege. Each node on the graph represents a combination of a privilege and a user. SQL DCL includes the capability to create user roles. A role can be thought of as a set of operations that should be performed by an individual or a group of individuals as part of a job.
For example, in a university, advisors may need to be able to read student transcripts of selected students, so there may be
an Advisor role to permit that. Depending on the policies of the university, the Advisor role might also include the privilege of
inserting enrolment records for students at registration time. Students may be permitted to perform SELECT but not
UPDATE operations on their personal data, so there may be a Student role that permits such access.
Once the DBA has identified a role, a set of privileges is granted for the role, and then user accounts can be assigned the
role. Some user accounts may have several roles. To create a role, we write statements such as:
CREATE ROLE AdvisorRole;
CREATE ROLE FacultyRole;
We then grant privileges to the role just as we would to individuals, by writing statements such as:
GRANT SELECT ON Student TO AdvisorRole;
GRANT SELECT, UPDATE ON Enroll TO AdvisorRole;
GRANT SELECT ON Enroll TO FacultyRole;
To assign a role to a user, we write a statement such as:
GRANT AdvisorRole TO U999;
We can even assign a role to another role by writing, for example:
GRANT FacultyRole TO AdvisorRole;
This provides a means of inheriting privileges through roles. The SQL DCL statement to remove privileges has
this form:
REVOKE {ALL PRIVILEGES | privilege-list} ON object-list
FROM {PUBLIC | user-list | role-list} [CASCADE | RESTRICT];
For example, for U101, to whom we previously granted SELECT, INSERT, and UPDATE on Student with the grant option, we could remove some privileges by writing this:
REVOKE INSERT ON Student FROM U101;
This revokes U101's ability both to insert Student records and to authorize others to insert Student records. We can revoke just the grant option, without revoking the insert, by writing this:
REVOKE GRANT OPTION FOR INSERT ON Student FROM U101;
If an individual has the grant option for a certain privilege and the privilege or the grant option on it is
later revoked, all users who have received the privilege from that individual have their privilege revoked as well. In this way,
revocations cascade, or trigger other revocations. If a user obtained the same privilege from two authorizers, one of whom
has authorization revoked, the user still retains the privilege from the other authorizer. Thus, if the DBA revoked the
authorization of user U1, U21 would lose all privileges, but U22 would retain whatever privileges were received from U2.
Since U22 has the grant option, user U21 could regain privileges from U22. In this way, unscrupulous users could conspire
to retain privileges despite attempts by the DBA to revoke them. For this reason, the DBA should be very careful about
passing the grant option to others. If the RESTRICT option is specified, the system checks to see if there are any cascading
revocations and returns an error if they exist, without executing the revoke statement. CASCADE is the default. When a
privilege is revoked, the authorization graph is modified by removing the node(s) that lose their privileges.
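The cascading rule can be restated in graph terms: after any revocation, a user retains the privilege only if it is still reachable from the DBA through grantors who hold it. A small sketch of this reachability check, using the user names of Figure 7.6 (the exact edge list is an assumption about the figure):

```python
# grants: grantor -> list of (grantee, with_grant_option), as in Figure 7.6.
grants = {
    "DBA": [("U1", True), ("U2", True), ("U3", True)],
    "U1":  [("U21", False), ("U22", False)],
    "U2":  [("U22", True)],
    "U22": [("U31", False)],
    "U3":  [("U23", False), ("U24", False)],
}

def holders(grants):
    """Users who still hold the privilege: those reachable from the DBA."""
    held, frontier = set(), ["DBA"]
    while frontier:
        grantor = frontier.pop()
        for user, _ in grants.get(grantor, []):
            if user not in held:
                held.add(user)
                frontier.append(user)
    return held

def revoke(grants, grantor, grantee):
    """Remove one grant edge; cascading losses fall out of reachability."""
    grants[grantor] = [(u, o) for u, o in grants[grantor] if u != grantee]

revoke(grants, "DBA", "U1")
print(sorted(holders(grants)))
# U21 drops out (its only grantor was U1); U22 survives via U2's grant.
```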
________________________________________________________________________________________

Implementation of VPD
Virtual Private Database (VPD) enables you to create security policies to control database access at the row and column
level. Essentially, Virtual Private Database adds a dynamic WHERE clause to a SQL statement that is issued against the
table, view, or synonym to which a Virtual Private Database security policy was applied. You can apply Virtual Private
Database policies to SELECT, INSERT, UPDATE, INDEX, and DELETE statements. VPD enables administrators to define
and enforce row level access control policies based on session attributes using two
features called Fine – grained access control: associate security policies to database objects Application Context: define and
access application or session attributes.
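Conceptually, a VPD policy is a function that returns a predicate, which the server appends to the statement at run time. The vendor-neutral sketch below only illustrates the rewriting idea; the table, session attributes, and policy logic are invented, and real Oracle VPD registers PL/SQL policy functions through the DBMS_RLS package rather than rewriting strings in the application:

```python
def patient_policy(session):
    """Policy function: return the predicate to append for this session."""
    if session["role"] == "doctor":
        return f"dept_id = {session['dept_id']}"  # doctors see their department only
    return "1 = 0"                                # everyone else sees no rows

def apply_policy(sql, session, policy):
    """Rewrite the query the way VPD does: add a dynamic WHERE clause."""
    predicate = policy(session)
    joiner = " AND " if " WHERE " in sql.upper() else " WHERE "
    return sql + joiner + predicate

session = {"role": "doctor", "dept_id": 3}
print(apply_policy("SELECT * FROM patients", session, patient_policy))
# SELECT * FROM patients WHERE dept_id = 3
```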
10.3.1 Implementing a VPD Using Views and Its Limitations
Preventing the loss of data, and in particular protecting data from unauthorized access, remains an important goal of any database management system. While views can provide fairly granular access control, they have limitations which make them less than optimal for very fine-grained access control. Views are not always practical when users need a lot of them to enforce a policy. In our example, the Health Clinic has 7 departments, so we created 7 VIEW objects; this becomes an overhead when the organization is very large, that is, when an organization has hundreds of departments. In the case of column-level security, maintaining triggers on VIEW objects for insert operations is an extra burden on the DBA. While applications may incorporate and enforce security through views, users often need access to base tables to run
reports or conduct ad-hoc queries. Users who have privileges on base tables are able to bypass the security enforcement
provided by views. Note that this is a general problem of embedding security in applications instead of enforcing security
through database mechanisms, but it is exacerbated when security is enforced on views and not on the data itself. VPD
using policies provides a flexible mechanism for building applications that enforce security policies only where such control is necessary. By dynamically appending a predicate to SQL statements, VPD limits access to data at the row level and ties the security policy to the table itself.
10.3.2 Application context
Application context is Oracle-specific functionality that allows users to set database application variables that can be retrieved by database sessions. These variables can be used for security context-based or user-defined environmental attributes. A user can identify the client host name, the IP address of the connected session, or the operating system user name of a connected session by using the application context function SYS_CONTEXT in conjunction with predefined user-environment attributes, known as USERENV attributes, which are grouped as a namespace.
Example:
SQL> SELECT SYS_CONTEXT('USERENV', 'CURRENT_USER') FROM DUAL;

SYS_CONTEXT('USERENV','CURRENT_USER')
-------------------------------------
SYSTEM
The database session – based application context is managed entirely within Oracle Database. Oracle Database sets the
values, and then when the user exits the session, automatically clears the application context values stored in cache. Any
application that accesses this database will need to use this application context to permit or prevent user access to that
application. Application contexts are useful for the following purposes:
 Enforcing fine-grained access control
 Preserving user identity across multitier environments
 Serving as a holding area for name-value pairs that an application can define, modify, and access

10.3.3 Fine-Grained Access Control / Policy-Based Access Control


Fine-grained mechanisms support access control down to the tuple level. The conventional view mechanisms have a number of shortcomings. A naïve solution to enforce fine-grained authorization would require the specification of a view for each tuple, or part of a tuple, that is to be protected. Moreover, because access control policies are often different for different users, the number of views would further increase. Furthermore, application programs
would have to code different interfaces for each user, or group of users, as queries and other data management commands
would need to use for each user, or group of users, the correct view. Modifications to access control policies would also
require the creation of new views with consequent modifications to application programs. Alternative approaches that
address some of these issues have been proposed, and these approaches are based on the idea that queries are written
against the base tables and, then, automatically rewritten by the system against the view available to the user. These
approaches do not require that we code different interfaces for different users and, thus, address one of the main problems
in the use of conventional view mechanisms.
Figure 10.3: Fine-Grained Access Control

Authentication and Password security can be provided to Database


Authentication is the process of validating the identity of someone or something. It uses information provided to the
authenticator to determine whether someone or something is in fact who or what it is declared to be. In private and public
computing systems, for example, in computer networks, the process of authentication commonly involves someone, usually
the user, using a password provided by the system administrator to log on. The user's possession of a password is meant to guarantee that the user is authentic. It means that, at some previous time, the user requested a password from the system administrator, and the administrator assigned or registered a self-selected one.

Generally, authentication requires the presentation of credentials or items of value to really prove the claim of who you are.
The items of value or credential are based on several unique factors that show something you know, something you have, or
something you are:
Something you know: This may be something you mentally possess. This could be a password, a secret word known by
the user and the authenticator. Although this is inexpensive administratively, it is prone to people’s memory lapses and other
weaknesses including secure storage of the password files by the system administrators. The user may use the same
password on all system logons or may change it periodically, which is recommended. Examples using this
factor include passwords, passphrases, and PINs (Personal Identification Numbers).
Something you have: This may be any form of issued or acquired self-identification such as:
o SecurID
o CryptoCard
o Activcard
o Safeword
o and many other forms of cards and tags.
This form is slightly safer than something you know because it is hard to abuse individual physical identifications. For
example, it is harder to lose a smart card than to remember the card number.
Something you are: This being a naturally acquired physical characteristic such as voice, fingerprint, iris pattern and other
biometrics. Although biometrics are very easy to use, this ease of use can be offset by the expenses of purchasing biometric
readers. Examples of items used in this factor include fingerprints, retinal patterns, DNA patterns, and hand geometry. In
addition to the top three factors, another factor, though indirect, also plays a part in authentication.
Somewhere you are: This is usually based on either the physical or logical location of the user. The user, for example, may be on a terminal that can be used to access certain resources.
In general, authentication takes one of the following three forms:
 Basic authentication involving a server. The server maintains a user file of either passwords and user names or some
other useful piece of authenticating information. This information is always examined before authorization is granted. This is
the most common way computer network systems authenticate users. It has several weaknesses though, including
forgetting and misplacing authenticating information
such as passwords.
 Challenge-response, in which the server or any other authenticating system generates a challenge to the host requesting authentication and expects a response.
 Centralized authentication, in which a central server authenticates users on the network and in addition also authorizes
and audits them. These three processes are done based on server action. If the authentication process is successful, the
client seeking authentication is then authorized to use the requested system resources. However, if the authentication
process fails, the authorization is denied. The process of auditing is done by the server to record all information from these
activities and store it for future use.
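The challenge-response form can be sketched with a keyed hash: the server issues a fresh random challenge, and the client proves knowledge of the shared secret without ever transmitting it. This is a minimal illustration, not any specific protocol; the secret value and the choice of HMAC-SHA256 are assumptions:

```python
import hmac, hashlib, secrets

shared_secret = b"s3cret-key"  # known to both client and server in advance

# Server side: issue a fresh random challenge for each login attempt,
# which makes a captured response useless for replay later.
challenge = secrets.token_bytes(16)

# Client side: respond with HMAC(secret, challenge); the secret never travels.
response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# Server side: recompute the expected response and compare in constant time.
expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
print("authenticated" if hmac.compare_digest(response, expected) else "denied")
# authenticated
```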

Password Security
There are various kinds of attacks which can be attempted by an attacker in order to obtain a victim's password. Some of these attacks target the password during transmission, such as eavesdropping, replay, and man-in-the-middle attacks. Other attacks, such as dictionary attacks, are directed at passwords stored on the server. Some tools required to launch these attacks are readily available on the Internet, although most of them require some level of technical knowledge. Examples include network sniffers (such as tcpdump and Wireshark), tools to assess Wi-Fi network security (such as aircrack-ng), and password-cracking utilities (such as John the Ripper and RainbowCrack). Service providers usually attempt to prevent these attacks by encrypting the communication between users and server (using SSL, for example), encrypting and limiting access to stored passwords, and blocking accounts which have too many incorrect login attempts.
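In practice, "encrypting and limiting access to stored passwords" usually means storing a salted, slow hash rather than the password itself, so that a stolen password file resists dictionary and brute-force attacks. A minimal sketch with Python's standard library; the salt size and iteration count are illustrative choices, not recommendations:

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    """Return (salt, hash); a fresh random salt defeats precomputed tables."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("password123", salt, stored))                   # False
```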
Some other types of password attacks are as follows:
 Brute Force Attack
In this type of attack, all possible combinations of characters are tried until the password is broken. Even passwords that are saved in encrypted form can be attacked by brute force, and they are sometimes hacked or stolen with the help of insiders. A brute-force attack is time-consuming.
 Dictionary Attack
Passwords that are very simple to guess are matched against a file which contains a large set of likely words. If there is a match, then the password is cracked. This type of attack is much faster than a brute-force attack.
 Shoulder Surfing
In this type of attack, the user is kept under close observation, either through CCTV or by listening to the beep sounds of the keys pressed or watching which keys have been pressed, in order to crack the password.
 Replay Attacks
Reflection attack is another name for a replay attack [11]. In CRAM (Challenge-Response Authentication Mechanism), the user is authenticated at two levels: the first level is basic authentication and the second level is digest authentication. Here the server and client are both involved in the mechanism: usually the client keys in his or her credentials in response to an authentication challenge from the server. The server then receives the response from the client, and if the password keyed in is correct, the user is allowed to access the system; otherwise access is denied. In the basic form of authentication, the password travels as clear text, which is visible, whereas in the second authentication method, i.e., digest, the password is encrypted before being sent over the network. Even the digest authentication method can be hacked.
 Phishing Attacks
This type of attack is a web-based attack which takes place during web transactions, where the user is redirected to a fake website and the hacker can get access to the user's credentials. For example, a user wants to log in to the website www.citibank.com. The hacker then redirects the user to the website www.ctibank.com, whose UI, look, and feel resemble the real website.
 Key Loggers
A key-logger attack is similar to spoofing. Another name for this type of attack is key sniffing. User activity is monitored by the key-logger software, which records every keystroke in a log file. This log file is submitted to the attacker, from which the user's password can be traced.
 Video Recording Attack
In this type of attack, the hacker may steal the password with the help of miniature cameras or camera-equipped mobile phones during ATM transactions or e-commerce transactions. Despite all these possibilities and issues, the password is still the simplest means of authentication.

Data Risk Assessment


A data risk assessment is a review of how an organization protects its sensitive data and what improvements might be
necessary.

Organizations should perform data risk assessments periodically, as a form of audit, to help identify information security and
privacy control shortcomings and reduce risk. A data risk assessment is also necessary after a data breach, whether
intentional or inadvertent, to improve controls and reduce the likelihood of a similar breach occurring in the future.
5 steps to perform a data risk assessment.
Use the following five steps to create a thorough data risk assessment.
1. Inventory sensitive data
Check endpoints, cloud services, storage media and other locations to find and record all instances of sensitive data. A data
inventory should include any characteristics that might influence risk requirements. For example, the geographic location of
stored data affects which laws and regulations apply. Identify who is in charge of each instance of sensitive data so you can
interact with them as necessary.
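Step 1 can be bootstrapped with simple pattern scanning over files and exports. The sketch below is a minimal illustration: the two regexes and the file location are hypothetical, and a real inventory tool would use far more detectors plus manual review of the hits.

```python
import re

# Two illustrative detectors; a real inventory tool would use many more
# patterns and validate hits before recording them.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_text(location: str, text: str):
    """Return (location, data_type, match) records for the data inventory."""
    hits = []
    for dtype, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((location, dtype, match))
    return hits

sample = "Contact: alice@example.com, SSN on file: 123-45-6789"
inventory = scan_text("hr_share/notes.txt", sample)
for location, dtype, value in inventory:
    print(location, dtype, value)
```

Each record names where the sensitive data lives and what kind it is, which is exactly the information the classification step that follows needs.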
2. Assign data classifications to each data instance
Every organization should already have defined data classifications, such as "protected health information" and "personally
identifiable information," for all sensitive data. These definitions should indicate which security and privacy controls are
mandatory and recommended for each sensitive data type. Even if data already has a classification, recheck it periodically.
The nature of data can change over time, and new classifications may emerge that apply to the data in question.
3. Prioritize which sensitive data to assess
An organization may have so much sensitive data that it is not feasible to review all of it during each assessment. If
necessary, prioritize the most sensitive data, the data with the most stringent requirements or the data that has gone the
longest without assessment.
4. Check all relevant security and privacy controls
Audit the controls protecting sensitive data where it is used, stored and transmitted. Common audit steps include the
following:
 Verify the principle of least privilege. Confirm that only the necessary human and nonhuman users,
services, administrators and third parties -- e.g., business partners, contractors and vendors -- have access to
sensitive data and that they have only the access they require, such as read-only, read-write, etc.

 Ensure all policies restricting access to data are actively enforced. For example, an organization might
limit access to certain sensitive data based on the following factors:
the location of the user, the location of the data, the time of day, the day of the week, the user's device type

 Ensure all other necessary security and privacy controls are in use. Common tools to mitigate risk
include the following:
data loss prevention software, firewalls, encryption, multifactor authentication, user and entity behavior analytics

 Identify data retention violations. Determine if any data is present that should be destroyed to comply
with data retention policies.
5. Document all security and privacy control shortcomings
While identifying security and privacy deficiencies is within the scope of a data risk assessment, fixing them is not. It's
reasonable, however, for an assessment to include the following:
 a relative priority level for each deficiency
 a recommended course of action for addressing each deficiency
These recommendations ideally inform a roadmap toward better data security. A risk matrix can help organize and prioritize
issues according to the severity of their potential consequences and the likelihood they will occur.
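A risk matrix of the kind described can be as simple as likelihood times severity, with the highest-scoring deficiencies addressed first. The deficiencies and 1-5 scores below are invented for illustration.

```python
# Hypothetical deficiencies scored on 1-5 likelihood and severity scales.
deficiencies = [
    {"issue": "backup tapes unencrypted",      "likelihood": 3, "severity": 5},
    {"issue": "stale service-account access",  "likelihood": 4, "severity": 3},
    {"issue": "missing retention cleanup job", "likelihood": 2, "severity": 2},
]

# A simple risk matrix: score = likelihood x severity; address highest first.
for d in deficiencies:
    d["risk"] = d["likelihood"] * d["severity"]

for d in sorted(deficiencies, key=lambda d: d["risk"], reverse=True):
    print(f'{d["risk"]:>2}  {d["issue"]}')
```

The resulting ranking is the relative priority level the assessment should record for each deficiency.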

Issues of Database Security:


Many software vulnerabilities, misconfigurations, and patterns of misuse or carelessness can result in breaches. Here are a
number of the best-known causes and types of database security cyber threats.
Insider Threats
An insider threat is a security risk from one of the following three sources, each of which has privileged means of entry to the
database:
 A malicious insider with ill intent
 A negligent insider who exposes the database to attack through careless actions
 An outsider who obtains credentials through social engineering or other methods and thereby gains access to the
database
An insider threat is one of the most common causes of database security breaches, and it often occurs because many
employees have been granted privileged user access.
Human Error
Weak passwords, password sharing, accidental erasure or corruption of data, and other undesirable user behaviors are still
the cause of almost half of data breaches reported.
Exploitation of Database Software Vulnerabilities
Attackers constantly attempt to isolate and target vulnerabilities in software, and database management software is a highly
valuable target. New vulnerabilities are discovered daily, and all open source database management platforms and
commercial database software vendors issue security patches regularly. However, if you don't apply these patches quickly,
your database might be exposed to attack.
Even if you do apply patches on time, there is always the risk of zero-day attacks, in which attackers exploit a vulnerability
before the database vendor has discovered and patched it.
SQL/NoSQL Injection Attacks
A database-specific threat involves inserting arbitrary SQL or non-SQL attack strings into database queries. Typically,
these are queries built from web application form input, or received via HTTP requests. Any database system
is vulnerable to these attacks, if developers do not adhere to secure coding practices, and if the organization does not carry
out regular vulnerability testing.
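The standard secure-coding countermeasure is to pass user input as bound parameters instead of concatenating it into the SQL string. A minimal sqlite3 sketch (table name and values are invented) contrasts the two:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'pw1')")

attacker_input = "' OR '1'='1"

# Vulnerable: concatenation lets the attacker rewrite the WHERE clause.
unsafe = "SELECT name FROM users WHERE password = '" + attacker_input + "'"
print(conn.execute(unsafe).fetchall())                   # [('alice',)] -- injected

# Countermeasure: a bound parameter is always treated as data, never as SQL.
safe = "SELECT name FROM users WHERE password = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # [] -- attack fails
```

The same principle applies to NoSQL stores: never splice untrusted input into a query or command string.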
Buffer Overflow Attacks
Buffer overflow takes place when a process tries to write a large amount of data to a fixed-length block of memory, more
than it is permitted to hold. Attackers might use the excess data, kept in adjacent memory addresses, as the starting point
from which to launch attacks.
Denial of Service (DoS/DDoS) Attacks: In a denial of service (DoS) attack, the cybercriminal overwhelms the target
service—in this instance the database server—using a large amount of fake requests. The result is that the server cannot
carry out genuine requests from actual users, and often crashes or becomes unstable.
In a distributed denial of service attack (DDoS), fake traffic is generated by a large number of computers, participating in
a botnet controlled by the attacker. This generates very large traffic volumes, which are difficult to stop without a highly
scalable defensive architecture. Cloud-based DDoS protection services can scale up dynamically to address very
large DDoS attacks.
Malware
Malware is software written to take advantage of vulnerabilities or to cause harm to a database. Malware could arrive
through any endpoint device connected to the database’s network. Malware protection is important on any endpoint, but
especially so on database servers, because of their high value and sensitivity.
An Evolving IT Environment
The evolving IT environment is making databases more susceptible to threats. Here are trends that can lead to new types of
attacks on databases, or may require new defensive measures:

 Growing data volumes—data capture, storage, and processing are growing exponentially across almost all
organizations. Any data security practices or tools must be highly scalable to address near- and distant-future
requirements.
 Distributed infrastructure—network environments are increasing in complexity, especially as businesses transfer
workloads to hybrid cloud or multi-cloud architectures, making the deployment, management, and choice of
security solutions more difficult.
 Increasingly tight regulatory requirements—the worldwide regulatory compliance landscape is growing in
complexity, so following all mandates is becoming more challenging.
 Cybersecurity skills shortage—there is a global shortage of skilled cybersecurity professionals, and
organizations are finding it difficult to fill security roles. This can make it more difficult to defend critical
infrastructure, including databases.

__________________________________________________________________________________________
Managing Database
Database Management allows a person to organize, store, and retrieve data from a computer. Database Management can
also describe the data storage, operations, and security practices of a database administrator (DBA) throughout the life
cycle of the data. Managing a database involves designing, implementing, and supporting stored data to maximize its value.
Database Management Systems, according to the DAMA DMBoK, include various types:

 Centralized: all the data lives in one system in one place. All users come to that one system to access the data.
 Distributed: Data resides over a variety of nodes, making quick access possible. “Rather than rely on hardware to
deliver high-availability, the Database Management software…is designed to replicate data amongst the servers”
allowing it to detect and handle failures.
 Federated: Provisions data without additional persistence or duplication of source data. It maps multiple
autonomous databases into one large object. This kind of database architecture is best for heterogeneous and
distributed integration projects. Federated databases can be categorized as:
 Loosely Coupled: Component databases construct their own federated schema and typically require accessing
other component database systems through a multi-database language.
 Tightly Coupled: Component systems use independent processes to construct and publish into an integrated
federal schema.
 Blockchain: A type of federated database system used to securely manage financial and other types of
transactions.

Managing Data Security:

There are many definitions of data security management, and data security solutions abound. Every organization must
clearly define and communicate the data security program and data security services it offers, as these will differ slightly
from place to place. In general, data security management is:

 The practice of ensuring that data, no matter its form, is protected while in your possession and use from
unauthorized access or corruption.
 The blending of both digital (cyber) and physical processes to protect data.
 The monitoring of data acquisition, use, storage, retrieval, and deletion such that data is not corrupted at any
point in its lineage.
 The implementation of technology defenses that prevent data loss from internal malicious actions
or hacking.
 Encouraging applications and services developers to test against data security standards to improve data leak
prevention.
 The policies that train and govern individuals on the importance of data security and how best to protect
themselves and the business.
 The security of data exchanged with external applications or services.
 Taking advantage of the use of encrypted cloud storage or encrypted cloud networks to secure data transfers
and sharing.
 The management of data center security, even if you benefit from cloud services, to ensure that your most
precious non-people resource is safe.
Data security management practices are not just about sensitive or business-critical information. Data security management
practices protect you and your organization from unintentional mistakes or hackers corrupting or stealing your precious
resources.

Virtual Private Database (VPD) is a popular database security feature introduced in Oracle Database Enterprise
Edition. It is used when object privileges and database roles are inadequate to achieve security requirements. The
complexity of the policies is directly proportional to the security requirements.
VPD is associated with the “application context” feature and these contexts are used to manage the data during the
execution of SQL statements. A complex VPD example might read an application context during a login trigger and a
simple VPD example might restrict access to data during business hours.

Advantages of VPD:
 Higher Accessibility: Users can easily access the data from anywhere.
 Flexibility: It can be easily modified without breaking the control flow.
 Higher Recovery Rate: The data can be retrieved very easily.
 Dynamically Secured: No need to maintain complex roles.
 No back doors: The security policy is attached to the data so no by-passing is allowed.

Disadvantages of VPD:
 Difficult column-level security.
 Oracle account ID is required to use this service.
 Hard to examine.
Need for VPD
The following points note why VPD is required:
 Virtual Private database is needed to protect the confidential and secret information.
 You can have one database and control the delivery of the data to the right people
 VPD is used for Regulations such as HIPAA and SOX
 Security: Server-enforced security (as opposed to application-enforced).
 Purposes/benefits: Security requirements necessitate data access be restricted at row or column level (FGA). One database schema
serves multiple unrelated groups or entities

Components of VPD
The virtual private database consists of the following components.
 Application Context
 PL/SQL Function
 Security Policies
The following explains each of the components of VPD in detail.
i. Application Context
 Holds environmental variables such as application name and username
 Gathers information using DBMS_SESSION.SET_CONTEXT
ii. PL/SQL Function
 Functions are used to construct and return the predicates that enforce the RLS.
 The function must follow the required calling standard, to ensure that the policy can
call the function correctly.
 Function returns a value.
iii. Security Policies
Security policies are further classified as follows:
 Static: The policy function is executed once, and the resulting string (the predicate)
is stored in the Shared Global Area (SGA).
 Non Static:
• SHARED_STATIC
Allows the predicate to be cached across multiple objects that use the same policy function.
• CONTEXT_SENSITIVE
The server always executes the policy function on statement parsing. The server will
only execute the policy function on statement execution if it detects context changes. This
makes it ideal for connection pooling solutions that share a database schema and use application
contexts to actually perform the user identity switching.
 Dynamic: The default, which makes no assumptions about caching. This policy will
be invoked every time the SQL statement is parsed or executed.
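Outside Oracle, the core VPD idea — a policy function returns a predicate that the server transparently appends to every statement — can be simulated in a few lines of Python over sqlite3. This is a conceptual sketch, not Oracle's actual DBMS_RLS API; the table, function names, and session context are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, owner TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "alice", 100.0), (2, "bob", 250.0)])

def policy_function(context: dict) -> str:
    """Plays the role of the PL/SQL policy function: returns the RLS predicate."""
    return "owner = ?"

def vpd_query(base_sql: str, context: dict):
    """The 'server' transparently appends the policy predicate to the statement."""
    rewritten = f"{base_sql} WHERE {policy_function(context)}"
    return conn.execute(rewritten, (context["username"],)).fetchall()

# The same statement returns different rows depending on the session context.
print(vpd_query("SELECT id, amount FROM orders", {"username": "alice"}))  # [(1, 100.0)]
print(vpd_query("SELECT id, amount FROM orders", {"username": "bob"}))    # [(2, 250.0)]
```

Because the predicate is attached by the "server" rather than the application, there is no query path that bypasses it — the "no back doors" property listed among VPD's advantages.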

Statistical Database Security


Statistical databases are designed to provide data to support statistical analysis on populations. The data itself may contain
facts about individuals, but the data is not meant to be retrieved on an individual basis. Users are granted permission to
access statistical information such as totals, counts, or averages, but not information about individuals. For example, if a
user is permitted statistical access to an employee database, he or she is able to write queries such as:
SELECT SUM (Salary)
FROM Employee
WHERE Dept = 10;
but not:
SELECT Salary
FROM Employee
WHERE empId = ‘E101’;
Special precautions must be taken when users are permitted access to statistical data, to ensure that they are not able to
deduce data about individuals. For the preceding example, if there are no restrictions in place except that all queries must
involve COUNT, SUM, or AVERAGE, a user who wishes to find the salary of employee E101 can do so by adding conditions to the
WHERE line to narrow the population down to that one individual, as in:
SELECT SUM (Salary)
FROM Employee
WHERE Dept = 10 AND jobTitle = ‘Programmer’ AND
dateHired > '01-Jan-2015';
The system can be modified to refuse to answer any query for which only one record satisfies the
predicate. However, this restriction is easily overcome, since the user can ask for total salaries for the department and then
ask for the total salary without that of E101. Neither of these queries is limited to one record, but the user can easily deduce
the salary of employee E101 from them. To prevent users from deducing information
about individuals, the system can restrict queries by requiring that the number of records satisfying the predicate must be
above some threshold and that the number of records satisfying a pair of queries simultaneously cannot exceed some limit.
It can also disallow sets of queries that repeatedly involve the same records.
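The query-set-size restriction described above can be sketched as a wrapper that refuses statistics over too-small populations. The schema mirrors the Employee example; the threshold value and salaries are arbitrary, and, as the text notes, a threshold alone does not stop a user who differences two large query sets.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Employee (empId TEXT, Dept INTEGER, Salary REAL)")
conn.executemany("INSERT INTO Employee VALUES (?, ?, ?)",
                 [("E101", 10, 90000), ("E102", 10, 70000), ("E103", 10, 80000)])

MIN_QUERY_SET = 3  # arbitrary threshold for illustration

def statistical_sum(predicate: str) -> float:
    """Answer SUM(Salary) only if enough records satisfy the predicate."""
    n = conn.execute(f"SELECT COUNT(*) FROM Employee WHERE {predicate}").fetchone()[0]
    if n < MIN_QUERY_SET:
        raise PermissionError(f"query set of size {n} is below the threshold")
    return conn.execute(f"SELECT SUM(Salary) FROM Employee WHERE {predicate}").fetchone()[0]

print(statistical_sum("Dept = 10"))    # allowed: the population is large enough
try:
    statistical_sum("empId = 'E101'")  # narrowed to one individual: refused
except PermissionError as e:
    print(e)
```

A production system would additionally bound the overlap between query sets and track repeated queries over the same records, as the section describes.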
_________________________________________________________________________________.
Implementing account lockout after failed login attempts.
Due to the often overwhelming prevalence of password authentication, many users forget their credentials, triggering an
account lockout following too many failed login attempts. Upon being locked out of their accounts, users are forced to validate
their identity -- a process that, while designed to dissuade nefarious actors, is also troublesome for legitimate users.
"Account lockout is, from a user perspective, a jarring and in-your-face experience," said Allan Foster, chief evangelist at
ForgeRock. But the experience is integral to mitigating risk, said Casey Ellis, CTO and founder of Bugcrowd.
"While inconvenient for legitimate users, it is not too inconvenient -- and it can deter attackers," Ellis said. "It is a resilient and
battle-tested reset strategy that is highly available for multiple use cases."
Why enterprises need account lockout policies
Account lockout policies aim to prevent credential theft, credential stuffing and brute-force methods of guessing username
and password combinations, thus preventing user account compromise and network intrusion.
This is an important aspect of not only securing enterprise systems, but also securing users' personal accounts and
information. Companies must determine confidently whether users trying to authenticate are actually who they say they are,
or they risk falling victim to attack.

The default approach to this is to make it harder for potential attackers to compromise accounts. There are two main
techniques used to do this, Foster said. One way is to slow down the authentication cycle by making users wait longer and
longer every time there is an unsuccessful login attempt, he said. The other technique is anomaly detection. "Account
providers can shut down the account when anomalous behavior is detected until they can connect with the original owner to
confirm their identity for authentication," Foster explained.
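The first technique Foster describes — progressively slowing the authentication cycle — can be sketched as exponential backoff on consecutive failures. The class name and timings below are illustrative only.

```python
class ThrottledAccount:
    """Sketch of the 'slow down the authentication cycle' technique: each
    consecutive failure doubles the wait before the next attempt is allowed."""

    def __init__(self, password: str, base_delay: float = 1.0):
        self._password = password
        self._base_delay = base_delay
        self._failures = 0
        self._locked_until = 0.0

    def login(self, attempt: str, now: float) -> bool:
        if now < self._locked_until:
            raise PermissionError("account temporarily locked; try again later")
        if attempt == self._password:
            self._failures = 0      # success clears the failure counter
            return True
        self._failures += 1
        # Exponential backoff: 1 s, 2 s, 4 s, 8 s ... after each failure.
        self._locked_until = now + self._base_delay * 2 ** (self._failures - 1)
        return False

acct = ThrottledAccount("hunter2")
print(acct.login("guess1", now=0.0))   # False: locked until t=1.0
print(acct.login("guess2", now=1.5))   # False: locked until t=3.5
try:
    acct.login("hunter2", now=2.0)     # still locked, even with the right password
except PermissionError:
    print("locked out")
print(acct.login("hunter2", now=4.0))  # True: the window has elapsed
```

This is the same shape as the iPhone policy Ellis cites later: each subsequent failure extends the lockout period, which barely inconveniences a forgetful user but cripples a brute-force attacker.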

Account lockout policy features


The account lockout policy is made up of three key security settings: account lockout duration, account lockout threshold
and reset account lockout counter after. These policy settings help prevent attackers from guessing users' passwords. In
addition, they decrease the likelihood of successful attacks on an organization's network.

Enterprises should consider a combination of these three when building an account lockout policy. Bugcrowd's Ellis
recommended Apple's iPhone password lockout policy features. "If you forget or don't properly enter your password a
certain number of times, you will be unable to try logging back in to the device for a short time," he said. "Subsequent
attempts extend the lockout period. This can prove that either the individual entering the password is a forgetful user or an
unauthorized individual attempting to obtain illegitimate access."

How to create account lockout policies


Setting account lockout policies -- including lockout duration and thresholds -- is what Ellis called a "dark art." There are
many factors to consider when determining account lockout policy security setting values. But, because every enterprise is
different, it is difficult to recommend standard values for the three security settings without calculating the organization's risk
profile first. Policymakers should account for any regulatory requirements and adjust values accordingly. The capabilities of
computing resources, as well as employee productivity, should also be accounted for.
It is also critical to weigh exposure risks set by the security group, ForgeRock's Foster said. "Accounts with different
capabilities have different levels of risk, both to the user and to the organization in the event of a compromise," he said. "Any
account where the damage that can be caused is high or is higher than normal requires a higher level of protection."

If a privileged account shows any indication of attack, the immediate response should be to assume it is an attack and to
lock down the account. Administrators may want to implement unique settings for privileged accounts, such as a longer
account lockout duration and lower account lockout threshold.

While this seems like a commonsense best practice, it's important to consider the nuance of privileged accounts, Foster
said. For example, some privileged accounts may be responsible for planning a response to a security event. "You don't
want the reaction to the threat to also compromise your ability to respond to that threat," he added.

Analyzing these factors and hypotheticals is critical to successfully creating an account lockout policy that ensures security
needs and UX needs are both met.

Limitations of account lockout policies


An account lockout policy alone is not a cybersecurity silver bullet. Enabling multifactor authentication (MFA) and single
sign-on (SSO) are critical measures that should also be incorporated into enterprise identity and access management
programs, said Anurag Kahol, CTO and co-founder of Bitglass.

"MFA confirms user identity and investigates suspicious logins, while SSO helps organizations directly manage access to
sensitive information by blocking or providing various levels of access to data and applications based on user identity and
context," Kahol said.

Managing identities and access privileges has become an even more demanding task as many organizations transition to
remote work. Implementing the right policies and settings can empower administrators to manage and secure every
account.

Database Threat
Databases today are facing different kinds of attacks. Before describing the techniques to secure databases, it is preferable
to describe the attacks which can be performed on them. The major attacks on databases can be categorized as
shown in Figure 4.3. These attacks are further elaborated in the following sections.
Figure 4.3: Avenues of Attack
Excessive privileges
Database privileges can be abused in many ways. A user may abuse privileges for unauthorized purposes. Privilege abuse
comes in different flavours: excessive privilege abuse, legitimate privilege abuse, and unused privilege abuse. This type of
threat is among the most dangerous because authorized users are misusing data, creating unnecessary risk. Granting
excessive permissions is problematic for two reasons. About 80% of the attacks on company data are actually executed
by employees or ex-employees, and granting too many privileges, or not revoking them in time, makes it unnecessarily
simple for them to carry out their wrongdoing. Some of these actions might even be executed inadvertently, or without the
perception that those actions are illegal. Abuse of legitimate privileges can be considered a database vulnerability if a
malicious user misuses their database access privileges.
Countermeasures of Privilege Abuse include:
1. Access control policy: Do not grant unnecessary privileges to users.
2. Legitimate privilege abuse can be stopped by providing a good audit trail.
4.4 Risk Analysis and Risk Management
Risk analysis has three deliverables: (1) identify threats; (2) establish a risk level by determining the probability that a threat
will occur and the impact if it does; and (3) identify controls and safeguards that can reduce the
risk to an acceptable level.

Risk management is the process of identifying risk, as represented by vulnerabilities, to an organization’s information
assets and infrastructure, and taking steps to reduce this risk to an acceptable level. Risk management involves three major
undertakings: risk identification, risk assessment, and risk control.

Risk Identification
A risk management strategy requires that information security professionals know their organizations’ information assets—
that is, identify, classify, and prioritize them. Once the organizational assets have been identified, a threat assessment
process identifies and quantifies the risks facing each asset.
Risk identification is the examination and documentation of the security posture of an organization's information
technology and the risks it faces.
Risk Assessment
Risk assessment is the determination of the extent to which the organization’s information assets are exposed or at risk.

Risk Control
Risk control is the application of controls to reduce the risks to an organization’s data and information systems. The
following are different risk control strategies.
• Defend - The defend control strategy attempts to prevent the exploitation of the vulnerability
• Transfer- The transfer control strategy attempts to shift risk to other assets, other processes, or other organizations.
• Mitigate - The mitigate control strategy attempts to reduce the impact caused by the exploitation of a vulnerability through
planning and preparation, e.g., IRP, BCP, DRP.
• Accept - The accept control strategy is the choice to do nothing to protect a vulnerability and to accept the outcome of its
exploitation.
• Terminate - The terminate control strategy directs the organization to avoid those business activities that introduce
uncontrollable risks.

 SQL Injections
Database systems are used for backend functionality. User-supplied input is often used to dynamically build SQL
statements that act directly on the databases. Input injection is an attack aimed at subverting the original intent of
the application by submitting attacker-supplied SQL statements directly to the backend database. There are two types of
input injection:
1. SQL Injection
2. NoSQL Injection.
SQL Injection: Targets traditional database systems. Attacks usually involve injecting unauthorized statements into the
input fields of applications.
NoSQL Injection: Targets big data platforms. This type involves inserting malicious statements into big data components
like Hive or MapReduce.
In both SQL and NoSQL, a successful input injection attack can give the attacker unrestricted access to an entire database.
Counter measures of Input Injection
1. Use Stored Procedure instead of implementing direct queries.
2. Implementing MVC Architecture.
Malware
Cybercriminals, state-sponsored hackers, and spies use advanced attacks that blend multiple tactics – such as spear
phishing emails and malware – to penetrate organizations and steal sensitive data. Unaware that malware has infected their
device, legitimate users become a conduit for these groups to access your networks and sensitive data.
Countermeasures of Malware
Enable firewall protection and Install Antivirus.
Weak Audit Trail
Weak audit policy and technology represent risks in terms of compliance, deterrence, detection, forensics and recovery.
Automated recording of database transactions involving sensitive data should be part of any database deployment. Failure
to collect detailed audit records of database activity represents a serious organizational risk on many levels.
Organizations with weak database audit mechanisms will increasingly find that they are at odds with industry and
government regulatory requirements. Most audit mechanisms have no awareness of who the end user is because all activity
is associated with the web application
account name. Reporting, visibility, and forensic analysis are hampered because there is no link to the responsible user.
Finally, users with administrative access to the database, either legitimately or maliciously obtained, can turn off native
database auditing to hide fraudulent activity. Audit capabilities and responsibilities should ideally be separate from both
database administrators and the database server platform to ensure strong separation of duties policies.
Countermeasures of Weak Audit Trail
1. Network-based audit appliances are a good solution. Such appliances should have no impact on database performance,
operate independently of all users and offer granular data collection.
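The end-user attribution gap described above — native auditing sees only the shared application account — can be illustrated with a thin wrapper that records the responsible user alongside each statement. Everything here (the function name, record fields, and the sqlite3 backing store) is a hypothetical sketch, not any audit product's API.

```python
import sqlite3
import time

audit_log = []  # in practice this goes to an append-only store off the DB host

def audited_execute(conn, sql, params=(), *, app_account, end_user):
    """Record the responsible end user, not just the shared application account."""
    audit_log.append({
        "ts": time.time(),
        "app_account": app_account,  # all native auditing would see is this
        "end_user": end_user,        # the link that forensics actually needs
        "sql": sql,
    })
    return conn.execute(sql, params)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (empId TEXT, Salary REAL)")
audited_execute(conn, "INSERT INTO payroll VALUES (?, ?)", ("E101", 90000),
                app_account="webapp", end_user="clerk.jones")
print(audit_log[0]["end_user"])   # the responsible human is now on record
```

Keeping the log outside the database server also supports the separation-of-duties point above: a DBA who disables native auditing cannot erase these records.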
Backup Exposure
Backup storage media is often completely unprotected from attack. As a result, numerous security breaches have involved
the theft of database backup disks and tapes. Furthermore, failure to audit and monitor the activities of administrators who
have low-level access to sensitive information can put your data at risk. Taking the appropriate measures to protect backup
copies of sensitive data and monitor your most highly privileged users is not only a data security best practice, but also
mandated by many regulations.
Countermeasures of Backup Exposure
1. Encrypt Databases: Store data in Encrypted form as this allows you to secure both production and backup copies of
databases, then audit the activity of and control access to sensitive data from users who access databases at the operating
system and storage tiers. By leveraging database auditing along with encryption, organizations can monitor and control
users both inside and outside of the database.
Weak Authentication
Weak authentication schemes allow attackers to assume the identity of legitimate database users. Specific attack strategies
include brute-force attacks, social engineering, and so on. Implementation of strong passwords or two-factor authentication
is a must. For scalability and ease of use, authentication mechanisms should be integrated with enterprise directory/user
management infrastructures.
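One stdlib-only defense against the brute-force side of weak authentication is to store salted, deliberately slow password hashes rather than passwords themselves. The sketch below uses PBKDF2; the iteration count is illustrative and should follow current guidance.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; tune to current published guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store a salted, deliberately slow hash instead of the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash for the attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))      # True
print(verify_password("brute force guess", salt, digest))  # False
```

The per-user random salt defeats precomputed tables, and the high iteration count makes each brute-force guess expensive — both properties a plain stored password lacks.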
DB Vulnerabilities and Misconfiguration
It is common to find vulnerable and un-patched databases, or discover databases that still have default accounts and
configuration parameters. Attackers know how to exploit these vulnerabilities to launch attacks against your organization.
Unfortunately, organizations often struggle to stay on top of maintaining database configurations even when patches are
available. Typical issues include high workloads and mounting backlogs for the associated database
administrators, complex and time-consuming requirements for testing patches, and the challenge of finding a maintenance
window to take down and work on what is often classified as a business-critical system. The net result is that it generally
takes organizations months to patch databases, during which time they remain vulnerable.
Countermeasures of Misconfigured Databases
1. There should be no default accounts; accounts must be created with a fresh username and password.
Unmanaged Sensitive Data
Many companies struggle to maintain an accurate inventory of their databases and the critical data objects contained within
them. Forgotten databases may contain sensitive information, and new databases can emerge – e.g., in application testing
environments – without visibility to the security team. Sensitive data in these databases will be exposed to threats if the
required controls and permissions are not implemented.
Countermeasures of unmanaged Sensitive Data
1. Encrypt Sensitive data in Database.
2. Apply required controls and Permissions to the database.
Denial of Service
Denial of Service is a general attack category in which access to network applications or data is denied to intended users.
Countermeasures of Denial of Service
1. Harden the TCP/IP stack by applying the appropriate registry settings to increase the size of the TCP connection queue,
decrease the connection establishment period, and employ dynamic backlog mechanisms to ensure that the connection
queue is never exhausted.
2. Use a network Intrusion Detection System (IDS) because these can automatically detect and respond to SYN attacks.
Limited Security Expertise and Education
Non-technical security also plays an important role. Internal security controls are not keeping pace with data growth, and
many organizations are ill-equipped to deal with a security breach. Often this is due to the lack of expertise required to
implement security controls, enforce policies, or conduct incident response processes.
Countermeasures of Limited Security and Education
1. User Education and awareness
2. Cultivate experienced security professionals.
