Database System Handbook
Prepared by:
Muhammad Sharif
Senior Database Administrator
SKMCH&RC
Lahore, Punjab, Pakistan
==============
Dedication
I dedicate all my efforts to my readers, who give me the urge and inspiration to work more.
Muhammad Sharif
Author
Acknowledgments
We are grateful to the numerous individuals who contributed to the preparation of this handbook on
relational database systems. Thanks.
Classification of Data
We can classify data as structured, unstructured, or semi-structured.
1. Structured data is generally quantitative data; it usually consists of hard numbers or things that can be
counted.
2. Unstructured data is generally categorized as qualitative data and cannot be analyzed or processed
using conventional tools and methods.
3. Semi-structured data refers to data that is not captured or formatted in conventional ways. Semi-
structured data does not follow the format of a tabular data model or relational database because it does
not have a fixed schema. XML and JSON are examples of semi-structured data.
Properties
Structured data is generally stored in data warehouses.
Unstructured data is stored in data lakes.
Structured data requires less storage space, while unstructured data requires more.
Examples:
Structured data (Table, tabular format, or Excel spreadsheets.csv)
Unstructured data (email messages, multimedia files, weather data)
Semi-structured data (Webpages, Resume documents, XML)
Categories of Data
Implicit data is information that is not provided intentionally but is gathered from available data streams, either
directly or through analysis of explicit data.
Explicit data is information that is provided intentionally, for example through surveys and membership
registration forms; it is taken at face value rather than analyzed or interpreted.
Data Breach
A data breach is a cyberattack in which sensitive, confidential, or otherwise protected data is accessed or
disclosed.
What is a data item?
The basic component of a file in a file system is a data item.
What are records?
A group of related data items treated as a single unit by an application is called a record.
What is a file?
A file is a collection of records of a single type. A simple file processing system refers to the first computer-based
approach to handling commercial or business applications.
Mapping from file system to Relational Database
In a relational database, a data item is called a column or attribute; a record is called a row or tuple, and a file is
called a table.
Major challenges from file system to database movements
1. Data validation
2. Data integrity
3. Data security
4. Data sharing
Details will be written later where needed.
What is information?
When data is organized so that it has some meaning, we call it information.
A database application is a program or group of programs used to perform certain operations on the data
stored in the database. These operations may include inserting data into the database, extracting data from it
based on a certain condition, or updating data in it. Examples: GIS/GPS applications.
What is Knowledge?
Knowledge = information + application
What is Meta Data?
The database definition or descriptive information is also stored by the DBMS in the form of a database catalog or
dictionary; this is called metadata. Metadata is data that describes the properties or characteristics of end-user data
and the context of those data, i.e., information about the structure of the database.
Example: a metadata catalog for the relation Class_Roster might be Attr_Cat(attr_name, rel_name, type, position),
recording each attribute's name, its relation, its type, its position in the relation (1, 2, 3, ...), and the access rights
on the object. The simple definition is: data about data.
2-tier architecture (basic client-server APIs such as ODBC, JDBC, and ORDS are used): the client and the database
server are connected through these APIs over a network.
3-tier architecture (used for web applications): it uses a web server to connect with a database server.
Types of databases
There are various types of databases used for storing different varieties of data in their respective DBMS data model
environments. Each database type has a data model, with the exception of NoSQL systems. One type, the Enterprise
Database Management System, is not included in this figure. Details will be given one by one where appropriate;
the sequence of the details is not significant.
UMA vs. NUMA:
1. In Uniform Memory Access (UMA), three types of buses are used: single, multiple, and crossbar.
2. In Non-Uniform Memory Access (NUMA), two types of buses are used: tree and hierarchical.
Advantages of NUMA
Improves the scalability of the system.
Memory bottleneck (shortage of memory) problem is minimized in this architecture.
NUMA machines provide a linear address space, allowing all processors to directly address all memory.
Distributed Databases
Distributed database system (DDBS) = Database Systems + Communication
A set of databases in a distributed system that can appear to applications as a single data source.
A distributed DBMS (DDBMS) can have the actual database and DBMS software distributed over many sites,
connected by a computer network.
Distributed DBMS architectures
Three alternative approaches are used to separate functionality across different DBMS-related processes. These
alternative distributed architectures are called
1. Client-server,
2. Collaborating server or multi-Server
3. Middleware or Peer-to-Peer
Client-server: A client can send a query to a server for execution. There may be multiple server processes. The two
different client-server architecture models are:
1. Single Server Multiple Client
2. Multiple Server Multiple Client
Client Server architecture layers
1. Presentation layer
2. Logic layer
3. Data layer
Presentation layer
The basic work of this layer is to provide a user interface, typically a graphical user interface (GUI): an interface
that consists of menus, buttons, icons, etc. The presentation tier presents information related to such work as
browsing, sales purchasing, and shopping-cart contents. It communicates with the other tiers by sending computed
results to the browser/client tier and the other tiers in the network. Its other name is the external layer.
Logic layer
The logical tier is also known as the data access tier or middle tier. It lies between the presentation tier and the data
tier, and it controls the application's functions by performing processing. The components that build this layer exist
on the server and assist in resource sharing; these components also define the business rules, such as government
legal rules, data rules, and the business algorithms designed to keep the data structure consistent. It is also known
as the conceptual layer.
Data layer
The data layer is the physical database tier where data is stored and manipulated. It is the internal layer of the
database management system, where the data resides.
Collaborative/Multi server: This is an integrated database system formed by a collection of two or more
autonomous database systems. Multi-DBMS can be expressed through six levels of schema:
1. Multi-database View Level − Depicts multiple user views comprising subsets of the integrated distributed
database.
2. Multi-database Conceptual Level − Depicts integrated multi-database that comprises global logical multi-
database structure definitions.
3. Multi-database Internal Level − Depicts the data distribution across different sites and multi-database to
local data mapping.
4. Local database View Level − Depicts a public view of local data.
5. Local database Conceptual Level − Depicts local data organization at each site.
6. Local database Internal Level − Depicts physical data organization at each site.
Note: Semi-join and Bloom join are two data-fetching techniques used in distributed databases.
Some Popular databases and respective data models
Native XML Databases
We were not surprised that a number of start-up companies, as well as some established data management
companies, determined that XML data would be best managed by a DBMS designed specifically to deal with
semi-structured data, that is, a native XML database.
Conceptual Database
This step is related to modeling in the Entity-Relationship (E/R) model to specify sets of data called entities,
relations among them called relationships, and cardinality restrictions identified by the letters N and M; in this
case, the many-to-many relationships stand out.
Conventional Database
This step includes relational modeling, where a mapping from the MER (E/R model) to relations is carried out
using mapping rules. The subsequent implementation is done in Structured Query Language (SQL).
Non-Conventional database
This step involves Object-Relational Modeling which is done by the specification in Structured Query Language. In
this case, the modeling is related to the objects and their relationships with the Relational Model.
Traditional database
Temporal database
Conventional Databases
NewSQL Database
Autonomous database
Cloud database
Spatiotemporal
The term NewSQL categorizes databases that combine the relational model with advances in scalability and
flexibility for various types of data. These databases focus on features that are not present in NoSQL, notably a
strong consistency guarantee. They cover two layers of data: a relational one and a key-value store.
NoSQL vs. NewSQL:
NoSQL promotes CAP properties, while NewSQL promotes ACID properties.
NoSQL use cases: big data, social network applications, and IoT. NewSQL use cases: e-commerce, the telecom
industry, and gaming.
Character data types
The character data types represent alphanumeric text. PL/SQL uses the SQL character data types such as CHAR,
VARCHAR2, LONG, RAW, LONG RAW, ROWID, and UROWID.
CHAR(n) is a fixed-length character type whose length can be from 1 to 32,767 bytes.
VARCHAR2(n) is variable-length character data from 1 to 32,767 bytes.
User-Defined Datatypes
There are two categories of user-defined datatypes:
Object types
Collection types
A user-defined data type (UDT) is a data type that is derived from an existing data type. You can use UDTs to
extend the built-in types already available and create your own customized data types.
There are six user-defined types:
1. Distinct type
2. Structured type
3. Reference type
4. Array type
5. Row type
6. Cursor type
Exact numeric: bit, tinyint, smallint, int, bigint, numeric, decimal, smallmoney, money
Approximate numeric: float, real
Date and time: datetime, smalldatetime, date, time, datetimeoffset, datetime2
Character strings: char, varchar, text
Unicode character strings: nchar, nvarchar, ntext
Binary strings: binary, varbinary, image
Other data types: sql_variant, timestamp, uniqueidentifier, XML
CLR data types: hierarchyid
Spatial data types: geometry, geography
Abstract Data Types in Oracle
One of the shortcomings of the Oracle 7 database was the limited number of intrinsic data types.
Database Key
A key is a field of a table that identifies a tuple in that table.
Super key
An attribute or a set of attributes that uniquely identifies a tuple within a relation.
Candidate key
A super key such that no proper subset of it is a super key within the relation; it contains no redundant attributes
(irreducibility). There may be many candidate keys (specified using UNIQUE), one of which is chosen as the
primary key, e.g., PRIMARY KEY (sid), UNIQUE (id, grade). A candidate key is unique, but its value can be changed.
The candidate key is selected to identify tuples uniquely within a relation and should remain constant over the life
of the tuple. A primary key is unique, not repeated, not null, and should not change for the life of the row. If the
primary key must be changed, we drop the row and insert a new one. In most cases, the PK is also used as a foreign
key elsewhere; then you cannot simply change its value. You must first delete (or update) the child rows so that you
can modify the parent table.
Minimal Super Key
Not all super keys can be primary keys. The primary key is a minimal super key, that is, a minimized set of columns
that can be used to identify a single row.
Foreign key
An attribute or set of attributes within one relation that matches the candidate key of some (possibly the same)
relation. Can a foreign key reference a non-primary-key column? Yes; the minimum condition is that the referenced
column must be unique, i.e., it should be a candidate key.
Composite Key
The composite key consists of more than one attribute. A COMPOSITE KEY is a combination of two or more
columns that uniquely identify rows in a table. The combination of columns guarantees uniqueness, though
individually uniqueness is not guaranteed; hence, they are combined to uniquely identify records in a table. You can
use a composite key as the PK, and the composite key will go to other tables as a foreign key.
Alternate key
A relation can have only one primary key. It may contain many fields or a combination of fields that can be used as
the primary key. One field or combination of fields is used as the primary key. The fields or combinations of fields
that are not used as primary keys are known as candidate keys or alternate keys.
Sort Or control key
A field or combination of fields that is used to physically sequence the stored data is called a sort key. It is also
known as the control key.
Alternate key
An alternate key is a secondary key. A simple example: a STUDENT entity can contain NAME, ROLL NO., ID, and
CLASS; if ROLL NO. is chosen as the primary key, then ID remains a candidate key and serves as an alternate key.
Unique key
A unique key is a set of one or more than one field/column of a table that uniquely identifies a record in a database
table.
You can say that it is a little like a primary key but it can accept only one null value and it cannot have duplicate
values.
The unique key and primary key both provide a guarantee for uniqueness for a column or a set of columns.
There is an automatically defined unique key constraint within a primary key constraint.
There may be many unique key constraints for one table, but only one PRIMARY KEY constraint for one table.
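As a quick illustration, here is a minimal sketch in SQL (the student table and its columns are invented for this
example): a table with exactly one primary key and an additional unique key.
CREATE TABLE student (
  student_id  INT          NOT NULL,
  national_id VARCHAR(20),
  name        VARCHAR(100) NOT NULL,
  CONSTRAINT pk_student PRIMARY KEY (student_id),   -- exactly one PRIMARY KEY per table
  CONSTRAINT uq_student_nid UNIQUE (national_id)    -- one of possibly many UNIQUE keys; NULL handling varies by DBMS
);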
Artificial Key
Keys created using arbitrarily assigned data are known as artificial keys. These keys are created when a primary
key would be large and complex and has no relationship with many other relations. The data values of artificial keys
are usually numbered in serial order.
For example, a primary key composed of Emp_ID, Emp_role, and Proj_ID is large in the employee relation, so it
would be better to add a new virtual attribute to identify each tuple in the relation uniquely. ROWNUM and ROWID
are artificial keys; an artificial key should be a number or integer (numeric).
Format of an Oracle ROWID: data object number, relative file number, block number, and row number within the block.
Surrogate key
A SURROGATE KEY is an artificial key that aims to uniquely identify each record. This kind of key is used when
you don't have any natural primary key. You cannot insert values into a surrogate key yourself; its value comes from
the system automatically.
There is no business logic in the key, so it does not change based on business requirements.
Surrogate keys reduce the complexity of composite keys.
Surrogate keys ease extract, transform, and load (ETL) integration between databases.
Compound Key
COMPOUND KEY has two or more attributes that allow you to uniquely recognize a specific record. It is possible
that each column may not be unique by itself within the database.
Operators
The SQL UNION clause is used to select distinct values from the tables.
The SQL UNION ALL clause is used to select all values, including duplicates, from the tables.
The UNION operator is used to combine the result sets of two or more SELECT statements (see the example below):
Every SELECT statement within UNION must have the same number of columns.
The columns must also have similar data types.
The columns in every SELECT statement must also be in the same order.
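A minimal sketch (the employees_us and employees_eu tables are hypothetical):
-- UNION returns distinct rows from the combined result sets
SELECT name, city FROM employees_us
UNION
SELECT name, city FROM employees_eu;
-- UNION ALL keeps duplicates, so it avoids the cost of duplicate elimination
SELECT name, city FROM employees_us
UNION ALL
SELECT name, city FROM employees_eu;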
EXCEPT or MINUS: these return the records that exist in Dataset1 and not in Dataset2.
Each SELECT statement within the EXCEPT query must have the same number of fields in the result sets, with
similar data types.
The difference is only in the keyword: EXCEPT is available in databases such as PostgreSQL and SQL Server,
while MINUS is available in Oracle. There is no functional difference between the EXCEPT clause and the MINUS clause.
IN operator allows you to specify multiple values in a WHERE clause. The IN operator is a shorthand for multiple
OR conditions.
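A minimal sketch (the orders table is hypothetical) showing the equivalence:
-- Using IN
SELECT * FROM orders WHERE status IN ('NEW', 'PAID', 'SHIPPED');
-- Equivalent multiple OR conditions
SELECT * FROM orders
WHERE status = 'NEW' OR status = 'PAID' OR status = 'SHIPPED';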
ANY operator
The ANY operator returns a Boolean value: it returns TRUE if any of the subquery values meet the condition, i.e.,
the condition is true if the operation is true for any of the values in the range.
NOT IN can also take literal values, whereas NOT EXISTS needs a query to compare the results.
SELECT CAT_ID FROM CATEGORY_A WHERE CAT_ID NOT IN (SELECT CAT_ID FROM
CATEGORY_B)
NOT EXISTS
SELECT A.CAT_ID FROM CATEGORY_A A WHERE NOT EXISTS (SELECT B.CAT_ID FROM
CATEGORY_B B WHERE B.CAT_ID = A.CAT_ID)
NOT EXISTS can be good to use because it can join with the outer query and can lead to usage of an index if the
criteria use an indexed column.
EXISTS and NOT EXISTS are typically used in conjunction with a correlated nested query. The result of EXISTS
is a Boolean value: TRUE if the nested query result contains at least one tuple, or FALSE if the nested query result
contains no tuples.
Supporting operators in different DBMS environments:
Keyword Database System
TOP SQL Server, MS Access
LIMIT MySQL, PostgreSQL, SQLite
FETCH FIRST Oracle
Note that Oracle does not use the TOP clause; it uses ROWNUM instead, and FETCH FIRST is supported from
Oracle 12c onward.
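A hedged sketch of row limiting across dialects (the employees table is hypothetical):
-- SQL Server / MS Access
SELECT TOP 10 * FROM employees;
-- MySQL / PostgreSQL / SQLite
SELECT * FROM employees LIMIT 10;
-- Oracle (classic style)
SELECT * FROM employees WHERE ROWNUM <= 10;
-- Standard SQL, supported by Oracle 12c+ and others
SELECT * FROM employees FETCH FIRST 10 ROWS ONLY;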
SQL FUNCTIONS
SUM: retrieves the sum of the values over the rows of a table; it ignores NULL values.
Example:
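A minimal sketch (the employees table and its columns are hypothetical):
-- Total salary per department; rows with a NULL salary are ignored by SUM
SELECT dept_id, SUM(salary) AS total_salary
FROM employees
GROUP BY dept_id;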
Subquery Concept
END
Data abstraction
The process of hiding (suppressing) unnecessary details so that the high-level concept can be made more visible.
A data model is a relatively simple representation, usually graphical, of more complex real-world data structures.
Database Instance
The data stored in the database at a particular moment is called an instance of the database, also called the database
state (or occurrence, or snapshot). The content of the database, the instance, is also called an extension.
External/Conceptual Mapping
The external/Conceptual Mapping lies between the external level and the Conceptual level. Its role is to define the
correspondence between a particular external and conceptual view.
Detail description
When a schema at a lower level is changed, only the mappings between this schema and the higher-level schemas
need to be changed in a DBMS that fully supports data independence.
The higher-level schemas themselves are unchanged.
Hence, the application programs need not be changed, since they refer to the external schemas.
For example, the internal schema may be changed when certain file structures are reorganized or new indexes are
created to improve database performance.
Data abstraction
Data abstraction makes complex systems more user-friendly by removing the specifics of the system mechanics.
The conceptual data model has been most successful as a tool for communication between the designer and the end
user during the requirements analysis and logical design phases. Its success is because the model, using either ER or
UML, is easy to understand and convenient to represent. Another reason for its effectiveness is that it is a top-down
approach using the concept of abstraction. In addition, abstraction techniques such as generalization provide useful
tools for integrating end user views to define a global conceptual schema.
These differences show up in conceptual data models as different levels of abstraction; connectivity of relationships
(one-to-many, many-to-many, and so on); or as the same concept being modeled as an entity, attribute, or
relationship, depending on the user’s perspective.
Techniques used for view integration include abstraction, such as generalization and aggregation to create new
supertypes or subtypes, or even the introduction of new relationships. The higher-level abstraction, the entity cluster,
must maintain the same relationships between entities inside and outside the entity cluster as those that occur
between the same entities in the lower-level diagram.
ERD, EER terminology is not only used in conceptual data modeling but also in artificial intelligence literature
when discussing knowledge representation (KR).
The goal of KR techniques is to develop concepts for accurately modeling some domain of knowledge by creating
an ontology.
Ontology is a fundamental part of the Semantic Web. The goal of the World Wide Web Consortium (W3C) is to
bring the web to its full potential as a semantic web while reusing previous systems and artifacts. Most legacy
systems have been documented in structural analysis and structured design (SASD), especially in simple or
Extended ER Diagrams (ERD); such systems need upgrading to become part of the semantic web. Researchers have
presented ERD-to-OWL-DL ontology transformation rules at a concrete level; these rules facilitate an easy and
understandable transformation from ERD to OWL. Ontology engineering is an important aspect of the semantic
web vision for attaining a meaningful representation of data. Although various techniques exist for the creation of
ontologies, most methods involve a number of complex phases, scenario-dependent ontology development, and
poor validation of the ontology; a lightweight alternative is to build domain ontology using the Entity Relationship
(ER) model.
We now discuss four abstraction concepts that are used in semantic data models, such as the EER model as well as
in KR schemes: (1) classification and instantiation, (2) identification, (3) specialization and generalization, and (4)
aggregation and association.
One ongoing project that is attempting to allow information exchange among computers on the Web is called the
Semantic Web, which attempts to create knowledge representation models that are quite general in order to allow
meaningful information exchange and search among machines.
One commonly used definition of ontology is a specification of a conceptualization. In this definition, a
conceptualization is the set of concepts that are used to represent the part of reality or knowledge that is of interest to
a community of users.
Types of Abstractions
Classification: A is a member of class B.
Aggregation: B, C, and D are aggregated into A; A is made of/composed of B, C, and D (IS-MADE-OF,
IS-ASSOCIATED-WITH, IS-PART-OF, IS-COMPONENT-OF). Aggregation is an abstraction through which
relationships are treated as higher-level entities.
Generalization: B, C, and D can be generalized into A; B IS-A/IS-AN A (IS-A, IS-AS-LIKE, IS-KIND-OF).
Category or Union: a category represents a single superclass or subclass relationship with more than one
superclass.
Specialization: A can be specialized into B, C, and D; B, C, and D are special cases of A. The HAS-A, HAS-AN
approach is used in specialization.
Composition: IS-MADE-OF (like aggregation).
Identification: IS-IDENTIFIED-BY.
UML Diagrams Notations
UML stands for Unified Modeling Language. ERD stands for Entity Relationship Diagram. UML is a popular and
standardized modeling language that is primarily used for object-oriented software. Entity-Relationship diagrams are
used in structured analysis and conceptual modeling.
Object-oriented data models are typically depicted using Unified Modeling Language (UML) class diagrams.
Unified Modeling Language (UML) is a language based on OO concepts that describes a set of diagrams and
symbols that can be used to graphically model a system. UML class diagrams are used to represent data and their
relationships within the larger UML object-oriented system’s modeling language.
Associations
UML uses Boolean attributes instead of unary relationships but allows relationships of all other entities. Optionally,
each association may be given at most one name. Association names normally start with a capital letter. Binary
associations are depicted as lines between classes. Association lines may include elbows to assist with layout or
when needed (e.g., for ring relationships).
ER Diagram and Class Diagram Synchronization Sample
Supporting the synchronization between ERD and Class Diagram. You can transform the system design from the
data model to the Class model and vice versa, without losing its persistent logic.
Conversions of Terminology of UML and ERD
Types of Attributes-
In ER diagram, attributes associated with an entity set may be of the following types-
1. Simple attributes/atomic attributes/Static attributes
2. Key attribute
3. Unique attributes
4. Stored attributes
5. Prime attributes
6. Derived attributes (DOB, AGE, Oval is a derived attribute)
7. Composite attribute (Address (street, door#, city, town, country))
8. The multivalued attribute (double ellipse (Phone#, Hobby, Degrees))
9. Dynamic Attributes
10. Boolean attributes
The fundamental new idea in the MOST model is the so-called dynamic attributes. Each attribute of an object class
is classified to be either static or dynamic. A static attribute is as usual. A dynamic attribute changes its value with
time automatically.
Attributes of the database tables which are candidate keys of the database tables are called prime attributes.
Symbols of Attributes:
The Entity
The entity is the basic building block of the E-R data model. The term entity is used with three different meanings,
for three different terms:
Entity type
Entity instance
Entity set
Tangible Entity:
Tangible Entities are those entities that exist in the real world physically. Example: Person, car, etc.
Intangible Entity:
Intangible (Concepts) Entities are those entities that exist only logically and have no physical existence. Example:
Bank Account, etc.
Major of entity types
1. Strong Entity Type
2. Weak Entity Type
3. Naming Entity
4. Characteristic entities
5. Dependent entities
6. Independent entities
Details of entity types
An entity type whose instances can exist independently, that is, without being linked to the instances of any other
entity type is called a strong entity type.
A weak entity can be identified uniquely only by considering the primary key of another (owner) entity.
The owner entity set and weak entity set must participate in a one-to-many relationship set (one owner, many weak
entities).
The weak entity set must have total participation in this identifying relationship set.
Weak entities have only a “partial key” (shown with a dashed underline). When the owner entity is deleted, all
owned weak entities must also be deleted.
Naming entity types
Following are some recommendations for naming entity types.
Singular nouns are recommended, but plurals can also be used.
Organization-specific names, like customer, client, or owner, will all work.
Writing in capitals is generally followed, but other conventions will also work.
Abbreviations can be used; be consistent and avoid confusing abbreviations. If they are confusing for
others today, tomorrow they will confuse you too.
Database Design Tools
Some commercial products are aimed at providing environments to support the DBA in performing database design.
These environments are provided by database design tools, or sometimes as part of a more general class of products
known as computer-aided software engineering (CASE) tools. Such tools usually have some of the following kinds
of components; it would be rare for a single product to offer all these capabilities.
1. ER Design Editor
2. ER to Relational Design Transformer
3. FD to ER Design Transformer
4. Design Analyzers
ER Modeling Rules to design database
Three components:
1. Structural part - set of rules applied to the construction of the database
2. Manipulative part - defines the types of operations allowed on the data
3. Integrity rules - ensure the accuracy of the data
Context diagrams are the most basic data flow diagrams. They provide a broad view that is easily digestible but
offers little detail. They always consist of a single process and describe a single system. The only process displayed
in the CDFDs is the process/system being analyzed. The name of the CDFDs is generally a Noun Phrase.
1-level DFD: In a 1-level DFD, the context diagram is decomposed into multiple bubbles/processes. At this level,
we highlight the main functions of the system and break down the high-level process of the 0-level DFD into
subprocesses.
2-level DFD: A 2-level DFD goes one step deeper into parts of the 1-level DFD. It can be used to plan or record the
specific/necessary detail about the system’s functioning.
Detailed DFDs are detailed enough that it doesn’t usually make sense to break them down further.
Logical data flow diagrams focus on what happens in a particular information flow: what information is being
transmitted, what entities are receiving that info, what general processes occur, etc. They describe the functionality
of the processes that we showed briefly in the Level 0 diagram. Generally, detailed DFDs express the successive
details of those processes for which we did not or could not provide enough detail earlier.
Logical DFD
The logical data flow diagram mainly focuses on the system process. It illustrates how data flows in the system.
Logical DFDs are used in various organizations for the smooth running of the system. For example, in a banking
software system, a logical DFD is used to describe how data is moved from one entity to another.
Physical DFD
Physical data flow diagram shows how the data flow is actually implemented in the system. Physical DFD is more
specific and closer to implementation.
N-ary
N-ary (many entities involved in the relationship)
An n-ary relationship exists when there are n types of entities. One limitation of n-ary relationships is that, with
many entities involved, they are very hard to convert into relational tables.
A relationship between more than two entities is called an n-ary relationship.
Examples of relationships R between two entities E and F
Normalize the ERD and remove FDs from the entities before entering the final steps.
Transformation Rule 2. A key attribute of the entity type is represented by the primary key; all single-valued
attributes become columns of the table.
Transformation Rule 3. Given an entity E with a primary identifier, a multivalued attribute attached to E
in an ER diagram is mapped to a table of its own.
Table T also contains columns for all attributes attached to the relationship. Relationship occurrences are
represented by rows of the table, with the related entity instances uniquely identified by their primary key
values as rows.
Case 1: Binary relationship with 1:1 cardinality and total participation of an entity
Total participation means the minimum occurrence is 1; it is drawn with double lines on the totally participating side.
A person has 0 or 1 passport number, and a passport is always owned by exactly 1 person, so it is 1:1
cardinality with a full participation constraint from Passport. First convert each entity and the relationship to
tables.
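A minimal SQL sketch of this case (table and column names are invented): the UNIQUE and NOT NULL
constraints on the foreign key enforce the 1:1 cardinality and the total participation of Passport.
CREATE TABLE person (
  per_id   INT PRIMARY KEY,
  per_name VARCHAR(100)
);
CREATE TABLE passport (
  pp_no  INT PRIMARY KEY,
  per_id INT NOT NULL UNIQUE,   -- every passport has exactly one owner (total participation)
  CONSTRAINT fk_pp_person FOREIGN KEY (per_id) REFERENCES person (per_id)
);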
Case 2: Binary Relationship with 1:1 cardinality and partial participation of both entities
A male marries 0 or 1 female and vice versa as well. So it is a 1:1 cardinality with partial participation
constraint from both. First Convert each entity and relationship to tables. Male table corresponds to Male
Entity with key as M-Id. Similarly, the Female table corresponds to Female Entity with the key as F-Id.
Marry Table represents the relationship between Male and Female (Which Male marries which female).
So it will take attribute M-Id from Male and F-Id from Female.
Case 3: Binary Relationship with n: 1 cardinality
Case 4: Binary Relationship with m: n cardinality
Case 5: Binary Relationship with weak entity
In this scenario, an employee can have many dependents, and one dependent can depend on only one employee.
A dependent does not have any existence without an employee (e.g., a child is recorded as a dependent of his or her
father in the father's company), so it will be a weak entity, and its participation will always be total.
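A minimal SQL sketch of the weak entity mapping (names are invented): the dependent's key combines the owner's
key with the partial key, and ON DELETE CASCADE removes dependents when their owner is deleted.
CREATE TABLE employee (
  emp_id   INT PRIMARY KEY,
  emp_name VARCHAR(100)
);
CREATE TABLE dependent (
  emp_id   INT NOT NULL,             -- owner entity's key
  dep_name VARCHAR(100) NOT NULL,    -- partial key of the weak entity
  PRIMARY KEY (emp_id, dep_name),
  FOREIGN KEY (emp_id) REFERENCES employee (emp_id) ON DELETE CASCADE
);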
Generalization
The reverse process of defining subclasses (a bottom-up approach): bring together common attributes of entities
(ISA, IS-A, IS AN, IS-AN).
Union
Models a class/subclass with more than one superclass of distinct entity types; attribute inheritance is selective.
It expresses that some entity occurrences are associated with one occurrence of the related entity (the specific case).
The cardinality of a relationship is the number of instances of entity B that can be associated with entity A. There is
a minimum cardinality and a maximum cardinality for each relationship, with an unspecified maximum cardinality
being shown as N. Cardinality limits are usually derived from the organization's policies or external constraints.
For Example:
At the University, each Teacher can teach an unspecified maximum number of subjects as long as his/her weekly
hours do not exceed 24 (this is an external constraint set by an industrial award). Teachers may teach 0 subjects if
they are involved in non-teaching projects. Therefore, the cardinality limits for TEACHER are (O, N).
The University's policies state that each Subject is taught by only one teacher, but it is possible to have Subjects that
have not yet been assigned a teacher. Therefore, the cardinality limits for SUBJECT are (0,1). Teacher and subject
have M: N relationship connectivity. And they are binary (two) ternary too if we break this relationship. Such
situations are modeled using a composite entity (or gerund)
Cardinality Constraint: Quantification of the relationship between two concepts or classes (a constraint on
aggregation)
Remember cardinality is always a relationship to another thing.
Max Cardinality (Cardinality): always 1 or Many. If class A has a relationship to package B with a cardinality of
one, at most one occurrence of this class can exist in the package. The opposite would be a package with a max
cardinality of N, which means there can be N such classes.
Min Cardinality (Optionality): simply means "required"; it is always 0 or 1, where 0 means optional (0 or more)
and 1 means mandatory (1 or more).
The three types of cardinality you can define for a relationship are as follows:
Minimum Cardinality. Governs whether or not selecting items from this relationship is optional or required. If you
set the minimum cardinality to 0, selecting items is optional. If you set the minimum cardinality to greater than 0,
the user must select that number of items from the relationship.
Optional to Mandatory, Optional to Optional, Mandatory to Optional, Mandatory to Mandatory
Maximum Cardinality. Sets the maximum number of items that the user can select from a relationship. If you set the
minimum cardinality to greater than 0, you must set the maximum cardinality to a number at least as large. If you do
not enter a maximum cardinality, the default is 999.
Type of Max Cardinality: 1 to 1, 1 to many, many to many, many to 1
Default Cardinality. Specifies what quantity of the default product is automatically added to the initial solution that
the user sees. Default cardinality must be equal to or greater than the minimum cardinality and must be less than or
equal to the maximum cardinality.
The (min, max) notation replaces cardinality-ratio numerals and the single/double-line notation.
Associate a pair of integer numbers (min, max) with each participant of an entity type E in a relationship type R,
where 0 ≤ min ≤ max and max ≥ 1; max = N means finite but unbounded.
Relationship types can also have attributes
Attributes of 1:1 or 1:N relationship types can be migrated to one of the participating entity types
For a 1:N relationship type, the relationship attribute can be migrated only to the entity type on the N-side of the
relationship
Attributes on M: N relationship types must be specified as relationship attributes
In the case of data modeling, cardinality defines the number of instances in one entity set that can be associated
with instances of another set via a relationship set. In simple words, it refers to the relationship one table can have
with the other table: one-to-one, one-to-many, many-to-one, or many-to-many. A third meaning is the number of
tuples in a relation.
In the case of SQL, cardinality refers to a number: the number of unique values that appear in the table for a
particular column. For example, in a table called Person with a column Gender, the Gender column can have only
the values 'Male' or 'Female', so its cardinality is 2.
cardinality is the number of tuples in a relation (number of rows).
The multiplicity of an association indicates how many objects of the opposing class an object can be associated
with; when this number is variable, a range is given.
Multiplicity = Cardinality + Participation. The dictionary definition of cardinality is the number of elements in a
particular set.
Multiplicity can be set for attribute operations and associations in a UML class diagram (Equivalent to ERD) and
associations in a use case diagram.
A cardinality is how many elements are in a set. Thus, a multiplicity tells you the minimum and maximum allowed
members of the set. They are not synonymous.
UML uses the term Multiplicity, whereas Data Modelling uses the term Cardinality. They are for all intents and
purposes, the same.
Cardinality (sometimes referred to as Ordinality) is what is used in ER modeling to "describe" a relationship
between two Entities.
Cardinality and Modality
The main difference between cardinality and modality is that cardinality is defined as the metric used to specify the
number of occurrences of one object related to the number of occurrences of another object. On the contrary,
modality signifies whether a certain data object must participate in the relationship or not.
Cardinality refers to the maximum number of times an instance in one entity can be associated with instances in the
related entity. Modality refers to the minimum number of times an instance in one entity can be associated with an
instance in the related entity.
Cardinality can be 1 or Many, and the symbol is placed on the outside ends of the relationship line, closest to the
entity; modality can be 1 or 0, and the symbol is placed on the inside, next to the cardinality symbol. For a
cardinality of 1, a straight line is drawn. For a cardinality of Many, a crow's foot (a foot with three toes) is drawn.
For a modality of 1, a straight line is drawn. For a modality of 0, a circle is drawn.
zero or more
1 or more
1 and only 1 (exactly 1)
Multiplicity = Cardinality + Participation
Cardinality: Denotes the maximum number of possible relationship occurrences in which a certain entity can
participate (in simple terms: at most).
Note: Connectivity and Modality/ multiplicity/ Cardinality and Relationship are same terms.
Participation: Denotes if all or only some entity occurrences participate in a relationship (in simple terms: at least).
Generalization is like a bottom-up approach in which two or more entities of lower levels combine to form a higher
level entity if they have some attributes in common.
Generalization is more like a subclass and superclass system, but the only difference is the approach: generalization
uses the bottom-up approach, i.e., subclasses are combined to make a superclass. The IS-A, ISA, IS A, IS AN,
IS-AN approach is used in generalization.
Generalization is the result of taking the union of two or more (lower level) entity types to produce a higher level
entity type.
Generalization is the same as UNION. Specialization is the same as ISA.
A specialization is a top-down approach, and it is the opposite of Generalization. In specialization, one higher-level
entity can be broken down into two lower-level entities. Specialization is the result of taking a subset of a higher-
level entity type to form a lower-level entity type.
Normally, the superclass is defined first, the subclass and its related attributes are defined next, and the relationship
set is then added. HASA, HAS-A, HAS AN, HAS-AN.
In the mapping of UML to EER, specialization or generalization comes in the form of a hierarchical entity set; the
mapping process is given below, followed by a SQL sketch.
Mapping Process
1. Create tables for all higher-level entities.
2. Create tables for lower-level entities.
3. Add primary keys of higher-level entities in the table of lower-level entities.
4. In lower-level tables, add all other attributes of lower-level entities.
5. Declare the primary key of the higher-level table and the primary key of the lower-level table.
6. Declare foreign key constraints.
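A minimal SQL sketch of these mapping steps (a PERSON supertype with STUDENT and EMPLOYEE subtypes;
all names are invented):
CREATE TABLE person (              -- step 1: higher-level entity
  person_id INT PRIMARY KEY,
  name      VARCHAR(100)
);
CREATE TABLE student (             -- steps 2-6: lower-level entity
  person_id INT PRIMARY KEY,       -- carries the higher-level key
  major     VARCHAR(50),
  FOREIGN KEY (person_id) REFERENCES person (person_id)
);
CREATE TABLE employee (
  person_id INT PRIMARY KEY,
  salary    DECIMAL(10,2),
  FOREIGN KEY (person_id) REFERENCES person (person_id)
);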
This section presents the concept of entity clustering, which abstracts the ER schema to such a degree that the entire
schema can appear on a single sheet of paper or a single computer screen.
END
CHAPTER 4 DISCOVERING BUSINESS RULES AND DATABASE CONSTRAINTS
Overview of Database Constraints
1. Data Integrity Constraints
Constraints placed on the set of values allowed for the attributes of a relation are known as relational integrity
constraints.
2. Null Constraints
Comparisons Involving NULL and Three-Valued Logic:
SQL has various rules for dealing with NULL values. Recall from Section 3.1.2 that NULL is used to represent a
missing value, but that it usually has one of three different interpretations—value unknown (exists but is not
known), value not available (exists but is purposely withheld), or value not applicable (the attribute is undefined for
this tuple). Consider the following examples to illustrate each of the meanings of NULL.
1. Unknown value. A person’s date of birth is not known, so it is represented by NULL in the database.
2. Unavailable or withheld value. A person has a home phone but does not want it to be listed, so it is
withheld and represented as NULL in the database.
3. Not applicable attribute. An attribute Last_College_Degree would be NULL for a person who has no
college degrees because it does not apply to that person.
3. Enterprise Constraints
Enterprise constraints – sometimes referred to as semantic constraints – are additional rules specified by users or
database administrators and can be based on multiple tables.
Here are some examples.
A class can have a maximum of 30 students.
A teacher can teach a maximum of four classes per semester.
An employee cannot take part in more than five projects.
The salary of an employee cannot exceed the salary of the employee’s manager.
4. Key Constraints or Uniqueness Constraints :
These are called uniqueness constraints since they ensure that every tuple in the relation is unique.
A relation can have multiple keys or candidate keys (minimal superkeys), out of which we choose one as the
primary key. There is no restriction on choosing the primary key from among the candidate keys, but it is suggested
to go with the candidate key that has fewer attributes.
Null values are not allowed in the primary key; hence the NOT NULL constraint is also part of the key constraint.
5. Domain, Field, Row integrity Constraints
Domain Integrity:
A domain of possible values must be associated with every attribute (for example, integer types, character types,
date/time types). Declaring an attribute to be of a particular domain acts as a constraint on the values that it can
take. Domain integrity rules govern these values.
Field/cell values must be within the column's domain and represent a specific location within a table.
Entity integrity:
No attribute of a primary key can be null (every tuple must be uniquely identified)
6. Referential Integrity Constraints
A referential integrity constraint is commonly known as a foreign key constraint. Foreign key values are derived
from the primary key of another table. Similar options exist to deal with referential integrity violations caused by
UPDATE as the options discussed for the DELETE operation.
There are two types of referential integrity constraints:
Insert constraint: we can’t insert a value into the CHILD table if the value is not stored in the MASTER table.
Delete constraint: we can’t delete a value from the MASTER table if the value exists in the CHILD table.
The three rules that referential integrity enforces are:
1. A foreign key must have a corresponding primary key. (“No orphans” rule.)
2. When a record in a primary table is deleted, all related records referencing the primary key must also be
deleted, which is typically accomplished by using cascade delete.
3. If the primary key for a record changes, all corresponding records in other tables that use the primary key as a
foreign key must also be modified. This can be accomplished by using a cascade update. (A SQL sketch of these
rules follows below.)
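A minimal sketch of these rules in SQL (names invented; note that support for ON UPDATE CASCADE varies by
product, as discussed in a later chapter):
CREATE TABLE department (
  dept_id   INT PRIMARY KEY,
  dept_name VARCHAR(50)
);
CREATE TABLE employee (
  emp_id  INT PRIMARY KEY,
  dept_id INT,                        -- rule 1: must match an existing department (no orphans)
  FOREIGN KEY (dept_id) REFERENCES department (dept_id)
    ON DELETE CASCADE                 -- rule 2: deleting a department deletes its employees
    ON UPDATE CASCADE                 -- rule 3: key changes propagate to child rows
);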
7. Assertions constraints
An assertion is any condition that the database must always satisfy. Domain constraints and Integrity constraints are
special forms of assertions.
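A sketch using the standard CREATE ASSERTION syntax, enforcing the earlier example that a class can have at
most 30 students (the enrollment table is hypothetical; note that most major DBMSs do not implement CREATE
ASSERTION, so such rules are usually enforced with CHECK constraints or triggers instead):
CREATE ASSERTION class_size_limit
CHECK (NOT EXISTS (
  SELECT class_id
  FROM enrollment
  GROUP BY class_id
  HAVING COUNT(*) > 30
));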
8. Authorization constraints
We may want to differentiate among the users as far as the type of access they are permitted to various data values
in the database. This differentiation is expressed in terms of Authorization.
The most common being:
Read authorization – which allows reading but not the modification of data;
Insert authorization – which allows the insertion of new data but not the modification of existing data
Update authorization – which allows modification, but not deletion.
9. Preceding integrity constraints
Preceding integrity constraints are included in the data definition language because they occur in most
database applications. However, they do not include a large class of general constraints, sometimes called semantic
integrity constraints, which may have to be specified and enforced on a relational database.
The types of constraints we discussed so far may be called state constraints because they define the constraints that
a valid state of the database must satisfy. Another type of constraint, called transition constraints, can be defined to
deal with state changes in the database. An example of a transition constraint is: “the salary of an employee can only
increase.”
What is the use of data constraints?
Constraints are used to:
Avoid bad data being entered into tables.
Enforce business logic at the database level.
Improve database performance.
Enforce uniqueness and avoid redundant data in the database.
END
CHAPTER 5 DATABASE DESIGN STEPS AND IMPLEMENTATIONS
SQL version:
1970 – Dr. Edgar F. “Ted” Codd described a relational model for databases.
1974 – Structured Query Language appeared.
1978 – IBM released a product called System/R.
1986 – SQL1: IBM developed the prototype of a relational database, and the language was standardized by ANSI.
1989 – First revision, with minor changes to the standard.
1992 – SQL2 launched, a major revision of the standard.
1999 to 2003 – SQL3 launched, with features like triggers, object orientation, etc.
2006 – Support for the XML Query Language.
2011 – Improved support for temporal databases.
The first standard was SQL-86 in 1986; the most recent version is SQL:2016.
SQL-86
The first SQL standard was SQL-86. It was published in 1986 as ANSI standard and in 1987 as International
Organization for Standardization (ISO) standard. The starting point for the ISO standard was IBM’s SQL standard
implementation. This version of the SQL standard is also known as SQL 1.
SQL-89
The next SQL standard was SQL-89, published in 1989. This was a minor revision of the earlier standard, a superset
of SQL-86 that replaced SQL-86. The size of the standard did not change.
SQL-92
The next revision of the standard was SQL-92 – and it was a major revision. The language introduced by SQL-92 is
sometimes referred to as SQL 2. The standard document grew from 120 to 579 pages. However, much of the growth
was due to more precise specifications of existing features.
The most important new features (illustrated in the sketch below) were:
An explicit JOIN syntax and the introduction of outer joins: LEFT JOIN, RIGHT JOIN, FULL JOIN.
The introduction of NATURAL JOIN and CROSS JOIN.
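A minimal sketch contrasting the old and new styles (the employee and department tables are hypothetical):
-- Pre-SQL-92 implicit join in the WHERE clause
SELECT e.name, d.dept_name
FROM employee e, department d
WHERE e.dept_id = d.dept_id;
-- SQL-92 explicit join; LEFT JOIN also keeps employees with no department
SELECT e.name, d.dept_name
FROM employee e
LEFT JOIN department d ON e.dept_id = d.dept_id;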
SQL:1999
SQL:1999 (also called SQL 3) was the fourth revision of the SQL standard. Starting with this version, the standard
name used a colon instead of a hyphen to be consistent with the names of other ISO standards. This standard was
published in multiple installments between 1999 and 2002.
In 1993, the ANSI and ISO development committees decided to split future SQL development into a multi-part
standard.
The first installments were published in 1995, and SQL:1999 had many parts:
Part 1: SQL/Framework (100 pages) defined the fundamental concepts of SQL.
Part 2: SQL/Foundation (1050 pages) defined the fundamental syntax and operations of SQL: types, schemas,
tables, views, query and update statements, expressions, and so forth. This part is the most important for regular
SQL users.
Part 3: SQL/CLI (Call Level Interface) (514 pages) defined an application programming interface for SQL.
Part 4: SQL/PSM (Persistent Stored Modules) (193 pages) defined extensions that make SQL procedural.
Part 5: SQL/Bindings (270 pages) defined methods for embedding SQL statements in application programs written
in a standard programming language. The Dynamic SQL and Embedded SQL bindings were taken from SQL-92.
There was no active new work at the time, although C++ and Java interfaces were under discussion.
Part 6: SQL/XA. An SQL specialization of the popular XA Interface developed by X/Open (see below).
Part 7: SQL/Temporal. A newly approved SQL subproject to develop enhanced facilities for temporal data
management using SQL.
Part 8: SQL Multimedia (SQL/Mm)
A new ISO/IEC international standardization project for the development of an SQL class library for multimedia
applications was approved in early 1993. This new standardization activity, named SQL Multimedia (SQL/MM),
will specify packages of SQL abstract data type (ADT) definitions using the facilities for ADT specification and
invocation provided in the emerging SQL3 specification.
SQL:2006 further specified how to use SQL with XML. It was not a revision of the complete SQL standard, just of
Part 14, which deals with SQL-XML interoperability.
The current SQL standard is SQL:2016. Part 15, which defines multidimensional array support in SQL (SQL/MDA),
was published in 2019.
In the 21st century, the SQL standard has been regularly updated.
The SQL:2003 standard was published on March 1, 2004. Its major addition was window functions, a powerful
analytical feature that allows you to compute summary statistics without collapsing rows. Window functions
significantly increased the expressive power of SQL. They are extremely useful in preparing all kinds of business
reports, analyzing time series data, and analyzing trends. The addition of window functions to the standard coincided
with the popularity of OLAP and data warehouses. People started using databases to make data-driven business
decisions. This trend is only gaining momentum, thanks to the growing amount of data that all businesses collect.
SQL:2003 also introduced XML-related functions, sequence generators, and identity columns.
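A minimal window function sketch (the monthly_sales table is hypothetical): a running total computed without
collapsing rows.
SELECT sale_month,
       amount,
       SUM(amount) OVER (ORDER BY sale_month) AS running_total
FROM monthly_sales;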
Conformance with Standard SQL
This section declares Oracle's conformance to the SQL standards established by these organizations:
1. American National Standards Institute (ANSI) in 1986.
2. International Standards Organization (ISO) in 1987.
3. United States Federal Government Federal Information Processing Standards (FIPS)
A recursive CTE (RCTE) is a CTE that references itself. By doing so, the CTE repeatedly executes and returns
subsets of data until it returns the complete result set.
A recursive CTE is useful for querying hierarchical data, such as an organization chart where one employee reports
to a manager, or a multi-level bill of materials where a product consists of many components and each component
itself also consists of many other components.
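A minimal sketch of a recursive CTE over a hypothetical employees(emp_id, name, manager_id) table; PostgreSQL
and MySQL require the RECURSIVE keyword, while SQL Server and Oracle use plain WITH.
WITH RECURSIVE org_chart AS (
  -- anchor member: the top-level manager(s)
  SELECT emp_id, name, manager_id, 1 AS org_level
  FROM employees
  WHERE manager_id IS NULL
  UNION ALL
  -- recursive member: employees reporting to someone already in the result
  SELECT e.emp_id, e.name, e.manager_id, oc.org_level + 1
  FROM employees e
  JOIN org_chart oc ON e.manager_id = oc.emp_id
)
SELECT * FROM org_chart;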
Query-By-Example (QBE)
Query-By-Example (QBE) is the first interactive database query language to exploit such modes of HCI. In QBE, a
query is constructed on an interactive terminal involving two-dimensional ‘drawings’ of one or more relations,
visualized in tabular form, which are filled in selected columns with ‘examples’ of data items to be retrieved (thus
the phrase query-by-example).
It is different from SQL, and from most other database query languages, in having a graphical user interface that
allows users to write queries by creating example tables on the screen.
QBE, like SQL, was developed at IBM and QBE is an IBM trademark, but a number of other companies sell QBE-
like interfaces, including Paradox.
A convenient shorthand notation is that if we want to print all fields in some relation, we can place P. under the
name of the relation. This notation is like the SELECT * convention in SQL. It is equivalent to placing a P. in every
field:
Example of QBE:
III. Physical design. The physical design step involves the selection of indexes (access methods), partitioning,
and clustering of data. The logical design methodology in step II simplifies the approach to designing large
relational databases by reducing the number of data dependencies that need to be analyzed. This is accomplished by
inserting the conceptual data modeling and integration steps (II(a) and II(b) in the figures) into the traditional
relational design approach.
IV. Database implementation, monitoring, and modification.
Once the design is completed, the database can be created through the implementation of the formal schema
using the data definition language (DDL) of a DBMS.
Attribute: Describes some aspect of the entity/object, a characteristic of the object. An attribute is a data item that
describes a property of an entity or a relationship.
Column or field: A column represents the set of values for a specific attribute. An attribute belongs to a model while
a column belongs to a table; a column is a column in a database table, whereas attributes are externally visible
facets of an object.
A relation instance is a finite set of tuples in the RDBMS system. Relation instances never have duplicate tuples.
Relationship: An association between entities; the connected entities are called participants. Connectivity describes
the relationship (1:1, 1:M, M:N).
The degree of a relationship refers to the number of participating entities.
The relation in the image above has degree = 4, cardinality = 5, and 20 data values/cells.
Characteristics of relation
1. Distinct Relation/table name
2. Relations are unordered
3. Cells contain exactly one atomic (single) value; each cell (field) must contain a single value
4. No repeating groups
5. Distinct attribute names
6. Values of an attribute come from the same domain
7. The order of attributes has no significance
8. The attributes in R(A1, ..., An) and the values in t = <v1, v2, ..., vn> are ordered
9. Each tuple is distinct
10. The order of tuples has no significance
11. Tuples may be stored and retrieved in an arbitrary order
12. Tables manage attributes. This means they store information in form of attributes only
13. Tables contain rows. Each row is one record only
14. All rows in a table have the same columns. Columns are also called fields
15. Each field has a data type and a name
16. A relation must contain at least one attribute (column) that identifies each tuple (row) uniquely
External Tables
An external table is a read-only table whose metadata is stored in the database but whose data is
stored outside the database.
Horizontal partitioning (table rows): divides a table into multiple tables that contain the same number of columns,
but fewer rows.
Vertical partitioning (table columns): splits a table into two or more tables containing different columns.
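A hedged sketch of horizontal partitioning using PostgreSQL-style declarative syntax (the sales table is invented;
vertical partitioning, by contrast, is usually done by simply creating two tables that share the same primary key):
CREATE TABLE sales (
  sale_id   INT,
  sale_date DATE NOT NULL,
  amount    DECIMAL(10,2)
) PARTITION BY RANGE (sale_date);
-- Rows for 2023 are routed to this partition automatically
CREATE TABLE sales_2023 PARTITION OF sales
  FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');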
Collections vs. Records:
Collections: all items are of the same data type; same-type items are called elements; syntax: variable_name(index);
to declare a collection variable you can use %TYPE; lists and arrays are examples.
Records: items are of different data types; different-type items are called fields; syntax: variable_name.field_name;
to declare a record variable you can use %ROWTYPE or %TYPE; tables and columns are examples.
By default, tables are heap-organized. This means the database is free to store rows wherever there is space. You can
add the "organization heap" clause if you want to be explicit.
We can once again be faced with possible ambiguity among attribute names if attributes of the same name exist—
one in a relation in the FROM clause of the outer query, and another in a relation in the FROM clause of the nested
query. The rule is that a reference to an unqualified attribute refers to the relation declared in the innermost nested
query.
In general, ANSI SQL permits the use of ON DELETE and ON UPDATE clauses to cover
CASCADE, SET NULL, or SET DEFAULT.
MS Access, SQL Server, and Oracle support ON DELETE CASCADE.
MS Access and SQL Server support ON UPDATE CASCADE.
Oracle does not support ON UPDATE CASCADE.
Oracle supports SET NULL.
MS Access and SQL Server do not support SET NULL.
Refer to your product manuals for additional information on referential constraints.
While MS Access does not support ON DELETE CASCADE or ON UPDATE CASCADE at the SQL command-
line level, it does support these referential integrity actions through its graphical relationship designer.
Types of View
1. User-defined view
a. Simple view (Single table view)
b. Complex View (Multiple tables having joins, group by, and functions)
c. Inline View (Based on a subquery in from clause to create a temp table and form a complex query)
Advantages of View:
Provide security
Hide specific parts of the database from certain users
Customize base relations based on their needs
It supports the external model
Provide logical independence
Views do not store data in a physical location.
Views can provide access restriction, since direct insertion, update, and deletion of the underlying data can be prevented by exposing only the view.
We can perform DML on a view if it is derived from a single base relation and contains the primary key or a candidate key.
When can a view be updated?
1. The view is defined based on one and only one table.
2. The view must include the PRIMARY KEY of the table based upon which the view has been created.
3. The view should not have any field made out of aggregate functions.
4. The view must not have any DISTINCT clause in its definition.
5. The view must not have any GROUP BY or HAVING clause in its definition.
6. The view must not have any SUBQUERIES in its definitions.
7. If the view you want to update is based upon another view, the latter should be updatable.
8. None of the selected output fields of the view may use constants, strings, or value expressions.
A short example follows the list below.
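As a minimal sketch (the employee table and its columns are hypothetical), a single-table view that includes the primary key accepts DML directly:

CREATE VIEW emp_lahore AS
  SELECT emp_id, emp_name, city
  FROM employee
  WHERE city = 'Lahore';

-- The update passes through to the underlying employee table:
UPDATE emp_lahore SET emp_name = 'Ali' WHERE emp_id = 101;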
END
CHAPTER 6 DATABASE NORMALIZATION AND DATABASE JOINS
Quick Overview of Codd's 12 Rules
Not every database that has tables and constraints can be called a relational database system, and a database that merely uses a relational data model is not automatically a Relational Database Management System (RDBMS). A set of rules defines what a correct RDBMS is. These rules were published by Dr. Edgar F. Codd (E. F. Codd) in 1985, based on his extensive research on the relational model of database systems. Codd presented 13 rules (numbered 0 through 12) for testing a DBMS against his relational model; a database that follows the rules can be called a true relational database (RDBMS). They are popularly known as Codd's 12 rules.
Rule 0: The Foundation Rule
The database must be in relational form. So that the system can handle the database through its relational
capabilities.
Rule 1: Information Rule
All information in the database must be represented in one and only one way: as values in the cells of tables, in the form of rows and columns.
In most cases, if you can place your relations in the third normal form (3NF), then you will have avoided most of the
problems common to bad relational designs. Boyce-Codd (BCNF) and the fourth normal form (4NF) handle special
situations that arise only occasionally.
Denormalization in Databases
Denormalization is a database optimization technique in which we add redundant data to one or more tables. This
can help us avoid costly joins in a relational database. Note that denormalization does not mean not doing
normalization. It is an optimization technique that is applied after normalization.
Types of Denormalization
The two most common types of denormalization are two entities in a one-to-one relationship and two entities in a
one-to-many relationship.
Pros of Denormalization:
Retrieving data is faster since we do fewer joins. Queries to retrieve data can be simpler (and therefore less likely to have bugs), since we need to look at fewer tables.
Cons of Denormalization:
Updates and inserts are more expensive, and denormalization can make update and insert code harder to write.
Data may be inconsistent: which is the "correct" value for a piece of data?
Data redundancy necessitates more storage.
A small example of the technique follows.
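A minimal sketch, assuming hypothetical orders and customers tables: the customer's name is copied into orders so that common reads avoid a join.

ALTER TABLE orders ADD customer_name VARCHAR(100);

UPDATE orders o
SET customer_name = (SELECT c.customer_name
                     FROM customers c
                     WHERE c.customer_id = o.customer_id);

-- Trade-off: every change to customers.customer_name must now
-- also be applied to the copies held in orders.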
Relational Decomposition
Decomposition is used to eliminate some of the problems of bad design like anomalies, inconsistencies, and
redundancy.
When a relation in the relational model is not in an appropriate normal form, decomposition of the relation is required: the table is broken into multiple tables.
Types of Decomposition
1 Lossless Decomposition
If the information is not lost from the relation that is decomposed, then the decomposition will be lossless. The
process of normalization depends on being able to factor or decompose a table into two or smaller tables, in such a
way that we can recapture the precise content of the original table by joining the decomposed parts.
2 Lossy Decomposition
In a lossy decomposition, some information is lost when the table is decomposed and cannot be recovered by rejoining the decomposed parts.
END
CHAPTER 7 FUNCTIONAL DEPENDENCIES IN THE DATABASE MANAGEMENT
SYSTEM
SQL Server records two types of object dependency, schema-bound and non-schema-bound, described under "Types of schema dependency" below.
Functional Dependency
Functional dependency (FD) is a constraint between two sets of attributes in a relation. A functional dependency says that if two tuples have the same values for attributes A1, A2, ..., An, then those two tuples must also have the same values for attributes B1, B2, ..., Bn.
Functional dependency is represented by an arrow sign (→) that is, X→Y, where X functionally determines Y. The
left-hand side attributes determine the values of attributes on the right-hand side.
Types of schema dependency
Schema-bound and non-schema-bound dependencies.
Schema-bound dependencies are dependencies that prevent the referenced object from being altered or dropped without first removing the dependency.
An example of a schema-bound reference is a view created on a table using the WITH SCHEMABINDING option.
A non-schema-bound dependency does not prevent the referenced object from being altered or dropped.
An example is a stored procedure that selects from a table: the table can be dropped without first dropping the stored procedure or removing the reference to the table from that procedure.
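A minimal SQL Server sketch of a schema-bound view (the dbo.employee table and its columns are hypothetical):

CREATE VIEW dbo.active_emps
WITH SCHEMABINDING
AS
SELECT emp_id, emp_name
FROM dbo.employee          -- two-part names are required with SCHEMABINDING
WHERE is_active = 1;

-- DROP TABLE dbo.employee would now fail while this view exists.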
Functional Dependency (FD) is a constraint that determines the relation of one attribute to another attribute. A functional dependency is denoted by an arrow "→"; the functional dependency of Y on X is represented by X → Y.
In this example, if we know the value of the Employee number, we can obtain Employee Name, city, salary, etc. By
this, we can say that the city, Employee Name, and salary are functionally dependent on the Employee number.
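A hedged sketch of this example as a table definition (names are hypothetical): declaring the employee number as the primary key is what enforces the FD EmpNo → (EmpName, City, Salary) in practice.

CREATE TABLE employee (
  emp_no   INT PRIMARY KEY,   -- determinant: each emp_no appears once
  emp_name VARCHAR(100),      -- dependent attributes: fixed per emp_no
  city     VARCHAR(50),
  salary   DECIMAL(10, 2)
);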
Key Terms for Functional Dependency in a Database
Axiom: axioms are a set of inference rules used to infer all the functional dependencies on a relational database.
Decomposition: a rule that suggests that if a table appears to contain two entities determined by the same primary key, you should consider breaking them up into two different tables.
Dependent: displayed on the right side of the functional dependency diagram.
Determinant: displayed on the left side of the functional dependency diagram.
Union: a rule that suggests that if two tables are separate and the PK is the same, you should consider putting them together.
Armstrong’s Axioms
The inclusion rule is one rule of implication by which FDs can be generated that are guaranteed to hold for all
possible tables. It turns out that from a small set of basic rules of implication, we can derive all others. We list here
three basic rules that we call Armstrong’s Axioms
Armstrong’s Axioms property was developed by William Armstrong in 1974 to reason about functional
dependencies.
The axioms are rules that always hold:
1. Transitivity: if A -> B and B -> C, then A -> C (a transitive relation).
2. Reflexivity: A -> B if B is a subset of A.
3. Augmentation: if A -> B, then AC -> BC.
A multivalued dependency occurs when the existence of one or more rows in a table implies the existence of one or more other rows in the same table.
Multivalued dependency occurs when two attributes in a table are independent of each other but, both depend on a
third attribute.
A multivalued dependency consists of at least two attributes that are dependent on a third attribute that's why it
always requires at least three attributes.
Join Dependency
Join decomposition is a further generalization of Multivalued dependencies.
If the join of R1 and R2 over C is equal to relation R, then we can say that a join dependency (JD) exists.
Inclusion Dependency
Multivalued dependency and join dependency can be used to guide database design although they both are less
common than functional dependencies. The inclusion dependency is a statement in which some columns of a
relation are contained in other columns.
Transitive Dependency
When an indirect relationship causes functional dependency it is called Transitive Dependency.
Fully-functional Dependency
An attribute is fully functionally dependent on another attribute if it is functionally dependent on that attribute and not on any of its proper subsets.
Trivial functional dependency
A → B has trivial functional dependency if B is a subset of A.
The following dependencies are also trivial: A → A, B → B
{ DeptId, DeptName } -> DeptId
Non-trivial functional dependency
A → B has a non-trivial functional dependency if B is not a subset of A.
Trivial − If a functional dependency (FD) X → Y holds where Y is a subset of X, it is called a trivial FD.
Non-trivial − If an FD X → Y holds where Y is not a subset of X, it is called a non-trivial FD. For example, DeptId → DeptName is non-trivial, because DeptName is not a subset of DeptId.
Completely non-trivial − If an FD X → Y holds where X ∩ Y = Φ (X and Y have no attribute in common), it is called a completely non-trivial FD.
Related generalizations of multivalued dependency:
1. Join dependency (join decomposition is a further generalization of multivalued dependencies)
2. Inclusion dependency
Example of Dependency diagrams and flow
Dependency Preserving
If a relation R is decomposed into relations R1 and R2, then the dependencies of R either must be a part of R1 or R2
or must be derivable from the combination of functional dependencies of R1 and R2.
For example, suppose there is a relation R(A, B, C, D) with the functional dependency set (A -> BC). The relation R is decomposed into R1(ABC) and R2(AD); this is dependency preserving because the FD A -> BC is contained in relation R1(ABC).
Example: find the canonical cover of FD = { B → A, AD → BC, C → ABD }.
Solution: first decompose the right-hand sides using the decomposition
rule (Armstrong's axioms).
B→A
AD → B ( using decomposition inference rule on AD → BC)
AD → C ( using decomposition inference rule on AD → BC)
C → A ( using decomposition inference rule on C → ABD)
C → B ( using decomposition inference rule on C → ABD)
C → D ( using decomposition inference rule on C → ABD)
Now the set of FDs = { B → A, AD → B, AD → C, C → A, C → B, C → D }.
Next, remove the redundant FDs: C → A is implied by C → B and B → A, and AD → B is implied by AD → C and C → B, so both are removed, leaving { B → A, AD → C, C → B, C → D }. No attribute on a left-hand side is extraneous, so after combining C → B and C → D by the union rule, the canonical cover is { B → A, AD → C, C → BD }.
Canonical Cover / Irreducible Set
A canonical cover (irreducible set) of functional dependencies FD is a simplified set of FDs that has the same closure as the original set FD.
Extraneous attributes
An attribute of an FD is said to be extraneous if we can remove it without changing the closure of the set of FD.
The five concurrency problems that can occur in the database are:
1. Temporary Update Problem
2. Incorrect Summary Problem
3. Lost Update Problem
4. Unrepeatable Read Problem
5. Phantom Read Problem
Dirty Read – A Dirty read is a situation when a transaction reads data that has not yet been committed. For
example, Let’s say transaction 1 updates a row and leaves it uncommitted, meanwhile, Transaction 2 reads the
updated row. If transaction 1 rolls back the change, transaction 2 will have read data that is considered never to have
existed. (Dirty Read Problems (W-R Conflict))
Lost Updates occur when multiple transactions select the same row and update the row based on the value
selected (Lost Update Problems (W - W Conflict))
Non Repeatable read – Non Repeatable read occurs when a transaction reads the same row twice and gets a
different value each time. For example, suppose transaction T1 reads data. Due to concurrency, another transaction
T2 updates the same data and commits, Now if transaction T1 rereads the same data, it will retrieve a different
value. (Unrepeatable Read Problem (W-R Conflict))
Phantom Read – A phantom read occurs when the same query is executed twice but the rows retrieved by the two executions differ. For example, suppose transaction T1 retrieves a set of rows that satisfy some search criteria. Now,
Transaction T2 generates some new rows that match the search criteria for transaction T1. If transaction T1 re-
executes the statement that reads the rows, it gets a different set of rows this time.
Based on these phenomena, the SQL standard defines four isolation levels :
Read Uncommitted – Read Uncommitted is the lowest isolation level. In this level, one transaction may read
not yet committed changes made by another transaction, thereby allowing dirty reads. At this level, transactions are
not isolated from each other.
Read Committed – This isolation level guarantees that any data read is committed at the moment it is read; thus it does not allow dirty reads. The transaction holds a read or write lock on the current row, and thus prevents other transactions from reading, updating, or deleting it.
Repeatable Read – A more restrictive isolation level. The transaction holds read locks on all rows it references and write locks on all rows it inserts, updates, or deletes. Since other transactions cannot read, update, or delete these rows, it avoids non-repeatable reads.
Serializable – This is the highest isolation level. A serializable execution is an execution of operations in which concurrently executing transactions appear to be executing serially.
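A quick sketch of how an application selects one of these levels in standard SQL (exact support and the default level vary by product):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- alternatives: READ UNCOMMITTED, READ COMMITTED, SERIALIZABLE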
Durability: Durability ensures permanency. In a DBMS, durability guarantees that once an operation executes successfully, its data becomes permanent in the database: if a transaction is committed, its effects survive errors, power loss, and the like.
States of Transaction
Begin, active, partially committed, failed, committed, end, aborted
The aborted state in more detail:
If any of the checks fail and the transaction has reached a failed state then the database recovery system will make
sure that the database is in its previous consistent state. If not then it will abort or roll back the transaction to bring
the database into a consistent state.
If the transaction fails in the middle of execution, all of its executed operations are rolled back to the last consistent state. After aborting the transaction, the database recovery module selects one of two operations: 1) re-start the transaction, or 2) kill the transaction.
The concurrency control protocols ensure the atomicity, consistency, isolation, durability and serializability of the
concurrent execution of the database transactions.
Therefore, these protocols are categorized as:
1. Lock Based Concurrency Control Protocol
2. Time Stamp Concurrency Control Protocol
3. Validation Based Concurrency Control Protocol
The scheduler
A module that schedules the transaction’s actions, ensuring serializability
Two main approaches
1. Pessimistic: locks
2. Optimistic: time stamps, MV, validation
Scheduling
A scheduler is responsible for maintaining jobs/transactions when many jobs are entered at the same time (by multiple users), controlling their execution state and the read/write operations those jobs perform.
A schedule is a sequence of interleaved actions from all transactions: an execution of several transactions (Xacts) that preserves the order of the R(A) and W(A) operations within any one Xact.
Note: Two schedules are equivalent if:
They have the same dependencies;
They contain the same transactions and operations; and
They order all conflicting operations of non-aborting transactions in the same way.
A schedule is serializable if it is equivalent to a serial schedule
Process Scheduling handles the selection of a process for the processor on the basis of a
scheduling algorithm and also the removal of a process from the processor. It is an important part
of multiprogramming in operating system.
Process scheduling involves short-term scheduling, medium-term scheduling and long-term
scheduling.
The major differences between the long-term, medium-term, and short-term schedulers are as follows:
Long-term scheduler: a job scheduler; slower than the short-term scheduler; controls the degree of multiprogramming; almost absent or minimal in a time-sharing system; selects processes from the pool and loads them into memory for execution.
Medium-term scheduler: a process-swapping scheduler; its speed lies between the short-term and long-term schedulers; reduces the degree of multiprogramming; part of a time-sharing system; can reintroduce a process into memory so that its execution can be continued.
Short-term scheduler: a CPU scheduler; the fastest of the three; provides less control over the degree of multiprogramming; also minimal in a time-sharing system; selects processes that are ready to execute.
Serial Schedule
The serial schedule is a type of schedule where one transaction is executed completely before starting another
transaction.
Serializable Schedules
A serializable schedule always leaves the database in a consistent state. A serial schedule is always serializable because, in a serial schedule, a transaction starts only when the previous transaction has finished execution. A non-serial schedule, however, needs to be checked for serializability.
A non-serial schedule of n transactions is serializable if it is equivalent to some serial schedule of those n transactions. A serial schedule doesn't allow concurrency: only one transaction executes at a time, and the next starts only when the already running transaction has finished.
Linearizability: a guarantee about single operations on single objects Once the write completes, all later reads
(by wall clock) should reflect that write.
Types of Serializability
There are two types of Serializability.
1. Conflict Serializability
2. View Serializability
Conflict Serializable A schedule is conflict serializable if it is equivalent to some serial schedule
Non-conflicting operations can be reordered to get a serial schedule.
In general, a schedule is conflict-serializable if and only if its precedence graph is acyclic
A precedence graph is used for Testing for Conflict-Serializability
View serializability/view equivalence is a concept that is used to compute whether schedules are View-
Serializable or not. A schedule is said to be View-Serializable if it is view equivalent to a Serial Schedule (where no
interleaving of transactions is possible).
The non-serializable schedule is divided into two types, Recoverable and Non-recoverable Schedules.
1. Recoverable Schedule (with subtypes: cascading schedule, cascadeless schedule, strict schedule). In a recoverable schedule, if a transaction T commits, then any other transaction that T read from must also have committed.
A schedule is recoverable if:
It is conflict-serializable, and
Whenever a transaction T commits, all transactions that have written elements read by T have already been
committed.
2. Non-Recoverable Schedule
The relation between the various types of schedules forms a containment hierarchy, with each stricter class contained in the broader one.
Three-phase Commit
Another real-world atomic commit protocol is a three-phase commit (3PC). This protocol can reduce the amount of
blocking and provide for more flexible recovery in the event of failure. Although it is a better choice in unusually
failure-prone environments, its complexity makes 2PC the more popular choice.
Distributed systems typically provide transaction atomicity using a two-phase commit and transaction serializability using distributed locking.
DBMS Locking Techniques and Deadlocks
All lock requests are made to the concurrency-control manager. Transactions proceed only once the lock request is
granted. A lock is a variable, associated with the data item, which controls the access of that data item. Locking is
the most widely used form of concurrency control.
1. Binary Locks: A Binary lock on a data item can either be locked or unlocked states.
2. Shared/exclusive: This type of locking mechanism separates the locks in DBMS based on their uses. If a
lock is acquired on a data item to perform a write operation, it is called an exclusive lock.
3. Simplistic Lock Protocol: This type of lock-based protocol allows transactions to obtain a lock on every
object before beginning operation. Transactions may unlock the data item after finishing the ‘write’
operation.
4. Pre-claiming locking: the transaction requests locks on all the data items it needs before it begins, and executes only once all locks are granted. By contrast, the two-phase locking (2PL) protocol requires that a transaction acquire no new lock after it has released any lock; it has two phases, growing and shrinking.
5. Shared lock: These locks are referred to as read locks, and denoted by 'S'.
If a transaction T has obtained Shared-lock on data item X, then T can read X, but cannot write X. Multiple Shared
locks can be placed simultaneously on a data item.
A deadlock is an unwanted situation in which two or more transactions are waiting indefinitely for one another to
give up locks.
The Bakery algorithm is one of the simplest known solutions to the mutual exclusion problem for the general case of N processes: a critical-section solution for N processes that preserves the first-come, first-served property.
Before entering its critical section, the process receives a number. The holder of the smallest number enters the
critical section.
Deadlock detection
This technique allows deadlock to occur, but then, it detects it and solves it. Here, a database is periodically checked
for deadlocks. If a deadlock is detected, one of the transactions, involved in the deadlock cycle, is aborted. Other
transactions continue their execution. An aborted transaction is rolled back and restarted.
When a transaction waits more than a specific amount of time to obtain a lock (called the deadlock timeout), Derby
can detect whether the transaction is involved in a deadlock.
If deadlocks occur frequently in your multi-user system with a particular application, you might need to do some
debugging.
Deadlock detection and removal schemes
Wait-for graph
A directed graph recording which transaction waits for which; a cycle in the graph indicates a deadlock. (Prevention schemes, by contrast, may allow the older transaction to wait but kill the younger one.)
Phantom deadlock detection is the condition where the deadlock does not exist but due to a delay in propagating
local information, deadlock detection algorithms identify the locks that have been already acquired.
There are three alternatives for deadlock detection in a distributed system, namely.
Centralized Deadlock Detector − One site is designated as the central deadlock detector.
Hierarchical Deadlock Detector − Some deadlock detectors are arranged in a hierarchy.
Distributed Deadlock Detector − All the sites participate in detecting deadlocks and removing them.
Resource Preemption:
To eliminate deadlocks using resource preemption, we preempt some resources from processes and give those
resources to other processes. This method will raise three issues –
(a) Selecting a victim:
We must determine which resources and which processes are to be preempted, in an order that minimizes cost.
(b) Rollback:
We must determine what should be done with the process from which resources are preempted. One simple idea is
total rollback. That means aborting the process and restarting it.
(c) Starvation:
In a system, the same process may be always picked as a victim. As a result, that process will never complete its
designated task. This situation is called Starvation and must be avoided. One solution is that a process must be
picked as a victim only a finite number of times.
Concurrent execution is used for better transaction throughput and response time, achieved through better utilization of resources.
What is Concurrency Control?
Concurrent access is quite easy if all users are just reading data. There is no way they can interfere with one another.
Though for any practical Database, it would have a mix of READ and WRITE operations, and hence the
concurrency is a challenge. DBMS Concurrency Control is used to address such conflicts, which mostly occur with
a multi-user system.
Two Phase Locking Protocol is also known as 2PL protocol is a method of concurrency control in DBMS
that ensures serializability by applying a lock to the transaction data which blocks other transactions to access the
same data simultaneously. Two Phase Locking protocol helps to eliminate the concurrency problem in DBMS.
Every 2PL schedule is serializable.
Theorem: 2PL ensures conflict-serializable schedules, but it does not by itself ensure recoverable schedules.
2PL rule: Once a transaction has released a lock it is not allowed to obtain any other locks
This locking protocol divides the execution phase of a transaction into three different parts.
In the first phase, when the transaction begins to execute, it requires permission for the locks it needs.
The second part is where the transaction obtains all the locks. When a transaction releases its first lock, the third
phase starts.
In this third phase, the transaction cannot demand any new locks. Instead, it only releases the acquired locks.
The Two-Phase Locking protocol allows each transaction to make a lock or unlock request Growing Phase and
Shrinking Phase.
2PL has the following two phases:
A growing phase, in which a transaction acquires all the required locks without unlocking any data. Once all locks
have been acquired, the transaction is in its locked
point.
A shrinking phase, in which a transaction releases all locks and cannot obtain any new lock.
In practice:
– Growing phase is the entire transaction
– Shrinking phase is during the commit
The 2PL protocol indeed offers serializability. However, it does not ensure that deadlocks do not happen.
In distributed settings, local and global deadlock detectors search for deadlocks and resolve them by rolling the chosen transactions back to their initial states.
Strict Two-Phase Locking Method
Strict-Two phase locking system is almost like 2PL. The only difference is that Strict-2PL never releases a lock after
using it. It holds all the locks until the commit point and releases all the locks at one go when the process is over.
Strict 2PL: all locks held by a transaction are released only when the transaction completes. Strict 2PL guarantees conflict serializability and strict (cascadeless) schedules, but it does not prevent deadlocks.
Centralized 2PL
In Centralized 2PL, a single site is responsible for the lock management process. It has only one lock manager for
the entire DBMS.
Primary copy 2PL
Primary copy 2PL mechanism, many lock managers are distributed to different sites. After that, a particular lock
manager is responsible for managing the lock for a set of data items. When the primary copy has been updated, the
change is propagated to the slaves.
Distributed 2PL
In this kind of two-phase locking mechanism, Lock managers are distributed to all sites. They are responsible for
managing locks for data at that site. If no data is replicated, it is equivalent to primary copy 2PL. Communication
costs of Distributed 2PL are quite higher than primary copy 2PL
Time-Stamp Methods for Concurrency control:
The timestamp is a unique identifier created by the DBMS to identify the relative starting time of a transaction.
Typically, timestamp values are assigned in the order in which the transactions are submitted to the system. So, a
timestamp can be thought of as the transaction start time. Therefore, time stamping is a method of concurrency
control in which each transaction is assigned a transaction timestamp.
Timestamps must have two properties namely
Uniqueness: The uniqueness property assures that no equal timestamp values can exist.
Monotonicity: monotonicity assures that timestamp values always increase.
Timestamps are divided into further fields:
Granule Timestamps
Timestamp Ordering
Conflict Resolution in Timestamps
Timestamp-based Protocol in DBMS is an algorithm that uses the System Time or Logical Counter as a timestamp
to serialize the execution of concurrent transactions. The Timestamp-based protocol ensures that every conflicting
read and write operation is executed in timestamp order.
The timestamp-based algorithm uses a timestamp to serialize the execution of concurrent transactions. The protocol
uses the System Time or Logical Count as a Timestamp.
Conflict Resolution in Timestamps:
To deal with conflicts in timestamp algorithms, some transactions involved in conflicts are made to wait and abort
others.
Following are the main strategies of conflict resolution in timestamps:
Wait-die:
The older transaction waits for the younger if the younger has accessed the granule first.
The younger transaction is aborted (dies) and restarted if it tries to access a granule after an older concurrent
transaction.
Wound-wait:
If an older transaction requests a granule held by a younger one, the older pre-empts the younger by wounding (aborting) it.
If a younger transaction requests a granule held by an older concurrent transaction, the younger waits for the older one to commit.
Timestamp Ordering:
Following are the three basic variants of timestamp-based methods of concurrency control:
1. Total timestamp ordering
2. Partial timestamp ordering
3. Multiversion timestamp ordering
Multi-version concurrency control
Multiversion Concurrency Control (MVCC) enables snapshot isolation. Snapshot isolation means that whenever a
transaction would take a read lock on a page, it makes a copy of the page instead, and then performs its operations
on that copied page. This frees other writers from blocking due to read lock held by other transactions. Maintain
multiple versions of objects, each with its timestamp. Allocate the correct version to reads. Multiversion schemes
keep old versions of data items to increase concurrency.
The main difference between MVCC and standard locking:
read locks do not conflict with write locks ⇒ reading never blocks writing, and writing never blocks reading.
Advantage of MVCC
locking needed for serializability considerably reduced
Disadvantages of MVCC
visibility-check overhead (on every tuple read/write)
Validation-Based Protocols
The validation-based protocol in DBMS, also known as the optimistic concurrency control technique, is a method for handling concurrency in transactions. In this protocol, local copies of the transaction data are updated rather than the data itself, which results in less interference during transaction execution.
Optimistic Methods of Concurrency Control:
The optimistic method of concurrency control is based on the assumption that conflicts in database operations are
rare and that it is better to let transactions run to completion and only check for conflicts before they commit.
The Validation based Protocol is performed in the following three phases:
Read Phase
Validation Phase
Write Phase
Read Phase
In the Read Phase, the data values from the database can be read by a transaction but the write operation or updates
are only applied to the local data copies, not the actual database.
Validation Phase
In the Validation Phase, the data is checked to ensure that there is no violation of serializability while applying the
transaction updates to the database.
Write Phase
In the Write Phase, the updates are applied to the database if the validation is successful, else; the updates are not
applied, and the transaction is rolled back.
Laws of concurrency control
1. First Law of Concurrency Control
Concurrent execution should not cause application programs to malfunction.
2. Second Law of Concurrency Control
Concurrent execution should not have lower throughput or much higher response times than serial
execution.
Lock Thrashing is the point where system performance(throughput) decreases with increasing load
(adding more active transactions). It happens due to the contention of locks. Transactions waste time on lock waits.
The default concurrency control mechanism depends on the table type
Disk-based tables (D-tables) are by default optimistic.
Main-memory tables (M-tables) are always pessimistic.
Pessimistic locking (Locking and timestamp) is useful if there are a lot of updates and relatively high chances
of users trying to update data at the same time.
Optimistic (Validation) locking is useful if the possibility for conflicts is very low – there are many records but
relatively few users, or very few updates and mostly read-type operations.
Optimistic concurrency control is based on the idea of conflict detection and transaction restart, while pessimistic concurrency control uses locking as the basic serialization mechanism (it assumes that two or more users will want to update the same record at the same time, and prevents that possibility by locking the record, no matter how unlikely conflicts are).
Properties
Optimistic locking is useful in stateless environments (such as mod_plsql and the like). Not only useful but critical.
optimistic locking -- you read data out and only update it if it did not change.
Optimistic locking only works when developers modify the same object. The problem occurs when multiple
developers are modifying different objects on the same page at the same time. Modifying one
object may affect the process of the entire page, which other developers may not be aware of.
pessimistic locking -- you lock the data as you read it out AND THEN modify it.
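A minimal sketch of both styles against a hypothetical products table (the version column is an assumption used to detect concurrent changes):

-- Optimistic: read the row (remembering version = 7), update only if unchanged.
UPDATE products
SET price = 99, version = version + 1
WHERE product_id = 42
  AND version = 7;   -- 0 rows updated means someone changed it first: retry

-- Pessimistic: lock the row while reading, then modify it.
SELECT price FROM products WHERE product_id = 42 FOR UPDATE;
UPDATE products SET price = 99 WHERE product_id = 42;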
Lock Granularity:
A database is represented as a collection of named data items. The size of the data item chosen as the unit of
protection by a concurrency control program is called granularity. Locking can take place at the following level :
Database level.
Table level(Coarse-grain locking).
Page level.
Row (Tuple) level.
Attributes (fields) level.
Multiple Granularity
Let's start by understanding the meaning of granularity.
Granularity: It is the size of the data item allowed to lock.
It can be defined as hierarchically breaking up the database into blocks that can be locked.
The multiple granularity protocol enhances concurrency and reduces lock overhead.
It keeps track of what to lock and how to lock.
It makes it easy to decide either to lock a data item or to unlock a data item. This type of hierarchy can be
graphically represented as a tree.
There are three additional lock modes with multiple granularities:
Intention-shared (IS): It contains explicit locking at a lower level of the tree but only with shared locks.
Intention-Exclusive (IX): It contains explicit locking at a lower level with exclusive or shared locks.
Shared & Intention-Exclusive (SIX): In this lock, the node is locked in shared mode, and some node is locked in
exclusive mode by the same transaction.
Compatibility Matrix with Intention Lock Modes: the following table describes the compatibility matrix for these lock modes (yes = the two locks may be held on the same node by different transactions):
      IS    IX    S     SIX   X
IS    yes   yes   yes   yes   no
IX    yes   yes   no    no    no
S     yes   no    yes   no    no
SIX   yes   no    no    no    no
X     no    no    no    no    no
For example (a phantom):
– T1: reads the list of products
– T2: inserts a new product
– T1: re-reads: a new product appears!
Dealing With Phantoms
Lock the entire table, or
Lock the index entry for ‘blue’
– If the index is available
Or use predicate locks
– A lock on an arbitrary predicate
Dealing with phantoms is expensive
END
CHAPTER 9 RELATIONAL ALGEBRA AND QUERY PROCESSING
Relational algebra is a procedural query language. It gives a step-by-step process to obtain the result of the query.
It uses operators to perform queries.
What is an "Algebra"?
Answer: a set of operands and operations that is "closed" under all compositions.
What is the basis of query languages?
Answer: two formal query languages form the basis of "real" query languages (e.g., SQL):
1) Relational Algebra: operational; it provides a recipe for evaluating a query and is useful for representing execution plans. It is a language based on operators and a domain of values; the operators map values taken from the domain into other domain values. Domain: the set of relations/tables.
2) Relational Calculus: Let users describe what they want, rather than how to compute it. (Nonoperational, Non-
Procedural, declarative.)
SQL is an abstraction of relational algebra. It makes using it much easier than writing a bunch of math. Effectively,
the parts of SQL that directly relate to relational algebra are:
SQL -> Relational Algebra
Selecting columns -> Projection
WHERE clause (selecting rows) -> Selection
UNION -> Set Union
EXCEPT / MINUS -> Set Difference
CROSS JOIN -> Cartesian Product (also what you get when you forget your join condition)
INNER JOIN ... ON -> Theta / Equi Join
(A short annotated query follows.)
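A small sketch with hypothetical emp and dept tables, annotated with the relational algebra each clause corresponds to:

-- π emp_name ( σ location = 'Lahore' ( emp ⋈ dept ) )
SELECT e.emp_name                        -- projection (π)
FROM emp e
JOIN dept d ON e.dept_id = d.dept_id     -- equi join (⋈)
WHERE d.location = 'Lahore';             -- selection (σ)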
Select (σ): selects a subset of the tuples according to a given selection condition (unary operator).
Projection (π): eliminates all attributes of the input relation except those mentioned in the projection list (unary operator); the projection operator has to eliminate duplicates.
Union (∪): includes all tuples that are in table A or B.
Set Difference (−): A − B is a relation that includes all tuples that are in A but not in B.
Intersection (∩): defines a relation consisting of the set of all tuples that are in both A and B.
Cartesian Product (×): merges columns from two relations.
Inner Join: includes only those tuples that satisfy the matching criteria.
Theta Join (θ): the general case of the JOIN operation, denoted by the symbol θ.
Equi Join: a theta join that uses only an equivalence condition.
Natural Join (⋈): can only be performed if there is a common attribute (column) between the relations.
Outer Join: includes the tuples that satisfy the matching criteria along with the unmatched tuples from one or both relations.
Left Outer Join (⟕): keeps all tuples of the left relation.
Right Outer Join (⟖): keeps all tuples of the right relation.
Full Outer Join (⟗): includes all tuples from both relations, irrespective of the matching condition.
Select Operation
Notation: σp(r), where p is called the selection predicate.
Project Operation
Notation: πA1, ..., Ak(r)
The result is the relation of k columns obtained by deleting the columns that are not listed.
Union Operation
Notation: r ∪ s
Relational Calculus
There is an alternate way of formulating queries known as relational calculus. Relational calculus is a non-procedural query language: the user is not concerned with the details of how to obtain the results. Relational calculus states what to do but never explains how to do it. Most commercial relational languages are based on aspects of relational calculus, including SQL, QBE, and QUEL.
It is based on Predicate calculus, a name derived from a branch of symbolic language. A predicate is a truth-valued
function with arguments.
Differences between RA and RC
1. Language type: relational algebra is a procedural query language; relational calculus is a non-procedural (declarative) query language.
2. Objective: relational algebra targets how to obtain the result; relational calculus targets what result to obtain.
3. Order: relational algebra specifies the order in which operations are to be performed; relational calculus specifies no such order of execution for its operations.
4. Dependency: relational algebra is domain-independent; relational calculus can be domain-dependent.
5. Programming language: relational algebra is close to programming language concepts; relational calculus is not related to programming language concepts.
Tuple Relational Calculus (TRC): the variables represent tuples from specified relations. A tuple is a single element of a relation; in database terms, it is a row. Notation: {T | P(T)} or {T | Condition(T)}. Example: {T | EMPLOYEE(T) AND T.DEPT_ID = 10}.
Domain Relational Calculus (DRC): the variables represent values drawn from specified domains. A domain is equivalent to a column data type plus any constraints on the value of the data. Notation: {a1, a2, a3, ..., an | P(a1, a2, a3, ..., an)}. Example: {<a1, a2, ...> | <a1, a2, ...> ∈ EMPLOYEE AND DEPT_ID = 10}.
SQL, Relational Algebra, Tuple Calculus, and domain calculus examples: Comparisons
Select Operation
R = (A, B)
Relational Algebra: σB=17 (r)
Tuple Calculus: {t | t ∈ r ∧ B = 17}
Domain Calculus: {<a, b> | <a, b> ∈ r ∧ b = 17}
Project Operation
R = (A, B)
Relational Algebra: ΠA(r)
After translating the given query, we can execute each relational algebra operation by using different algorithms. So,
in this way, query processing begins its working.
Query processor
Query processor assists in the execution of database
queries such as retrieval, insertion, update, or removal of data
Key components:
Data Manipulation Language (DML) compiler
Query parser
Query rewriter
Query optimizer
Query executor
Query Processing Workflow
Right from the moment the query is written and submitted by the user to the point of its execution and the eventual return of the results, there are several steps involved, outlined below.
The phases of query execution in the system (Oracle-style): first the query travels from the client process to the server process, into the SQL area of the PGA, and then the following phases start:
1 Parsing (build the parse tree; syntax check, semantic check, shared pool check; a shared pool hit allows a soft parse)
2 Transformation (binding)
3 Estimation / query optimization
4 Plan generation, row source generation
5 Query execution and plan
6 Query result
(A sketch of inspecting the chosen plan follows.)
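Most products expose the plan the optimizer chose; as a hedged sketch in Oracle syntax (the emp table is hypothetical):

EXPLAIN PLAN FOR
  SELECT emp_name FROM emp WHERE dept_id = 10;

-- Display the plan just produced:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);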
Index and Table scan in the query execution process
Query Evaluation
The logic applied to the evaluation of SELECT statements, as described here, does not precisely reflect how the
DBMS Server evaluates your query to determine the most efficient way to return results. However, by applying this
logic to your queries and data, the results of your queries can be anticipated.
1. Evaluate the FROM clause. Combine all the sources specified in the FROM clause to create a Cartesian product
(a table composed of all the rows and columns of the sources). If joins are specified, evaluate each join to obtain its
results table, and combine it with the other sources in the FROM clause. If SELECT DISTINCT is specified, discard
duplicate rows.
2. Apply the WHERE clause. Discard rows in the result table that do not fulfill the restrictions specified in the
WHERE clause.
3. Apply the GROUP BY clause. Group results according to the columns specified in the GROUP BY clause.
4. Apply the HAVING clause. Discard rows in the result table that do not fulfill the restrictions specified in the
HAVING clause.
5. Evaluate the SELECT clause. Discard columns that are not specified in the SELECT clause. (In case of SELECT
FIRST n… UNION SELECT …, the first n rows of the result from the union are chosen.)
6. Perform any unions. Combine result tables as specified in the UNION clause. (In case of SELECT FIRST n…
UNION SELECT …, the first n rows of the result from the union are chosen.)
7. Apply the ORDER BY clause. Sort the result rows as specified. (A worked example follows below.)
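A sketch of this logical order against a hypothetical sales table, with each clause annotated by the step above that evaluates it:

SELECT dept_id, SUM(amount) AS total   -- step 5: keep only the listed columns
FROM sales                             -- step 1: build the source
WHERE amount > 0                       -- step 2: discard rows
GROUP BY dept_id                       -- step 3: group rows
HAVING SUM(amount) > 1000              -- step 4: discard groups
ORDER BY total DESC;                   -- step 7: sort the result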
Steps to process a query: parsing, validation, resolution, optimization, plan compilation, execution.
The architecture of query engines:
Query processing algorithms iterate over members of input sets; algorithms are algebra operators. The physical
algebra is the set of operators, data representations, and associated cost functions that the database execution engine
supports, while the logical algebra is more related to the data model and expressible queries of the data model (e.g.
SQL).
Synchronization and transfer between operators are key. Naïve query plan methods include the creation of
temporary files/buffers, using one process per operator, and using IPC. The practical method is to implement all
operators as a set of procedures (open, next, and close), and have operators schedule each other within a single
process via simple function calls. Each time an operator needs another piece of data ("granule"), it calls its data input
operator's next function to produce one. Operators structured in such a manner are called iterators.
(Figure omitted: three SQL relational algebra query plans, one pushed and one nearly fully pushed.)
Query plans are algebra expressions and can be represented as trees. Left-deep (every right subtree is a leaf), right-deep (every left subtree is a leaf), and bushy (arbitrary) are the three common structures. In a left-deep tree, each operator draws input from one input and an inner loop iterates over the other input.
Cost Estimation
The cost of a query evaluation plan is estimated in terms of various resources, including the number of disk accesses and the CPU time taken to execute the query.
Query Optimization
Summary of steps of processing an SQL query:
Lexical analysis, parsing, validation, Query Optimizer, Query Code Generator, Runtime Database Processor
The term optimization here has the meaning “choose a reasonably efficient strategy” (not necessarily the best
strategy)
Query optimization: choosing a suitable strategy to execute a particular query more efficiently
An SQL query undergoes several stages: lexical analysis (scanning, LEX), parsing (YACC), validation
Scanning: identify SQL tokens
Parser: check the query syntax according to the SQL grammar
Validation: check that all attributes/relation names are valid in the particular database being queried
Then create the query tree or the query graph (these are internal representations of the query)
Main techniques to implement query optimization
Heuristic rules (to order the execution of operations in a query)
Computing cost estimates of different execution strategies
Process for heuristics optimization
1. The parser of a high-level query generates an initial
internal representation;
2. Apply heuristics rules to optimize the internal
representation.
3. A query execution plan is generated to execute groups of
operations based on the access paths available on the files
involved in the query.
Internal Sorting: (sorting files that fit entirely in the main memory)
All sorting in "real" database systems uses merging techniques since very large data sets are expected. Sorting
modules' interfaces should follow the structure of iterators.
Exploit the duality of quicksort and mergesort: a sort proceeds in a divide phase and a combine phase. One of the two phases is based on logical keys (indexes), while the other physically arranges the data items (which phase is the logical one is particular to the algorithm). There are two sub-algorithms: one for sorting a run within main memory, another for managing runs on disk or tape. The degree of fan-in (the number of runs merged in a given step) is a key parameter.
External sorting:
External sorting is used, for example, as the first step in bulk loading a B+ tree index (sort the data entries and records first), and it is useful for eliminating duplicate copies in a collection of records.
Sort-merge join algorithm involves sorting.
Hashing
Hashing should be considered for equality matches, in general.
Hashing-based query processing algos use the in-memory hash table of database objects; if data in the hash table is
bigger than the main memory (common case), then hash table overflow occurs. Three techniques for overflow
handling exist:
Avoidance: input set is partitioned into F files before any in-memory hash table is built. Partitions can be dealt with
independently. Partition sizes must be chosen well, or recursive partitioning will be needed.
Resolution: assume overflow won't occur; if it does, partition dynamically.
Hybrid: like resolution, but when partition, only write one partition to disk, keep the rest in memory.
Database tuning
END
CHAPTER 10 FILE STRUCTURES, INDEXING, AND HASHING
Overview: Relative data and information is stored collectively in file formats. A file is a sequence of records
stored in binary format.
File Organization
File Organization defines how file records are mapped onto disk blocks. We have four types of File Organization to
organize file records −
Sorted Files: Best if records must be retrieved in some order, or only a `range’ of records is needed.
Sequential File Organization
Store records in sequential order based on the value of the search key of each record. Organizing records by an index or key in this way is called sequential file organization, and it makes finding records based on the key much faster.
Hashing File Organization
A hash function is computed on some attribute of each record; the result specifies in which block of the file the record is placed. Organizing records via trees or hashing on some key is called a hashing file organization.
Heap File Organization
A record can be placed anywhere in the file where there is space; there is no ordering in the file. Organizing records in this unordered way is called a heap file organization.
Every record can be placed anywhere in the table file, wherever there is space for the record Virtually all databases
provide heap file organization.
To find all rows where the value of account_id is A-591 in a heap-organized table, the system must search through the entire table file. This is called a file scan.
File Operations
Operations on database files can be broadly classified into two categories −
1. Update Operations
2. Retrieval Operations
Update operations change the data values by insertion, deletion, or update. Retrieval operations, on the other hand,
do not alter the data but retrieve them after optional conditional filtering. In both types of operations, selection plays
a significant role. Other than the creation and deletion of a file, there could be several operations, which can be done
on files.
Open − A file can be opened in one of the two modes, read mode or write mode. In read mode, the operating
system does not allow anyone to alter data. In other words, data is read-only. Files opened in reading mode can be
shared among several entities. Write mode allows data modification. Files opened in write mode can be read but
cannot be shared.
Locate − Every file has a file pointer, which tells the current position where the data is to be read or written. This
pointer can be adjusted accordingly. Using the find (seek) operation, it can be moved forward or backward.
Read − By default, when files are opened in reading mode, the file pointer points to the beginning of the file. There
are options where the user can tell the operating system where to locate the file pointer at the time of opening a file.
The very next data to the file pointer is read.
Write − Users can select to open a file in write mode, which enables them to edit its contents. It can be deletion,
insertion, or modification. The file pointer can be located at the time of opening or can be dynamically changed if
the operating system allows it to do so.
Close − This is the most important operation from the operating system’s point of view. When a request to close a
file is generated, the operating system removes all the locks (if in shared mode).
Tree-Structured Indexing
Indexing
Indexing is a data structure technique to efficiently retrieve records from the database files based on some attributes
on which the indexing has been done. Indexing in database systems is like what we see in books.
Indexing is defined based on its indexing attributes.
Dense Index
In a dense index, there is an index record for every search key value in the database. This makes searching faster but
requires more space to store index records themselves. Index records contain a search key value and a pointer to the
actual record on the disk.
Sparse Index
In a sparse index, index records are not created for every search key. An index record here contains a search key and
an actual pointer to the data on the disk. To search a record, we first proceed by index record and reach the actual
location of the data. If the data we are looking for is not where we directly reach by following the index, then the
system starts a sequential search until the desired data is found.
Multilevel Index
Index records comprise search-key values and data pointers. The multilevel index is stored on the disk along with
the actual database files. As the size of the database grows, so does the size of the indices. There is an immense need
to keep the index records in the main memory to speed up the search operations. If the single-level index is used,
then a large size index cannot be kept in memory which leads to multiple disk accesses.
A multi-level Index helps in breaking down the index into several smaller indices to make the outermost level so
small that it can be saved in a single disk block, which can easily be accommodated anywhere in the main memory.
B+ Tree
A B+ tree is a balanced search tree (not a binary tree: each node may have many children) that follows a multi-level index format. The leaf nodes of a B+ tree hold the actual data pointers. A B+ tree keeps all leaf nodes at the same height, and the leaf nodes are linked in a linked list; therefore, a B+ tree supports both random access and sequential access.
Structure of B+ Tree
Every leaf node is at an equal distance from the root node. A B+ tree is of the order n where n is fixed for every
B+ tree.
Internal nodes −
Internal (non-leaf) nodes contain at least ⌈n/2⌉ pointers, except the root node.
At most, an internal node can contain n pointers.
Leaf nodes −
Leaf nodes contain at least ⌈n/2⌉ record pointers and ⌈n/2⌉ key values.
At most, a leaf node can contain n record pointers and n key values.
Every leaf node contains one block pointer P to point to the next leaf node and forms a linked list.
Hash Organization
Hashing uses hash functions with search keys as parameters to generate the address of a data record.
Bucket − A hash file stores data in bucket format. The bucket is considered a unit of storage. A bucket typically
stores one complete disk block, which in turn can store one or more records.
Hash Function − A hash function, h, is a mapping function that maps all the set of search keys K to the address
where actual records are placed. It is a function from search keys to bucket addresses.
Types of Hashing Techniques
There are mainly two types of SQL hashing methods/techniques:
1 Static Hashing
2 Dynamic Hashing/Extendible hashing
Static Hashing
In static hashing, when a search-key value is provided, the hash function always computes the same address.
Linear Probing − When the hash function generates an address at which data is already stored, the next free bucket is allocated to it. This mechanism is called open addressing (also known as closed hashing).
Data bucket – Data buckets are memory locations where the records are stored. It is also known as a Unit of storage.
Key: A DBMS key is an attribute or set of an attribute that helps you to identify a row(tuple) in a relation(table).
This allows you to find the relationship between two tables.
Hash function: A hash function, is a mapping function that maps all the set of search keys to the address where
actual records are placed.
Linear Probing – Linear probing is a fixed interval between probes. In this method, the next available data block is
used to enter the new record, instead of overwriting the older record.
Quadratic probing– It helps you to determine the new bucket address. It helps you to add Interval between probes by
adding the consecutive output of quadratic polynomial to starting value given by the original computation.
Hash index – It is an address of the data block. A hash function could be a simple mathematical function to even a
complex mathematical function.
Double Hashing –Double hashing is a computer programming method used in hash tables to resolve the issues of a
collision.
Bucket Overflow: The condition of bucket overflow is called a collision. This is a fatal state for any static hash function.
Hashing function h(r) Mapping from the index’s search key to a bucket in which the (data entry for) record r
belongs.
What is Collision?
Hash collision is a state when the resultant hashes from two or more data in the data set, wrongly map the same
place in the hash table.
How to deal with a hashing collision?
There are two techniques you can use to resolve a hash collision:
1. Rehashing: this method invokes a secondary hash function, which is applied repeatedly until an empty slot is found, where the record is then placed.
2. Chaining: the chaining method builds a linked list of items whose keys hash to the same value. This method requires an extra link field in each table position.
An index is an on-disk structure associated with a table or view that speeds the retrieval of rows from the table or
view. An index contains keys built from one or more columns in the table or view. Indexes are automatically created
when PRIMARY KEY and UNIQUE constraints are defined on table columns. An index on a file speeds up
selections on the search key fields for the index.
The index is a collection of buckets.
Bucket = primary page plus zero or more overflow pages. Buckets contain data entries.
Types of Indexes (declaration sketches follow the list)
1 Clustered Index
2 Non-Clustered Index
3 Column Store Index
4 Filtered Index
5 Hash-based Index
6 Dense primary index
7 sparse index
8 b or b+ tree index
9 FK index
10 Secondary index
11 File Indexing – B+ Tree
12 Bitmap Indexing
13 Inverted Index
14 Forward Index
15 Function-based index
16 Spatial index
17 Bitmap Join Index
18 Composite index
19 Primary key index If the search key contains a primary key, then it is called a primary index.
20 Unique index: Search key contains a candidate key.
21 Multilevel index(A multilevel index considers the index file, which we will now refer to as the first (or
base) level of a multilevel index, as an ordered file with a distinct value for each K(i))
22 Inner index: The main index file for the data
23 Outer index: A sparse index on the index
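A few of these map directly to one-line DDL statements; the sketch below is illustrative only and assumes a hypothetical emp table with empno, ename, deptno, gender, and salary columns:
CREATE INDEX emp_deptno_ix ON emp (deptno);              -- secondary (non-clustered) index
CREATE UNIQUE INDEX emp_empno_ux ON emp (empno);         -- unique index on a candidate key
CREATE INDEX emp_dept_sal_ix ON emp (deptno, salary);    -- composite index
CREATE INDEX emp_upper_name_ix ON emp (UPPER(ename));    -- function-based index
CREATE BITMAP INDEX emp_gender_bx ON emp (gender);       -- bitmap index on a low-cardinality column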
END
A schema is a collection of database objects, including logical structures such as tables, views, sequences, stored
procedures, synonyms, indexes, clusters, and database links.
A user owns a schema.
A user and a schema have the same name.
ACTIVATE A ROLE
SCOTT> set role SHARIF identified by devdb;
TO DISABLE ALL ROLES
SCOTT> set role none;
GRANT A PRIVILEGE
SONY can access the sham.emp table because the SELECT privilege was granted to PUBLIC, so sham.emp is
available to every user of the database. SONY has created a view EMP_VIEW based on sham.emp.
Note: If you revoke an object privilege from a user, that privilege is also revoked from anyone to whom that user granted it.
Note: If you grant RESOURCE role to the user, this privilege overrides all explicit tablespace quotas. The
UNLIMITED TABLESPACE system privilege lets the user allocate as much space in any tablespaces that make up
the database.
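A minimal SQL sketch of this scenario (user and table names as in the text above):
-- As SHAM: make emp readable by every database user
GRANT SELECT ON sham.emp TO PUBLIC;
-- As SONY: build a view on top of the granted table
CREATE VIEW emp_view AS SELECT * FROM sham.emp;
-- Revoking from PUBLIC also removes the access SONY's view relied on
REVOKE SELECT ON sham.emp FROM PUBLIC;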
Database account locks and unlock
Alter user admin identified by admin account lock;
Select u.username from all_users u where upper(u.username) like 'INFO%';
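The matching unlock side, as a minimal sketch (user name as above):
ALTER USER admin ACCOUNT UNLOCK;
-- A password reset is often combined with the unlock:
ALTER USER admin IDENTIFIED BY admin2 ACCOUNT UNLOCK;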
Database security and non-database security
END
CHAPTER 12 BUSINESS INTELLIGENCE TERMINOLOGIES IN DATABASE
SYSTEMS
Overview: Database systems are used for processing day-to-day transactions, such as sending a text or booking a
ticket online. This is also known as online transaction processing (OLTP). Databases are good for storing
information about and quickly looking up specific transactions.
Decision support systems (DSS) are generally defined as a class of information systems that deals with solving
semi-structured problems.
DSS
DSS helps businesses make sense of data so they can undergo more informed management decision-making. It has
three branches: data warehousing (DWH), online analytical processing (OLAP), and data mining (DM). I will discuss these in detail below.
Characteristics of a decision support system
DSS frameworks typically consist of three main components or characteristics:
The model management system: Uses various algorithms in creating, storing, and manipulating data models
The user interface: The front-end program enables end users to interact with the DSS
The knowledge base: A collection or summarization of all information including raw data, documents, and personal
knowledge
Data Mart
A data mart(s) can be created from an existing data warehouse—the top-down approach—or other sources, such as
internal operational systems or external data. Similar to a data warehouse, it is a relational database that stores
transactional data (time value, numerical order, reference to one or more objects) in columns and rows making it
easy to organize and access.
Data marts and data warehouses are both highly structured repositories where data is stored and managed until it is
needed. Data marts are designed for a specific line of business and DWH is designed for enterprise-wide range use.
A data mart is typically smaller than 100 GB, while a DWH is usually larger than 100 GB; a data mart covers a
single subject, but a DWH is a multiple-subject repository. Data marts are classified as independent or dependent data marts.
Data mart contains a subset of organization-wide data. This subset of data is valuable to specific groups of an
organization.
Types of Dimensions
Conformed Dimensions: A conformed dimension means the same thing in relation to every fact table to which it is joined. This dimension is used in more than one star schema or data mart.
Outrigger Dimensions: A dimension may have a reference to another dimension table. These secondary dimensions are called outrigger dimensions. This kind of dimension should be used carefully.
Shrunken Rollup Dimensions: Shrunken rollup dimensions are a subdivision of rows and columns of a base dimension. These kinds of dimensions are useful for developing aggregated fact tables.
Dimension-to-Dimension Table Joins: Dimensions may have references to other dimensions. However, these relationships can be modeled with outrigger dimensions.
Role-Playing Dimensions: A single physical dimension may be referenced multiple times in a fact table, with each reference linking to a logically distinct role for the dimension.
Junk Dimensions: A junk dimension is a collection of random transactional codes, flags, or text attributes. It may not logically belong to any specific dimension.
Characteristics of OLAP
The FASMI test characterizes OLAP methods as Fast Analysis of Shared Multidimensional Information; the term is derived from the first letters of the five characteristics:
Fast
The system is targeted to deliver most responses to users within about five seconds, with the simplest analyses
taking no more than one second and very few taking more than 20 seconds.
Analysis
The system can cope with any business logic and statistical analysis that is relevant for the application and the user,
and keeps it easy enough for the target user. Although some pre-programming may be needed, the user must be able
to define new ad hoc calculations as part of the analysis and to report on the data in any desired way, without having
to program; products (like Oracle Discoverer) that do not allow adequate end-user-oriented calculation flexibility
are excluded.
Share
The system implements all the security requirements for confidentiality and, if multiple write connections are
needed, concurrent update locking at an appropriate level. Not all applications need users to write data back, but for
the increasing number that do, the system should be able to manage multiple updates in a timely, secure manner.
Multidimensional
This is the basic requirement. An OLAP system must provide a multidimensional conceptual view of the data, including
full support for hierarchies, as this is certainly the most logical method to analyze businesses and organizations.
Information
The system should be able to hold all the data and derived information needed by the applications, however much
that is and wherever it resides.
OLAP Operations
Since OLAP servers are based on a multidimensional view of data, we will discuss OLAP operations in
multidimensional data.
Here is the list of OLAP operations −
1. Roll-up
2. Drill-down
3. Slice and dice
4. Pivot (rotate)
Roll-up
Roll-up performs aggregation on a data cube in any of the following ways −
By climbing up a concept hierarchy for a dimension
By dimension reduction
The following diagram illustrates how roll-up works.
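In SQL terms, a roll-up along a concept hierarchy can be sketched with GROUP BY ROLLUP; the sales table and its columns here are hypothetical:
-- Aggregate city-level sales up to country level and then to the grand total
SELECT country, city, SUM(amount) AS total_sales
FROM sales
GROUP BY ROLLUP (country, city);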
Drill-down
Drill-down is performed by stepping down a concept hierarchy for the dimension time.
Initially, the concept hierarchy was "day < month < quarter < year."
On drilling down, the time dimension descended from the level of the quarter to the level of the month.
When drill-down is performed, one or more dimensions from the data cube are added.
It navigates the data from less detailed data to highly detailed data.
Slice
The slice operation selects one particular dimension from a given cube and provides a new sub-cube. Consider the
following diagram that shows how a slice works.
Here, slice is performed for the dimension "time" using the criterion time = "Q1".
It forms a new sub-cube by fixing the value of a single dimension.
Dice
Dice selects two or more dimensions from a given cube and provides a new sub-cube. Consider the following
diagram that shows the dice operation.
The dice operation on the cube based on the following selection criteria involves three dimensions.
(location = "Toronto" or "Vancouver")
(time = "Q1" or "Q2")
(item = "Mobile" or "Modem")
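Against a relational star schema, slice and dice reduce to WHERE predicates; this hedged sketch assumes a sales fact table with location, quarter, item, and amount columns:
-- Slice: fix a single dimension value (time = 'Q1')
SELECT location, item, SUM(amount) AS total_sales
FROM sales
WHERE quarter = 'Q1'
GROUP BY location, item;
-- Dice: restrict several dimensions at once
SELECT location, quarter, item, SUM(amount) AS total_sales
FROM sales
WHERE location IN ('Toronto', 'Vancouver')
  AND quarter IN ('Q1', 'Q2')
  AND item IN ('Mobile', 'Modem')
GROUP BY location, quarter, item;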
Pivot
The pivot operation is also known as rotation. It rotates the data axes in view to provide an alternative presentation
of data. Consider the following diagram that shows the pivot operation.
Outlier Detection:
This data mining technique refers to the observation of data items in the dataset that do not match an expected
pattern or behavior. It can be used in a variety of domains, such as intrusion detection, fraud or fault detection, etc.
Outlier detection is also called outlier analysis or outlier mining.
Sequential Patterns:
This data mining technique helps to discover or identify similar patterns or trends in transaction data for a certain
period.
Prediction:
This technique lets the end user predict future values or trends from the most frequently repeated patterns in existing data.
Knowledge Extraction from Business intelligence techniques
1 Data Cleaning:
The data can have many irrelevant and missing parts. To handle this part, data cleaning is done. It involves handling
missing data, noisy data, etc.
Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
2 Data Transformation:
This step is taken to transform the data into appropriate forms suitable for the mining process.
3 Data discretization
Part of data reduction but with particular importance especially for numerical data
4 Data Reduction:
Data mining handles huge amounts of data, and analysis becomes harder as the volume grows. Data reduction
techniques address this: they aim to increase storage efficiency and reduce data storage and analysis costs while
producing the same (or almost the same) analytical results.
5 Data integration
Integration of multiple databases, data cubes, or files
Method of treating missing data
1 Ignoring and discarding data
2 Fill in the missing value manually
3 Use a global constant to fill in the missing values
4 Imputation using the mean, median, or mode (see the SQL sketch after this list)
5 Replace missing values using a prediction/ classification model
6 K-Nearest Neighbor (k-NN) approach (The best approach)
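Method 4 (mean imputation), sketched in SQL against a hypothetical emp table whose salary column has missing (NULL) values:
-- Replace NULL salaries with the mean of the known salaries
UPDATE emp
SET salary = (SELECT ROUND(AVG(salary)) FROM emp WHERE salary IS NOT NULL)
WHERE salary IS NULL;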
Information Retrieval (IR) can be defined as a software program that deals with the organization, storage,
retrieval, and evaluation of information from document repositories, particularly textual information.
An Information Retrieval (IR) model selects and ranks the document that is required by the user or the user has
asked for in the form of a query.
Information retrieval vs. data retrieval:
Information retrieval: the software program deals with the organization, storage, retrieval, and evaluation of
information from document repositories, particularly textual information. Small errors are likely to go unnoticed,
and the results obtained are approximate matches.
Data retrieval: deals with obtaining data from a database management system such as an ODBMS. It is a process of
identifying and retrieving data from the database based on the query provided by the user or application. A single
erroneous object means total failure, and the results obtained are exact matches.
END
CHAPTER 13 DBMS INTEGRATION WITH BPMS
Overview: Business process management systems (BPMS) are significant extensions of workflow management
(WFM) systems. A DBMS and a BPMS used together give better performance: the BPMS holds operational data
while the DBMS holds transactional and log data, and all transactional data passes through the BPMS. The BPMS
runs at the execution level and also holds document-flow data.
A key element of BPMN is the choice of shapes and icons used for the graphical elements identified in this
specification. The intent is to create a standard visual language that all process modelers will recognize and
understand. An implementation that creates and displays BPMN Process Diagrams SHALL use the graphical
elements, shapes, and markers illustrated in this specification.
Six Sigma is another set of practices that originate from manufacturing, in particular from engineering and
production practices at Motorola. The main characteristic of Six Sigma is its focus on the minimization of defects
(errors). Six Sigma places a strong emphasis on measuring the output of processes or activities, especially in terms
of quality. Six Sigma encourages managers to systematically compare the effects of improvement initiatives on the
outputs. Sigma symbolizes a single standard deviation from the mean.
The two main Six Sigma methodologies are DMAIC and DMADV. Each has its own set of recommended
procedures to be implemented for business transformation.
DMAIC is a data-driven method used to improve existing products or services for better customer satisfaction. It is
the acronym for the five phases: D – Define, M – Measure, A – Analyse, I – Improve, C – Control. DMAIC is
applied in the manufacturing of a product or delivery of a service.
DMADV is a part of the Design for Six Sigma (DFSS) process used to design or re-design different processes of
product manufacturing or service delivery. The five phases of DMADV are: D – Define, M – Measure, A – Analyse,
D – Design, V – Validate.
A business process is a collection of related, structured activities that produce a specific service or a particular
goal for a particular person(s).
Business Process management (BPM) includes methods, techniques, and software to design, enact, control
and analyze operational processes
The BPM lifecycle is considered to have five stages: design, model, execute, monitor, and optimize; some authors
add process reengineering as a sixth stage.
The difference between BPM and BPMS: BPM is a discipline that uses various methods to discover,
model, analyze, measure, improve, and optimize business processes.
BPM is a method, technique, or way of being/doing and BPMS is a collection of technologies to help build software
systems or applications to automate processes.
BPMS is a software tool used to improve an organization’s business processes through the definition, automation,
and analysis of business processes. It also acts as a valuable automation tool for businesses to generate a competitive
advantage through cost reduction, process excellence, and continuous process improvement. As BPM is a discipline
used by organizations to identify, document, and improve their business processes; BPMS is used to enable aspects
of BPM.
BPMN Task – a logical unit of work that is carried out as a single whole
Resource – a person or a machine that can perform specific tasks
Activity – the performance of a task by a resource
Case – a sequence of activities performed to achieve some goal; for example, an order, an insurance claim, or a car assembly
Work item – the combination of a case and a task that is just about to be carried out
Process – describes how a particular category of cases shall be managed
Control flow constructs – sequence, selection, iteration, parallelisation
BPMN concepts
Events
Things that happen instantaneously (e.g. an invoice has been received)
Activities
Units of work that have a duration (e.g. an activity to check an invoice)
Processes, events, and activities are logically related
Sequence
The most elementary form of relation is Sequence, which implies that one event or activity A is followed by another
event or activity B.
Start event
Circles used with a thin border
End event
Circles used with a thick border
Label
Give a name or label to each activity and event
Token
Once a process instance has been spawned/born, we use a token to identify the progress (or state) of that instance.
Gateway
There is a gating mechanism that either allows or disallows the passage of tokens through the gateway
Split gateway
A point where the process flow diverges
Have one incoming sequence flow and multiple outgoing sequence flows (representing the branches that diverge)
Join gateway
A point where the process flow converges
Mutually exclusive
Only one of them can be true every time the XOR split is reached by a token
Exclusive (XOR) split
To model the relation between two or more alternative activities, like in the case of the approval or rejection of a
claim.
Exclusive (XOR) join
To merge two or more alternative branches that may have previously been forked with an XOR-split
Indicated with an empty diamond or empty diamond marked with an “X”
It would not be appropriate to comment on BPM without also talking about SOA (Service Oriented Architectures)
due to the close coupling between the two and its dominance in industry today. Service-oriented architectures have
been around for a long time; however, when referring to them these days, they imply the implementation of systems
using web services technology. A web service is a standard approach to making a reusable component (a piece of
software functionality) available and accessible across the web and can be thought of as a repeatable business task
such as checking a credit balance, determining if a product is available or booking a holiday. Web services are
typically the way in which a business process is implemented. BPM is about providing a workflow layer to
orchestrate the web services. It provides the context to SOA essentially managing the dynamic execution of services
and allows business users to interact with them as appropriate.
SOA can be thought of as an architectural style which formally separates services (the business functionality) from
the consumers (other business systems). Separation is achieved through a service contract between the consumer and
producer of the service. This contract should address issues such as availability, version control, security,
performance, etc. Having said this, many web services are freely available over the internet, but using them is risky
without a service level agreement, as they may not exist in the future; however, this may not be an issue if similar
alternate web services are available. In addition to a service contract, there must be a way for providers to
publish service contracts and for consumers to locate service contracts. These typically occur through standards such
as Universal Description, Discovery and Integration (UDDI 1993), an XML (XML 2003) based standard that
enables businesses to publish details of services available on the internet. The Web
Services Description Language (WSDL 2007) provides a way of describing web services in an XML format. Note
that WSDL tells you how to interact with the web service but says nothing about how it actually works behind the
interface. The standard for communication is via SOAP (Simple Object Access Protocol) (SOAP 2007) which is a
specification for exchanging information in web services. These standards are not described in detail here as
information about them is commonly available so the reader is referred elsewhere for further information. The
important issue to understand about SOA in this context, is that it separates the contract from the implementation of
that contract thus producing an architecture which is loosely coupled resulting in easily reconfigurable systems,
which can adapt to changes in business processes easily.
There has been a convergence in recent times towards integrating various approaches such as SOA with SaaS
(Software as a Service) (Bennett et al., 2000) and the Web with much talk about Web Oriented Architectures
(WOA). This approach extends SOA to web-based applications in order to allow businesses to open up relevant
parts of their IT systems to customers, vendors, etc. as appropriate. This has now become a necessity in order to
address competitive advantage. WOA (Hinchcliffe 2006) is often considered to be a light-weight version of SOA
using RESTful Web services, open APIs and integration approaches such as mashups.
In order to manage the lifecycle of business processes in an SOA architecture, software is needed that will enable
you to, for example: expose services without the need for programming, compose services from other services,
deploy services on any platform (hardware and operating system), maintain security and usage policies, orchestrate
services i.e. centrally coordinate the invocation of multiple web services, automatically generate the WSDL; provide
a graphical design tool, a distributable runtime engine and service monitoring capabilities, have the ability to
graphically design transformations to and from non-XML formats. These are all typical functions provided by SOA
middleware along with a runtime environment which should include e.g. event detection, service hosting, intelligent
routing, message transformation processing, security capabilities, synchronous and asynchronous message delivery.
Often these functions will be divided into several products. An enterprise service bus (ESB) is typically at the core
of a SOA tool providing an event-driven, standards based messaging engine.
RAID stands for redundant array of inexpensive disks, later redefined as redundant array of independent disks; the
acronym is used for both. The term was coined by David Patterson, Garth A. Gibson, and Randy Katz at the University of
California, Berkeley in 1987.
Disk Array: Arrangement of several disks that gives abstraction of a single, large disk.
RAID techniques:
The key to lower I/O cost is reducing seek and rotation delays (through hardware or software solutions). Common measures of disk performance are:
1. Access time: the time from when a read or write request is issued to when the data transfer begins
2. Data-transfer rate: the rate at which data can be retrieved from or stored on disk (e.g., 25-100 MB/s)
3. Mean time to failure (MTTF): the average time the disk is expected to run continuously without any failure
Block vs. Page vs. Sector
Block:
- A sequence of bits and bytes, made up of a contiguous sequence of sectors from a single track; it has no fixed size.
- Also called a physical record on hard drives and floppies. The default NTFS block size is 4096 bytes.
- The smallest unit of logical memory used to read a file or write data to a file. For example, 4 tuples fit in one block if the block size is 2 KB, and 30 tuples fit in one block if the block size is 8 KB.
Page:
- Made up of unit blocks or groups of blocks; pages have fixed sizes, usually 2 KB, 4 KB, or 8 KB.
- A disk can read or write a page faster; each block/page consists of some records. A page is loaded into the processor from the main memory.
- Pages manage data that is stored in RAM; operating systems prefer pages over blocks, but both are storage units, and processing with pages is easier and faster.
Sector:
- A physical spot on a formatted disk that holds information; each sector can hold 512 bytes of data.
- A hard disk platter has many concentric circles on it, called tracks; every track is further divided into sectors.
- Any data transferred between the hard disk and the RAM is usually sent in blocks.
In short, a block is a logical unit that stores table rows and records in segments, whereas a page is a physical unit that stores data in a disk file; records themselves may be fixed-length (an inflexible structure) or variable-length (a more complex structure).
If I insert a new row/record, it goes into an existing block/page if that block/page has space; otherwise, a new block
is allocated within the file.
(Figure: block diagram depicting paging; the page map table (PMT) contains pages numbered 0 to 7.)
Pinned block: Memory block that is not allowed to be written back to disk.
Toss immediate strategy: Frees the space occupied by a block as soon as the final tuple of that block has been
processed
Example: Suppose an employee table has columns such as empid, name, CNIC, and email, with empid = 12 bytes,
name = 59 bytes, CNIC = 15 bytes, and so on, for a total row size of 230 bytes. With a 2 KB (2048-byte) block,
about 8 such rows fit in one block (2048 / 230 ≈ 8.9). As another example, say your hard drive has a block size of
4K and you have a 4.5K file: it requires 8K to store on your hard drive (2 whole blocks), but only 4.5K on a floppy
with 512-byte blocks (9 floppy-size blocks).
Architecture: The buffer manager stages pages from external storage to the main memory buffer pool. File and index
layers make calls to the buffer manager.
What is the steal approach in DBMS? What are the Buffer Manager Policies/Roles? Data
storage on disk?
Note: Buffer manager moves pages between the main memory buffer pool (volatile memory) from the external
storage disk (in non-volatile storage). When execution starts, the file and index layer make the call to the buffer
manager.
The steal approach means the buffer manager may replace an existing page in the cache that has been updated by a
transaction not yet committed with another page requested by another transaction.
No-force. The force rule means that REDO will never be needed during recovery, since any committed transaction
will have all its updates on disk before it commits.
The deferred update (NO-UNDO) recovery scheme is a no-steal approach. However, typical database systems
employ a steal/no-force strategy. The advantage of steal is that it avoids the need for a very large buffer space.
Steal/No-Steal
Similarly, it would be easy to ensure atomicity with a no-steal policy. The no-steal policy states
that pages cannot be evicted from memory (and thus written to disk) until the transaction commits.
Need support for undo: removing the effects of an uncommitted transaction on the disk
Force/No Force
Durability can be a very simple property to ensure if we use a force policy. The force policy states that when a
transaction executes, all modified data pages are forced to disk before the transaction commits.
When a drive is powered down, the read/write heads are moved to a safe landing zone; this is called parking. The
basic difference between the magnetic tape and the magnetic disk is that magnetic tape is used for backups,
whereas the magnetic disk is used as secondary storage.
Dynamic Storage-Allocation Problem/Algorithms
Memory allocation is a process by which computer programs are assigned memory or space. It is of four types:
First Fit Allocation
The first hole that is big enough is allocated to the program. In first fit, the partition allocated is the first sufficiently
large block found from the beginning of the main memory.
Best Fit Allocation
The smallest hole that is big enough is allocated to the program. It allocates the process to the partition that is the
first smallest partition among the free partitions.
Worst Fit Allocation
The largest hole that is big enough is allocated to the program. It allocates the process to the partition, which is the
largest sufficient freely available partition in the main memory.
Next Fit Allocation: It is mostly similar to first fit, but it searches for the first sufficient partition starting from the
last allocation point.
Note: First-fit and best-fit better than worst-fit in terms of speed and storage utilization
Static and Dynamic Loading:
Loading a process into the main memory is done by a loader. There are two different types of loading:
Static loading: the entire program is loaded into memory at a fixed address before execution starts; the whole
program and all data of the process must be in physical memory, so it requires more memory space and the size of a
process is limited to the size of physical memory.
Dynamic loading: a routine is not loaded until it is called; all routines are kept on disk in a relocatable load format,
which gives better memory-space utilization because an unused routine is never loaded.
Methods Involved in Memory Management
There are various methods and with their help Memory Management can be done intelligently by the Operating
System:
Fragmentation
As processes are loaded into and removed from memory, the free memory space is broken into little pieces. After
some time, processes cannot be allocated to memory blocks because the available blocks are too small, and the
memory blocks remain unused. This problem is known as fragmentation.
Fragmentation Category −
1. External fragmentation
Total memory space is enough to satisfy a request or to reside a process in it, but it is not contiguous, so it cannot be
used.
2. Internal fragmentation
The memory block assigned to a process is bigger than requested, so some portion of the block is left unused, and it
cannot be used by another process.
In distributed databases, two types of fragmentation are possible:
1. Horizontal fragmentation
2. Vertical fragmentation
Hybrid fragmentation can be achieved by performing horizontal and vertical partitioning together; mixed
fragmentation is a group of rows and columns in a relation.
Reconstruction of Hybrid Fragmentation
The original relation in hybrid fragmentation is reconstructed by performing union and full outer join operations, as
sketched in SQL below.
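A hedged SQL sketch of the fragmentation styles, assuming a hypothetical emp(empid, name, dept, region) table:
-- Horizontal fragments: split rows by a predicate
CREATE TABLE emp_east AS SELECT * FROM emp WHERE region = 'EAST';
CREATE TABLE emp_west AS SELECT * FROM emp WHERE region = 'WEST';
-- Vertical fragments: split columns, repeating the key in each fragment
CREATE TABLE emp_ident AS SELECT empid, name FROM emp;
CREATE TABLE emp_work  AS SELECT empid, dept, region FROM emp;
-- Reconstruction: union the horizontal fragments, join the vertical ones
SELECT * FROM emp_east UNION ALL SELECT * FROM emp_west;
SELECT i.empid, i.name, w.dept, w.region
FROM emp_ident i JOIN emp_work w ON i.empid = w.empid;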
Segmentation is a memory management technique in which each job is divided into several segments of different
sizes, one for each module that contains pieces that perform related functions. Each segment is a different logical
address space of the program or A segment is a logical unit.
Segmentation with Paging
Both paging and segmentation have their advantages and disadvantages, so it is better to combine the two schemes
to improve on each. The combined scheme is commonly known as paged segmentation. Each segment in this
scheme is divided into pages, and each segment maintains its own page table. So the logical address is divided into
the following 3 parts:
Segment numbers(S)
Page number (P)
The displacement or offset number (D)
For example, the Intel 386 uses segmentation with paging for memory management with a two-level paging
scheme.
Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of the main memory (or move) to
secondary storage (disk) and make that memory available to other processes. At some later time, the system swaps
back the process from the secondary storage to the main memory.
Though performance is usually affected by the swapping process, it helps in running multiple big processes in
parallel, which is why swapping is also known as a technique for memory compaction.
Note: Bring a page into memory only when it is needed. The same page may be brought into memory several times
Paging
A page is also a unit of data storage. A page is loaded into the processor from the main memory. A page is made up
of unit blocks or groups of blocks. Pages have fixed sizes, usually 2k or 4k. A page is also called a virtual page or
memory page. When the transfer of pages occurs between main memory and secondary memory it is known as
paging.
Paging is a memory management technique in which process address space is broken into blocks of the same size
called pages (size is the power of 2, between 512 bytes and 8192 bytes). The size of the process is measured in the
number of pages.
Divide logical memory into blocks of the same size called pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory called frames and the size of
a frame is kept the same as that of a page to have optimum utilization of the main memory and to avoid external
fragmentation.
Divide physical memory into fixed-sized blocks called frames (size is the power of 2, between 512 bytes and 8192
bytes)
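A hedged worked example of the address split: with a 32-bit logical address and a 4 KB page size (2^12 bytes), the low 12 bits of an address are the page offset and the remaining 20 bits are the page number, so a process can address 2^20 = 1,048,576 pages. Logical address 0x00403A2F therefore falls in page 0x00403 at offset 0xA2F.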
Hard disk stores information in the form of magnetic fields. Data is stored digitally in the form of tiny magnetized
regions on the platter where each region represents a bit.
Microsoft SQL Server databases are stored on disk in two files: a data file and a log file
Note: To run a program of size n pages, need to find n free frames and load the program
Compile time: If the memory location is known a priori, absolute code can be generated.
Load time: Must generate relocatable code if the memory location is not known at compile time.
Execution time: Binding is delayed until run time if the process can be moved during its execution from one
memory segment to another. This needs hardware support for address maps (e.g., base and limit registers).
Multistep processing of a user program in memory is as follows:
The concept of a logical address space that is bound to separate physical address space is central to proper memory
management
Logical address – generated by the CPU; also referred to as virtual address
Physical address – address seen by the memory unit
Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical
(virtual) and physical addresses differ in the execution-time address-binding scheme
The user program deals with logical addresses; it never sees the real physical addresses
The logical address space of a process can be noncontiguous; the process is allocated physical memory whenever the
latter is available
END
Oracle Database creates server processes to handle the requests of user processes connected to an instance. A server
process can be either of the following: a dedicated server process, which services only one user process, or a shared
server process, which can service multiple user processes.
We can see the listener has the default name of "LISTENER" and is listening for TCP connections on port 1521.
The listener process is started when the server is started (or whenever the instance is started). The listener is only
required for connections from other machines; local connections can bypass it. The DBA performs the creation of
the listener process. When a new connection comes in over the network, the listener passes the connection to Oracle.
Hence, it is also a graceful shutdown, so it does not require instance crash recovery (ICR) at the next startup.
Shutdown Abort:
1. New connections are not allowed
2. Connected users cannot complete ongoing transactions
3. Idle sessions will be disconnected
4. The DB shuts down abruptly (no commit / no rollback)
Hence, it is an abrupt shutdown, so it requires ICR at the next startup.
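A minimal SQL*Plus sketch of both shutdown modes (run as SYSDBA):
SQL> SHUTDOWN IMMEDIATE;  -- graceful: uncommitted work is rolled back, no ICR needed later
SQL> STARTUP;
SQL> SHUTDOWN ABORT;      -- abrupt: no commit, no rollback
SQL> STARTUP;             -- instance crash recovery runs automatically during this startup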
A snapshot standby database is a fully updatable standby database created by converting a physical standby
database. The snapshot standby database is appropriate when you require a temporary, updatable version of a
physical standby database.
What is Cloning?
Database Cloning is a procedure that can be used to create an identical copy of the existing Oracle database. DBAs
occasionally need to clone databases to test backup and recovery strategies or export a table that was dropped from
the production database and import it back into the production databases. Cloning can be done on a different host or
the same host even if it is different from the standby database.
Database Cloning can be done using the following methods,
Cold Cloning
Hot Cloning
RMAN Cloning
The basic memory structures associated with Oracle Database include:
System global area (SGA)
The SGA is a group of shared memory structures, known as SGA components, that contain data and control
information for one Oracle Database instance. All server and background processes share the SGA. Examples of
data stored in the SGA include cached data blocks and shared SQL areas.
Program global area (PGA)
A PGA is a nonshared memory region that contains data and control information exclusively for use by an Oracle
process. Oracle Database creates the PGA when an Oracle process starts.
One PGA exists for each server process and background process. The collection of individual PGAs is the total
instance PGA or instance PGA. Database initialization parameters set the size of the instance PGA, not individual
PGAs.
Oracle allocates logical database space for all data in a database. The units of database space allocation are data
blocks, extents, and segments.
The Relationships Among Segments, Extents, Data Blocks in the data file, Oracle block, and OS block:
Oracle Block: At the finest level of granularity, Oracle stores data in data blocks (also called logical blocks, Oracle
blocks, or pages). One data block corresponds to a specific number of bytes of physical database space on a disk.
Oracle Extent: The next level of logical database space is an extent. An extent is a specific number of contiguous
data blocks allocated for storing a specific type of information. An extent is always contained within a single data
file and cannot be spread over two tablespaces.
Oracle Segment: The level of logical database storage greater than an extent is called a segment. A segment is a set
of extents, each of which has been allocated for a specific data structure and all of which are stored in the same
tablespace. For example, each table's data is stored in its data segment, while each index's data is stored in its index
segment. If the table or index is partitioned, each partition is stored in its segment.
Data block: Oracle manages the storage space in the data files of a database in units called data blocks. A data
block is the smallest unit of data used by a database.
The terms Oracle block and data block refer to the same unit of storage, viewed logically and physically
respectively; for example, a table's (logical) data is stored in its data segment.
The high water mark is the boundary between used and unused space in a segment.
Operating system block: The data consisting of the data block in the data files are stored in operating system
blocks.
OS Page: The smallest unit of storage that can be atomically written to non-volatile storage is called a page
Details of Data storage in Oracle Blocks:
An extent is a set of logically contiguous data blocks allocated for storing a specific type of information. In the
Figure above, the 24 KB extent has 12 data blocks, while the 72 KB extent has 36 data blocks.
A segment is a set of extents allocated for a specific database object, such as a table. For example, the data for the
employee's table is stored in its data segment, whereas each index for employees is stored in its index segment.
Every database object that consumes storage consists of a single segment.
A bigfile tablespace eases database administration because it consists of only one data file. The single data file can
be up to 128 TB (terabytes) in size if the tablespace block size is 32 KB; if you use the more common 8 KB block
size, 32 TB is the maximum size of a bigfile tablespace.
Oracle Database must use logical space management to track and allocate the extents in a tablespace. When a
database object requires an extent, the database must have a method of finding and providing it. Similarly, when an
object no longer requires an extent, the database must have a method of making the free extent available.
Oracle Database manages space within a tablespace based on the type that you create.
Instead of setting the total memory size, you set many initialization parameters to manage components of the SGA
and instance PGA individually.
SGA (System Global Area) is an area of memory (RAM) allocated when an Oracle Instance starts up. The SGA's
size and function are controlled by initialization (INIT.ORA or SPFILE) parameters.
In general, the SGA consists of the following subcomponents, as can be verified by querying the V$SGAINFO:
SELECT * FROM v$sgainfo;
The common components are:
Data buffer cache - cache data and index blocks for faster access.
Shared pool - cache parsed SQL and PL/SQL statements.
Dictionary Cache - information about data dictionary objects.
Redo Log Buffer - committed transactions that are not yet written to the redo log files.
JAVA pool - caching parsed Java programs.
Streams pool - cache Oracle Streams objects.
Large pool - used for backups, UGAs, etc.
Automatic Shared Memory Management simplifies the configuration of the SGA and is the recommended
memory configuration. To use Automatic Shared Memory Management, set the SGA_TARGET initialization
parameter to a nonzero value and set the STATISTICS_LEVEL initialization parameter to TYPICAL or ALL. The
value of the SGA_TARGET parameter should be set to the amount of memory that you want to dedicate to the
SGA. In response to the workload on the system, the automatic SGA management distributes the memory
appropriately for the following memory pools:
1. Database buffer cache (default pool)
2. Shared pool
3. Large pool
4. Java pool
5. Streams pool
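A hedged example of enabling Automatic Shared Memory Management (the 4G target is illustrative, not a recommendation):
ALTER SYSTEM SET STATISTICS_LEVEL = TYPICAL SCOPE = BOTH;
ALTER SYSTEM SET SGA_TARGET = 4G SCOPE = BOTH;
-- Observe how Oracle has distributed the target across the pools
SELECT component, current_size FROM v$sga_dynamic_components;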
END
CHAPTER 16 DATABASE BACKUPS AND RECOVERY, LOGS MANAGEMENT
Physical backups Physical backups, which are the primary concern in a backup and recovery strategy, are copies
of physical database files. You can make physical backups with either the Oracle Recovery Manager (RMAN)
utility or operating system utilities. These are copies of physical database files. For example, a physical backup
might copy database content from a local disk drive to another secure location. Physical backup Types (cold, hot,
full, incremental)
During an Oracle tablespace hot backup, you (or your script) put a tablespace into backup mode, copy the data files
to disk or tape, and then take the tablespace out of backup mode, as sketched below.
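A minimal sketch of that sequence for a single tablespace (tablespace and file names hypothetical):
ALTER TABLESPACE users BEGIN BACKUP;
-- Copy the tablespace's data files with OS utilities while it is in backup mode, e.g.:
-- HOST cp /u01/oradata/prod/users01.dbf /backup/users01.dbf
ALTER TABLESPACE users END BACKUP;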
Backup sets are logical entities produced by the RMAN BACKUP command.
Oracle Recovery Manager (RMAN)
It's done by server session (Restore files, Backup data Files, Recover Data files). It's also recommended. A user can
log in to RMAN and command it to back up a database. RMAN can write backup sets to disk and tape cold backup
(offline database backup).
RMAN is a powerful and versatile program that allows you to make a backup or image copy of your data. When you
specify files or archived logs using the RMAN backup command, RMAN creates a backup set as output.
A backup set is one or more datafiles, control files, or archived redo logs that are written in an RMAN-specific
format; it requires you to use the RMAN restore command for recovery operations. In contrast, when you use the
copy command to create an image copy of a file, it is in an instance-usable format--you do not need to invoke
RMAN to restore or recover it.
When you issue RMAN commands such as backup or copy, RMAN establishes a connection to an Oracle server
session. The server session then backs up the specified datafile, control file, or archived log from the target database.
RMAN obtains the information it needs from either the control file or the optional recovery catalog. The recovery
catalog is a central repository containing a variety of information useful for backup and recovery. Conveniently,
RMAN automatically establishes the names and locations of all the files that you need to back up.
RMAN provides several advantages. One crucial advantage to using RMAN is its incremental backup feature. In
traditional backup methods, you must perform a full backup in which you back up all the data blocks ever used in a
datafile. The incremental backup feature allows you to back up only those data blocks that have changed since a
previous backup.
Using RMAN, you can perform two types of incremental backups: a differential backup or a cumulative backup. In
a differential level n incremental backup, you back up all blocks that have changed since the most recent level n or
lower backup. For example, in a differential level 2 backup, RMAN determines which level 1 or level 2 backup
occurred most recently and backs up all blocks modified since that backup.
In a cumulative level n backup, RMAN backs up all the blocks used since the most recent backup at level n-1 or
less. For example, in a cumulative level 3 backup, RMAN determines which level 2 or level 1 backup occurred most
recently and backs up all blocks used since that backup.
Hot backup - also known as dynamic or online backup, is a backup performed on data while the database is
actively online and accessible to users.
Cold backup—Users cannot modify the database during a cold backup, so the database and the backup copy are
always synchronized. Cold backup is used only when the service level allows for the required system downtime.
Full—Creates a copy of data that can include parts of a database such as the control file, transaction files (redo logs),
tablespaces, archive files, and data files. Regular cold full physical backups are recommended. The database must be
in ARCHIVELOG mode for a full physical backup taken while the database is open (hot).
Incremental—Captures only changes made after the last full physical backup. Incremental backup can be done with
a hot backup.
Cold-full backup - A cold-full backup is when the database is shut down, all of the physical files are backed up, and
the database is started up again.
Cold-partial backup - A cold-partial backup is used when a full backup is not possible due to some physical
constraints.
Hot-full backup - A hot-full backup is one in which the database is not taken off-line during the backup process.
Rather, the tablespace and data files are put into a backup state.
Overview of the RMAN Environment
Recovery Manager (RMAN) is an Oracle Database client that performs backup and recovery tasks on your
databases and automates administration of your backup strategies. It greatly simplifies backing up, restoring, and
recovering database files.
Starting RMAN and Connecting to a Database
The RMAN client is started by issuing the rman command at the command prompt of your operating system.
RMAN then displays a prompt for your commands as shown in the following example:
% rman
RMAN>
You can connect to a database with command-line options or by using the CONNECT TARGET command. The
following example starts RMAN and then connects to a target database through Oracle Net; AS SYSDBA is not
specified because it is implied. RMAN prompts for a password.
% rman
RMAN> CONNECT TARGET SYS@prod
% rman
RMAN> CONNECT TARGET /
RMAN> EXIT
The RMAN environment consists of the utilities and databases that play a role in backing up your data. At a
minimum, the environment for RMAN must include a target database and the RMAN client; a media manager and
a recovery catalog are optional components, all described below.
Backing Up a Database
Use the BACKUP command to back up files. RMAN backs up data to the configured default device for the type of
backup requested. By default, RMAN creates backups on disk. If a fast recovery area is enabled, and if you do not
specify the FORMAT parameter (see Table 2-1), then RMAN creates backups in the recovery area and
automatically gives them unique names.
By default, RMAN creates backup sets rather than image copies. A backup set consists of one or more backup
pieces, which are physical files written in a format that only RMAN can access. A multiplexed backup set contains
the blocks from multiple input files. RMAN can write backup sets to disk or tape.
If you specify BACKUP AS COPY, then RMAN copies each file as an image copy, which is a bit-for-bit copy of a
database file created on disk. Image copies are identical to copies created with operating system commands like cp
on Linux or COPY on Windows, but are recorded in the RMAN repository and so are usable by RMAN. You can
use RMAN to make image copies while the database is open.
A target database
An Oracle database to which RMAN is connected with the TARGET keyword. A target database is a database on
which RMAN is performing backup and recovery operations. RMAN always maintains metadata about its
operations on a database in the control file of the database. The RMAN metadata is known as the RMAN repository.
A media manager
An application required for RMAN to interact with sequential media devices such as tape libraries. A media
manager controls these devices during backup and recovery, managing the loading, labeling, and unloading of
media. Media management devices are sometimes called SBT (system backup to tape) devices.
A recovery catalog
A separate database schema used to record RMAN activity against one or more target databases. A recovery catalog
preserves RMAN repository metadata if the control file is lost, making it much easier to restore and recover
following the loss of the control file. The database may overwrite older records in the control file, but RMAN
maintains records forever in the catalog unless the records are deleted by the user.
Hot-partial backup - A hot-partial backup is one in which the database is not taken off-line during the backup
process, plus different tablespaces are backed up on different nights.
Consistent and Inconsistent Backups A consistent backup is one in which the files being backed up contain all
changes up to the same system change number (SCN). This means that the files in the backup contain all the data
taken from the same point in time. Unlike an inconsistent backup, a consistent whole database backup does not
require recovery after it is restored.
An inconsistent backup is a backup of one or more database files that you make while the database is open or after
the database has shut down abnormally.
Image Backup/mirror backup
A full image backup, or mirror backup, is a replica of everything on your computer's hard drive, from the operating
system, boot information, apps, and hidden files to your preferences and settings. Imaging software not only
captures individual files but everything you need to get your system running again. Image copies are exact byte-for-
byte copies of files. RMAN prefers to use an image copy over a backup set.
Backing Up a Database in ARCHIVELOG Mode
If a database runs in ARCHIVELOG mode, then you can back up the database while it is open. The backup is called
an inconsistent backup because redo is required during recovery to bring the database to a consistent state. If you
have the archived redo logs needed to recover the backup, open database backups are as effective for data
protection as consistent backups.
To back up the database and archived redo logs while the database is open:
Start RMAN and connect to a target database.
Run the BACKUP DATABASE command.
For example, enter the following command at the RMAN prompt to back up the database and all archived redo log
files to the default backup device:
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
Backing Up a Database in NOARCHIVELOG Mode
If a database runs in NOARCHIVELOG mode, then the only valid database backup is a consistent backup. For the
backup to be consistent, the database must be mounted after a consistent shutdown. No recovery is required after
restoring the backup.
For example, enter the following commands to guarantee that the database is in a consistent state for a
backup:
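A typical sequence (shut down cleanly, then mount without opening) is:
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;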
For example, enter the following command at the RMAN prompt to back up the database to the default
backup device:
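RMAN> BACKUP DATABASE;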
The following variation of the command creates image copy backups of all datafiles in the database:
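RMAN> BACKUP AS COPY DATABASE;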
The BACKUP command includes a host of options, parameters, and clauses that control backup output. Table 2-1
lists some typical backup options.
FORMAT – Specifies a location and name for backup pieces and copies. You must use substitution variables to
generate unique filenames. The most common substitution variable is %U, which generates a unique name. Others
include %d for the DB_NAME, %t for the backup set time stamp, %s for the backup set number, and %p for the
backup piece number. Example:
BACKUP FORMAT 'AL_%d/%t/%s/%p' ARCHIVELOG LIKE '%arc_dest%';
TAG – Specifies a user-defined string as a label for the backup. If you do not specify a tag, then RMAN assigns a
default tag with the date and time. Tags are always stored in the RMAN repository in uppercase. Example:
BACKUP TAG 'weekly_full_db_bkup' DATABASE MAXSETSIZE 10M;
Kill the DB instance, if running; you can do SHUTDOWN ABORT or kill the PMON process at the OS level.
The following example terminates the database instance (if it is started) and mounts the
database:
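RMAN> STARTUP FORCE MOUNT;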
The following example uses the preconfigured disk channel to restore the database:
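RMAN> RESTORE DATABASE;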
Recovering Tablespaces
If you cannot restore a datafile to its default location, then use the RMAN SET NEWNAME command
within a RUN command to specify the new filename. Afterward, use a SWITCH DATAFILE
ALL command, which is equivalent to using the SQL statement ALTER DATABASE RENAME
FILE, to update the control file to reflect the new names for all datafiles for which a SET
NEWNAME has been issued in the RUN command.
The following RUN command, which you execute at the RMAN prompt, sets a new name for the
datafile in the users tablespace:
RUN
{
SET NEWNAME FOR DATAFILE '/disk1/oradata/prod/users01.dbf'
TO '/disk2/users01.dbf';
RESTORE TABLESPACE users;
SWITCH DATAFILE ALL; # update control file with new filenames
RECOVER TABLESPACE users;
}
Bring the tablespace online, as shown in the following example:
RMAN> SQL 'ALTER TABLESPACE users ONLINE';
Optionally, list the current tablespaces and datafiles, as shown in the following command:
RMAN> REPORT SCHEMA;
Run the RESTORE DATABASE command with the PREVIEW option.
The following command specifies SUMMARY so that the backup metadata is not displayed in verbose mode:
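RMAN> RESTORE DATABASE PREVIEW SUMMARY;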
Checkpoint
The checkpoint is like a bookmark. During the execution of transactions, such checkpoints are marked, and the
transactions are executed; using the steps of the transactions, the log files are created.
A checkpoint declares a point before which all the logs are stored permanently on the storage disk and the database
is in a consistent state. In the case of a crash, work and time are saved because the system can restart from the
checkpoint. Checkpointing is a quick way to limit the number of logs to scan on recovery.
Store the LSN of the most recent checkpoint at a master record on a disk
System Catalog
A repository of information describing the data in the database (metadata, data about data)
Data Replication
Replication is the process of copying and maintaining database objects in multiple databases that make up a
distributed database system. Replication can improve the performance and protect the availability of applications
because alternate data access options exist.
Oracle provides its own set of tools to replicate Oracle and integrate it with other databases. In this post, you will
explore the tools provided by Oracle as well as open-source tools that can be used for Oracle database replication by
implementing custom code.
The catalog is needed to keep track of the location of each fragment & replica
Data replication techniques
Synchronous vs. asynchronous
Synchronous: all replicas are up-to-date
Asynchronous: cheaper but delay in synchronization
Regarding the timing of data transfer, there are two types of data replication:
Asynchronous replication is when the data is sent to the model server -- the server from which the replicas take
data. The model server pings the client with a confirmation saying the data has been received, and from there it
copies the data out to the replicas at an unspecified or monitored pace.
Synchronous replication is when data is copied from the client-server to the model server and then replicated to
all the replica servers before the client is notified that data has been replicated. This takes longer to verify than the
asynchronous method, but it presents the advantage of knowing that all data was copied before proceeding.
Asynchronous database replication offers flexibility and ease of use, as replications happen in the background.
Methods to Setup Oracle Database Replication
You can easily set up the Oracle Database Replication using the following methods:
Method 1: Oracle Database Replication Using Hevo Data
Method 2: Oracle Database Replication Using A Full Backup And Load Approach
Method 3: Oracle Database Replication Using a Trigger-Based Approach
Method 4: Oracle Database Replication Using Oracle Golden Gate CDC
Method 5: Oracle Database Replication Using Custom Script-Based on Binary Log
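Beyond these methods, Oracle's classic built-in replication technique uses materialized views refreshed over a database link; the table name and link below are hypothetical:
-- On the master site: record changes so fast (incremental) refresh is possible
CREATE MATERIALIZED VIEW LOG ON emp;
-- On the replica site: create a copy that refreshes every hour over a DB link
CREATE MATERIALIZED VIEW emp_replica
  REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 1/24
  AS SELECT * FROM emp@prod_link;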
Oracle types of data replication and integration in OLAP
Three main architectures:
Consolidation database: All data is moved into a single database and managed from a central location. Oracle Real
Application Clusters (Oracle RAC), Grid computing, and Virtual Private Database (VPD) can help you consolidate
information into a single database that is highly available, scalable, and secure.
Federation: Data appears to be integrated into a single virtual database while remaining in its current distributed
locations. Distributed queries, distributed SQL, and Oracle Database Gateway can help you create a federated
database.
Sharing: Multiple copies of the same information are maintained in multiple databases and application
data stores. Data replication and messaging can help you share information among multiple databases.
END
CHAPTER 17 PREREQUISITES OF STORAGE MANAGEMENT AND ORACLE
INSTALLATION
Overview of Hardware Requirements
These are the hardware requirements you must meet before installing Oracle Management Service (OMS), a
standalone Oracle Management Agent (Management Agent), and Oracle Management Repository (Management Repository).
Physical memory (RAM)=> 256 MB minimum; 512 MB recommended, On Windows Vista, the minimum
requirement is 512 MB
Virtual memory=> Double the amount of RAM
Disk space=> Basic Installation Type total: 2.04 GB, advanced Installation Types total: 1.94 GB
Video adapter=> 256 colors
Processor=> 550 MHz minimum, On Windows Vista, the minimum requirement is 800 MHz
In particular, here I will discuss the following:
1. CPU, RAM, Heap Size, and Hard Disk Space Requirements for OMS
2. CPU, RAM, and Hard Disk Space Requirements for Standalone Management Agent
3. CPU, RAM, and Hard Disk Space Requirements for Management Repository
CPU, RAM, Heap Size, and Hard Disk Space Requirements for OMS
Host: Small / Medium / Large
CPU Cores/Host: 2 / 4 / 8
RAM: 4 GB / 6 GB / 8 GB
RAM with ADP, JVMD: 6 GB / 10 GB / 14 GB
Oracle WebLogic Server JVM Heap Size: 512 MB / 1 GB / 2 GB
Hard Disk Space: 7 GB / 7 GB / 7 GB
Hard Disk Space with ADP, JVMD: 10 GB / 12 GB / 14 GB
Note: While installing an additional OMS (by cloning an existing one), if you have installed BI publisher on the
source host, then ensure that you have 7 GB of additional hard disk space on the destination host, so a total of 14
GB.
CPU, RAM, and Hard Disk Space Requirements for Standalone Management Agent
For a standalone Oracle Management Agent, ensure that you have 2 CPU cores per host, 512 MB of RAM, and 1
GB of hard disk space.
CPU, RAM, and Hard Disk Space Requirements for Management Repository
This table lists the CPU, RAM, and hard disk space requirements for the Management Repository:
Host              Small    Medium   Large
CPU Cores/Host    2        4        8
RAM               4 GB     6 GB     8 GB
Hard Disk Space   50 GB    200 GB   400 GB
Requirement             Value
Virtual memory (swap)   If physical memory is between 2 GB and 16 GB, set virtual memory to 1 times the size of the RAM; if physical memory is more than 16 GB, set virtual memory to 16 GB. For example, a host with 8 GB of RAM gets 8 GB of virtual memory.

Operating System   Minimum Physical Memory   Minimum Available Memory
Linux              4 GB                      8 GB
UNIX               4 GB                      8 GB
Windows            4 GB                      8 GB
Type                                            Size
Maximum possible file size with 16 KB blocks    64 GB (4,194,304 * 16,384 bytes)
2 KB                                            20,000
4 KB                                            40,000
8 KB                                            65,536
16 KB                                           65,536
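To see which block size your own database uses (and therefore which row of the table applies), you can query the instance parameter and the per-tablespace settings; these are standard dictionary views, shown here only as a quick check.
-- Database default block size, in bytes.
SELECT value FROM v$parameter WHERE name = 'db_block_size';
-- Block size of each tablespace (non-default block sizes are possible).
SELECT tablespace_name, block_size FROM dba_tablespaces;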
In this section, you will be installing the Oracle Database and creating an Oracle Home User account.
Here, OUI (Oracle Universal Installer) is used to install the Oracle software.
1. Expand the database folder that you extracted in the previous section. Double-click setup.
2. Click Yes in the User Account Control window to continue with the installation.
3. The Configure Security Updates window appears. Enter your email address and My Oracle Support password to receive security issue notifications via email. If you do not wish to receive notifications via email, deselect the option.
Select "Skip software updates" if you do not want to apply any updates.
Accept the default and click Next.
4. The Select Installation Option window appears with the following options:
Select "Create and configure a database" to install the database, create database instance and configure the database.
Select "Install database software only" to only install the database software.
Select "Upgrade an existing database" to upgrade the database that is already installed.
In this OBE, we create and configure the database. Select the Create and configure a database option and click Next.
5. The System Class window appears. Select Desktop Class or Server Class depending on the type of system you
are using. In this OBE, we will perform the installation on a desktop/laptop. Select Desktop class and click Next.
6. The Oracle Home User Selection window appears. Starting with Oracle Database 12c Release 1 (12.1), Oracle
Database on Microsoft Windows supports the use of an Oracle Home User, specified at the time of installation. This
Oracle Home User is used to run the Windows services for an Oracle Home, and is similar to the Oracle User on
Oracle Database on Linux. This user is associated with an Oracle Home and cannot be changed to a different user
post installation.
Note: Different Oracle homes on a system can share the same Oracle Home User or use different Oracle Home Users.
The Oracle Home User is different from an Oracle Installation User. The Oracle Installation User is the user who
requires administrative privileges to install Oracle products. The Oracle Home User is used to run the Windows
services for the Oracle Home.
The window provides the following options:
1. If you select "Use Existing Windows User", the user credentials provided must be a standard Windows user
account (not an administrator).
2. If this is a single instance database installation, the user can be a local user, a domain user, or a managed
services account.
3. If this is an Oracle RAC database installation, the existing user must be a Windows domain user. The
Oracle installer will display an error if this user has administrator privileges.
4. If you select "Create New Windows User", the Oracle installer will create a new standard Windows user
account. This user will be assigned as the Oracle Home User. Please note that this user will not have login
privileges. This option is not available for an Oracle RAC Database installation.
5. If you select "Use Windows Built-in Account", the system uses the Windows Built-in account as the Oracle
Home User.
Select the Create New Windows User option. Enter the user name as OracleHomeUser1 and password as
Welcome1. Click Next.
Note: Remember the Windows User password. It will be required later to administer or manage database services.
7. The Typical Install Configuration window appears. Click on a text field and then the balloon icon to know
more about the field. Note that by default, the installer creates a container database along with a pluggable database
called "pdborcl". The pluggable database contains the sample HR schema.
8. Change the Global database name to orcl. Enter the “Administrative password” as Oracle_1. This password will
be used later to log into administrator accounts such as SYS and SYSTEM. Click Next.
9. The prerequisite checks are performed and a Summary window appears. Review the settings and click Install.
Note: Depending on your firewall settings, you may need to grant permissions to allow java to access the network.
11. The Database Configuration Assistant starts and creates your database.
12. After the Database Configuration Assistant creates the database, you can navigate to https://localhost:5500/em
as a SYS user to manage the database using Enterprise Manager Database Express. You can click “Password
Management…” to unlock accounts. Click OK to continue.
13. The Finish window appears. Click Close to exit the Oracle Universal Installer.
14. To verify the installation, navigate to C:\Windows\system32 using Windows Explorer. Double-click services.
The Services window appears, displaying a list of services.
There is no need to spend time on the GUI at the very beginning; the developer can start directly with implementing the business logic.
This is why Oracle APEX is well suited to creating rapid GUI prototypes without logic, so that prospective customers can get an idea of how their future application will look.
Apex history
APEX is a very powerful development tool, which is used to create web-based database-centric applications. The
tool itself consists of a schema in the database with a lot of tables, views, and PL/SQL code. It’s available for every
edition of the database. The techniques that are used with this tool are PL/SQL, HTML, CSS, and JavaScript.
Before APEX there was WebDB, which was based on the same techniques. WebDB became part of Oracle Portal
and disappeared in silence. The difference between APEX and WebDB is that WebDB generates packages that
generate the HTML pages, while APEX generates the HTML pages at runtime from the repository. Despite this
approach APEX is amazingly fast.
APEX became available to the public in 2004 and then it was part of version 10g of the database. At that time it was
called HTMLDB and the first version was 1.5. Before HTMLDB, it was called Oracle Flows, Oracle Platform, and
Project Marvel.
Note: Starting with Oracle Database 12c Release 2 (12.2), Oracle Application Express is included in the Oracle
Home on disk and is no longer installed by default in the database.
Oracle Application Express is included with the following Oracle Database releases:
Oracle Database 19c – Oracle Application Express Release 18.1.
Oracle Database 18c – Oracle Application Express Release 5.1.
Oracle Database 12c Release 2 (12.2)- Oracle Application Express Release 5.0.
Oracle Database 12c Release 1 (12.1) – Oracle Application Express Release 4.2.
Oracle Database 11g Release 2 (11.2) – Oracle Application Express Release 3.2.
Oracle Database 11g Release 1 (11.1) – Oracle Application Express Release 3.0.
Oracle Database releases less frequently than Oracle Application Express. Therefore, Oracle recommends updating to the latest Oracle Application Express release available on Oracle Technology Network.
Within each application, you can also specify a Compatibility Mode in the Application Definition.
The Compatibility Mode attribute controls the compatibility mode of the Application Express runtime
engine. Compatibility Mode options include Pre 4.1, 4.1, 4.2, 5.0, 5.1/18.1, 18.2, 19.1, 19.2, and later versions.
Version 22
This release of Oracle APEX introduces Approvals and the Unified Task List, Simplified Create Page wizards, Readable Application Export formats, and the Data Generator. APEX 22.1 also brings several enhancements to existing components, such as tokenized row search, an easy way to sort regions, improvements to faceted search, additional customization of the PWA service worker, a more streamlined developer experience, and much more!
Version 21
This release of Oracle APEX introduces Smart Filters, Progressive Web Apps, and REST Service Catalogs. APEX
21.2 also brings greater UI flexibility with Universal Theme, new and updated page components, numerous
improvements to the developer experience, and a whole lot more!
Especially now Oracle has pointed out APEX as one of the important tools for building applications in their Oracle
Database Cloud Service, this interest will only grow. APEX shared a lot of the characteristics of cloud computing,
even before cloud computing became popular.
These characteristics include:
Elasticity
Browser-based development and runtime
RESTful web services (REST stands for Representational State Transfer)
Because the database is doing all the hard work, the architecture is fairly simple. We only have to add a web server.
We can choose one of the following web servers:
Oracle HTTP Server (OHS)
Embedded PL/SQL Gateway (EPG)
APEX Listener
Oracle APEX has a strong history, starting with version 1.5, which came out in 2004 – it was known as HTML DB
then (before it also had other names, like Flows and Project Marvel).
Oracle APEX is a part of the Oracle RAD architecture and technology stack. What does it mean?
“R” stands for REST, or rather ORDS – Oracle REST Data Services. ORDS is responsible for asking the database
for the page and rendering it back to the client;
“A” stands for APEX, Oracle Application Express, the topic of this article;
“D” stands for Database, which is the place an APEX application resides in.
Other methodologies that work well with Oracle Application Express include:
Spiral - This approach is actually a series of short waterfall cycles. Each waterfall cycle yields new requirements
and enables the development team to create a robust series of prototypes.
Rapid application development (RAD) life cycle - This approach has a heavy emphasis on creating a prototype
that closely resembles the final product. The prototype is an essential part of the requirements phase. One
disadvantage of this model is that the emphasis on creating the prototype can cause scope creep; developers can lose
sight of their initial goals in the attempt to create the perfect application.
The supported client types include OAuth client, APEX user, database schema user, and OS user. While it is important to ensure your ORDS web services are secured, you also need to consider what a client has access to once authenticated. As a quick reminder: authentication confirms your identity and allows you into the system; authorization decides what you can do once you are in.
Oracle REST Data Services is a Java EE-based alternative for Oracle HTTP Server and mod_plsql.
The Java EE implementation offers increased functionality including a command-line based configuration,
enhanced security, file caching, and RESTful web services.
Oracle REST Data Services also provides increased flexibility by supporting deployments using Oracle WebLogic
Server, GlassFish Server, Apache Tomcat, and a standalone mode.
The Oracle Application Express architecture requires some form of web server to proxy requests between a web
browser and the Oracle Application Express engine. Oracle REST Data Services satisfies this need but its use goes
beyond that of Oracle Application Express configurations.
Oracle REST Data Services simplifies the deployment process because there is no Oracle home required, as
connectivity is provided using an embedded JDBC driver.
Oracle REST Data Services is a Java Enterprise Edition (Java EE) based data service that provides enhanced
security, file caching features, and RESTful Web Services. Oracle REST Data Services also increases flexibility
through support for deployment in standalone mode, as well as using servers like Oracle WebLogic Server and
Apache Tomcat.
ORDS
ORDS, a Java-based application, enables developers with SQL and database skills to develop REST APIs for Oracle
Database. You can deploy ORDS on web and application servers, including WebLogic®, Tomcat®, and
Glassfish®, as shown in the following image:
ORDS is our middle tier JAVA application that allows you to access your Oracle Database resources via REST
APIs. Use standard HTTP(s) calls (GET|POST|PUT|DELETE) via URIs that ORDS makes available
(/ords/database123/user3/module5/something/)
ORDS will route your request to the appropriate database, call the appropriate query or PL/SQL anonymous block, and return the output and HTTP codes.
For most calls, that’s going to be the results of a SQL statement – paginated and formatted as JSON.
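For example, REST-enabling a schema and a table with the ORDS PL/SQL package looks like the following sketch; the schema HR, the table EMPLOYEES, and the URL aliases are hypothetical choices, not fixed names.
BEGIN
  -- Map the HR schema to the /ords/hr/ URL path.
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'HR',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'hr',
    p_auto_rest_auth      => FALSE);
  -- AutoREST-enable one table under /ords/hr/employees/.
  ORDS.ENABLE_OBJECT(
    p_enabled      => TRUE,
    p_schema       => 'HR',
    p_object       => 'EMPLOYEES',
    p_object_type  => 'TABLE',
    p_object_alias => 'employees');
  COMMIT;
END;
/
A GET request to http://host:port/ords/hr/employees/ would then return the table rows as paginated JSON, as described above.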
Oracle Cloud
You can run APEX in an Autonomous Database (ADB) – an elastic database that you can scale up. It’s self-driving,
self-healing, and can repair and upgrade itself. It comes in two flavours:
1. Autonomous Transaction Processing (ATP) – basically transaction processing, it’s where APEX sees most use;
2. Autonomous Data Warehouse (ADW) – for more query-driven APEX applications. Reporting data is also a
common use of Oracle APEX.
You can also use the new Database Cloud Service (DCS) – an APEX-only solution. For a fee, you can have a
commercial application running on a database cloud service.
Workspace utility
Application Components
Supporting objects
Utility components
Remote development
Autonomous Always Free – you can choose the Autonomous Always Free option, running on either ATP or ADW.
It's free for commercial use, but it doesn't benefit from the scalability of the autonomous databases.
Oracle Database Express Edition (XE) – you can also run a free version, Oracle Database XE, on-premises, but in this case there's a limit on how much data you can store.
Fan-made and official containers – there are also various fan-made and official containers with APEX installed
available on the Internet.
When Oracle Application Express installs, the Instance administrator does not have the ability to assign Oracle
default schemas to workspaces. Default schemas such as SYS, SYSTEM, and RMAN are reserved by Oracle for
various product features and for internal use. Access to a default schema can be a very powerful privilege. For
example, a workspace with access to the default schema SYSTEM can run applications that parse as
the SYSTEM user.
In order for an Instance administrator to have the ability to assign most Oracle default schemas to workspaces, the
DBA must explicitly grant the privilege using SQL*Plus to run a procedure within
the APEX_INSTANCE_ADMIN package.
DBAs can grant an Instance administrator the ability to assign Oracle schemas to workspaces.
A DBA grants an Instance administrator the ability to assign Oracle schemas to workspaces by using SQL*Plus to
run the APEX_INSTANCE_ADMIN.UNRESTRICT_SCHEMA procedure from within the Application Express
engine schema. For example (the schema name here is illustrative):
EXEC APEX_INSTANCE_ADMIN.UNRESTRICT_SCHEMA(p_schema => 'RMAN');
COMMIT;
A DBA revokes the privilege to assign default schemas using SQL*Plus to run the
APEX_INSTANCE_ADMIN.RESTRICT_SCHEMA procedure from within the Application Express engine
schema. For example:
EXEC APEX_INSTANCE_ADMIN.RESTRICT_SCHEMA(p_schema => 'RMAN');
COMMIT;
This example would prevent the Instance administrator from assigning the RMAN schema to any workspace. It does
not, however, prevent workspaces that have already had the RMAN schema assigned to them from using the RMAN
schema.
The DBA can grant an Oracle Application Express administrator the ability to assign Oracle default schemas to
workspaces by using SQL*Plus to run the APEX_SITE_ADMIN_PRIVS.UNRESTRICT_SCHEMA procedure
from within the Application Express engine schema. For example:
EXEC APEX_SITE_ADMIN_PRIVS.UNRESTRICT_SCHEMA(p_schema => 'SYSTEM');
COMMIT;
This example would enable the Oracle Application Express administrator to assign the SYSTEM schema to any
workspace.
The DBA can revoke this privilege using SQL*Plus to run the
APEX_SITE_ADMIN_PRIVS.RESTRICT_SCHEMA procedure from within the Application Express engine
schema. For example:
EXEC APEX_SITE_ADMIN_PRIVS.RESTRICT_SCHEMA(p_schema => 'SYSTEM');
COMMIT;
This example would display the text of a query that dumps the tables that define the schema and workspace restrictions.
Oracle APEX views (view name, description, and parent view):
APEX_APPLICATION_AUTHORIZATION (parent view: APEX_APPLICATIONS): Identifies Authorization Schemes which can be applied at the application, page, or component level.
APEX_APPLICATION_COMPUTATIONS (parent view: APEX_APPLICATIONS): Identifies Application Computations which can run for every page or on login.
APEX_APPLICATION_LOCKED_PAGES (parent view: APEX_APPLICATIONS): Locked pages of an application.
APEX_APPLICATION_PAGE_GROUPS (parent view: APEX_APPLICATION_PAGES): Identifies page groups.
APEX_APPLICATION_PAGE_IR_SUB (parent view: APEX_APPLICATION_PAGE_IR_RPT): Identifies subscriptions scheduled in saved reports for an interactive report.
APEX_APPLICATION_PAGE_VAL (parent view: APEX_APPLICATION_PAGES): Identifies Validations associated with an Application Page.
APEX_APPLICATION_PROCESSES (parent view: APEX_APPLICATIONS): Identifies Application Processes which can run for every page, on login, or upon demand.
APEX_APPLICATION_TRANS_DYNAMIC (parent view: APEX_APPLICATIONS): Application dynamic translations. These are created in the Translation section of Shared Components, and referenced at runtime via the function APEX_LANG.LANG.
APEX_APPLICATION_TRANS_REPOS (parent view: APEX_APPLICATIONS): Repository of translation strings. These are populated from the translation seeding process.
APEX_APPL_LOAD_TABLE_RULES (parent view: APEX_APPLICATIONS): Identifies a collection of transformation rules that are to be used on the load tables.
APEX_APPL_PAGE_CARDS (parent view: APEX_APPLICATION_PAGE_REGIONS): Cards definitions.
APEX_APPL_PAGE_CARD_ACTIONS (parent view: APEX_APPL_PAGE_CARDS): Card actions definitions.
This article presents how to install and configure APEX 21.2 with standalone ORDS 21.2.
In previous versions, an upgrade was required when a release affected the first two numbers of the version (4.2 to 5.0, or 5.1 to 18.1); if the first two numbers were not affected (5.1.3 to 5.1.4), you downloaded and applied a patch rather than doing the full installation. This is no longer the case.
Steps
Setup (download both pieces of software at the same version and unzip them to the same directory)
Installation
Embedded PL/SQL Gateway (EPG) Configuration
Oracle REST Data Services (ORDS) Configuration
Oracle HTTP Server (OHS) Configuration
Network ACLs
Step One
Create a new tablespace to act as the default tablespace for APEX.
-- For Oracle Managed Files (OMF).
CREATE TABLESPACE apex DATAFILE SIZE 100M AUTOEXTEND ON NEXT 1M;
-- For non-OMF.
CREATE TABLESPACE apex DATAFILE '/path/to/datafiles/apex01.dbf' SIZE 100M AUTOEXTEND ON NEXT 1M;
CREATE TABLESPACE lmtbsb DATAFILE '/u02/oracle/data/lmtbsb01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Related SQL commands: CREATE TABLESPACE, ALTER TABLESPACE, and ALTER DATABASE ... DATAFILE.
Tablespaces allocate space in extents. Tablespaces can use two different methods to keep track of their free and used space: locally managed tablespaces, where extent information is tracked with bitmaps in the datafiles themselves, and dictionary-managed tablespaces, where extent information is recorded in the data dictionary.
When you create a tablespace, you choose one of these methods of space management. Later, you can change the management method with the DBMS_SPACE_ADMIN PL/SQL package.
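For example, a sketch using the standard dictionary view and package; the tablespace name USERS is only illustrative.
-- See which method each tablespace currently uses.
SELECT tablespace_name, extent_management
  FROM dba_tablespaces;
-- Migrate a dictionary-managed tablespace to local extent management.
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('USERS');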
Step two
Installation
Change directory to the directory holding the unzipped APEX software.
$ cd /home/oracle/apex
In this directory there are 3 important files:
apexins.sql – installs APEX in the database
apxchpwd.sql – changes the password for the main APEX user, ADMIN
apex_rest_config.sql – configures ORDS in the database
Step three
IF: Connect to SQL*Plus as the SYS user and run the "apexins.sql" script, specifying the relevant tablespace names
and image URL.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> -- @apexins.sql tablespace_apex tablespace_files tablespace_temp images
SQL> @apexins.sql APEX APEX TEMP /i/
Or Else
Logon to database as SYSDBA and switch to pluggable database orclpdb1 and run installation script. You can
install apex on dedicated tablespaces if required.
sqlplus / as sysdba
alter session set container=orclpdb1;
@apexins.sql SYSAUX SYSAUX TEMP /i/
(Description of the command:
@apexins.sql tablespace_apex tablespace_files tablespace_temp images
tablespace_apex - name of the tablespace for APEX user.
tablespace_files - name of the tablespace for APEX files user.
tablespace_temp - name of the temporary tablespace.
images - virtual directory for APEX images.
Define the virtual image directory as /i/ for future updates.)
Step four
If you want to add the user silently, you could run the following code, specifying the required password and email.
BEGIN
APEX_UTIL.set_security_group_id( 10 );
APEX_UTIL.create_user(
p_user_name => 'ADMIN',
p_email_address => 'me@example.com',
p_web_password => 'PutPasswordHere',
p_developer_privs => 'ADMIN' );
APEX_UTIL.set_security_group_id( null );
COMMIT;
END;
/
Note:
Oracle Application Express is installed in the APEX_210200 schema.
The structure of the link to the Application Express
administration services is as follows:
http://host:port/ords/apex_admin
The structure of the link to the Application Express
development interface is as follows:
http://host:port/ords
Or
When Oracle Application Express installs, it creates three new database accounts all with status LOCKED in
database:
APEX_210200 – The account that owns the Oracle Application Express schema and metadata.
FLOWS_FILES – The account that owns the Oracle Application Express uploaded files.
APEX_PUBLIC_USER – The minimally privileged account is used for Oracle Application Express configuration
with ORDS.
Create the ADMIN account and change its password. When prompted, enter a password for the ADMIN account.
sqlplus / as sysdba
alter session set container=orclpdb1;
@apxchpwd.sql
output
SQL> @apxchpwd.sql
This script can be used to change the password of an Application Express
instance administrator. If the user does not yet exist, a user record will be
created.
Enter the administrator's username [ADMIN]
User "ADMIN" does not yet exist and will be created.
Enter ADMIN's email [ADMIN]
Enter ADMIN's password []
Created instance administrator ADMIN.
Step Five
For this post, I chose the first option, which Oracle recommends: Install APEX and ORDS and configure ORDS.
Step Six
Now you need to decide which gateway to use to access APEX. The Oracle recommendation is ORDS.
Note: Oracle REST Data Services (ORDS), formerly known as the APEX Listener, allows APEX applications to be
deployed without the use of Oracle HTTP Server (OHS) and mod_plsql or the Embedded PL/SQL Gateway. ORDS
version 3.0 onward also includes JSON API support to work in conjunction with the JSON support in the database.
ORDS can be deployed on WebLogic, Tomcat or run in standalone mode. This article describes the installation of
ORDS on Tomcat 8 and 9.
For Lone-PDB installations (a CDB with one PDB), or for CDBs with small numbers of PDBs, ORDS can be
installed directly into the PDB.
If you are using many PDBs per CDB, you may prefer to install ORDS into the CDB to allow all PDBs to share the
same connection pool.
Create directory /home/oracle/ords for ords software and unzip it
mkdir /home/oracle/ords
cp ords-21.4.2.062.1806.zip /home/oracle/ords
cd /home/oracle/ords
unzip ords-21.4.2.062.1806.zip
Create configuration directory /home/oracle/ords/conf for ords standalone
mkdir /home/oracle/ords/conf
The first time you run ORDS, you are asked for the following:
directory to save configuration: /home/oracle/ords/conf
password for ORDS_PUBLIC_USER (to be created): Dbaora$
administrator user: SYS
password for SYS AS SYSDBA: !!! you must know it from your DBA !!!
use PL/SQL Gateway or not: 1 for yes
password for APEX_PUBLIC_USER: Dbaora$
password for APEX_LISTENER: Dbaora$
feature to enable: 1 for SQL Developer Web (Enables all features)
wish to start in standalone mode: 1 for standalone mode
[oracle@oel8 ords]$ java -jar ords.war
This Oracle REST Data Services instance has not yet been configured.
Please complete the following prompts
Enter the location to store configuration data: /home/oracle/ords/conf
Enter the database password for ORDS_PUBLIC_USER:
Confirm password:
Requires to login with administrator privileges to verify Oracle REST Data Services schema.
Enter the administrator username:sys
Enter the database password for SYS AS SYSDBA:
Confirm password:
Connecting to database user: SYS AS SYSDBA url: jdbc:oracle:thin:@//oel8.dbaora.com:1521/orclpdb1
Retrieving information.
Enter 1 if you want to use PL/SQL Gateway or 2 to skip this step.
If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]:
Enter the database password for APEX_PUBLIC_USER:
Confirm password:
OR
Embedded PL/SQL Gateway (EPG) Configuration
If you want to use the Embedded PL/SQL Gateway (EPG) to front APEX, you can follow the instructions here. This
is used for both the first installation and upgrades.
Run the "apex_epg_config.sql" script, passing in the base directory of the installation software as a parameter.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> @apex_epg_config.sql /home/oracle
OR
Oracle HTTP Server (OHS) Configuration
If you want to use Oracle HTTP Server (OHS) to front APEX, you can follow the instructions here.
Change the password and unlock the APEX_PUBLIC_USER account. This will be used for any Database Access
Descriptors (DADs).
SQL> ALTER USER APEX_PUBLIC_USER IDENTIFIED BY myPassword ACCOUNT UNLOCK;
Step Seven
Unlock the ANONYMOUS account.
SQL> CONN sys@cdb1 AS SYSDBA
DECLARE
l_passwd VARCHAR2(40);
BEGIN
l_passwd := DBMS_RANDOM.string('a',10) || DBMS_RANDOM.string('x',10) || '1#';
-- Remove CONTAINER=ALL for non-CDB environments.
EXECUTE IMMEDIATE 'ALTER USER anonymous IDENTIFIED BY ' || l_passwd || ' ACCOUNT UNLOCK
CONTAINER=ALL';
END;
/
Check the port setting for XML DB Protocol Server.
SQL> CONN sys@pdb1 AS SYSDBA
SQL> SELECT DBMS_XDB.gethttpport FROM DUAL;
GETHTTPPORT
-----------
0
1 row selected.
SQL>
If it is set to "0", you will need to set it to a non-zero value to enable it.
SQL> CONN sys@pdb1 AS SYSDBA
To uninstall ORDS, switch to the tomcat user and run the uninstall command:
# su - tomcat
$ cd /u01/ords
$ $JAVA_HOME/bin/java -jar ords.war uninstall
Enter the name of the database server [ol7-122.localdomain]:
Enter the database listen port [1521]:
Enter 1 to specify the database service name, or 2 to specify the database SID [1]:
Enter the database service name [pdb1]:
Requires SYS AS SYSDBA to verify Oracle REST Data Services schema.
Enter the database password for SYS AS SYSDBA:
Confirm password:
Retrieving information
Uninstalling Oracle REST Data Services
... Log file written to /u01/ords/logs/ords_uninstall_core_2018-06-14_155123_00142.log
Completed uninstall for Oracle REST Data Services. Elapsed time: 00:00:10.876
$
In older versions of ORDS you had to extract scripts to perform the uninstall in the following way.
su - tomcat
cd /u01/ords
$JAVA_HOME/bin/java -jar ords.war ords-scripts --scriptdir /tmp
Perform the uninstall from the "oracle" user using the following commands.
su - oracle
cd /tmp/scripts/uninstall/core/
sqlplus sys@pdb1 as sysdba
@ords_manual_uninstall /tmp/scripts/logs
Oracle APEX is a full spectrum technology. It can be used by so-called citizen developers, who can use the wizard
to create some simple applications to get going. However, these people can team up with a technical developer to
create a more complex application together, and in such a case it also goes full spectrum – code by code, line by
line, back-end development, front-end development, database development. If you get a perfect mix of front-end and
back-end developers, then you can create a truly great APEX application.
Our methodology is composed of different elements related to all aspects of an APEX development project.
This methodology is referred to as a waterfall because the output from one stage is the input for the next stage. A
primary problem with this approach is that it is assumed that all requirements can be established in advance.
Unfortunately, requirements often change and evolve during the development process.
The Oracle Application Express development environment enables developers to take a more iterative approach to
development. Unlike many other development environments, creating prototypes is easy. With Oracle Application
Express, developers can:
Use built-in wizards to quickly design an application user interface
Apex Development
Migration of Applications
After converting your forms files into XML files, sign into your APEX workspace and be sure you're using the
schema that contains all database objects needed in the forms. Now, create a Migration Project and upload the XML
files, following these steps:
1. Click App Builder.
2. Navigate to the right panel, click Oracle Forms Migrations.
3. Click Create Project.
4. Enter Project Name and Description.
5. Select the schema.
6. Upload the XML file.
7. Click Next.
8. Click Upload Another File if you have more XML files, otherwise click Create.
Now let's review each component in the uploaded forms to determine the proper regions to use in the APEX application. Also, let's review the Triggers and Program Units to identify the business logic in your Forms application and determine whether it needs to be replicated.
Oracle Forms applications still play a vital role, but many are looking for ways to modernize their
applications. Modernize your Oracle Forms applications by migrating them to Oracle Application Express (Oracle
APEX) in the cloud.
Your stored procedures and PL/SQL packages work natively in Oracle APEX, making it the clear platform of choice
for easily transitioning Oracle Forms applications to modern web applications with more capabilities, less
complexity, and lower development and maintenance costs.
Oracle APEX is a low-code development platform that enables you to build scalable, secure enterprise apps, with
world-class features, that you can deploy anywhere. You can quickly develop and deploy compelling apps that solve
real problems and provide immediate value. You won't need to be an expert in a vast array of technologies to deliver
sophisticated solutions.
Architecture
This architecture shows the process of migrating on-premises Oracle Forms applications to Oracle Application
Express (APEX) applications with the help of an XML converter, and then moving them to the cloud. The following
diagram illustrates this reference architecture.
For resources that require maximum security, Oracle recommends that you use security zones. A security zone is a
compartment associated with an Oracle-defined recipe of security policies that are based on best practices. For
example, the resources in a security zone must not be accessible from the public internet and they must be encrypted
using customer-managed keys. When you create and update resources in a security zone, Oracle Cloud
Infrastructure validates the operations against the policies in the security-zone recipe, and denies operations that
violate any of the policies.
Schema
Retain the database structure that Oracle Forms was built on, as is, and use that as the schema for Oracle APEX.
Business Logic
Most of the business logic for Oracle Forms is in triggers, program units, and events. Before starting the migration
of Oracle Forms to Oracle APEX, migrate the business logic to stored procedures, functions, and packages in the
database.
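As a hypothetical sketch, logic that lived in a Forms WHEN-VALIDATE-ITEM trigger might become a stored procedure like the one below, which an APEX validation or process can then call; the EMPLOYEES table and the salary rule are invented purely for illustration.
CREATE OR REPLACE PROCEDURE check_salary_range (
  p_job_id IN employees.job_id%TYPE,
  p_salary IN employees.salary%TYPE)
IS
  l_max_salary employees.salary%TYPE;
BEGIN
  SELECT MAX(salary)
    INTO l_max_salary
    FROM employees
   WHERE job_id = p_job_id;
  -- Reject salaries far above anything seen for this job.
  IF p_salary > l_max_salary * 1.5 THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Salary is out of range for job ' || p_job_id);
  END IF;
END check_salary_range;
/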
Considerations
Consider the following key items when migrating Oracle Forms Object navigator components to Oracle Application
Express (APEX):
Data Blocks
A data block from Oracle Forms relates to Oracle APEX with each page broken up into several regions and
components. Review the Oracle APEX Component Templates available in the Universal Theme.
Triggers
In Oracle Forms, triggers control almost everything. In Oracle APEX, control is based on flexible conditions that are
activated when a page is submitted and are managed by validations, computations, dynamic actions, and processes.
Alerts
Most messages in Oracle APEX are generated when you submit a page.
Attached Libraries
Oracle APEX takes care of the JavaScript and CSS libraries that support the Universal Theme, which supports all of
the components that you need for flexible, dynamic applications. You can include your own JavaScript and CSS in
several ways, mostly through page attributes. You can choose to add inline code as reference files that exist either in
the database as a BLOB (#APP_IMAGES#) or sit on the middle tier, typically served by Oracle REST Data Services
(ORDS). When a reference file is on an Oracle WebLogic Server, the file location is prefixed with
#IMAGE_PREFIX#.
Editors
Oracle APEX has a text area and a rich text editor, which is equivalent to Editors in Oracle Forms.
List of Values (LOV)
In APEX, the LOV is coupled with the item type. A radio group works well with a small handful of values; use a Select List for middle-sized sets, and a Popup LOV for large data sets. You can reuse the queries from Record Groups in Oracle Forms for the LOV query in Oracle APEX. LOVs in Oracle APEX can be dynamically driven by a SQL query, or be statically defined. A static definition allows a variety of conditions to be applied to each entry. These LOVs can then be associated with items such as Radio Groups and Select Lists, or with a column in a report, to translate a code to a label.
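A dynamic LOV query in APEX selects two columns, the display value first and the return value second; the classic EMP table here is just an illustration, not a required schema.
SELECT ename AS display_value,
       empno AS return_value
  FROM emp
 ORDER BY 1;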
Parameters
Page Items in Oracle APEX are populated between pages to pass information to the next page, such as the selected
record in a report. Larger forms with a number of items are generally submitted as a whole, where the page process
handles the data, and branches to the next page. These values can be protected from URL tampering by session state
security, at item, page, and application levels, often by default.
Popup Menus
Popup Menus are not available out of the box in Oracle APEX, but you can build them by using Lists and
associating a button with the menu.
Program Units
Migrate the Stored procedures and functions defined in program units in Oracle Forms into Database Stored
Procedures/Functions and use Database Stored procedures/functions in Oracle APEX
processes/validations/computations.
Property Classes
Property Classes in Oracle Forms allow the developer to utilize common attributes among each instance of a
component. In APEX you can define User Interface Defaults in the data dictionary, so that each time items or
reports are created for specific tables or columns, the same features are applied by default. As for the style of the
application, you can apply classes to components that carry a particular look and feel. The Universal Theme has a
default skin that you can reconfigure declaratively.
Record Groups
Use queries in Record Groups to define the Dynamic LOV in Oracle APEX.
Reports
Interactive Reports in Oracle APEX come with a number of runtime manipulation options that give users the power
to customize and manipulate the reports. Classic Reports are simple reports that don't provide runtime manipulation
options, but are based on SQL.
Menus
Oracle Forms have specific menu files, controlled by database roles. Updating the .mmx file required that there be
no active users. The menu in Oracle APEX can either be across the top, or down the left side. These menus can be
statically defined, or dynamically driven. Static navigation entries can be controlled by authorization schemes, or
custom conditions. Dynamic menus can have security tables integrated within the SQL.
Properties
The Page Designer introduced in Oracle APEX is similar to Oracle Forms, particularly with regard to the ability to edit multiple components at once, showing only the intersecting attributes.
RMAN Backup/Restore
If you lost the APEX tablespace but your database is currently functioning, and assuming your APEX tablespace does not span multiple datafiles, you can attempt to swap out the datafile. Force a backup in RMAN before trying any of this.
There are a few different options here. All you really need are the following
Datafile
Control file
Archive / redologs (if you want to move forward or backward in time)
Then run rman target / from a bash terminal. In RMAN, run the following (the new datafile path below is a placeholder; the original text did not specify it):
RESTORE CONTROLFILE FROM '/tmp/oradata/your_ctrl_file_dir';
RUN {
  SQL 'ALTER TABLESPACE apex OFFLINE IMMEDIATE';
  SET NEWNAME FOR DATAFILE '/tmp/oradata/apex01.dbf' TO '<new_datafile_path>';
  RESTORE TABLESPACE apex;
  SWITCH DATAFILE ALL;
  RECOVER TABLESPACE apex;
}
Swap out Datafile
First find the location of your datafiles. You can find them by running the following in sqlplus / as sysdba (or whatever client you use):
spool '/tmp/spool.out'
select value from v$parameter where name = 'db_create_file_dest';
select tablespace_name from dba_data_files;
View the spool.out file to:
Verify the location of your datafiles
See if the datafile is still associated with that tablespace.
If the tablespace is still there, run:
select file_name, status from dba_data_files where tablespace_name = '<name>';
You want your datafile to be AVAILABLE. Then set the tablespace to read only and take it offline:
alter tablespace <name> read only;
alter tablespace <name> offline;
Now copy your .dbf file to the directory returned by querying the db_create_file_dest value. Don't overwrite the old one, then run:
alter tablespace <name> rename datafile '/u03/waterver/oradata/yourold.dbf' to '/u03/waterver/oradata/yournew.dbf';
This updates your controlfile to point to the new datafile.
You can then bring your tablespace back online and into read-write mode. You may also want to verify the status of the tablespace, the name of the datafile associated with it, and so on.
When you create an authorization scheme you select an authorization scheme type. The authorization scheme type
determines how an authorization scheme is applied. Developers can create new authorization type plug-ins to extend
this list.
Exists SQL Query: Enter a query that causes the authorization scheme to pass if it returns at least one row, and to fail if it returns no rows.
NOT Exists SQL Query: Enter a query that causes the authorization scheme to pass if it returns no rows, and to fail if it returns one or more rows.
PL/SQL Function Returning Boolean: Enter a function body. If the function returns true, the authorization succeeds.
Item in Expression 1 is NULL: Enter an item name. If the item is null, the authorization succeeds.
Item in Expression 1 is NOT NULL: Enter an item name. If the item is not null, the authorization succeeds.
Value of Item in Expression 1 Equals Expression 2: Enter an item name and a value. The authorization succeeds if the item's value equals the authorization value.
Value of Item in Expression 1 Does NOT Equal Expression 2: Enter an item name and a value. The authorization succeeds if the item's value is not equal to the authorization value.
Value of Preference in Expression 1 Does NOT Equal Expression 2: Enter a preference name and a value. The authorization succeeds if the preference's value is not equal to the authorization value.
Value of Preference in Expression 1 Equals Expression 2: Enter a preference name and a value. The authorization succeeds if the preference's value equals the authorization value.
Is In Group: Enter a group name. The authorization succeeds if the group is enabled as a dynamic group for the session.
Is Not In Group: Enter a group name. The authorization succeeds if the group is not enabled as a dynamic group for the session.
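For instance, the body of a "PL/SQL Function Returning Boolean" scheme might look like the following sketch; the APP_ADMINS table is hypothetical, while :APP_USER is the standard APEX substitution for the logged-in user.
DECLARE
  l_count NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO l_count
    FROM app_admins
   WHERE username = :APP_USER;
  -- Authorization succeeds only for registered administrators.
  RETURN l_count > 0;
END;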
Run the APEX installation script against the target database. The same script is used for new installations
and upgrades. The script automatically senses whether there is a version of APEX present and
automatically takes the appropriate action.
Update the existing version of the /i/ virtual directory with the images, JavaScript, CSS, etc. from the current version's APEX installation media.
For the standard HTTP Server installations, this is just a simple copy command.
For the Embedded PL/SQL Gateway (EPG), the script apxldimg.sql is used to load the images into the
database.
For the APEX Listener / Oracle REST Data Services (ORDS), recreate the i.jar file that contains the
references to the images, javascript, css, etc. from the APEX installation media OR copy the new versions
of the files to the existing location referenced by the current APEX Listener / ORDS / web server.
For APEX (HTML DB) versions 1.5 - 3.1, the schema name is: FLOWS_XXXXXX.
For example: FLOWS_010500 for HTML DB version 1.5.x
For APEX (HTML DB) versions 3.2.x and above, the schema name is: APEX_XXXXXX.
For example: APEX_210100 for APEX version 21.1.
If the query returns 0, it is a runtime-only installation, and apxrtins.sql should be used for the upgrade. If the query returns 1, this is a development install and apexins.sql should be used.
The full download is needed if the first two digits of the APEX version are different. For example, the full Application Express download is needed to go from 20.0 to 21.1. See <Note 752705.1> "ORA-1435: User Does not Exist When Upgrading APEX Using apxpatch.sql" for more information. The patch is needed if only the third digit of the version changes, so when upgrading from 21.1.0 to 21.1.2, apply the patch rather than doing a full installation.
END
Other supported standards are SOAP, UDDI, Web Services Description Language (WSDL), and JSR-181.
WebLogic is an Application Server that runs on a middle tier, between back-end databases and related applications
and browser-based thin clients. WebLogic Server mediates the exchange of requests from the client tier with
responses from the back-end tier.
WebLogic Server is based on Java Platform, Enterprise Edition (Java EE) (formerly known as Java 2 Platform,
Enterprise Edition or J2EE), the standard platform used to create Java-based multi-tier enterprise applications.
Oracle WebLogic Server vs. Apache Tomcat
The Apache Tomcat web server is often compared with WebLogic Server. The Tomcat web server serves static
content in web applications delivered in Java servlets and JavaServer Pages.
A Java DataBase Connectivity (JDBC) resource is a specific type of WebLogic resource that is related to JDBC. To
secure JDBC database access, you can create security policies and security roles for all connection pools as a group,
individual connection pools, and MultiPools.
Oracle's service oriented architecture (SOA)
SOA is not a new concept. Sun defined SOA in the late 1990s to describe Jini, which is an environment for dynamic discovery and use of services over a network. Web services have taken the concept of services introduced by Jini technology and implemented it as services delivered over the web using technologies such as XML, Web Services Description Language (WSDL), Simple Object Access Protocol (SOAP), and Universal Description, Discovery, and Integration (UDDI). SOA is emerging as the premier integration and architecture framework in today's complex and heterogeneous computing environment.
SOA uses the find-bind-execute paradigm as shown in Figure 1. In this paradigm, service providers register their
service in a public registry. This registry is used by consumers to find services that match certain criteria. If the
registry has such a service, it provides the consumer with a contract and an endpoint address for that service. The challenge is to define a service interface that is at the right level of abstraction. Services should provide coarse-grained functionality.
Oracle Fusion Applications Architecture
Oracle offers three distinct products as part of the Oracle WebLogic Server 11g family:
Oracle WebLogic Server Standard Edition (SE)
Oracle WebLogic Server Enterprise Edition (EE)
Oracle WebLogic Suite
Oracle WebLogic 11g Server Standard Edition The WebLogic Server Standard Edition (SE) is a full-featured
server, but is mainly intended for developers to develop enterprise applications quickly. WebLogic Server SE
implements all the Java EE standards and offers management capabilities through the Administration Console.
Oracle WebLogic 11g Server Enterprise Edition Oracle WebLogic Server EE is designed for mission-critical
applications that require high availability and advanced diagnostic capabilities. The EE version contains all the
features of the SE version, of course, but in addition supports clustering of servers for high availability and the ability
to manage multiple domains, plus various diagnostic tools.
Oracle WebLogic Suite 11g
Oracle WebLogic Suite offers support for dynamic scale-out applications with features such as in-memory data grid
technology and comprehensive management capabilities.
It consists of the following components:
Oracle WebLogic Server EE
Oracle Coherence (provides in-memory caching)
Oracle TopLink (provides persistence functionality)
Oracle JRockit (for low-latency, high-throughput transactions)
Enterprise Manager (Admin & Operations)
Development Tools (jdeveloper/eclipse)
D:\uninstall\p12979653_111150_Generic\12979653> D:\app\oracle\middleware\oracle_common\OPatch\opatch.bat apply
The second one patches EM. Set the ORACLE_HOME environment variable to the "oracle_common" directory of your WebLogic install.
To apply patch # 12917525:
D:\uninstall\p12917525_111150_Generic\12917525> D:\app\oracle\middleware\oracle_common\OPatch\opatch.bat apply
The patches (12979653 and 12917525) are only available via support.oracle.com:
https://updates.oracle.com/ARULink/PatchDetails/process_form?patch_num=12979653
https://updates.oracle.com/ARULink/PatchDetails/process_form?patch_num=12917525
5) Create a WebLogic domain with a managed server
D:\app\oracle\middleware\oracle_common\common\bin
Click config
Check Oracle Enterprise Manager - 11.1.1.0 [oracle_common]
Check Oracle JRF - 11.1.1.0
Enter a domain name of your choice and check the domain location and application location
Enter the administrator username and password
Select production mode and check the JDK already installed
Check Administration Server and Managed Server
ADF Administration Server
Enter the AdminServer name, e.g. ADFSERVER, Port: 7001
Configure Managed Server
Add
Enter the name of the managed server, e.g. adfmanage, port: 7002
Click next on configure cluster (we don't need to configure a cluster at this stage)
Add adfmachine, e.g. adfmachine, port 5556
Move the adfmanage configured in step 8 below the adfmachine configured in step 10
Click next and complete the setup
6) Update the EM JSF libraries by running the "upgradeADF" function in wlst (in disconnected mode):
use weblogic console to upgrade to JSF 2.0
C:\...\weblogic-home\oracle_common\common\bin\wlst.bat
upgradeADF('C:\...\weblogic-home\user_projects\domains\your-domain')
OR
For the JSF upgrade:
Open http://192.192.11.166:7001/console
Press Lock and Edit
Select jsf (1.2,1.2.9) and click update
Set the JSF 2.0 target as adfmanage to deploy on the adfmanage server; otherwise, by default it installs on the ADF admin server
Click Lock & Edit > Deployment > Install > upload your file > next, next to complete the deployment
Activate the deployment to complete the ear file deployment.
Restart the services.
8) Configure Data Source (This should be communicated by development team)
(DESCRIPTION_LIST=
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=sales1-svr)(PORT=1521))
    (ADDRESS=(PROTOCOL=tcp)(HOST=sales2-svr)(PORT=1521))
    (CONNECT_DATA=
      (SERVICE_NAME=sales.us.example.com)))
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=hr1-svr)(PORT=1521))
    (ADDRESS=(PROTOCOL=tcp)(HOST=hr2-svr)(PORT=1521))
    (CONNECT_DATA=
      (SERVICE_NAME=hr.us.example.com))))

(DESCRIPTION=
  (SOURCE_ROUTE=yes)
  (ADDRESS=(PROTOCOL=tcp)(HOST=host1)(PORT=1630))   # 1
  (ADDRESS_LIST=
    (FAILOVER=on)
    (LOAD_BALANCE=off)   # 2
    (ADDRESS=(PROTOCOL=tcp)(HOST=host2a)(PORT=1630))
    (ADDRESS=(PROTOCOL=tcp)(HOST=host2b)(PORT=1630)))
  (ADDRESS=(PROTOCOL=tcp)(HOST=host3)(PORT=1521))   # 3
  (CONNECT_DATA=(SERVICE_NAME=sales.us.example.com)))
The client is instructed to connect to the protocol address of the first Oracle Connection Manager, as
indicated by:
(ADDRESS=(PROTOCOL=tcp)(HOST=host1)(PORT=1630))
16) Apply patch 13327994, if required (REP-501 error and report engine crash)
D:\uninstall\p12632886_111140_MSWIN-x86-64\12632886>d:\app\oracle\middleware\frhome\OPatch\opatch
17) Set KEEPALIVE=OFF for the OHS web tier in EM 11g at the following path: webtier > ohs1 > Oracle HTTP Server > Advanced Configuration > httpd.conf
18) Add in formsweb.cfg: OtherParams=term=D:\app\oracle\middleware\frinstance\config\FormsComponent\forms\fmrpcweb.res
19) Make the TNS entry in tnsnames.ora at the path D:\app\oracle\middleware\frinstance\config\
J2EE Platform
WebLogic Server contains Java 2 Platform, Enterprise Edition (J2EE) technologies. J2EE is the standard platform
for developing multitier enterprise applications based on the Java programming language. The technologies that
make up J2EE were developed collaboratively by Sun Microsystems and other software vendors, including BEA
Systems.
J2EE applications are based on standardized, modular components. WebLogic Server provides a complete set of
services for those components and handles many details of application behavior automatically, without requiring
programming.
J2EE Platform and WebLogic Server
WebLogic Server implements Java 2 Platform, Enterprise Edition (J2EE) version 1.3 technologies. J2EE is the
standard platform for developing multi-tier Enterprise applications based on the Java programming language. The
technologies that make up J2EE were developed collaboratively by Sun Microsystems and other software vendors,
including BEA Systems.
WebLogic Server J2EE applications are based on standardized, modular components. WebLogic Server provides a
complete set of services for those modules and handles many details of application behavior automatically, without
requiring programming.
ODBC and JDBC details
Download the .iso (ISO) file from the internet and store it on a CD-ROM or USB stick after making it bootable using Pen Drive Linux or UNetbootin.
1. Boot into the USB Stick
Restart your computer after attaching the CD-ROM or pen drive. Press Enter at boot time and select the CD-ROM or pen drive option to start the boot process. You can also hold the F12 key to open the manual boot menu, which lists the available boot options (USB, CD-ROM, or any of the installed operating systems) so you can select one before the system starts.
2. Drive Selection
Select the drive where the OS will be installed. Select "Erase Disk and install Ubuntu" if you want to replace the existing OS; otherwise select the "Something else" option and click INSTALL NOW.
3. Start Installation
A small panel will ask for confirmation. Click Continue if you don't want to change any of the information provided.
Select your location on the map and install Linux.
Provide the login details.
Use the .iso file downloaded from the internet and start the virtual box.
Here we need to allocate RAM to the virtual OS. It should be at least 2 GB, the minimum requirement.
Choose a type of storage on the physical hard disk, and choose the disk size (minimum 12 GB as per the requirement).
Then choose how much you want to shrink your drive. It is recommended that you set aside at least 20 GB (20,000 MB) for Linux.
Select the drive for completing the OS installation. Select "Erase Disk and install Ubuntu" if you want to replace the existing OS; otherwise select the "Something else" option and click INSTALL NOW.
You are almost done. It should take 10-15 minutes to complete the installation. Once the installation finishes, restart
the system.
Some of the intermediate Linux commands are mentioned below:
1. rm: The rm command is mainly used for deleting or removing files, or multiple files. Used recursively, it removes an entire directory.
2. uname: This command displays the current system information. It is useful for understanding a Linux system's current configuration.
3. uptime: The uptime command is one of the key commands for the Kali Linux platform; it reports how long the system has been running.
4. users: This command displays the login names of the users currently logged in on the Linux system.
5. less: The less command displays a file without opening it in cat or vi. It is essentially a more powerful extension of the 'more' command in the Linux environment.
6. more: This command displays output one page at a time. It is mainly useful for reading a long file without scrolling.
7. sort: This command sorts the contents of a specified file. It is very useful for displaying the critical contents of a big file in sorted order; with the -r option, sort outputs the content in reverse order.
8. vi: This is one of the key editors, available from the very beginning on UNIX and Linux platforms. It provides two modes, normal and insert.
9. free: This command provides details of the free memory (RAM) available in a Linux system.
10. history: This command holds the history of all the commands executed on the Linux platform.
Operating System   Minimum Physical Memory Required   Minimum Available Memory Required
Linux              4 GB                               8 GB
UNIX               4 GB                               8 GB
Windows            4 GB                               8 GB
B.1 Prerequisites
Install a 64-bit JDK 1.7 based on your platform.
Add the JDK 1.7 location to the system path.
B.2 Installing the WebLogic Server
Use these steps to install WebLogic Server 11g.
Run the Oracle WebLogic 10.3.6.0 installer from the image that you downloaded from the Oracle Software Delivery Cloud.
The item name of the installer is Oracle WebLogic Server 11gR1 (10.3.6) Generic and Coherence (V29856-01).
The filename of the installer is: wls1036_generic.jar
For Windows, open a command window
> java -jar wls1036_generic.jar
On UNIX platforms, the command syntax to run the installer is platform dependent.
For Linux and AIX (non-Hybrid JDK)
> java -jar wls1036_generic.jar
For Solaris and HP-UX (Hybrid JDK)
> java -d64 -jar wls1036_generic.jar
For example, to start the Administration Console for a local instance of Oracle WebLogic Server running on your
system, enter the following URL in a web browser:
http://localhost:7001/console/
If you started the Administration Console using secure socket layer (SSL), you must add s after http, as follows:
https://<HOST>:<PORT>/console
When the login page of the WebLogic Administration Console appears, enter your administrative credentials.
Check: Server Make and Architecture
Task: Confirm that server make, model, core architecture, and host bus adaptors (HBA) or network interface controllers (NICs) are supported to run with Oracle Database and Oracle Grid Infrastructure.
Check: Runlevel
Task: 3 or 5.
Check: Server Display Cards
Task: At least 1024 x 768 display resolution, which Oracle Universal Installer requires.
Check: Operating system general requirements
Task: OpenSSH installed manually, if you do not have it installed already as part of a default Linux installation.
Check: Linux x86-64 operating system requirements
Task: The following Linux x86-64 kernels are supported:
Check: Oracle Database Preinstallation RPM for Oracle Linux
Task: If you use Oracle Linux, then Oracle recommends that you run the Oracle Database Preinstallation RPM for your Linux release to configure your operating system for Oracle Database and Oracle Grid Infrastructure installations.
Check: Swap space allocation relative to RAM (Oracle Database)
Task: Between 1 GB and 2 GB: 1.5 times the size of the RAM. Between 2 GB and 16 GB: equal to the size of the RAM. More than 16 GB: 16 GB. For example, a server with 8 GB of RAM needs 8 GB of swap.
Note: If you enable HugePages for your Linux servers, then you should deduct the memory allocated to HugePages from the available RAM before calculating swap space.
Check: Swap space allocation relative to RAM (Oracle Restart)
Task: Between 8 GB and 16 GB: equal to the size of the RAM. More than 16 GB: 16 GB.
Note: If you enable HugePages for your Linux servers, then you should deduct the memory allocated to HugePages from the available RAM before calculating swap space.
Check: Oracle Inventory (oraInventory) and OINSTALL Group Requirements
Task: For upgrades, the installer detects an existing oraInventory directory from the /etc/oraInst.loc file, and uses the existing oraInventory. For new installs, if you have not configured an oraInventory directory, then you can specify the oraInventory directory during the software installation and Oracle Universal Installer will set up the software directories for you. The Oracle inventory is one directory level up from the Oracle base for the Oracle software installation and designates the installation owner's primary group as the Oracle inventory group. Ensure that the oraInventory path that you specify is in compliance with the Oracle Optimal Flexible Architecture recommendations.
Check / Task:
Groups and users: Oracle recommends that you create the groups and user accounts required for your security plans, including the installation owner, before starting installation. Installation owners have resource limit settings and other requirements. Group and user names must use only ASCII characters.
Mount point paths for the software binaries: Oracle recommends that you create an Optimal Flexible Architecture configuration as described in the appendix "Optimal Flexible Architecture" in the Oracle Database Installation Guide for your platform.
Oracle home path uses only ASCII characters: Ensure that the Oracle home (the Oracle home path you select for Oracle Database) uses only ASCII characters. This restriction includes installation owner user names, which are used as a default for some home paths, as well as other directory names you may select for paths.
Unset Oracle software environment variables: If you have an existing Oracle software installation and you are using the same user for this installation, then unset the following environment variables: $ORACLE_HOME, $ORA_NLS10, and $TNS_ADMIN.
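For example, in a bash shell all three variables can be cleared in one command:
$ unset ORACLE_HOME ORA_NLS10 TNS_ADMIN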
Set locale (if needed): Specify the language and the territory, or locale, in which you want to use Oracle components. A locale is a linguistic and cultural environment in which a system or program is running. NLS (National Language Support) parameters determine the locale-specific behavior on both servers and clients. The locale setting of a component determines the language of its user interface and its globalization behavior, such as date and number formatting.
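For example, a UTF-8 American English environment could be selected as follows (illustrative values; NLS_LANG uses the language_territory.characterset format):
$ export LANG=en_US.UTF-8
$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8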
Check shared memory file system mount: By default, your operating system includes an entry in /etc/fstab to mount /dev/shm. However, if your Cluster Verification Utility (CVU) or installer checks fail, then confirm that /dev/shm is mounted:
With rw and exec permissions set on it
Without noexec or nosuid set on it
Note: These options may not be listed, as they are usually set as the default permissions by your operating system.
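The current mount and its options can be verified with standard Linux commands:
$ mount | grep /dev/shm
$ grep shm /etc/fstab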
Use this checklist to review storage minimum requirements and assist with configuration
planning.
Check / Task:
Minimum local disk storage space for Oracle software, for Linux x86-64:
At least 6.0 GB for an Oracle Grid Infrastructure for a standalone server installation.
At least 7.8 GB for Oracle Database Enterprise Edition.
At least 7.8 GB for Oracle Database Standard Edition 2.
Note:
Oracle recommends that you allocate
approximately 100 GB to allow additional space for
applying any future patches on top of the existing
Oracle home. For specific patch-related disk space
requirements, please refer to your patch
documentation.
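To confirm the free space on the file system that will hold the Oracle home (assuming the /u01 mount point used later in this guide):
$ df -h /u01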
Check / Task:
Select database file storage option: Ensure that you have one of the following storage options available: a file system, or Oracle Automatic Storage Management (Oracle ASM).
Determine your recovery plan: If you want to enable recovery during installation, then be prepared to select one of the following options:
File system: Configure a fast recovery area on a file system during installation.
Oracle Automatic Storage Management: Configure a fast recovery area disk group using Oracle ASMCA.
Prerequisites
Once you have downloaded and set up Oracle Linux 8 (OL8), there are some prerequisite steps that need to be performed before kicking off the installation. These steps are shown below.
Get the IP address using the ‘ifconfig’ or ‘ip addr’ command, and note the fully qualified hostname. For example:
oracledb19col8.rishoradev.com
Add the IP address and hostname to the “/etc/hosts” file so that the hostname resolves. You can use the vi editor for this.
192.168.XX.X oracledb19col8.rishoradev.com
Next, install the “oracle-database-preinstall-19c” package. This package performs all the operating system setup that is necessary to install 19c.
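On OL8 the package is available from the default repositories and can be installed as root, for example:
# dnf install -y oracle-database-preinstall-19c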
....
....
Installed:
ksh-20120801-254.0.1.el8.x86_64 libaio-devel-0.3.112-1.el8.x86_64
libnsl-2.28-151.0.1.el8.x86_64 lm_sensors-libs-3.4.0-23.20180522git70f7e08.el8.x86_64
oracle-database-preinstall-19c-1.0-2.el8.x86_64 sysstat-11.7.3-6.0.1.el8.x86_64
Complete!
The next step is not mandatory, but I ran ‘yum update’ because I wanted to make sure I also had the latest OS packages. It might take a while for all the packages to be installed.
Edit the “/etc/selinux/config” file and set “SELINUX=permissive”. It is recommended that you restart the server after this step.
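A minimal sketch of this step, run as root (setenforce also switches the mode immediately, without waiting for the restart):
# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# setenforce Permissive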
Disable the firewall.
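On OL8 the firewall is managed by firewalld, so this is typically done as root with:
# systemctl stop firewalld
# systemctl disable firewalld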
Create the directory structure where Oracle 19c will be installed and grant privileges, as shown below.
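A sketch using the ORACLE_HOME and inventory paths referenced later in this guide, with the standard oracle:oinstall ownership created by the preinstall package:
# mkdir -p /u01/app/oracle/product/19c/dbhome_1
# mkdir -p /u01/app/oraInventory
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01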
Log in as the “oracle” user.
Create a directory for hosting the scripts and navigate to it:
$ mkdir -p /home/oracle/scripts && cd /home/oracle/scripts
Create an environment file in this directory. The values below are illustrative (adjust the hostname, SID, PDB name, and paths to your own system); the same variables are used by the installer and DBCA commands later in this guide:
$ cat > /home/oracle/scripts/setEnv.sh <<EOF
> export ORACLE_HOSTNAME=oracledb19col8.rishoradev.com
> export ORA_INVENTORY=/u01/app/oraInventory
> export ORACLE_BASE=/u01/app/oracle
> export ORACLE_HOME=\$ORACLE_BASE/product/19c/dbhome_1
> export ORACLE_SID=cdb1
> export PDB_NAME=pdb1
> export DATA_DIR=\$ORACLE_BASE/oradata
> export PATH=\$ORACLE_HOME/bin:\$PATH
> EOF
Issue the following command to add a reference to the environment file created above to the “/home/oracle/.bash_profile”.
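For example:
$ echo ". /home/oracle/scripts/setEnv.sh" >> /home/oracle/.bash_profile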
Copy the Oracle software that you have downloaded to a directory. I have copied it under dbhome_1.
total 2987996
Unzip the Oracle software in ‘/u01/app/oracle/product/19c/dbhome_1’ directory, using the ‘unzip’ command as
shown below. We’ll set this path as the ORACLE_HOME later on during the installation.
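For example (the zip file name is visible in the directory listing that follows):
$ cd /u01/app/oracle/product/19c/dbhome_1
$ unzip LINUX.X64_193000_db_home.zip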
[oracle@oracledb19col8 dbhome_1]$ ls
addnode clone cv deinstall drdaas hs javavm ldap mgw olap ord owm
QOpatch relnotes runInstaller sqldeveloper srvm utl
apex crs data demo dv install jdbc lib network OPatch ords perl R
root.sh schagent.conf sqlj suptools wwg
assistants css dbjava diagnostics env.ora instantclient jdk LINUX.X64_193000_db_home.zip nls opmn
oss plsql racg root.sh.old sdk sqlpatch ucp xdk
bin ctx dbs dmu has inventory jlib md odbc oracore oui precomp rdbms
root.sh.old.1 slax sqlplus usm
This completes all the prerequisite steps, and now we are all set to kick off the installation.
Installation
For installing Oracle, you can either chose to use the Interactive mode or the Silent mode. The interactive mode
would open up the GUI screens and user input would be required at every step, whereas, for the silent mode, all the
required parameters are passed using the command line, and hence, it does not display any screens.
For interactive mode, I generally launch the installer through MobaXterm. Download MobaXterm on the host machine, open a console, and connect to your Linux machine over ‘ssh’, using the IP address of the Linux machine and the oracle user.
Navigate to the folder where you have unzipped the Oracle software using the MobaXterm console and execute ‘runInstaller’.
Note: If you are installing the software on Linux 8, the installer fails at launch, because the 19.3 installer does not recognize Linux 8 as a supported distribution.
Execute the following command before you launch the installer to get around this error.
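A commonly used workaround is to make the installer treat the operating system as an earlier Oracle Linux release; the exact release ID shown here is an assumption:
$ export CV_ASSUME_DISTID=OEL7.8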
Now, if you execute runInstaller, it will work just fine, and the installer will open without any issues.
You can go through the subsequent steps in the interactive mode to complete the installation. However, for this post, we are going to use the silent mode to install the software. You can find more details on the silent mode in the Oracle documentation.
To install Oracle using the silent installation, log in as the oracle user, navigate to the folder where you have unzipped the software, and run runInstaller in silent mode. The opening line below is a typical invocation; the remaining parameters are passed as shown:
$ ./runInstaller -ignorePrereq -waitforcompletion -silent \
> oracle.install.option=INSTALL_DB_SWONLY \
> ORACLE_HOSTNAME=${ORACLE_HOSTNAME} \
> UNIX_GROUP_NAME=oinstall \
> INVENTORY_LOCATION=${ORA_INVENTORY} \
> SELECTED_LANGUAGES=en,en_GB \
> ORACLE_HOME=${ORACLE_HOME} \
> ORACLE_BASE=${ORACLE_BASE} \
> oracle.install.db.InstallEdition=EE \
> oracle.install.db.OSDBA_GROUP=dba \
> oracle.install.db.OSBACKUPDBA_GROUP=dba \
> oracle.install.db.OSDGDBA_GROUP=dba \
> oracle.install.db.OSKMDBA_GROUP=dba \
> oracle.install.db.OSRACDBA_GROUP=dba \
> SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
> DECLINE_SECURITY_UPDATES=true
On successful completion, the installer will prompt you to run the root scripts:
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/oracle/product/19c/dbhome_1/root.sh
Run both scripts as the root user on the host (oracledb19col8):
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/oracle/product/19c/dbhome_1/root.sh
Database Creation
This should complete the installation process. The next stage will be to create the database.
Before we create the database, the first thing we need to do is start the listener, using “lsnrctl start”.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
------------------------
Alias LISTENER
SNMP OFF
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oracledb19col8.rishoradev.com)(PORT=1521)))
Once the listener is up and running, you need to create the database using the Database Configuration Assistant (DBCA). This can be done in interactive mode by issuing the dbca command through MobaXterm; once you execute the dbca command, the GUI should pop up. Alternatively, DBCA can be run in silent mode, as shown below. The opening lines are a typical silent invocation (the -gdbName and -sid values come from the setEnv.sh file created earlier); the remaining parameters are passed as shown:
$ dbca -silent -createDatabase \
-gdbName ${ORACLE_SID} -sid ${ORACLE_SID} \
-templateName General_Purpose.dbc \
-characterSet AL32UTF8 \
-sysPassword Welcome1 \
-systemPassword Welcome1 \
-createAsContainerDatabase true \
-numberOfPDBs 1 \
-pdbName ${PDB_NAME} \
-pdbAdminPassword Welcome1 \
-databaseType MULTIPURPOSE \
-memoryMgmtType auto_sga \
-totalMemory 2000 \
-storageType FS \
-datafileDestination "${DATA_DIR}" \
-redoLogFileSize 50 \
-emConfiguration NONE \
-ignorePreReqs
This would create the database for you. Now you have successfully installed Oracle Database 19c.
....
....
Querying v$version from SQL*Plus confirms the installed release:
BANNER_FULL
Version 19.3.0.0.0
Post-Installation Steps
Create start and stop scripts in the scripts directory, so the database can be started and stopped with a single command. A typical way to create the start script (the heredoc wrapper and file name are assumed; the script body is shown as-is):
$ cat > /home/oracle/scripts/start_all.sh <<EOF
#!/bin/bash
. /home/oracle/scripts/setEnv.sh
export ORAENV_ASK=NO
. oraenv
export ORAENV_ASK=YES
dbstart \$ORACLE_HOME
EOF
And the stop script:
$ cat > /home/oracle/scripts/stop_all.sh <<EOF
#!/bin/bash
. /home/oracle/scripts/setEnv.sh
export ORAENV_ASK=NO
. oraenv
export ORAENV_ASK=YES
dbshut \$ORACLE_HOME
EOF
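Make both scripts executable:
$ chmod u+x /home/oracle/scripts/start_all.sh /home/oracle/scripts/stop_all.sh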
Set the restart flag for the instance (and for every instance) to ‘Y’ in the ‘/etc/oratab’ file. You can use the ‘vi’ editor.
Here, we have created only one container database, so I have edited just that one line, as shown below.
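Assuming the SID cdb1 from setEnv.sh, the edited line would look like this:
cdb1:/u01/app/oracle/product/19c/dbhome_1:Y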
Once you have edited the ‘/etc/oratab’ file, you can start and stop the database by calling the scripts start_all.sh and stop_all.sh, respectively, as the “oracle” user.
Enable Oracle Managed Files (OMF) and set the pluggable database to start automatically when the instance is started. A typical way to do this (using the DATA_DIR and PDB_NAME values from setEnv.sh) is:
$ sqlplus / as sysdba <<EOF
> alter system set db_create_file_dest='${DATA_DIR}';
> alter pluggable database ${PDB_NAME} save state;
> exit;
> EOF
===========================END=========================