Oracle9i Data Warehousing Guide
Release 1 (9.0.1)
June 2001
Part No. A90237-01
Oracle9i Data Warehousing Guide, Release 1 (9.0.1)
Contributors: Patrick Amor, Hermann Baer, Srikanth Bellamkonda, Randy Bello, Tolga Bozkaya, Benoit
Dageville, John Haydu, Lilian Hobbs, Hakan Jakobsson, George Lumpkin, Jack Raitto, Ray Roccaforte,
Gregory Smith, Ashish Thusoo, Jean-Francois Verrier, Gary Vincent, Andy Witkowski, Zia Ziauddin
The Programs (which include both the software and documentation) contain proprietary information of
Oracle Corporation; they are provided under a license agreement containing restrictions on use and
disclosure and are also protected by copyright, patent, and other intellectual and industrial property
laws. Reverse engineering, disassembly, or decompilation of the Programs is prohibited.
The information contained in this document is subject to change without notice. If you find any problems
in the documentation, please report them to us in writing. Oracle Corporation does not warrant that this
document is error free. Except as may be expressly permitted in your license agreement for these
Programs, no part of these Programs may be reproduced or transmitted in any form or by any means,
electronic or mechanical, for any purpose, without the express written permission of Oracle Corporation.
If the Programs are delivered to the U.S. Government or anyone licensing or using the Programs on
behalf of the U.S. Government, the following notice is applicable:
Restricted Rights Notice Programs delivered subject to the DOD FAR Supplement are "commercial
computer software" and use, duplication, and disclosure of the Programs, including documentation,
shall be subject to the licensing restrictions set forth in the applicable Oracle license agreement.
Otherwise, Programs delivered subject to the Federal Acquisition Regulations are "restricted computer
software" and use, duplication, and disclosure of the Programs shall be subject to the restrictions in FAR
52.227-19, Commercial Computer Software - Restricted Rights (June, 1987). Oracle Corporation, 500
Oracle Parkway, Redwood City, CA 94065.
The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other inherently
dangerous applications. It shall be the licensee's responsibility to take all appropriate fail-safe, backup,
redundancy, and other measures to ensure the safe use of such applications if the Programs are used for
such purposes, and Oracle Corporation disclaims liability for any damages caused by such use of the
Programs.
Oracle is a registered trademark, and LogMiner, Oracle9i, Oracle Call Interface, Oracle Database
Configuration Assistant, Oracle Enterprise Manager, Oracle interMedia, Oracle Net, Oracle Spatial, Oracle
Store, Oracle Text, Oracle Trace, PL/SQL, Real Application Clusters, and SQL*Plus are trademarks or
registered trademarks of Oracle Corporation. Other names may be trademarks of their respective
owners.
Contents
Preface.......................................................................................................................................................... xix
Part I Concepts
Star Schemas .................................................................................................................................. 2-4
Other Schemas............................................................................................................................... 2-4
Data Warehousing Objects................................................................................................................ 2-5
Fact Tables...................................................................................................................................... 2-5
Dimension Tables ......................................................................................................................... 2-6
Unique Identifiers ......................................................................................................................... 2-7
Relationships ................................................................................................................................. 2-8
Typical Example of Data Warehousing Objects and Their Relationships............................ 2-8
Striping, Mirroring, and Media Recovery............................................................................... 4-10
RAID 5.......................................................................................................................................... 4-11
The Importance of Specific Analysis........................................................................................ 4-12
6 Indexes
Bitmap Indexes.................................................................................................................................... 6-2
Bitmap Join Indexes...................................................................................................................... 6-6
B-tree Indexes .................................................................................................................................... 6-10
Local Indexes Versus Global Indexes ........................................................................................... 6-10
7 Integrity Constraints
Why Integrity Constraints are Useful in a Data Warehouse ...................................................... 7-2
Overview of Constraint States ......................................................................................................... 7-3
Typical Data Warehouse Integrity Constraints ............................................................................. 7-4
UNIQUE Constraints in a Data Warehouse ............................................................................. 7-4
FOREIGN KEY Constraints in a Data Warehouse................................................................... 7-5
RELY Constraints ......................................................................................................................... 7-6
Integrity Constraints and Parallelism........................................................................................ 7-7
Integrity Constraints and Partitioning ...................................................................................... 7-7
View Constraints .......................................................................................................................... 7-7
8 Materialized Views
Overview of Data Warehousing with Materialized Views......................................................... 8-2
Materialized Views for Data Warehouses................................................................................. 8-2
Materialized Views for Distributed Computing ...................................................................... 8-3
Materialized Views for Mobile Computing .............................................................................. 8-3
The Need for Materialized Views .............................................................................................. 8-3
Components of Summary Management ................................................................................... 8-5
Terminology .................................................................................................................................. 8-7
Schema Design Guidelines for Materialized Views ................................................................ 8-8
Types of Materialized Views .......................................................................................................... 8-10
Materialized Views with Aggregates....................................................................................... 8-10
Materialized Views Containing Only Joins ............................................................................ 8-16
Nested Materialized Views ....................................................................................................... 8-18
Creating Materialized Views .......................................................................................................... 8-22
Naming......................................................................................................................................... 8-23
Storage Characteristics............................................................................................................... 8-23
Build Methods ............................................................................................................................. 8-24
Enabling Query Rewrite ............................................................................................................ 8-24
Query Rewrite Restrictions ....................................................................................................... 8-25
Refresh Options........................................................................................................................... 8-26
ORDER BY Clause ...................................................................................................................... 8-30
Materialized View Logs ............................................................................................................. 8-30
Using Oracle Enterprise Manager ............................................................................................ 8-31
Using Materialized Views with NLS Parameters .................................................................. 8-31
Registering Existing Materialized Views..................................................................................... 8-32
Partitioning and Materialized Views ............................................................................................ 8-34
Partition Change Tracking ........................................................................................................ 8-34
Partitioning a Materialized View ............................................................................................. 8-38
Partitioning a Prebuilt Table ..................................................................................................... 8-39
Rolling Materialized Views ....................................................................................................... 8-40
Choosing Indexes for Materialized Views................................................................................... 8-40
Invalidating Materialized Views ................................................................................................... 8-41
Security Issues with Materialized Views ..................................................................................... 8-41
Altering Materialized Views........................................................................................................... 8-42
Dropping Materialized Views........................................................................................................ 8-42
Analyzing Materialized View Capabilities ................................................................................. 8-43
Using the DBMS_MVIEW.EXPLAIN_MVIEW Procedure ................................................... 8-43
MV_CAPABILITIES_TABLE.CAPABILITY_NAME Details ............................................... 8-46
MV_CAPABILITIES_TABLE Column Details ....................................................................... 8-48
Overview of Materialized View Management Tasks ................................................................ 8-49
9 Dimensions
What are Dimensions? ....................................................................................................................... 9-2
Creating Dimensions ......................................................................................................................... 9-4
Multiple Hierarchies .................................................................................................................... 9-7
Using Normalized Dimension Tables ....................................................................................... 9-9
Dimension Wizard...................................................................................................................... 9-10
Viewing Dimensions........................................................................................................................ 9-10
Using The DEMO_DIM Package.............................................................................................. 9-10
Using Oracle Enterprise Manager............................................................................................ 9-11
Using Dimensions with Constraints............................................................................................. 9-11
Validating Dimensions .................................................................................................................... 9-12
Altering Dimensions........................................................................................................................ 9-13
Deleting Dimensions ....................................................................................................................... 9-14
Extraction Via Distributed Operations .................................................................................. 11-11
Complete Refresh ..................................................................................................................... 14-10
Fast Refresh ............................................................................................................................... 14-11
ON COMMIT Refresh.............................................................................................................. 14-11
Manual Refresh Using the DBMS_MVIEW Package .......................................................... 14-11
Refresh Specific Materialized Views with REFRESH.......................................................... 14-12
Refresh All Materialized Views with REFRESH_ALL_MVIEWS ..................................... 14-13
Refresh Dependent Materialized Views with REFRESH_DEPENDENT......................... 14-13
Using Job Queues for Refresh................................................................................................. 14-15
When Refresh is Possible......................................................................................................... 14-15
Recommended Initialization Parameters for Parallelism ................................................... 14-15
Monitoring a Refresh ............................................................................................................... 14-15
Checking the Status of a Materialized View......................................................................... 14-16
Tips for Refreshing Materialized Views with Aggregates ................................................. 14-16
Tips for Refreshing Materialized Views Without Aggregates........................................... 14-19
Tips for Refreshing Nested Materialized Views .................................................................. 14-20
Tips After Refreshing Materialized Views............................................................................ 14-21
Using Materialized Views With Partitioned Tables................................................................. 14-22
Fast Refresh with Partition Change Tracking ...................................................................... 14-22
Fast Refresh with CONSIDER FRESH................................................................................... 14-26
16 Summary Advisor
Overview of the Summary Advisor in the DBMS_OLAP Package ........................................ 16-2
Summary Advisor Wizard ........................................................................................................ 16-6
Using the Summary Advisor .......................................................................................................... 16-6
Identifier Numbers ..................................................................................................................... 16-7
Workload Management ............................................................................................................. 16-8
Loading a User-Defined Workload .......................................................................................... 16-9
Loading a Trace Workload ...................................................................................................... 16-11
Loading a SQL Cache Workload ............................................................................................ 16-15
Validating a Workload............................................................................................................. 16-17
Removing a Workload ............................................................................................................. 16-18
Using Filters with the Summary Advisor ............................................................................. 16-18
Removing a Filter...................................................................................................................... 16-22
Recommending Materialized Views...................................................................................... 16-23
SQL Script Generation.............................................................................................................. 16-27
Summary Data Report ............................................................................................................. 16-29
When Recommendations are no Longer Required.............................................................. 16-31
Stopping the Recommendation Process ................................................................................ 16-32
Sample Sessions ........................................................................................................................ 16-32
Estimating Materialized View Size ............................................................................................. 16-37
ESTIMATE_MVIEW_SIZE Parameters ................................................................................. 16-37
Is a Materialized View Being Used?............................................................................................ 16-38
DBMS_OLAP.EVALUATE_MVIEW_STRATEGY Procedure ........................................... 16-39
18 SQL for Aggregation in Data Warehouses
Overview of SQL for Aggregation in Data Warehouses ........................................................... 18-2
Analyzing Across Multiple Dimensions ................................................................................. 18-3
Optimized Performance ............................................................................................................ 18-4
An Aggregate Scenario .............................................................................................................. 18-5
Interpreting NULLs in Examples ............................................................................................. 18-6
ROLLUP Extension to GROUP BY................................................................................................ 18-7
When to Use ROLLUP ............................................................................................................... 18-7
ROLLUP Syntax.......................................................................................................................... 18-7
Partial Rollup .............................................................................................................................. 18-8
CUBE Extension to GROUP BY ................................................................................................... 18-10
When to Use CUBE .................................................................................................................. 18-10
CUBE Syntax ............................................................................................................................. 18-10
Partial CUBE.............................................................................................................................. 18-12
Calculating Subtotals without CUBE .................................................................................... 18-13
GROUPING Functions .................................................................................................................. 18-13
GROUPING Function .............................................................................................................. 18-13
When to Use GROUPING ....................................................................................................... 18-16
GROUPING_ID Function........................................................................................................ 18-17
GROUP_ID Function................................................................................................................ 18-18
GROUPING SETS Expression ..................................................................................................... 18-19
Composite Columns....................................................................................................................... 18-21
Concatenated Groupings............................................................................................................... 18-24
Concatenated Groupings and Hierarchical Data Cubes..................................................... 18-26
Considerations when Using Aggregation.................................................................................. 18-28
Hierarchy Handling in ROLLUP and CUBE........................................................................ 18-28
Column Capacity in ROLLUP and CUBE............................................................................. 18-29
HAVING Clause Used with GROUP BY Extensions .......................................................... 18-29
ORDER BY Clause Used with GROUP BY Extensions ....................................................... 18-30
Using Other Aggregate Functions with ROLLUP and CUBE ........................................... 18-30
Computation Using the WITH Clause ....................................................................................... 18-30
RANK and DENSE_RANK ....................................................................................................... 19-5
Top N Ranking .......................................................................................................................... 19-12
Bottom N Ranking .................................................................................................................... 19-13
CUME_DIST .............................................................................................................................. 19-13
PERCENT_RANK..................................................................................................................... 19-14
NTILE ......................................................................................................................................... 19-15
ROW_NUMBER........................................................................................................................ 19-16
Windowing Aggregate Functions ................................................................................................ 19-17
Treatment of NULLs as Input to Window Functions.......................................................... 19-18
Windowing Functions with Logical Offset ........................................................................... 19-18
Cumulative Aggregate Function ............................................................................................ 19-18
Moving Aggregate Function ................................................................................................... 19-19
Centered Aggregate Function................................................................................................. 19-20
Windowing Aggregate Functions with Logical Offsets...................................................... 19-21
Variable Sized Window ........................................................................................................... 19-22
Windowing Aggregate Functions with Physical Offsets.................................................... 19-23
FIRST_VALUE and LAST_VALUE........................................................................................ 19-24
Reporting Aggregate Functions ................................................................................................... 19-24
Reporting Aggregate Example................................................................................................ 19-26
RATIO_TO_REPORT ............................................................................................................... 19-27
LAG/LEAD Functions .................................................................................................................... 19-28
LAG/LEAD Syntax .................................................................................................................. 19-28
FIRST/LAST Functions.................................................................................................................. 19-29
FIRST/LAST Syntax ................................................................................................................. 19-29
FIRST/LAST As Regular Aggregates .................................................................................... 19-30
FIRST/LAST As Reporting Aggregates ................................................................................ 19-31
Linear Regression Functions......................................................................................................... 19-32
REGR_COUNT.......................................................................................................................... 19-32
REGR_AVGY and REGR_AVGX ........................................................................................... 19-33
REGR_SLOPE and REGR_INTERCEPT ................................................................................ 19-33
REGR_R2.................................................................................................................................... 19-33
REGR_SXX, REGR_SYY, and REGR_SXY............................................................................. 19-33
Linear Regression Statistics Examples................................................................................... 19-33
Sample Linear Regression Calculation .................................................................................. 19-34
Inverse Percentile Functions......................................................................................................... 19-35
Normal Aggregate Syntax....................................................................................................... 19-35
Inverse Percentile Restrictions................................................................................................ 19-38
Hypothetical Rank and Distribution Functions ....................................................................... 19-39
Hypothetical Rank and Distribution Syntax ........................................................................ 19-39
WIDTH_BUCKET Function ......................................................................................................... 19-40
WIDTH_BUCKET Syntax........................................................................................................ 19-41
User-Defined Aggregate Functions ............................................................................................. 19-43
CASE Expressions........................................................................................................................... 19-44
Creating Histograms with User-defined Buckets ................................................................ 19-45
Setting the Degree of Parallelism ........................................................................................... 21-32
How Oracle Determines the Degree of Parallelism for Operations .................................. 21-34
Balancing the Workload........................................................................................................... 21-37
Parallelization Rules for SQL Statements.............................................................................. 21-38
Enabling Parallelism for Tables and Queries........................................................................ 21-46
Degree of Parallelism and Adaptive Multiuser: How They Interact ................................ 21-46
Forcing Parallel Execution for a Session................................................................................ 21-47
Controlling Performance with the Degree of Parallelism................................................... 21-48
Tuning General Parameters for Parallel Execution .................................................................. 21-48
Parameters Establishing Resource Limits for Parallel Operations .................................... 21-49
Parameters Affecting Resource Consumption ..................................................................... 21-58
Parameters Related to I/O....................................................................................................... 21-65
Monitoring and Diagnosing Parallel Execution Performance ............................................... 21-67
Is There Regression? ................................................................................................................. 21-68
Is There a Plan Change?........................................................................................................... 21-69
Is There a Parallel Plan? ........................................................................................................... 21-69
Is There a Serial Plan? .............................................................................................................. 21-69
Is There Parallel Execution? .................................................................................................... 21-70
Is The Workload Evenly Distributed? ................................................................................... 21-70
Monitoring Parallel Execution Performance with Dynamic Performance Views........... 21-71
Monitoring Session Statistics .................................................................................................. 21-74
Monitoring System Statistics................................................................................................... 21-76
Monitoring Operating System Statistics................................................................................ 21-77
Affinity and Parallel Operations.................................................................................................. 21-77
Affinity and Parallel Queries .................................................................................................. 21-78
Affinity and Parallel DML....................................................................................................... 21-78
Miscellaneous Parallel Execution Tuning Tips ......................................................................... 21-79
Formula for Memory, Users, and Parallel Execution Server Processes............................ 21-80
Setting Buffer Pool Size for Parallel Operations................................................................... 21-82
Balancing the Formula ............................................................................................................. 21-82
Parallel Execution Space Management Issues ...................................................................... 21-83
Overriding the Default Degree of Parallelism...................................................................... 21-84
Rewriting SQL Statements....................................................................................................... 21-85
Creating and Populating Tables in Parallel .......................................................................... 21-86
Creating Temporary Tablespaces for Parallel Sort and Hash Join .................................... 21-87
Executing Parallel SQL Statements ........................................................................................ 21-88
Using EXPLAIN PLAN to Show Parallel Operations Plans .............................................. 21-89
Additional Considerations for Parallel DML ....................................................................... 21-89
Creating Indexes in Parallel .................................................................................................... 21-93
Parallel DML Tips..................................................................................................................... 21-94
Incremental Data Loading in Parallel.................................................................................... 21-97
Using Hints with Cost-Based Optimization ....................................................................... 21-100
22 Query Rewrite
Overview of Query Rewrite............................................................................................................ 22-2
Cost-Based Rewrite .................................................................................................................... 22-3
When Does Oracle Rewrite a Query? ...................................................................................... 22-4
Enabling Query Rewrite.................................................................................................................. 22-7
Initialization Parameters for Query Rewrite .......................................................................... 22-8
Controlling Query Rewrite ....................................................................................................... 22-8
Privileges for Enabling Query Rewrite ................................................................................... 22-9
Accuracy of Query Rewrite..................................................................................................... 22-10
How Oracle Rewrites Queries...................................................................................................... 22-11
Text Match Rewrite Methods.................................................................................................. 22-12
General Query Rewrite Methods ........................................................................................... 22-13
When are Constraints and Dimensions Needed? ................................................................ 22-14
Special Cases for Query Rewrite ................................................................................................. 22-45
Query Rewrite Using Partially Stale Materialized Views .................................................. 22-45
Query Rewrite Using Complex Materialized Views........................................................... 22-48
Query Rewrite Using Nested Materialized Views .............................................................. 22-48
Query Rewrite with CUBE, ROLLUP, and Grouping Sets................................................. 22-50
Did Query Rewrite Occur?............................................................................................................ 22-55
Explain Plan............................................................................................................................... 22-55
DBMS_MVIEW.EXPLAIN_REWRITE Procedure ............................................................... 22-56
Design Considerations for Improving Query Rewrite Capabilities .................................... 22-61
Constraints................................................................................................................................. 22-61
Dimensions ................................................................................................................................ 22-61
Outer Joins ................................................................................................................................. 22-61
Text Match ................................................................................................................................. 22-62
Aggregates ................................................................................................................................. 22-62
Grouping Conditions ............................................................................................................... 22-62
Expression Matching ................................................................................................................ 22-63
Date Folding .............................................................................................................................. 22-63
Statistics...................................................................................................................................... 22-63
Part VI Miscellaneous
A Glossary
Send Us Your Comments
Oracle9i Data Warehousing Guide, Release 9.0.1
Part No. A90237-01
Oracle Corporation welcomes your comments and suggestions on the quality and usefulness of this
document. Your input is an important part of the information used for revision.
■ Did you find any errors?
■ Is the information clearly presented?
■ Do you need more information? If so, where?
■ Are the examples correct? Do you need more examples?
■ What features did you like most?
If you find any errors or have any other suggestions for improvement, please indicate the document
title and part number, and the chapter, section, and page number (if available). You can send
comments to us in the following ways:
■ Electronic mail: infodev_us@oracle.com
■ FAX: (650) 506-7227 Attn: Server Technologies Documentation Manager
■ Postal service:
Oracle Corporation
Server Technologies Documentation
500 Oracle Parkway, Mailstop 4op11
Redwood Shores, CA 94065
USA
If you would like a reply, please give your name, address, telephone number, and (optionally) elec-
tronic mail address.
If you have problems with the software, please contact your local Oracle Support Services.
Preface
Audience
Oracle9i Data Warehousing Guide is intended for database administrators, system
administrators, and database application developers who perform the following
tasks:
■ design, maintain, and use data warehouses.
To use this document, you need to be familiar with relational database concepts,
basic Oracle server concepts, and the operating system environment under which
you are running Oracle.
Organization
This document contains:
Chapter 6, Indexes
This chapter describes how to use indexes in data warehouses.
Chapter 8, Materialized Views
This chapter describes how to use materialized views in data warehouses.
Chapter 9, Dimensions
This chapter describes how to use dimensions in data warehouses.
Chapter 21, Using Parallel Execution
This chapter describes how to tune data warehouses using parallel execution.
Appendix A, "Glossary"
This chapter defines commonly used data warehousing terms.
Related Documentation
For more information, see these Oracle resources:
■ Oracle9i Database Performance Guide and Reference
Many of the examples in this book use the sample schemas of the seed database,
which is installed by default when you install Oracle. Refer to Oracle9i Sample
Schemas for information on how these schemas were created and how you can use
them yourself.
In North America, printed documentation is available for sale in the Oracle Store at
http://oraclestore.oracle.com/
Customers in Europe, the Middle East, and Africa (EMEA) can purchase
documentation from
http://www.oraclebookshop.com/
If you already have a username and password for OTN, then you can go directly to
the documentation section of the OTN Web site at
http://technet.oracle.com/docs/index.htm
Conventions
This section describes the conventions used in the text and code examples of this
documentation set. It describes:
■ Conventions in Text
■ Conventions in Code Examples
Conventions in Text
We use various conventions in text to help you more quickly identify special terms.
The following table describes those conventions and provides examples of their use.
Convention: lowercase monospace (fixed-width font)
Meaning: Lowercase monospace typeface indicates executables, filenames, directory names, and sample user-supplied elements. Such elements include computer and database names, net service names, and connect identifiers, as well as user-supplied database objects and structures, column names, packages and classes, usernames and roles, program units, and parameter values. Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown.
Examples: Enter sqlplus to open SQL*Plus. The password is specified in the orapwd file. Back up the datafiles and control files in the /disk1/oracle/dbs directory. The department_id, department_name, and location_id columns are in the hr.departments table. Set the QUERY_REWRITE_ENABLED initialization parameter to true. Connect as oe user. The JRepUtil class implements these methods.

Convention: lowercase monospace (fixed-width font) italic
Meaning: Lowercase monospace italic font represents placeholders or variables.
Examples: You can specify the parallel_clause. Run Uold_release.SQL, where old_release refers to the release you installed prior to upgrading.
The following table describes typographic conventions used in code examples and
provides examples of their use.
Convention: ... (horizontal ellipsis points)
Meaning: Horizontal ellipsis points indicate either that we have omitted parts of the code that are not directly related to the example, or that you can repeat a portion of the code.
Examples: CREATE TABLE ... AS subquery; SELECT col1, col2, ... , coln FROM employees;

Convention: . . . (vertical ellipsis points)
Meaning: Vertical ellipsis points indicate that we have omitted several lines of code not directly related to the example.

Convention: Other notation
Meaning: You must enter symbols other than brackets, braces, vertical bars, and ellipsis points as shown.
Examples: acctbal NUMBER(11,2); acct CONSTANT NUMBER(4) := 3;

Convention: Italics
Meaning: Italicized text indicates placeholders or variables for which you must supply particular values.
Examples: CONNECT SYSTEM/system_password; DB_NAME = database_name

Convention: UPPERCASE
Meaning: Uppercase typeface indicates elements supplied by the system. We show these terms in uppercase in order to distinguish them from terms you define. Unless terms appear in brackets, enter them in the order and with the spelling shown. However, because these terms are not case sensitive, you can enter them in lowercase.
Examples: SELECT last_name, employee_id FROM employees; SELECT * FROM USER_TABLES; DROP TABLE hr.employees;

Convention: lowercase
Meaning: Lowercase typeface indicates programmatic elements that you supply. For example, lowercase indicates names of tables, columns, or files. Note: Some programmatic elements use a mixture of UPPERCASE and lowercase. Enter these elements as shown.
Examples: SELECT last_name, employee_id FROM employees; sqlplus hr/hr; CREATE USER mjones IDENTIFIED BY ty3MU9;
Documentation Accessibility
Oracle's goal is to make our products, services, and supporting documentation
accessible to the disabled community with good usability. To that end, our
documentation includes features that make information available to users of
assistive technology. This documentation is available in HTML format, and contains
markup to facilitate access by the disabled community. Standards will continue to
evolve over time, and Oracle is actively engaged with other market-leading
technology vendors to address technical obstacles so that our documentation can be
accessible to all of our customers. For additional information, visit the Oracle
Accessibility Program Web site at
http://www.oracle.com/accessibility/
JAWS, a Windows screen reader, may not always correctly read the code examples
in this document. The conventions for writing code require that closing braces
should appear on an otherwise empty line; however, JAWS may not always read a
line of text that consists solely of a bracket or brace.
Part I
Concepts
Subject Oriented
Data warehouses are designed to help you analyze data. For example, to learn more
about your company’s sales data, you can build a warehouse that concentrates on
sales. Using this warehouse, you can answer questions like "Who was our best
customer for this item last year?" This ability to define a data warehouse by subject
matter, sales in this case, makes the data warehouse subject oriented.
Integrated
Integration is closely related to subject orientation. Data warehouses must put data
from disparate sources into a consistent format. They must resolve such problems
as naming conflicts and inconsistencies among units of measure. When they achieve
this, they are said to be integrated.
Nonvolatile
Nonvolatile means that, once entered into the warehouse, data should not change.
This is logical because the purpose of a warehouse is to enable you to analyze what
has occurred.
Time Variant
In order to discover trends in business, analysts need large amounts of data. This is
very much in contrast to online transaction processing (OLTP) systems, where
performance requirements demand that historical data be moved to an archive. A
data warehouse’s focus on change over time is what is meant by the term time
variant.
Figure: OLTP systems rely on complex data structures (3NF databases), whereas data warehouses use multidimensional data structures.
One major difference between the two types of systems is that data warehouses are not
usually in third normal form (3NF), a type of data normalization common in OLTP
environments.
Data warehouses and OLTP systems have very different requirements. Here are
some examples of differences between typical data warehouses and OLTP systems:
■ Workload
Data warehouses are designed to accommodate ad hoc queries. You might not
know the workload of your data warehouse in advance, so a data warehouse
should be optimized to perform well for a wide variety of possible query
operations.
OLTP systems support only predefined operations. Your applications might be
specifically tuned or designed to support only these operations.
■ Data Modifications
A data warehouse is updated on a regular basis by the ETL process (run nightly
or weekly) using bulk data modification techniques. The end users of a data
warehouse do not directly update the data warehouse.
In OLTP systems, end users routinely issue individual data modification
statements to the database. The OLTP database is always up to date, and reflects
the current state of each business transaction.
■ Schema Design
Data warehouses often use denormalized or partially denormalized schemas
(such as a star schema) to optimize query performance.
OLTP systems often use fully normalized schemas to optimize
update/insert/delete performance, and to guarantee data consistency.
■ Typical Operations
A typical data warehouse query scans thousands or millions of rows. For
example, "Find the total sales for all customers last month."
A typical OLTP operation accesses only a handful of records. For example,
"Retrieve the current order for this customer." (SQL sketches of both kinds of
operation follow this list.)
■ Historical Data
Data warehouses usually store many months or years of data. This is to support
historical analysis.
OLTP systems usually store data from only a few weeks or months. The OLTP
system retains historical data only as needed to successfully meet the
requirements of the current transaction.
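As a rough sketch of these two access patterns in SQL (the table and column names
here, such as sales, amount_sold, orders, and order_status, are assumptions for
illustration and are not defined in this chapter):

-- Data warehouse query: scans and aggregates a large number of rows
SELECT SUM(s.amount_sold) AS total_sales
FROM   sales s
WHERE  s.time_id >= TO_DATE('01-MAY-2001', 'DD-MON-YYYY')
AND    s.time_id <  TO_DATE('01-JUN-2001', 'DD-MON-YYYY');

-- OLTP operation: touches only a handful of rows for a single customer
SELECT o.order_id, o.order_status
FROM   orders o
WHERE  o.customer_id = 101
AND    o.order_status = 'PENDING';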
Figure 1–2 shows an operational system feeding the warehouse, which analysis tools then query.
In Figure 1–2, the metadata and raw data of a traditional OLTP system are present, as
is an additional type of data, summary data. Summaries are very valuable in data
warehouses because they precompute the results of long-running operations. For example, a
typical data warehouse query is to retrieve something like August sales. Summaries
in Oracle are called materialized views.
Data sources feed a staging area, which in turn populates the warehouse that users query.
Figure 1–4 Architecture of a Data Warehouse with a Staging Area and Data Marts
This section deals with the issues in logical design in a data warehouse.
It contains the following chapter:
■ Logical Design in Data Warehouses
2
Logical Design in Data Warehouses
This chapter tells you how to design a data warehousing environment and includes
the following topics:
■ Logical versus Physical Design in Data Warehouses
■ Creating a Logical Design
■ Data Warehousing Schemas
■ Data Warehousing Objects
Your logical design should result in (1) a set of entities and attributes corresponding
to fact tables and dimension tables and (2) a model of how operational data from your
source systems maps into subject-oriented information in your target data warehouse schema.
You can create the logical design using a pen and paper, or you can use a design
tool such as Oracle Warehouse Builder (specifically designed to support modeling
the ETL process) or Oracle Designer (a general purpose modeling tool).
Star Schemas
The star schema is the simplest data warehouse schema. It is called a star schema
because the diagram resembles a star, with points radiating from a center. The
center of the star consists of one or more fact tables and the points of the star are the
dimension tables, as shown in Figure 2–1.
Figure 2–1 shows a sales fact table (with the measures amount_sold and quantity_sold) at the center of the star, joined to the products, times, customers, and channels dimension tables.
The most natural way to model a data warehouse is often as a star schema: only one join
establishes the relationship between the fact table and any one of the dimension
tables.
A star schema optimizes performance by keeping queries simple and providing fast
response time. All the information about each level is stored in one row.
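For illustration, here is a sketch of a typical star query against the schema in
Figure 2–1. The column names (such as calendar_quarter_desc and cust_state_province)
are assumptions for the example rather than definitions made in this chapter.

SELECT c.cust_city,
       t.calendar_quarter_desc,
       SUM(s.amount_sold) AS sales_amount
FROM   sales s, times t, customers c
WHERE  s.time_id = t.time_id          -- one join per dimension table
AND    s.cust_id = c.cust_id
AND    c.cust_state_province = 'CA'
GROUP BY c.cust_city, t.calendar_quarter_desc;

Because each dimension joins to the fact table through a single key, the optimizer
can evaluate the dimension filters first and then probe the fact table.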
Other Schemas
Some schemas in data warehousing environments use third normal form rather
than star schemas. Another schema that is sometimes useful is the snowflake
schema, which is a star schema with normalized dimensions in a tree structure.
Fact Tables
A fact table typically has two types of columns: those that contain numeric facts
(often called measurements), and those that are foreign keys to dimension tables. A
fact table contains either detail-level facts or facts that have been aggregated. Fact
tables that contain aggregated facts are often called summary tables. A fact table
usually contains facts with the same level of aggregation. Though most facts are
additive, they can also be semi-additive or non-additive. Additive facts can be
aggregated by simple arithmetical addition. A common example of this is sales.
Non-additive facts cannot be added at all. An example of this is averages.
Semi-additive facts can be aggregated along some of the dimensions and not along
others. An example of this is inventory levels, where you cannot tell what a level
means simply by looking at it.
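As a minimal sketch (the names are illustrative, not prescribed here), a detail-level
fact table consists of foreign key columns that point to the dimension tables plus
numeric measure columns:

CREATE TABLE sales
( prod_id       NUMBER        NOT NULL,  -- foreign key to the products dimension
  cust_id       NUMBER        NOT NULL,  -- foreign key to the customers dimension
  time_id       DATE          NOT NULL,  -- foreign key to the times dimension
  channel_id    CHAR(1)       NOT NULL,  -- foreign key to the channels dimension
  quantity_sold NUMBER(3)     NOT NULL,  -- additive fact
  amount_sold   NUMBER(10,2)  NOT NULL   -- additive fact
);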
Dimension Tables
A dimension is a structure, often composed of one or more hierarchies, that
categorizes data. Dimensional attributes help to describe the dimensional value.
They are normally descriptive, textual values. Several distinct dimensions,
combined with facts, enable you to answer business questions. Commonly used
dimensions are customers, products, and time.
Dimension data is typically collected at the lowest level of detail and then
aggregated into higher level totals that are more useful for analysis. These natural
rollups or aggregations within a dimension table are called hierarchies.
Hierarchies
Hierarchies are logical structures that use ordered levels as a means of organizing
data. A hierarchy can be used to define data aggregation. For example, in a time
dimension, a hierarchy might aggregate data from the month level to the quarter
level to the year level. A hierarchy can also be used to define a navigational drill
path and to establish a family structure.
Within a hierarchy, each level is logically connected to the levels above and below it.
Data values at lower levels aggregate into the data values at higher levels. A
dimension can be composed of more than one hierarchy. For example, in the
product dimension, there might be two hierarchies—one for product categories
and one for product suppliers.
Dimension hierarchies also group levels from general to granular. Query tools use
hierarchies to enable you to drill down into your data to view different levels of
granularity. This is one of the key benefits of a data warehouse.
When designing hierarchies, you must consider the relationships in business
structures, such as a divisional multilevel sales organization.
Hierarchies impose a family structure on dimension values. For a particular level
value, a value at the next higher level is its parent, and values at the next lower level
are its children. These familial relationships enable analysts to access data quickly.
A typical hierarchy, from most to least aggregated: region, subregion, country_name, customer.
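Drilling within such a hierarchy is simply a matter of grouping at a lower level.
A sketch, assuming the geography columns are stored in a customers dimension table
joined to a sales fact table (an assumption for illustration only):

-- Roll up sales to the region level
SELECT c.region, SUM(s.amount_sold) AS sales_amount
FROM   sales s, customers c
WHERE  s.cust_id = c.cust_id
GROUP BY c.region;

-- Drill down one level to subregions within each region
SELECT c.region, c.subregion, SUM(s.amount_sold) AS sales_amount
FROM   sales s, customers c
WHERE  s.cust_id = c.cust_id
GROUP BY c.region, c.subregion;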
Unique Identifiers
Unique identifiers are specified for one distinct record in a dimension table.
Artificial unique identifiers are often used to avoid the potential problem of unique
identifiers changing. Unique identifiers are represented with the # character. For
example, #customer_id.
Relationships
Relationships guarantee business integrity. An example is that if a business sells
something, there is obviously a customer and a product. Designing a relationship
between the sales information in the fact table and the dimension tables products
and customers enforces the business rules in databases.
For example, the sales fact table (with cust_id and prod_id foreign keys) is related to the customers dimension table (#cust_id, cust_last_name, cust_city, cust_state_province, which form a geography hierarchy) and the products dimension table (#prod_id), as well as to the times, channels, and promotions dimension tables.
This chapter describes the physical design of a data warehousing environment, and
includes the following:
■ Moving from Logical to Physical Design
■ Physical Design
See Also:
■ Chapter 5, "Parallelism and Partitioning in Data Warehouses"
for further information regarding partitioning
■ Oracle9i Database Concepts for further conceptual material
regarding all design matters
Physical Design
During the logical design phase, you defined a model for your data warehouse
consisting of entities, attributes, and relationships. The entities are linked together
using relationships. Attributes are used to describe the entities. The unique
identifier (UID) distinguishes between one instance of an entity and another.
Figure 3–1 offers you a graphical way of looking at the different ways of thinking
about logical and physical designs.
Figure 3–1 maps the logical design elements (entities, relationships, attributes, and unique identifiers) to physical structures: tables, columns, integrity constraints (primary key, foreign key, not null), materialized views, and dimensions.
During the physical design process, you translate the expected schemas into actual
database structures. At this time, you have to map the following (a SQL sketch of
this mapping follows the list):
■ Entities to Tables
■ Relationships to Foreign Key Constraints
■ Attributes to Columns
■ Primary Unique Identifiers to Primary Key Constraints
■ Unique Identifiers to Unique Key Constraints
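A hedged sketch of this mapping, using illustrative names: the customer entity
becomes a table, its attributes become columns, its unique identifier becomes a
primary key constraint, and the relationship from sales to customers becomes a
foreign key constraint.

CREATE TABLE customers
( cust_id             NUMBER        CONSTRAINT customers_pk PRIMARY KEY,  -- unique identifier
  cust_last_name      VARCHAR2(40)  NOT NULL,                             -- attribute
  cust_city           VARCHAR2(30),
  cust_state_province VARCHAR2(40)
);

ALTER TABLE sales ADD CONSTRAINT sales_customer_fk
  FOREIGN KEY (cust_id) REFERENCES customers (cust_id);  -- relationship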
Tablespaces
A tablespace consists of one or more datafiles, which are physical structures within
the operating system you are using. A datafile is associated with only one
tablespace. From a design perspective, tablespaces are containers for physical
design structures.
Tablespaces should be separated according to how their contents differ. For example, tables
should be separated from their indexes, and small tables should be separated from large tables.
Tablespaces should also represent logical business units if possible. Because a
tablespace is the coarsest granularity for backup and recovery or the transportable
tablespaces mechanism, the logical business design affects availability and
maintenance operations.
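For example (the file names and sizes are illustrative only), the fact table data
and its indexes could be placed in separate tablespaces on separate devices:

CREATE TABLESPACE sales_data
  DATAFILE '/disk1/oracle/sales_data01.dbf' SIZE 2000M;

CREATE TABLESPACE sales_idx
  DATAFILE '/disk2/oracle/sales_idx01.dbf' SIZE 1000M;

-- Tables and indexes then name the appropriate tablespace when they are created,
-- for example: CREATE TABLE sales ( ... ) TABLESPACE sales_data;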
Views
A view is a tailored presentation of the data contained in one or more tables or
other views. A view takes the output of a query and treats it as a table. Views do not
require any space in the database.
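For example (illustrative names), a view that joins the fact table to customer names
is defined once and stored only as a query definition:

CREATE VIEW cust_sales_v AS
  SELECT c.cust_last_name, s.time_id, s.amount_sold
  FROM   sales s, customers c
  WHERE  s.cust_id = c.cust_id;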
Integrity Constraints
Integrity constraints are used to enforce business rules associated with your
database and to prevent having invalid information in the tables. Integrity
constraints in data warehousing differ from constraints in OLTP environments. In
OLTP environments, they primarily prevent the insertion of invalid data into a
record, which is not a big problem in data warehousing environments because
accuracy has already been guaranteed. In data warehousing environments,
constraints are only used for query rewrite. NOT NULL constraints are particularly
common in data warehouses. Under some specific circumstances, constraints need
space in the database. These constraints are in the form of the underlying unique
index.
Materialized Views
Materialized views are query results that have been stored in advance so
long-running calculations are not necessary when you actually execute your SQL
statements. From a physical design point of view, materialized views resemble
tables or partitioned tables.
Dimensions
A dimension is a schema object that defines hierarchical relationships between
columns or column sets. A hierarchical relationship is a functional dependency
from one level of a hierarchy to the next one. A dimension is a container of logical
relationships and does not require any space in the database. A typical dimension is
city, state (or province), region, and country.
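A minimal sketch of such a dimension, assuming a hypothetical customers table that carries the geography columns:

CREATE DIMENSION customers_dim
  LEVEL city    IS customers.cust_city
  LEVEL state   IS customers.cust_state_province
  LEVEL country IS customers.country_id
  HIERARCHY geog_rollup (
    city    CHILD OF
    state   CHILD OF
    country);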
This chapter explains some of the hardware and I/O issues in a data warehousing
environment and includes the following topics:
■ Overview of Hardware and I/O Considerations in Data Warehouses
■ RAID Configurations
[Figure: Striping tablespaces across disks and controllers. Tablespaces 1 through 5 are each striped across four disks, and the disks are spread over two controllers.]
See Also: Oracle9i Database Concepts for further details about disk
striping
You should stripe tablespaces for tables, indexes, rollback segments, and temporary
tablespaces. You must also spread the devices over controllers, I/O channels, and
internal buses. To make striping effective, you must make sure that enough
controllers and other I/O components are available to support the bandwidth of
parallel data movement into and out of the striped tablespaces.
You can use RAID systems or you can perform striping manually through careful
data file allocation to tablespaces.
The striping of data across physical drives has several consequences besides
balancing I/O. One additional advantage is that logical files can be created that are
larger than the maximum size usually supported by an operating system. There are
disadvantages however. Striping means that it is no longer possible to locate a
single datafile on a specific physical drive. This can cause the loss of some
application tuning capabilities. Also, it can cause database recovery to be more
time-consuming. If a single physical disk in a RAID array needs recovery, all the
disks that are part of that logical RAID device must be involved in the recovery.
Automatic Striping
Automatic striping is usually flexible and easy to manage. It supports many
scenarios such as multiple users running sequentially or as single users running in
parallel. Two main advantages make automatic striping preferable to manual
striping, unless the system is very small or availability is the main concern:
■ For parallel scan operations (such as full table scan or fast full scan), operating
system striping increases the number of disk seeks. Nevertheless, this is largely
offset by the large I/O size (DB_BLOCK_SIZE * MULTIBLOCK_READ_COUNT),
which should enable this operation to reach the maximum I/O throughput for
your platform. This maximum is in general limited by the number of controllers
or I/O buses of the platform, not by the number of disks (unless you have a
small configuration and/or are using large disks).
■ For index probes (for example, within a nested loop join or parallel index range
scan), operating system striping enables you to avoid hot spots by evenly
distributing I/O across the disks.
Oracle Corporation recommends using a large stripe size of at least 64 KB. Stripe
size must be at least as large as the I/O size. If stripe size is larger than I/O size by a
factor of two or four, then trade-offs may arise. The large stripe size can be
advantageous because it lets the system perform more sequential operations on
each disk; it decreases the number of seeks on disk. Another advantage of large
stripe sizes is that more users can work on the system without affecting each other.
The disadvantage is that large stripes reduce the I/O parallelism, so fewer disks are
simultaneously active. If you encounter problems, increase the I/O size of scan
operations (for example, from 64 KB to 128 KB), instead of changing the stripe size.
The maximum I/O size is platform-specific (in a range, for example, of 64 KB to 1
MB).
With automatic striping, from a performance standpoint, the best layout is to stripe
data, indexes, and temporary tablespaces across all the disks of your platform. This
layout is also appropriate when you have little information about system usage. To
increase availability, it may be more practical to stripe over fewer disks to prevent a
single disk failure from affecting the entire data warehouse. However, for better
performance, it is crucial to stripe all objects over multiple disks. In this way,
maximum I/O performance (both in terms of throughput and in number of I/Os
per second) can be reached when one object is accessed by a parallel operation. If
multiple objects are accessed at the same time (as in a multiuser configuration),
striping automatically limits the contention.
Manual Striping
You can use manual striping on all platforms. To do this, add multiple files to each
tablespace, with each file on a separate disk. If you use manual striping correctly,
your system’s performance improves significantly. However, you should be aware
of several drawbacks that can adversely affect performance if you do not stripe
correctly.
When using manual striping, the degree of parallelism (DOP) is more a function of
the number of disks than of the number of CPUs. First, it is necessary to have one
server process per datafile to drive all the disks and limit the risk of experiencing
I/O bottlenecks. Second, manual striping is very sensitive to datafile size skew,
which can affect the scalability of parallel scan operations. Third, manual striping
requires more planning and set-up effort than automatic striping.
With local striping, each partition is striped over its own, non-overlapping set of disks. An advantage of local striping is that if one disk fails, it does not affect other partitions. Moreover, you still have some striping even if you have data in only one partition.
A disadvantage of local striping is that you need many disks to implement it—each
partition requires multiple disks of its own. Another major disadvantage is that
when partitions are reduced to a few or even a single partition, the system retains
limited I/O bandwidth. As a result, local striping is not optimal for parallel
operations. For this reason, consider local striping only if your main concern is
availability, rather than parallel execution.
[Figure: Local striping. Partition 1 is striped (stripes 1 and 2) over its own disks, and Partition 2 is striped (stripes 3 and 4) over a separate set of disks.]
Global striping, illustrated in Figure 4–3, entails overlapping disks and partitions.
[Figure 4–3: Global striping. Stripes 1 and 2 each span the disks that hold both Partition 1 and Partition 2.]
Global striping is advantageous if you have partition pruning and need to access
data in only one partition. Spreading the data in that partition across many disks
improves performance for parallel execution operations. A disadvantage of global
striping is that if one disk fails, all partitions are affected if the disks are not
mirrored.
Analyzing Striping
Two considerations arise when analyzing striping issues for your applications. First,
consider the cardinality of the relationships among the objects in a storage system.
Second, consider what you can optimize in your striping effort: full table scans,
general tablespace availability, partition scans, or some combinations of these goals.
Cardinality and optimization are discussed in the following section.
[Figure 4–4: Cardinality of the relationships among storage objects: one table has p partitions, s partitions share one tablespace, one tablespace has f files, and m files map to n devices.]
Figure 4–4 shows the cardinality of the relationships among objects in a typical
Oracle storage system. For every table there may be:
■ p partitions, shown in Figure 4–4 as a one-to-many relationship
■ s partitions for every tablespace, shown in Figure 4–4 as a many-to-one
relationship
■ f files for every tablespace, shown in Figure 4–4 as a one-to-many relationship
■ m files to n devices, shown in Figure 4–4 as a many-to-many relationship
Goals. You may wish to stripe an object across devices to achieve one of three goals:
■ Goal 1: To optimize full table scans, place a table on many devices.
■ Goal 2: To optimize availability, restrict the tablespace to a few devices.
■ Goal 3: To optimize partition scans, achieve intra-partition parallelism by
placing each partition on many devices.
To attain both Goals 1 and 2 (having the table reside on many devices, with the
highest possible availability), maximize the number of partitions p and minimize
the number of partitions per tablespace s.
To maximize Goal 1 but with minimal intra-partition parallelism, place each partition in its own tablespace. Do not use striped files, and use one file per tablespace.
To minimize Goal 2 and thereby minimize availability, set f and n equal to 1. When
you minimize availability, you maximize intra-partition parallelism. Goal 3 conflicts
with Goal 2 because you cannot simultaneously maximize the formula for Goal 3
and minimize the formula for Goal 2. You must compromise to achieve some of the
benefits of both goals.
Goal 1: To optimize full table scans. Having a table reside on many devices
ensures scalable full table scans.
To estimate the number of devices a table is spread over, combine the cardinalities shown in Figure 4–4: the partitions for each table, the tablespaces holding those partitions, the files in each tablespace, and the devices holding each file.
You can spread a table over up to t devices by creating t partitions, placing every partition in its own tablespace, giving every tablespace one file, and leaving those files unstriped:
t x 1 x 1 x 1, up to t devices
If the table is not partitioned, but is in one tablespace in one file, stripe that file over n devices:
1 x 1 x n devices
At the maximum, with t partitions, every partition in its own tablespace, f files in each tablespace, and each tablespace on a striped device:
t x f x n devices
Partitions can reside in a tablespace that can have many files. You can have either
■ Many files per tablespace or
■ A striped file
RAID Configurations
RAID systems, also called disk arrays, can be hardware- or software-based systems.
The difference between the two is how CPU processing of I/O requests is handled.
In software-based RAID systems, the operating system or an application level
handles the I/O request, while in hardware-based RAID systems, disk controllers
handle I/O requests. RAID usage is transparent to Oracle. All the features specific
to a given RAID configuration are handled by the operating system and Oracle does
not need to worry about them.
Primary logical database structures have different access patterns during read and
write operations. Therefore, different RAID implementations will be better suited
for these structures. The purpose of this chapter is to discuss some of the basic
decisions you must make when designing the physical layout of your data
warehouse implementation. It is not meant as a replacement for operating system
and storage documentation or a consultant’s analysis of your I/O requirements.
There are advantages and disadvantages to using RAID, and those depend on the
RAID level under consideration and the specific system in question. The most
common configurations in data warehouses are:
■ RAID 0 (Striping)
■ RAID 1 (Mirroring)
■ RAID 0+1 (Striping and Mirroring)
■ RAID 5
RAID 0 (Striping)
RAID 0 is a non-redundant disk array, so there will be data loss with any disk
failure. If something on the disk becomes corrupted, you cannot restore or
recalculate that data. RAID 0 provides the best write throughput performance
because it never updates redundant information. Read throughput is also quite
good, but you can improve it by combining RAID 0 with RAID 1.
Oracle does not recommend using RAID 0 systems without RAID 1 because the loss
of one disk in the array will affect the complete system and make it unavailable.
RAID 0 systems are used mainly in environments where performance and capacity
are the primary concerns rather than availability.
RAID 1 (Mirroring)
RAID 1 provides full data redundancy by complete mirroring of all files. If a disk
failure occurs, the mirrored copy is used to transparently service the request. RAID
1 mirroring requires twice as much disk space as there is data. In general, RAID 1 is
most useful for systems where complete redundancy of data is required and disk
space is not an issue. For large datafiles or systems with less disk space, RAID 1
may not be feasible, because it requires twice as much disk space as there is data.
Writes under RAID 1 are no faster and no slower than usual. Reading data can be
faster than on a single disk because the system can choose to read the data from the
disk that can respond faster.
substitute for, backups and log archives. Mirroring can help your system recover
from disk failures more quickly than using a backup, but mirroring is not as robust.
Mirroring does not protect against software faults and other problems against
which an independent backup would protect your system.
You can effectively use mirroring if you are able to reload read-only data from the
original source tapes. If you have a disk failure, restoring data from backups can
involve lengthy downtime, whereas restoring from a mirrored disk enables your
system to get back online quickly or even stay online while the crashed disk is
replaced and resynchronized.
RAID 5
RAID 5 systems provide redundancy for the original data while storing parity
information as well. The parity information is striped over all disks in the system to
avoid a single disk as a bottleneck during write operations. The I/O throughput of
RAID 5 systems depends upon the implementation and the striping size. For a
typical RAID 5 system, the throughput is normally lower than RAID 0 + 1
configurations. In particular, the performance for high concurrent write operations
such as parallel load can be poor.
Many vendors use memory (as battery-backed cache) in front of the disks to
increase throughput and to become comparable to RAID 0+1. Contact your disk
array vendor for specific details.
Data warehouses often contain large tables and require techniques both for
managing these large tables and for providing good query performance across these
large tables. This chapter discusses two key methodologies for addressing these
needs: parallelism and partitioning.
These topics are discussed:
■ Overview of Parallel Execution
■ Granules of Parallelism
■ Partitioning Design Considerations
Granules of Parallelism
Different parallel operations use different types of parallelism. The optimal physical
database layout depends on the parallel operations that are most prevalent in your
application, and even on whether partitioning is necessary at all.
The basic unit of work in parallelism is called a granule. Oracle divides the
operation being parallelized (for example, a table scan, table update, or index
creation) into granules. Parallel execution processes execute the operation one
granule at a time. The number of granules and their size correlates with the degree
of parallelism (DOP). It also affects how well the work is balanced across query
server processes. There is no way you can enforce a specific granule strategy as
Oracle makes this decision internally.
Administrative considerations (such as recovery or deleting portions of data) might influence partition layout more than performance considerations.
Partition Granules
When Oracle uses partition granules, a query server process works on an entire
partition or subpartition of a table or index. Because partition granules are statically
determined by the structure of the table or index when a table or index is created,
partition granules do not give you the flexibility in parallelizing an operation that
block granules do. The maximum allowable DOP is the number of partitions. This
might limit the utilization of the system and the load balancing across parallel
execution servers.
When Oracle uses partition granules for parallel access to a table or index, you
should use a relatively large number of partitions (ideally, three times the DOP), so
that Oracle can effectively balance work across the query server processes.
Partition granules are the basic unit of parallel index range scans and of parallel
operations that modify multiple partitions of a partitioned table or index. These
operations include parallel update, parallel delete, parallel creation of partitioned
indexes, and parallel creation of partitioned tables.
Types of Partitioning
This section describes the partitioning features that significantly enhance data
access and improve overall application performance. This is especially true for
applications that access tables and indexes with millions of rows and many
gigabytes of data.
Partitioning Methods
Oracle offers four partitioning methods:
■ Range Partitioning
■ Hash Partitioning
■ List Partitioning
■ Composite Partitioning
Each partitioning method has different advantages and design considerations.
Thus, each method is more appropriate for a particular situation.
where:
column_list is an ordered list of the columns that make up the partition key, and the values in each clause are an ordered list of literal values for those columns. Each value must be either a literal or a TO_DATE or RPAD function with constant arguments. Only the VALUES LESS THAN clause is allowed; it specifies a non-inclusive upper bound for the partition. All partitions, except the first, have an implicit lower bound specified by the VALUES LESS THAN clause of the previous partition. Any values of the partition key equal to or higher than this literal are added to the next higher partition. The highest partition is the one whose bound is the MAXVALUE literal. The keyword MAXVALUE represents a virtual infinite value that sorts higher than any other value for the data type, including the null value.
-- Header reconstructed for illustration; a DATE partition key (sales_date) is assumed.
CREATE TABLE sales_range
 (salesperson_id NUMBER, sales_amount NUMBER, sales_date DATE)
PARTITION BY RANGE(sales_date)
(
PARTITION sales_jan2000 VALUES LESS THAN(TO_DATE('02/01/2000','MM/DD/YYYY')),
PARTITION sales_feb2000 VALUES LESS THAN(TO_DATE('03/01/2000','MM/DD/YYYY')),
PARTITION sales_mar2000 VALUES LESS THAN(TO_DATE('04/01/2000','MM/DD/YYYY')),
PARTITION sales_apr2000 VALUES LESS THAN(TO_DATE('05/01/2000','MM/DD/YYYY'))
);
List Partitioning List partitioning enables you to explicitly control how rows map to
partitions. You do this by specifying a list of discrete values for the partitioning
column in the description for each partition. This is different from range
partitioning, where a range of values is associated with a partition and with hash
partitioning, where you have no control of the row-to-partition mapping. The
advantage of list partitioning is that you can group and organize unordered and
unrelated sets of data in a natural way.
Index Partitioning
You can choose whether or not to inherit the partitioning strategy of the underlying
tables. You can create both local and global indexes on a table partitioned by range,
hash, or composite methods. Local indexes inherit the partitioning attributes of
their related tables. For example, if you create a local index on a composite table,
Oracle automatically partitions the local index using the composite method.
Oracle supports only range partitioning for global partitioned indexes. You cannot
partition global indexes using the hash or composite partitioning methods.
This SQL example creates the table sales for a period of two years, 1999 and 2000,
and partitions it by range according to the column s_saledate to separate the data into eight quarters, each corresponding to a partition. The partition boundaries are not restricted to calendar units such as quarters; you can choose any range boundaries that suit the data.
CREATE TABLE sales
(s_productid NUMBER,
s_saledate DATE,
s_custid NUMBER,
s_totalprice NUMBER)
PARTITION BY RANGE(s_saledate)
(PARTITION sal99q1 VALUES LESS THAN (TO_DATE('01-APR-1999', 'DD-MON-YYYY')),
PARTITION sal99q2 VALUES LESS THAN (TO_DATE('01-JUL-1999', 'DD-MON-YYYY')),
PARTITION sal99q3 VALUES LESS THAN (TO_DATE('01-OCT-1999', 'DD-MON-YYYY')),
PARTITION sal99q4 VALUES LESS THAN (TO_DATE('01-JAN-2000', 'DD-MON-YYYY')),
PARTITION sal00q1 VALUES LESS THAN (TO_DATE('01-APR-2000', 'DD-MON-YYYY')),
PARTITION sal00q2 VALUES LESS THAN (TO_DATE('01-JUL-2000', 'DD-MON-YYYY')),
PARTITION sal00q3 VALUES LESS THAN (TO_DATE('01-OCT-2000', 'DD-MON-YYYY')),
PARTITION sal00q4 VALUES LESS THAN (TO_DATE('01-JAN-2001', 'DD-MON-YYYY')));
When to Use Hash Partitioning The way Oracle distributes data in hash partitions does
not correspond to a business or a logical view of the data, as it does in range
partitioning. Consequently, hash partitioning is not an effective way to manage
historical data. However, hash partitions share some performance characteristics
with range partitions. For example, partition pruning is limited to equality
predicates. You can also use partition-wise joins, parallel index access, and parallel
DML.
If you add or merge a hashed partition, Oracle automatically rearranges the rows to
reflect the change in the number of partitions and subpartitions. The hash function
that Oracle uses is especially designed to limit the cost of this reorganization.
Instead of reshuffling all the rows in the table, Oracle uses an "add partition" logic
that splits one and only one of the existing hashed partitions. Conversely, Oracle
coalesces a partition by merging two existing hashed partitions.
Although the hash function’s use of "add partition" logic dramatically improves the
manageability of hash partitioned tables, it means that the hash function can cause a
skew if the number of partitions of a hash partitioned table, or the number of
subpartitions in each partition of a composite table, is not a power of two. In the
worst case, the largest partition can be twice the size of the smallest. So for optimal
performance, create a number of partitions and subpartitions per partition that is a
power of two. For example, 2, 4, 8, 16, 32, 64, 128, and so on.
This example creates four hashed partitions for the table sales using the column
s_productid as the partition key:
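The statement itself is not reproduced in this extract; a sketch consistent with the table definition shown earlier might be:

CREATE TABLE sales
 (s_productid  NUMBER,
  s_saledate   DATE,
  s_custid     NUMBER,
  s_totalprice NUMBER)
PARTITION BY HASH(s_productid)
PARTITIONS 4;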
Specify partition names only if you want some of the partitions to have different
properties from those of the table. Otherwise, Oracle automatically generates
internal names for the partitions. Also, you can use the STORE IN clause to assign
hash partitions to tablespaces in a round-robin manner.
When to Use List Partitioning You should use list partitioning when you want to
specifically map rows to partitions based on discrete values.
Unlike range and hash partitioning, multi-column partition keys are not supported
for list partitioning. If a table is partitioned by list, the partitioning key can only
consist of a single column of the table.
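A minimal sketch of a list-partitioned table follows; the column names and region values are illustrative only:

CREATE TABLE sales_by_region
 (salesperson_id NUMBER,
  sales_amount   NUMBER,
  sales_state    VARCHAR2(20))
PARTITION BY LIST(sales_state)
 (PARTITION region_east    VALUES ('New York', 'Virginia', 'Florida'),
  PARTITION region_west    VALUES ('California', 'Oregon', 'Hawaii'),
  PARTITION region_central VALUES ('Texas', 'Illinois'));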
When to Use Composite Partitioning Composite partitioning offers the benefits of both
range and hash partitioning. With composite partitioning, Oracle first partitions by
range. Then within each range Oracle creates subpartitions and distributes data
within them using the same hashing algorithm it uses for hash partitioned tables.
Data placed in composite partitions is logically ordered only by the boundaries that
define the range level partitions. The partitioning of data within each partition has
no logical organization beyond the identity of the partition to which the
subpartitions belong.
Consequently, tables and local indexes partitioned using the composite method:
■ Support historical data at the partition level
■ Support the use of subpartitions as units of parallelism for parallel operations
such as PDML or space management and backup and recovery
■ Are eligible for partition pruning and partition-wise joins on the range and hash
dimensions
Using Composite Partitioning Use the composite partitioning method for tables and
local indexes if:
■ Partitions must have a logical meaning to efficiently support historical data
■ The contents of a partition can be spread across multiple tablespaces, devices,
or nodes (of an MPP system)
■ You require both partition pruning and partition-wise joins even when the
pruning and join predicates use different columns of the partitioned table
■ You require a degree of parallelism that is greater than the number of partitions
for backup, recovery, and parallel operations
Most large tables in a data warehouse should use range partitioning. Composite
partitioning should be used for very large tables or for data warehouses with a
well-defined need for the conditions listed above. When using the composite
method, Oracle stores each subpartition on a different segment. Thus, the
subpartitions may have properties that differ from the properties of the table or
from the partition to which the subpartitions belong.
The following example partitions the table sales by range on the column s_saledate to create four partitions that order data by time. Then, within each range partition, the data is further subdivided into 16 subpartitions by hash on the column s_productid.
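The original statement is not reproduced in this extract; a sketch consistent with that description might be:

CREATE TABLE sales
 (s_productid  NUMBER,
  s_saledate   DATE,
  s_custid     NUMBER,
  s_totalprice NUMBER)
PARTITION BY RANGE (s_saledate)
SUBPARTITION BY HASH (s_productid) SUBPARTITIONS 16
 (PARTITION sal99q1 VALUES LESS THAN (TO_DATE('01-APR-1999', 'DD-MON-YYYY')),
  PARTITION sal99q2 VALUES LESS THAN (TO_DATE('01-JUL-1999', 'DD-MON-YYYY')),
  PARTITION sal99q3 VALUES LESS THAN (TO_DATE('01-OCT-1999', 'DD-MON-YYYY')),
  PARTITION sal99q4 VALUES LESS THAN (TO_DATE('01-JAN-2000', 'DD-MON-YYYY')));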
Each hashed subpartition contains sales data for a single quarter ordered by
product code. The total number of subpartitions is 4x16 or 64.
Partition Pruning
Partition pruning is an essential performance feature for data warehouses. In
partition pruning, the cost-based optimizer analyzes FROM and WHERE clauses in
SQL statements to eliminate unneeded partitions when building the partition access
list. This enables Oracle to perform operations only on those partitions that are
relevant to the SQL statement. Oracle prunes partitions when you use range,
equality, and IN-list predicates on the range partitioning columns, and when you
use equality and IN-list predicates on the hash partitioning columns.
Partition pruning dramatically reduces the amount of data retrieved from disk and
shortens the use of processing time, improving query performance and resource
utilization. If you partition the index and table on different columns (with a global,
partitioned index), partition pruning also eliminates index partitions even when the
partitions of the underlying table cannot be eliminated.
On composite partitioned objects, Oracle can prune at both the range partition level
and at the hash subpartition level using the relevant predicates. Refer to the table
sales from the previous example, partitioned by range on the column s_saledate and subpartitioned by hash on the column s_productid, and consider the following example:
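The query itself is not reproduced in this extract; given the pruning results described next, it presumably resembles the following sketch:

SELECT * FROM sales
WHERE s_saledate BETWEEN TO_DATE('15-MAY-1999', 'DD-MON-YYYY')
                     AND TO_DATE('15-AUG-1999', 'DD-MON-YYYY')
  AND s_productid = 1200;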
Oracle uses the predicate on the partitioning columns to perform partition pruning
as follows:
■ When using range partitioning, Oracle accesses only partitions sal99q2 and
sal99q3.
■ When using hash subpartitioning, Oracle accesses only the one subpartition in
each partition that stores the rows with s_productid=1200. The mapping
between the subpartition and the predicate is calculated based on Oracle’s
internal hash distribution function.
Although "Partition Pruning with DATE Example" on page 5-14 uses the
DD-MON-RR format, which is not the same as the base partition in "Hash
Partitioning Example" on page 5-11, the optimizer can still prune properly.
If you execute an EXPLAIN PLAN statement on the query, the PARTITION_START
and PARTITION_STOP columns of the output table do not specify which partitions
Oracle is accessing. Instead, you see the keyword KEY for both columns. The
keyword KEY for both columns means that partition pruning occurs at run-time. Run-time pruning can also affect the execution plan, because information about the pruned partitions is not available to the optimizer, in contrast to the same statement written with a TO_DATE function that matches the format used in the partitioned table definition.
Partition-wise Joins
Partition-wise joins reduce query response time by minimizing the amount of data
exchanged among parallel execution servers when joins execute in parallel. This
significantly reduces response time and improves the use of both CPU and memory
resources. In Oracle Real Application Cluster environments, partition-wise joins
also avoid or at least limit the data traffic over the interconnect, which is the key to
achieving good scalability for massive join operations.
Partition-wise joins can be full or partial. Oracle decides which type of join to use.
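The join under discussion in the next paragraph is not reproduced in this extract; a sketch of such a query, using the sales and customer column names that appear later in this section, might be:

SELECT c.c_customerid, SUM(s.s_totalprice) AS quarter_total
FROM sales s, customer c
WHERE s.s_customerid = c.c_customerid
  AND s.s_salesdate BETWEEN TO_DATE('01-JUL-1999', 'DD-MON-YYYY')
                        AND TO_DATE('30-SEP-1999', 'DD-MON-YYYY')
GROUP BY c.c_customerid;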
This large join is typical in data warehousing environments. The entire customer
table is joined with one quarter of the sales data. In large data warehouse
applications, this might mean joining millions of rows. The join method to use in
that case is obviously a hash join. You can reduce the processing time for this hash
join even more if both tables are equipartitioned on the customerid column. This
enables a full partition-wise join.
When you execute a full partition-wise join in parallel, the granule of parallelism, as
described under "Granules of Parallelism" on page 5-3, is a partition. As a result, the
Hash - Hash This is the simplest method: the customer and sales tables are both
partitioned by hash into 16 partitions, on the s_customerid and c_customerid
columns. This partitioning method enables full partition-wise join when the tables
are joined on s_customerid and c_customerid, both representing the same
customer identification number. Because you are using the same hash function to
distribute the same information (customer ID) into the same number of hash
partitions, you can join the equivalent partitions. They are storing the same values.
In serial, this join is performed between pairs of matching hash partitions, one at a
time. When one partition pair has been joined, the join of another partition pair
begins. The join completes when the 16 partition pairs have been processed.
[Figure 5–1: Full partition-wise join. Hash partitions H1 through H16 of the sales table are joined with the matching hash partitions H1 through H16 of the customer table, with one parallel execution server assigned to each partition pair.]
In Figure 5–1, assume that the degree of parallelism and the number of partitions
are the same, in other words, 16 for both. Defining more partitions than the degree
of parallelism may improve load balancing and limit possible skew in the
execution. If you have more partitions than query servers, when one query server
completes the join of one pair of partitions, it requests that the query coordinator
give it another pair to join. This process repeats until all pairs have been processed.
This method enables the load to be balanced dynamically when the number of
partition pairs is greater than the degree of parallelism, for example, 64 partitions
with a degree of parallelism of 16.
Composite - Hash This method is a variation of the hash-hash method. The sales
table is a typical example of a table storing historical data. For all the reasons
mentioned under the heading "When to Use Range Partitioning" on page 5-9, range
is the logical initial partitioning method.
For example, assume you want to partition the sales table into eight partitions by
range on the column s_salesdate. Also assume you have two years and that each
partition represents a quarter. Instead of using range partitioning, you can use
composite partitioning to enable a full partition-wise join while preserving the
partitioning on s_salesdate. Partition the sales table by range on s_
salesdate and then subpartition each partition by hash on s_customerid using
16 subpartitions per partition, for a total of 128 subpartitions. The customer table
can still use hash partitioning with 16 partitions.
When you use the method just described, a full partition-wise join works similarly
to the one created by the hash/hash method. The join is still divided into 16 smaller
joins between hash partition pairs from both tables. The difference is that now each
hash partition in the sales table is composed of a set of 8 subpartitions, one from
each range partition.
Figure 5–2 illustrates how the hash partitions are formed in the sales table. Each
cell represents a subpartition. Each row corresponds to one range partition, for a
total of 8 range partitions. Each range partition has 16 subpartitions. Each column
corresponds to one hash partition for a total of 16 hash partitions; each hash
partition has 8 subpartitions. Note that hash partitions can be defined only if all
partitions have the same number of subpartitions, in this case, 16.
Hash partitions are implicit in a composite table. However, Oracle does not record
them in the data dictionary, and you cannot manipulate them with DDL commands
as you can range partitions.
[Figure 5–2: Hash partitions in the composite sales table. Each row of the grid is one of the eight range partitions on salesdate (1999 - Q1 through 2000 - Q4), each column is one of the 16 hash partitions on customerid, and each cell is a subpartition; the highlighted column of eight subpartitions forms hash partition #9.]
■ The rules for data placement on MPP systems apply here. The only difference is
that a hash partition is now a collection of subpartitions. You must ensure that
all these subpartitions are placed on the same node as the matching hash
partition from the other table. For example, in Figure 5–2, store hash partition 9
of the sales table shown by the eight circled subpartitions, on the same node
as hash partition 9 of the customer table.
Composite - Composite (Hash Dimension) If needed, you can also partition the
customer table by the composite method. For example, you partition it by range
on a postal code column to enable pruning based on postal code. You then
subpartition it by hash on customerid using the same number of partitions (16) to
enable a partition-wise join on the hash dimension.
Range - Range You can also join range partitioned tables in a partition-wise manner,
but this is relatively uncommon. This is more complex to implement because you
must know the distribution of the data before performing the join. Furthermore, if
you do not correctly identify the partition bounds so that you have partitions of
equal size, data skew during the execution may result.
The basic principle for using range-range is the same as for using hash-hash: you
must equipartition both tables. This means that the number of partitions must be
the same and the partition bounds must be identical. For example, assume that you
know in advance that you have 10 million customers, and that the values for
customerid vary from 1 to 10,000,000. In other words, you have 10 million
possible different values. To create 16 partitions, you can range partition both tables,
sales on s_customerid and customer on c_customerid. You should define
partition bounds for both tables in order to generate partitions of the same size. In
this example, partition bounds should be defined as 625001, 1250001, 1875001, ...
10000001, so that each partition contains 625000 rows.
Range - Composite, Composite - Composite (Range Dimension) Finally, you can also
subpartition one or both tables on another column. Therefore, the range/composite
and composite/composite methods on the range dimension are also valid for
enabling a full partition-wise join on the range dimension.
[Figure 5–3: Partial partition-wise join. One set of parallel execution servers scans the customers table (SELECT) and redistributes its rows by hash(c_customerid) to a second set of parallel execution servers, which perform the JOIN against the matching sales partitions.]
Considerations for full partition-wise joins also apply to partial partition-wise joins:
■ The degree of parallelism does not need to equal the number of partitions. In
Figure 5–3, the query executes with two sets of 16 query servers. In this case,
Oracle assigns 1 partition to each query server of the second set. Again, the
number of partitions should always be a multiple of the degree of parallelism.
■ In Oracle Real Application Cluster environments on shared-nothing platforms
(MPPs), each hash partition of sales should preferably have affinity to only
one node in order to avoid remote I/Os. Also, spread partitions over all nodes
to avoid bottlenecks and use all CPU resources available on the system. A node
can host multiple partitions when there are more partitions than nodes.
Composite As with full partition-wise joins, the prime partitioning method for the
sales table is to use the range method on column s_salesdate. This is because
sales is a typical example of a table that stores historical data. To enable a partial
partition-wise join while preserving this range partitioning, subpartition sales by
hash on column s_customerid using 16 subpartitions per partition. Pruning and
partial partition-wise joins can be used together if a query joins customer and
sales and if the query has a selection predicate on s_salesdate.
When sales is composite, the granule of parallelism for a partial partition-wise
join is a hash partition and not a subpartition. Refer to Figure 5–2 for an illustration
of a hash partition in a composite table. Again, the number of hash partitions
should be a multiple of the degree of parallelism. Also, on an MPP system, ensure
that each hash partition has affinity to a single node. In the previous example, the
eight subpartitions composing a hash partition should have affinity to the same
node.
Range Finally, you can use range partitioning on s_customerid to enable a partial
partition-wise join. This works similarly to the hash method, but a side effect of
range partitioning is that the resulting data distribution could be skewed if the size
of the partitions differs. Moreover, this method is more complex to implement
because it requires prior knowledge of the values of the partitioning column that is
also a join key.
This improved performance from using parallel execution is even more noticeable
in Oracle Real Application Cluster configurations with internode parallel execution.
Partition-wise joins dramatically reduce interconnect traffic. This feature is particularly valuable for large DSS configurations that use Oracle Real Application Clusters.
Currently, most Oracle Real Application Clusters platforms, such as MPP and SMP
clusters, provide limited interconnect bandwidths compared with their processing
powers. Ideally, interconnect bandwidth should be comparable to disk bandwidth,
but this is seldom the case. As a result, most join operations in Oracle Real
Application Clusters experience high interconnect latencies without parallel
execution of partition-wise joins.
Reduction of Memory Requirements Partition-wise joins require less memory than the
equivalent join operation of the complete data set of the tables being joined.
In the case of serial joins, the join is performed at the same time on a pair of
matching partitions. If data is evenly distributed across partitions, the memory
requirement is divided by the number of partitions. There is no skew.
In the parallel case, memory requirements depend on the number of partition pairs
that are joined in parallel. For example, if the degree of parallelism is 20 and the
number of partitions is 100, 5 times less memory is required because only 20 joins of
two partitions are performed at the same time. The fact that partition-wise joins
require less memory has a direct effect on performance. For example, the join
probably does not need to write blocks to disk during the build phase of a hash join.
This chapter describes how to use indexes in a data warehousing environment and
discusses the following types of index:
■ Bitmap Indexes
■ B-tree Indexes
■ Local Indexes Versus Global Indexes
Bitmap Indexes
Bitmap indexes are widely used in data warehousing environments. The
environments typically have large amounts of data and ad hoc queries, but a low
level of concurrent DML transactions. For such applications, bitmap indexing
provides:
■ Reduced response time for large classes of ad hoc queries
■ Reduced storage requirements compared to other indexing techniques
■ Dramatic performance gains even on hardware with a relatively small number
of CPUs or a small amount of memory
■ Efficient maintenance during parallel DML and loads
Fully indexing a large table with a traditional B-tree index can be prohibitively
expensive in terms of space because the indexes can be several times larger than the
data in the table. Bitmap indexes are typically only a fraction of the size of the
indexed data in the table.
Note: Bitmap indexes are available only if you have purchased the
Oracle9i Enterprise Edition. See Oracle9i Database New Features for
more information about the features available in Oracle9i and the
Oracle9i Enterprise Edition.
An index provides pointers to the rows in a table that contain a given key value. A
regular index stores a list of rowids for each key corresponding to the rows with
that key value. In a bitmap index, a bitmap for each key value replaces a list of
rowids.
Each bit in the bitmap corresponds to a possible rowid, and if the bit is set, it means
that the row with the corresponding rowid contains the key value. A mapping
function converts the bit position to an actual rowid, so that the bitmap index
provides the same functionality as a regular index. If the number of different key
values is small, bitmap indexes save space.
Bitmap indexes are most effective for queries that contain multiple conditions in the
WHERE clause. Rows that satisfy some, but not all, conditions are filtered out before
the table itself is accessed. This improves response time, often dramatically.
Cardinality
The advantages of using bitmap indexes are greatest for low cardinality columns in
which the number of distinct values is small compared with the number of rows in
the table. A gender column, which has only two distinct values (male and female),
is ideal for a bitmap index. However, data warehouse administrators also build
bitmap indexes on columns with higher cardinalities.
For example, on a table with one million rows, a column with 10,000 distinct values
is a candidate for a bitmap index. A bitmap index on this column can out-perform a
B-tree index, particularly when this column is often queried in conjunction with
other indexed columns. In fact, in a typical data warehouse environment, a bitmap index can be considered for any non-unique column.
B-tree indexes are most effective for high-cardinality data: that is, for data with
many possible values, such as customer_name or phone_number. In a data
warehouse, B-tree indexes should be used only for unique columns or other
columns with very high cardinalities (that is, columns that are almost unique). The
majority of indexes in a data warehouse should be bitmap indexes.
In ad hoc queries and similar situations, bitmap indexes can dramatically improve
query performance. AND and OR conditions in the WHERE clause of a query can be
resolved quickly by performing the corresponding Boolean operations directly on
the bitmaps before converting the resulting bitmap to rowids. If the resulting
number of rows is small, the query can be answered quickly without resorting to a
full table scan.
Each entry (or bit) in the bitmap corresponds to a single row of the customers
table. The value of each bit depends upon the values of the corresponding row in
the table. For instance, the bitmap cust_gender='F' contains a one as its first bit
because the gender is F in the first row of the customers table. The bitmap
cust_gender='F' has a zero for its third bit because the gender of the third row
is not F.
An analyst investigating demographic trends of the company's customers might
ask, "How many of our married customers have an income level of G or H?" This
corresponds to the following SQL query:
SELECT COUNT(*) FROM customers
WHERE cust_marital_status = 'married'
AND cust_income_level IN ('H: 150,000 - 169,999', 'G: 130,000 - 149,999');
Bitmap indexes can efficiently process this query by merely counting the number of
ones in the bitmap illustrated in Figure 6–1. The result set will be found by using
bitmap or merge operations without the necessity of a conversion to rowids. To
identify additional specific customer attributes that satisfy the criteria, use the
resulting bitmap to access the table after a bitmap to rowid conversion.
[Figure 6–1: Executing a query using bitmap indexes. The bitmap for cust_marital_status = 'married' is ANDed with the OR of the bitmaps for the two income levels; the set bits of the resulting bitmap identify the qualifying rows.]
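The statement discussed next is not reproduced in this extract; because it can use a bitmap index but not a B-tree index, it is presumably a NULL-predicate query along these lines:

SELECT COUNT(*) FROM customers
WHERE cust_marital_status IS NULL;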
The above query will use a bitmap index on cust_marital_status. Note that
this query would not be able to use a B-tree index.
SELECT COUNT(*) FROM emp;
Any bitmap index can be used for the above query because all table rows are
indexed, including those that have NULL data. If nulls were not indexed, the
optimizer would be able to use indexes only on columns with NOT NULL
constraints.
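The index creation statement is not shown in this extract; a sketch of a bitmap join index on the customer gender column of the sales table, using the join condition from the query below, might be (the index name is hypothetical, and LOCAL assumes that sales is partitioned):

CREATE BITMAP INDEX sales_cust_gender_bjix
ON sales (customers.cust_gender)
FROM sales, customers
WHERE sales.cust_id = customers.cust_id
LOCAL;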
The following query shows how to use the above bitmap join index and illustrates
its bitmap pattern:
SELECT sales.time_id, customers.cust_gender, sales.amount
FROM sales, customers
WHERE sales.cust_id = customers.cust_id
TIME_ID C AMOUNT
--------- - ----------
01-JAN-98 M 2291
01-JAN-98 F 114
01-JAN-98 M 553
01-JAN-98 M 0
01-JAN-98 M 195
01-JAN-98 M 280
01-JAN-98 M 32
...
You can create other bitmap join indexes using more than one column or more than
one table, as shown in the examples below.
B-tree Indexes
A B-tree index is organized like an upside-down tree. The bottom level of the index
holds the actual data values and pointers to the corresponding rows, much as the
index in a book has a page number associated with each index entry.
In general, you use B-tree indexes when you know that your typical query refers to
the indexed column and retrieves a few rows. In these queries, it is faster to find the
rows by looking at the index. However, using the book index analogy, if you plan to
look at every single topic in a book, you might not want to look in the index for the
topic and then look up the page. It might be faster to read through every chapter in
the book. Similarly, if you are retrieving most of the rows in a table, it might not
make sense to look up the index to find the table rows. Instead, you might want to
read or scan the table.
B-tree indexes are most commonly used in a data warehouse to index unique or
near-unique keys. In many cases, it may not be necessary to index these columns in
a data warehouse, because unique constraints can be maintained without an index,
and because typical data warehouse queries may not work better with such indexes.
Bitmap indexes should be more common than B-tree indexes in most data
warehouse environments.
Many significant constraint features have been introduced for data warehousing.
Readers familiar with Oracle's constraint functionality in Oracle7 and Oracle8
should take special note of the functionality described in this chapter. In fact, many
Oracle7-based and Oracle8-based data warehouses lacked constraints because of
concerns about constraint performance. Newer constraint functionality addresses
these concerns.
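The constraint statement under discussion is not reproduced in this extract; based on the surrounding text, it is a unique constraint on sales_id, along these lines (the constraint name is hypothetical):

ALTER TABLE sales ADD CONSTRAINT sales_uk
UNIQUE (sales_id);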
By default, this constraint is both enabled and validated. Oracle implicitly creates a
unique index on sales_id to support this constraint. However, this index can be
problematic in a data warehouse for three reasons:
■ The unique index can be very large, because the sales table can easily have
millions or even billions of rows.
■ The unique index is rarely used for query execution. Most data warehousing
queries do not have predicates on unique keys, so creating this index will
probably not improve performance.
■ If sales is partitioned along a column other than sales_id, the unique index
must be global. This can detrimentally affect all maintenance operations on the
sales table.
A unique index is required for unique constraints to ensure that each individual
row modified in the sales table satisfies the UNIQUE constraint.
For data warehousing tables, an alternative mechanism for unique constraints is:
ALTER TABLE sales ADD CONSTRAINT sales_unique
UNIQUE (prod_id, cust_id, time_id, channel_id) DISABLE VALIDATE;
This statement creates a unique constraint, but, because the constraint is disabled, a
unique index is not required. This approach can be advantageous for many data
warehousing environments because the constraint now ensures uniqueness without
the cost of a unique index.
However, there are trade-offs for the data warehouse administrator to consider with
DISABLE VALIDATE constraints. Because this constraint is disabled, no DML
statements that modify the unique column are permitted against the sales table.
You can use one of two strategies for modifying this table in the presence of a
constraint:
■ Use DDL to add data to this table (such as exchanging partitions). See the
example in Chapter 14, "Maintaining the Data Warehouse".
■ Before modifying this table, drop the constraint. Then, make all necessary data
modifications. Finally, re-create the disabled constraint. Re-creating the
constraint is more efficient than re-creating an enabled constraint. However, this
approach does not guarantee that data added to the sales table while the
constraint has been dropped is unique.
ENABLE NOVALIDATE can quickly create an enforced constraint, even when the
constraint is believed to be true. Suppose that the ETL process verifies that a
FOREIGN KEY constraint is true. Rather than have the database re-verify this
FOREIGN KEY constraint, which would require time and database resources, the
data warehouse administrator could instead create a FOREIGN KEY constraint using
ENABLE NOVALIDATE.
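A minimal sketch of such a statement, using the same constraint and columns as the RELY example that follows:

ALTER TABLE sales ADD CONSTRAINT sales_time_fk
FOREIGN KEY (sales_time_id) REFERENCES time (time_id)
ENABLE NOVALIDATE;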
RELY Constraints
The ETL process commonly verifies that certain constraints are true. For example, it
can validate all of the foreign keys in the data coming into the fact table. This means
that you can trust it to provide clean data, instead of implementing constraints in
the data warehouse. You create a RELY constraint as follows:
ALTER TABLE sales ADD CONSTRAINT sales_time_fk
FOREIGN KEY (sales_time_id) REFERENCES time (time_id)
RELY DISABLE NOVALIDATE;
RELY constraints, even though they are not used for data validation, can:
■ Enable more sophisticated query rewrites for materialized views. See
Chapter 22, "Query Rewrite", for further details.
■ Enable other data warehousing tools to retrieve information regarding
constraints directly from the Oracle data dictionary.
Creating a RELY constraint is inexpensive and does not impose any overhead
during DML or load. Because the constraint is not being validated, no data
processing is necessary to create it.
View Constraints
You can create constraints on views. The only type of constraint supported on a
view is a RELY constraint.
This type of constraint is useful when queries typically access views instead of base
tables, and the DBA thus needs to define the data relationships between views
rather than tables. View constraints are particularly useful in OLAP environments,
where they may enable more sophisticated rewrites for materialized views.
This chapter introduces you to the use of materialized views and discusses:
■ Overview of Data Warehousing with Materialized Views
■ Types of Materialized Views
■ Creating Materialized Views
■ Registering Existing Materialized Views
■ Partitioning and Materialized Views
■ Choosing Indexes for Materialized Views
■ Invalidating Materialized Views
■ Security Issues with Materialized Views
■ Altering Materialized Views
■ Dropping Materialized Views
■ Analyzing Materialized View Capabilities
■ Overview of Materialized View Management Tasks
Materialized views are often referred to as summaries, because they store summarized data. They can
also be used to precompute joins with or without aggregations. A materialized view
eliminates the overhead associated with expensive joins and aggregations for a
large or important class of queries.
[Figure: Transparent query rewrite. Oracle9i generates an execution plan for the query, compares the strategy that reads the detail tables with the strategy that reads the materialized view, and returns the query results using the cheaper plan.]
When using query rewrite, create materialized views that satisfy the largest number
of queries. For example, if you identify 20 queries that are commonly applied to the
detail or fact tables, then you might be able to satisfy them with five or six
well-written materialized views. A materialized view definition can include any
number of aggregations (SUM, COUNT(x), COUNT(*), COUNT(DISTINCT x), AVG,
VARIANCE, STDDEV, MIN, and MAX). It can also include any number of joins. If you
are unsure of which materialized views to create, Oracle provides a set of advisory
procedures in the DBMS_OLAP package to help in designing and evaluating
materialized views for query rewrite. These functions are also known as the
Summary Advisor or the Advisor.
Many large decision support system (DSS) databases have schemas that do not
closely resemble a conventional data warehouse schema, but that still require joins
and aggregates. The use of summary management features imposes no schema
restrictions, and can enable some existing DSS database applications to improve
performance without the need to redesign the database or the application.
Figure 8–2 illustrates the use of summary management in the warehousing cycle.
After the data has been transformed, staged, and loaded into the detail data in the
warehouse, you can invoke the summary management process. First, use the
Advisor to plan how you will use summaries. Then, create summaries and design
how queries will be rewritten.
[Figure 8–2: Overview of summary management. Data is extracted incrementally from operational databases into a staging file, transformed, and loaded and refreshed as detail data in the data warehouse. The summary management component maintains summaries from the detail data, guided by workload statistics, and query rewrite serves queries from the warehouse and from MDDB data marts.]
Understanding the summary management process during the earliest stages of data
warehouse design can yield large dividends later in the form of higher
performance, lower summary administration costs, and reduced storage
requirements.
Hierarchies describe the business relationships and common access patterns in the
database. An analysis of the dimensions, combined with an understanding of the
typical work load, can be used to create materialized views.
Terminology
Some basic data warehousing terms are defined here:
■ Dimension tables describe the business entities of an enterprise, represented as
hierarchical, categorical information such as time, departments, locations, and
products. Dimension tables are sometimes called lookup or reference tables.
Dimension tables usually change slowly over time and are not modified on a
periodic schedule. They are used in long-running decision support queries to
aggregate the data returned from the query into appropriate levels of the
dimension hierarchy.
■ Fact tables describe the business transactions of an enterprise. Fact tables are
sometimes called detail tables.
The vast majority of data in a data warehouse is stored in a few very large fact
tables that are updated periodically with data from one or more operational
online transaction processing (OLTP) databases.
Fact tables include measures such as sales, units, and inventory.
– A simple measure is a numeric or character column of one table such as
fact.sales.
– A computed measure is an expression involving measures of one table, for
example, fact.revenues - fact.expenses.
– A multitable measure is a computed measure defined on multiple tables,
for example, fact_a.revenues - fact_b.expenses.
Fact tables also contain one or more foreign keys that organize the business
transactions by the relevant business entities such as time, product, and market.
In most cases, these foreign keys are non-null, form a unique compound key of
the fact table, and each foreign key joins with exactly one row of a dimension
table.
■ A materialized view is a precomputed table comprising aggregated and joined
data from fact and possibly from dimension tables. Among builders of data
warehouses, a materialized view is also known as a summary or aggregation.
Before starting to define and use the various components of summary management,
you should review your schema design to abide by the following guidelines
wherever possible:
Guidelines 1 and 2 are more important than guideline 3. If your schema design does
not follow guidelines 1 and 2, it does not then matter whether it follows guideline 3.
Guidelines 1, 2, and 3 affect both query rewrite performance and materialized view
refresh performance.
If you are concerned with the time required to enable constraints and whether any
constraints might be violated, use ENABLE NOVALIDATE with the RELY clause
to turn on constraint checking without validating any of the existing constraints.
The risk with this approach is that incorrect query results could occur if any
constraints are broken. Therefore, as the designer, you must determine how clean
the data is and whether the risk of wrong results is too great.
Fast refresh for a materialized view containing joins and aggregates is possible after
any type of DML to the base tables (direct load or conventional INSERT, UPDATE, or
DELETE). It can be defined to be refreshed ON COMMIT or ON DEMAND. A REFRESH ON COMMIT materialized view is refreshed automatically when a transaction that performs DML against one of the materialized view's detail tables commits. The time taken to complete the commit may be slightly longer than usual when this method is chosen.
This is because the refresh operation is performed as part of the commit process.
Therefore, this method may not be suitable if many users are concurrently changing
the tables upon which the materialized view is based.
Here are some examples of materialized views with aggregates. Note that
materialized view logs are only created because this materialized view will be fast
refreshed.
Example 8–2 creates a materialized view store_sales_mv that computes the sum
of sales by store. It is derived by joining the tables store and fact on the
column store_key. The materialized view does not initially contain any data,
because the build method is DEFERRED. A complete refresh is required for the first
refresh of a build deferred materialized view. When it is refreshed and once
populated, this materialized view can be used by query rewrite.
CREATE MATERIALIZED VIEW LOG ON fact WITH ROWID
(store_key, time_key, dollar_sales, unit_sales) INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW fact_sums_mv  -- the log and this header are illustrative reconstructions
REFRESH FAST ON COMMIT
ENABLE QUERY REWRITE AS
SELECT f.store_key, f.time_key, COUNT(*) AS count_all,
SUM(f.dollar_sales) AS sum_dollar_sales,
COUNT(f.dollar_sales) AS count_dollar_sales,
SUM(f.unit_sales) AS sum_unit_sales,
COUNT(f.unit_sales) AS count_unit_sales
FROM fact f
GROUP BY f.store_key, f.time_key;
This example creates a materialized view that contains aggregates on a single table.
Because the materialized view log has been created, the materialized view is fast
refreshable. If DML is applied against the fact table, then the changes will be
reflected in the materialized view when the commit is issued.
Table 8–1 illustrates the aggregate requirements for materialized views.
Note that COUNT(*) must always be present. Oracle recommends that you include
the optional aggregates in column Z in the materialized view in order to obtain the
most efficient and accurate fast refresh of the aggregates.
When a single materialized view stores all the levels of aggregation needed in an
OLAP environment, it enables efficient creation and data refresh.
Materialized views for OLAP environments have the following characteristics:
■ They contain joins of all the base tables (fact table and dimension tables in a
typical star schema)
■ They create multiple aggregate groupings using GROUPING SETS, ROLLUP, or
CUBE in the GROUP BY clause of the query definition. These grouping features
are described in Chapter 18, "SQL for Aggregation in Data Warehouses".
■ To enable fast refresh or general query rewrite on such a materialized view, the
SELECT list includes a GROUPING_ID function using all the GROUP BY
expressions as its arguments.
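A materialized view with these characteristics might be defined along the following lines (a sketch only; store_country is assumed to be a column of the store table, the product attributes come from the products table, and the join keys are assumptions):
CREATE MATERIALIZED VIEW sales_by_level_mv
ENABLE QUERY REWRITE
AS
SELECT s.store_country, p.prod_category, p.prod_subcategory, p.prod_name,
GROUPING_ID(s.store_country, p.prod_category,
p.prod_subcategory, p.prod_name) AS gid,
COUNT(*) AS cnt,
SUM(f.dollar_sales) AS sum_dollar_sales,
COUNT(f.dollar_sales) AS count_dollar_sales
FROM fact f, store s, products p
WHERE f.store_key = s.store_key AND f.prod_id = p.prod_id
GROUP BY s.store_country,
ROLLUP(p.prod_category, p.prod_subcategory, p.prod_name);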
This is a materialized view that stores aggregates at four different levels. Queries
can be rewritten to use this materialized view if they require one or more of these
groupings.
The creation and fast refresh of such a materialized view is very efficient as all the
joins are factored out (and hence, computed only once) and some groupings can be
derived from other groupings, rather than going to the joined base data. For
example, group (store_country, prod_category) can be computed from
(store_country, prod_category, prod_subcategory, prod_name). In
addition to creation and refresh efficiency, a single database object containing all the
required groupings can be easier to manage than many materialized views each
holding just one aggregate group.
If an OLAP environment’s queries cover the full range of aggregate groupings
possible in its data set, it may be best to materialize the whole hierarchical cube.
This means that each dimension’s aggregation hierarchy is precomputed in
combination with each of the other dimensions. Naturally, precomputing a full
hierarchical cube requires more disk space and higher creation and refresh times
than a small set of aggregate groups. The trade-off in processing time and disk
space versus query performance needs to be factored in before deciding to create it.
Example 8–5 is an example of a hierarchical materialized view:
4. If there are outer joins, unique constraints must exist on the join columns of the
inner table. For example, if you are joining the fact and a dimension table and
the join is an outer join with the fact table being the outer table, there must exist
unique constraints on the join columns of the dimension table.
If some of the above restrictions are not met, you can create the materialized view as
REFRESH FORCE to take advantage of fast refresh when it is possible. If the
materialized view is created as ON COMMIT, Oracle performs all of the fast refresh
checks. If one of the tables did not meet all of the criteria, but the other tables did,
the materialized view would still be fast refreshable with respect to the other tables
for which all the criteria are met.
A materialized view log should contain the rowid of the master table. It is not
necessary to add other columns.
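For example, for the fact, time, and store tables used in the example later in this section, the materialized view logs might be created as follows:
CREATE MATERIALIZED VIEW LOG ON fact WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON time WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON store WITH ROWID;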
To speed up refresh, you should create indexes on the materialized view's columns
that store the rowids of the fact table.
In this example, in order to perform a fast refresh, UNIQUE constraints should exist
on s.store_key and t.time_key. You should also create indexes on the columns
fact_rid, time_rid, and store_rid, as illustrated below. This will improve the
refresh performance.
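The index statements might look like the following sketch (the index names are illustrative, and time_rid and store_rid exist only in the variant of the materialized view that selects the rowids of all three tables):
CREATE INDEX mv_ix_factrid ON detail_fact_mv ("fact_rid");
CREATE INDEX mv_ix_timerid ON detail_fact_mv ("time_rid");
CREATE INDEX mv_ix_storerid ON detail_fact_mv ("store_rid");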
Alternatively, if the example shown above did not include the columns time_rid
and store_rid, and if the refresh method was REFRESH FORCE, then this
materialized view would be fast refreshable only if the fact table was updated but
not if the tables time or store were updated.
CREATE MATERIALIZED VIEW detail_fact_mv
PARALLEL
BUILD IMMEDIATE
REFRESH FORCE
AS
SELECT
f.rowid "fact_rid",
s.store_key, s.store_name, f.dollar_sales,
f.unit_sales, f.time_key
FROM fact f, time t, store s
WHERE f.store_key = s.store_key(+) AND
f.time_key = t.time_key(+);
(Figure: a nested materialized view, sum_sales_store_time, is defined on top of the materialized join view join_fact_store_time.)
1. If you do not need the REFRESH FAST clause, then you can define a nested
materialized view.
2. Materialized views with joins only and single-table aggregate materialized
views can be REFRESH FAST and nested if all the materialized views that they
depend on are either materialized join views or single-table aggregate
materialized views.
Here are some guidelines on how to use nested materialized views:
1. If you want to use fast refresh, you should fast refresh all the materialized views
along any chain. It makes little sense to define a fast refreshable materialized
view on top of a materialized view that must be refreshed with a complete
refresh.
2. When using materialized views, you can define them to be ON COMMIT or ON
DEMAND. The choice would depend on the application using the materialized
views. If you expect the materialized views to always remain fresh, then all the
materialized views should have the ON COMMIT refresh option. If the time
window for refresh does not permit refreshing all the materialized views at
commit time, then the appropriate materialized views could be created with (or
altered to have) the ON DEMAND refresh option.
(Figure: a nested materialized view MV2 defined on materialized view MV1, which is in turn defined on Table1 and Table2.)
Naming
The name of a materialized view must conform to standard Oracle naming
conventions. However, if the materialized view is based on a user-defined prebuilt
table, then the name of the materialized view must exactly match that table name.
If you already have a naming convention for tables and indexes, you might consider
extending this naming scheme to the materialized views so that they are easily
identifiable. For example, instead of naming the materialized view sum_of_sales,
it could be called sum_of_sales_mv to denote that this is a materialized view and
not a table or view.
Storage Characteristics
Unless the materialized view is based on a user-defined prebuilt table, it requires
and occupies storage space inside the database. Therefore, the storage needs for the
materialized view should be specified in terms of the tablespace where it is to reside
and the size of its extents.
Build Methods
Two build methods are available for creating the materialized view, as shown in the
following table. If you select BUILD IMMEDIATE, the materialized view definition is
added to the schema objects in the data dictionary, and then the fact or detail tables
are scanned according to the SELECT expression and the results are stored in the
materialized view. Depending on the size of the tables to be scanned, this build
process can take a considerable amount of time.
An alternative approach is to use the BUILD DEFERRED clause, which creates the
materialized view without data, thereby enabling it to be populated at a later date
using the DBMS_MVIEW.REFRESH package described in Chapter 14, "Maintaining
the Data Warehouse".
4. Aggregate functions must occur only as the outermost part of the expression.
That is, aggregates such as AVG(AVG(x)) or AVG(x)+ AVG(x) are not
allowed.
5. CONNECT BY clauses are not allowed.
Refresh Options
When you define a materialized view, you can specify two refresh options: how to
refresh and what type of refresh. If unspecified, the defaults are assumed as ON
DEMAND and FORCE.
The two refresh execution modes are: ON COMMIT and ON DEMAND. Depending on
the materialized view you create, some of the options may not be available.
When a materialized view is maintained using the ON COMMIT method, the time
required to complete the commit may be slightly longer than usual. This is because
the refresh operation is performed as part of the commit process. Therefore this
method may not be suitable if many users are concurrently changing the tables
upon which the materialized view is based.
If you anticipate performing insert, update or delete operations on tables referenced
by a materialized view concurrently with the refresh of that materialized view, and
that materialized view includes joins and aggregation, Oracle recommends you use
ON COMMIT fast refresh rather than ON DEMAND fast refresh.
If you think the materialized view did not refresh, check the alert log or trace file.
If a materialized view fails during refresh at COMMIT time, you must explicitly
invoke the refresh procedure using the DBMS_MVIEW package after addressing the
errors specified in the trace files. Until this is done, the view will no longer be
refreshed automatically at commit time.
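For example, after addressing the errors, a single materialized view could be fast refreshed from SQL*Plus as follows (the materialized view name is illustrative):
EXECUTE DBMS_MVIEW.REFRESH('sales_mv', 'f');
The second parameter is the refresh method: 'f' for fast, 'c' for complete, or '?' for force.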
You can specify how you want your materialized views to be refreshed from the
detail tables by selecting one of four options: COMPLETE, FAST, FORCE, and NEVER.
Whether the fast refresh option is available depends upon the type of materialized
view. You can call the procedure DBMS_MVIEW.EXPLAIN_MVIEW to determine
whether fast refresh is possible. Fast refresh is available for both general classes of
materialized views:
■ Materialized views with joins only
■ Materialized views with aggregates
■ If there are no outer joins, you can have arbitrary selections and joins in the
WHERE clause. However, if there are outer joins, the WHERE clause cannot have
any selections. Furthermore, if there are outer joins, all the joins must be
connected by ANDs and must use the equality (=) operator.
■ Rowids of all the tables in the FROM list must appear in the SELECT list of the
query.
■ Materialized view logs must exist with rowids for all the base tables in the
FROM list of the query.
ORDER BY Clause
An ORDER BY clause is allowed in the CREATE MATERIALIZED VIEW statement. It
is used only during the initial creation of the materialized view. It is not used
during a full refresh or a fast refresh.
To improve the performance of queries against large materialized views, store the
rows in the materialized view in the order specified in the ORDER BY clause. This
initial ordering provides physical clustering of the data. If indexes are built on the
columns by which the materialized view is ordered, accessing the rows of the
materialized view using the index often reduces the time for disk I/O due to the
physical clustering.
The ORDER BY clause is not considered part of the materialized view definition. As a
result, there is no difference in the manner in which Oracle detects the various types
of materialized views (for example, materialized join views with no aggregates). For
the same reason, query rewrite is not affected by the ORDER BY clause. This feature
is similar to the CREATE TABLE ... ORDER BY capability that exists in Oracle.
The keyword SEQUENCE is new for Oracle9i and Oracle recommends that this
clause be included in your materialized view log statement unless you are sure that
you will never perform a mixed DML operation (a combination of INSERT,
UPDATE, or DELETE operations on multiple tables).
The boundary of a mixed DML operation is determined by whether the
materialized view is ON COMMIT or ON DEMAND.
■ For ON COMMIT, the mixed DML statements occur within the same transaction
because the refresh of the materialized view will occur upon commit of this
transaction.
■ For ON DEMAND, the mixed DML statements occur between refreshes. An
example of a materialized view log is shown below where one is created on the
table sales that includes the SEQUENCE keyword.
CREATE MATERIALIZED VIEW LOG ON sales
WITH SEQUENCE, ROWID
(prod_id, cust_id, time_id, channel_id, promo_id,
quantity_sold, amount, cost)
INCLUDING NEW VALUES;
See Also: Chapter 22, "Query Rewrite", for details about integrity
levels
When you drop a materialized view that was created on a prebuilt table, the table
still exists—only the materialized view is dropped.
If the user-defined materialized view does not contain a time dimension, then:
■ Create a new materialized view that does include the time dimension (if
possible).
■ The view should aggregate over the time column in the new materialized view.
Partition Marker
In many cases, the advantages of PCT will be offset by this restriction for highly
aggregated materialized views. The DBMS_MVIEW.PMARKER function is designed to
significantly reduce the cardinality of the materialized view (see Example 8–9 on
page 8-36 for an example). The function returns a partition identifier that uniquely
identifies the partition for a specified row within a specified partition table. The
DBMS_MVIEW.PMARKER function is used instead of the partition key column in the
SELECT and GROUP BY clauses.
Unlike the general case of a PL/SQL function in a materialized view, use of the
DBMS_MVIEW.PMARKER does not prevent rewrite with that materialized view even
when the rewrite mode is QUERY_REWRITE_INTEGRITY=ENFORCED.
cust_mth_sales_mv includes the partition key column from table sales (time_
id) in both its select and group by lists. This enables PCT on table sales for
materialized view cust_mth_sales_mv. However, the GROUP BY and SELECT
lists include PRODUCTS.PROD_ID rather than the partition key column (PROD_
CATEGORY) of the products table. Therefore, PCT is not enabled on table
products for this materialized view. In other words, any partition maintenance
operation to the sales table will allow a PCT fast refresh of cust_mth_sales_mv.
However, PCT fast refresh is not possible after any kind of modification to the
products table. To correct this, the GROUP BY and SELECT lists must include
column PRODUCTS.PROD_CATEGORY. Following a partition maintenance
operation, such as a drop partition, it is recommended that a PCT fast refresh be
performed on any materialized view that is referencing the table upon which the
partition operations are undertaken.
In this example, the table part_fact_tab has been partitioned over three months
and then the materialized view was registered to use the prebuilt table. This
materialized view is eligible for query rewrite because the ENABLE QUERY
REWRITE clause has been included.
An insufficient privileges error when creating a materialized view is most likely due
to a privilege not being granted explicitly and an attempt to inherit the
privilege from a role instead. The owner of the materialized view must have
explicitly been granted SELECT access to the referenced tables if the tables are in a
different schema.
If the materialized view is being created with ON COMMIT REFRESH specified, then
the owner of the materialized view requires an additional privilege if any of the
tables in the defining query are outside the owner's schema. In that case, the owner
requires the ON COMMIT REFRESH system privilege or the ON COMMIT REFRESH
object privilege on each table outside the owner's schema.
See Also: Oracle9i SQL Reference for further information about the
ALTER MATERIALIZED VIEW statement and "Invalidating
Materialized Views" on page 8-41
See Also: Oracle9i Supplied PL/SQL Packages and Types Reference for
further information about the DBMS_MVIEW package
DBMS_MVIEW.EXPLAIN_MVIEW Declarations
The following PL/SQL declarations that are made for you in the DBMS_MVIEW
package show the order and datatypes of these parameters for explaining an
existing materialized view and a potential materialized view with output to a table
and to a VARRAY.
Explain an existing or potential materialized view with output to MV_
CAPABILITIES_TABLE
DBMS_MVIEW.EXPLAIN_MVIEW
(mv IN VARCHAR2,
stmt_id IN VARCHAR2:= NULL);
Using MV_CAPABILITIES_TABLE
One of the simplest ways to use DBMS_MVIEW.EXPLAIN_MVIEW is with the MV_
CAPABILITIES_TABLE, which has the following structure:
CREATE TABLE MV_CAPABILITIES_TABLE
(
STMT_ID VARCHAR(30), -- client-supplied unique statement identifier
MV VARCHAR(30), -- NULL for SELECT based EXPLAIN_MVIEW
CAPABILITY_NAME VARCHAR(30), -- A descriptive name of particular
-- capabilities, such as REWRITE.
-- See Table 8–2
POSSIBLE CHARACTER(1), -- Y = capability is possible
-- N = capability is not possible
RELATED_TEXT VARCHAR(2000), -- owner.table.column, and so on related to
-- this message
RELATED_NUM NUMBER, -- When there is a numeric value
-- associated with a row, it goes here.
MSGNO INTEGER, -- When available, message # explaining
-- why the capability is not possible
MSGTXT VARCHAR(2000), -- Text associated with MSGNO
SEQ NUMBER); -- Useful in ORDER BY clause when
-- selecting from this table
You can use the utlxmv.sql script found in the admin directory to create MV_
CAPABILITIES_TABLE.
You need to use the SEQ column in an ORDER BY clause so the rows will display in a
logical order. If a capability is not possible, N will appear in the P column and an
explanation in the MSGTXT column. If a capability is not possible for more than one
reason, a row is displayed for each reason.
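After running DBMS_MVIEW.EXPLAIN_MVIEW for a materialized view, a query along the following lines displays the results (the column alias is illustrative):
SELECT capability_name, possible AS p, related_text, msgtxt
FROM mv_capabilities_table
ORDER BY seq;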
CAPABILITY_NAME P REL_TEXT MSGTXT
--------------- - -------- ------
PCT N
REFRESH_COMPLETE Y
REFRESH_FAST N
REWRITE Y
PCT_TABLE N SALES no partition key or PMARKER in select list
PCT_TABLE N TIMES relation is not a partitioned table
MV_CAPABILITIES_TABLE.CAPABILITY_NAME Details
Table 8–2 lists explanations for values in the CAPABILITY_NAME column.
The following sections will help you create and manage a data warehouse:
■ What are Dimensions?
■ Creating Dimensions
■ Viewing Dimensions
■ Using Dimensions with Constraints
■ Validating Dimensions
■ Altering Dimensions
■ Deleting Dimensions
What are Dimensions?
(Figure 9–1 shows a typical dimension hierarchy for customer data: customer rolls up to city, city to state, state to country, country to subregion, and subregion to region.)
Data analysis typically starts at higher levels in the dimensional hierarchy and
gradually drills down if the situation warrants such analysis.
Dimensions do not have to be defined, but spending time creating them can yield
significant benefits, because they help query rewrite perform more complex types of
rewrite. They are mandatory if you use the Summary Advisor (a GUI tool for
materialized view management) to recommend which materialized views to create,
drop, or retain.
You must not create dimensions in any schema that does not satisfy these
relationships. Incorrect results can be returned from queries otherwise.
Creating Dimensions
Before you can create a dimension object, the dimension tables must exist in the
database, containing the dimension data. For example, if you create a customer
dimension, one or more tables must exist that contain the city, state, and country
information. In a star schema data warehouse, these dimension tables already exist.
It is therefore a simple task to identify which ones will be used.
Now you can draw the hierarchies of a dimension as shown in Figure 9–1. For
example, city is a child of state (because you can aggregate city-level data up to
state), and state is a child of country. This hierarchical information will be stored in
the database dimension object.
In the case of normalized or partially normalized dimension representation (a
dimension that is stored in more than one table), identify how these tables are
joined. Note whether the joins between the dimension tables can guarantee that
each child-side row joins with one and only one parent-side row. In the case of
denormalized dimensions, determine whether the child-side columns uniquely
determine the parent-side (or attribute) columns. These constraints can be enabled
with the NOVALIDATE and RELY clauses if the relationships represented by the
constraints are guaranteed by other means.
You create a dimension using either the CREATE DIMENSION statement or the
Dimension Wizard in Oracle Enterprise Manager. Within the CREATE DIMENSION
statement, use the LEVEL clause to identify the names of the dimension levels.
This customer dimension contains a single hierarchy with a geographical rollup, with
arrows drawn from the child level to the parent level, as shown in Figure 9–1 on
page 9-3.
Each arrow in this graph indicates that for any child there is one and only one
parent. For example, each city must be contained in exactly one state and each state
must be contained in exactly one country. States that belong to more than one
country would violate this hierarchical integrity.
Each level in the dimension must correspond to one or more columns in a table in
the database. Thus, level product is identified by the column prod_id in the
products table and level subcategory is identified by a column called prod_
subcategory in the same table.
In this example, the database tables are denormalized and all the columns exist in
the same table. However, this is not a prerequisite for creating dimensions. "Using
Normalized Dimension Tables" on page 9-9 shows how to create a dimension
customers_dim that has a normalized schema design using the JOIN KEY clause.
The next step is to declare the relationship between the levels with the HIERARCHY
statement and give that hierarchy a name. A hierarchical relationship is a
functional dependency from one level of a hierarchy to the next level in the
hierarchy. Using the level names defined previously, the CHILD OF relationship
denotes that each child's level value is associated with one and only one parent
level value. The following statements declare a hierarchy prod_rollup and define
the relationship between products, subcategory, and category.
HIERARCHY prod_rollup
( product CHILD OF
subcategory CHILD OF
category)
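Put together with the LEVEL clauses described above, the complete statement might look like the following sketch (the dimension name products_dim and the column name prod_category are assumptions):
CREATE DIMENSION products_dim
LEVEL product IS products.prod_id
LEVEL subcategory IS products.prod_subcategory
LEVEL category IS products.prod_category
HIERARCHY prod_rollup (
product CHILD OF
subcategory CHILD OF
category);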
See Also: Chapter 22, "Query Rewrite" for further details of using
dimensional information
The design, creation, and maintenance of dimensions is part of the design, creation,
and maintenance of your data warehouse schema. Once the dimension has been
created, check that it meets these requirements:
■ There must be a 1:n relationship between a parent and children. A parent can
have one or more children, but a child can have only one parent.
■ There must be a 1:1 attribute relationship between hierarchy levels and their
dependent dimension attributes. For example, if there is a column fiscal_
month_desc, then a possible attribute relationship would be fiscal_month_
desc to fiscal_month_name.
■ If the columns of a parent level and child level are in different relations, then
the connection between them also requires a 1:n join relationship. Each row
of the child table must join with one and only one row of the parent table. This
relationship is stronger than referential integrity alone, because it requires that
the child join key must be non-null, that referential integrity must be
maintained from the child join key to the parent join key, and that the parent
join key must be unique.
■ Ensure (using database constraints if necessary) that the columns of each
hierarchy level are non-null and that hierarchical integrity is maintained.
■ The hierarchies of a dimension can overlap or be disconnected from each
other. However, the columns of a hierarchy level cannot be associated with
more than one dimension.
■ Join relationships that form cycles in the dimension graph are not supported.
For example, a hierarchy level cannot be joined to itself either directly or
indirectly.
Note: The information stored with a dimension object is only declarative. The
relationships discussed above are not enforced when the dimension object is created.
It is highly recommended that you validate any dimension definition with the
DBMS_OLAP.VALIDATE_DIMENSION procedure, as discussed in "Validating
Dimensions" on page 9-12.
Multiple Hierarchies
A single dimension definition can contain multiple hierarchies as illustrated below.
Suppose our retailer wants to track the sales of certain items over time. The first
step is to define the time dimension over which sales will be tracked. Figure 9–2
illustrates a dimension times_dim with two time hierarchies.
(Figure 9–2 shows the times_dim dimension with two hierarchies: in the calendar hierarchy, day rolls up to month, quarter, and year; in the fiscal hierarchy, day rolls up to fis_week, fis_month, fis_quarter, and fis_year.)
From the illustration, you can construct the CREATE DIMENSION statement for the
denormalized times_dim dimension as follows. The complete CREATE
DIMENSION statement as well as the CREATE TABLE statement are shown in
Appendix B, "Sample Data Warehousing Schema".
CREATE DIMENSION times_dim
LEVEL day IS TIMES.TIME_ID
LEVEL month IS TIMES.CALENDAR_MONTH_DESC
LEVEL quarter IS TIMES.CALENDAR_QUARTER_DESC
LEVEL year IS TIMES.CALENDAR_YEAR
LEVEL fis_week IS TIMES.WEEK_ENDING_DAY
LEVEL fis_month IS TIMES.FISCAL_MONTH_DESC
LEVEL fis_quarter IS TIMES.FISCAL_QUARTER_DESC
LEVEL fis_year IS TIMES.FISCAL_YEAR
HIERARCHY cal_rollup (
day CHILD OF
month CHILD OF
quarter CHILD OF
year
)
HIERARCHY fis_rollup (
day CHILD OF
fis_week CHILD OF
fis_month CHILD OF
fis_quarter CHILD OF
fis_year
) <attribute determination clauses>...
Dimension Wizard
The Dimension Wizard is automatically invoked whenever a request is made to
create a dimension object in Oracle Enterprise Manager. You are then guided step
by step through the information required for a dimension.
A dimension created using the Wizard can contain any of the attributes described in
"Creating Dimensions" on page 9-4, such as join keys, multiple hierarchies, and
attributes. You might prefer to use the Wizard because it graphically displays the
hierarchical relationships as they are being constructed. When it is time to describe
the hierarchy, the Wizard automatically displays a default hierarchy based on the
column values, which you can subsequently amend.
Viewing Dimensions
Dimensions can be viewed through one of two methods:
■ Using The DEMO_DIM Package
■ Using Oracle Enterprise Manager
To display all of the dimensions that have been defined, call the procedure DEMO_
DIM.PRINT_ALLDIMS without any parameters as shown below.
EXECUTE DBMS_OUTPUT.ENABLE(10000);
EXECUTE DEMO_DIM.PRINT_ALLDIMS;
In addition, you should use the RELY clause to inform query rewrite that it can rely
upon the constraints being correct as follows:
ALTER TABLE time MODIFY CONSTRAINT pk_time RELY;
Validating Dimensions
The information of a dimension object is declarative only and not enforced by the
database. If the relationships described by the dimensions are incorrect, incorrect
results could occur. Therefore, you should verify the relationships specified by
CREATE DIMENSION using the DBMS_OLAP.VALIDATE_DIMENSION procedure
periodically.
This procedure is easy to use and has only five parameters:
■ Dimension name
■ Owner name
■ Set to TRUE to check only the new rows for tables of this dimension
■ Set to TRUE to verify that all columns are not null
■ Unique run ID obtained by calling the DBMS_OLAP.CREATE_ID procedure.
The ID is used to identify the result of each run
The following example validates the dimension TIME_FN in the grocery schema:
VARIABLE RID NUMBER;
EXECUTE DBMS_OLAP.CREATE_ID(:RID);
EXECUTE DBMS_OLAP.VALIDATE_DIMENSION ('TIME_FN', 'GROCERY', \
FALSE, TRUE, :RID);
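Any rows that violate the declared relationships are recorded in an exceptions view, which can be inspected with a query such as the following:
SELECT * FROM SYSTEM.MVIEW_EXCEPTIONS
WHERE RUNID = :RID;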
However, rather than query this view, it may be better to query the rowid of the
invalid row to retrieve the actual row that has violated the constraint. In this
example, the dimension TIME_FN is checking a table called month. It has found a
row that violates the constraints. Using the rowid, you can see exactly which row in
the month table is causing the problem.
SELECT * FROM month
WHERE rowid IN (SELECT bad_rowid
FROM SYSTEM.MVIEW_EXCEPTIONS
WHERE RUNID = :RID);
Finally, to remove results from the system table for the current run:
EXECUTE DBMS_OLAP.PURGE_RESULTS(:RID);
Altering Dimensions
You can modify the dimension using the ALTER DIMENSION statement. You can
add or drop a level, hierarchy, or attribute from the dimension using this command.
Referring to the times_dim dimension in Figure 9–2, you can remove the attribute fis_
year, drop the hierarchy fis_rollup, or remove the level fis_year. In
addition, you can add a new level called f_year as shown below.
ALTER DIMENSION times_dim DROP ATTRIBUTE fis_year;
ALTER DIMENSION times_dim DROP HIERARCHY fis_rollup;
ALTER DIMENSION times_dim DROP LEVEL fis_year;
ALTER DIMENSION times_dim ADD LEVEL f_year IS times.fiscal_year;
If you try to remove anything with further dependencies inside the dimension,
Oracle rejects the altering of the dimension. A dimension becomes invalid if you
change any schema object that the dimension is referencing. For example, if the
table on which the dimension is defined is altered, the dimension becomes invalid.
To check the status of a dimension, view the contents of the column invalid in the
ALL_DIMENSIONS data dictionary view.
To revalidate the dimension, use the COMPILE option as follows:
ALTER DIMENSION times_dim COMPILE;
Deleting Dimensions
A dimension is removed using the DROP DIMENSION statement. For example:
DROP DIMENSION times_dim;
This section deals with the tasks for managing a data warehouse.
It contains the following chapters:
■ Overview of Extraction, Transformation, and Loading
■ Extraction in Data Warehouses
■ Transportation in Data Warehouses
■ Loading and Transformation
■ Maintaining the Data Warehouse
■ Change Data Capture
■ Summary Advisor
Overview of Extraction, Transformation, and
Loading
Overview of ETL
You need to load your data warehouse regularly so that it can serve its purpose of
facilitating business analysis. To do this, data from one or more operational systems
needs to be extracted and copied into the warehouse. The process of extracting data
from source systems and bringing it into the data warehouse is commonly called
ETL, which stands for extraction, transformation, and loading. The acronym ETL is
perhaps too simplistic, because it omits the transportation phase and implies that
each of the other phases of the process is distinct. We refer to the entire process,
including data loading, as ETL. You should understand that ETL refers to a broad
process, and not three well-defined steps.
The methodology and tasks of ETL have been well known for many years, and are
not necessarily unique to data warehouse environments: a wide variety of
proprietary applications and database systems are the IT backbone of any
enterprise. Data has to be shared between applications or systems in an effort to
integrate them, giving at least two applications the same picture of the world. This
data sharing was mostly addressed by mechanisms similar to what we now call
ETL.
Data warehouse environments face the same challenge, with the additional burden
that they must not only exchange but also integrate, rearrange, and consolidate data
from many systems, thereby providing a new unified information base for business
intelligence. Additionally, the data volume in data warehouse environments tends
to be very large.
What happens during the ETL process? During extraction, the desired data is
identified and extracted from many different sources, including database systems
and applications. Very often, it is not possible to identify the specific subset of
interest, so more data than necessary has to be extracted and the identification
of the relevant data is done at a later point in time. Depending on the source
system's capabilities (for example, operating system resources), some
transformations may take place during this extraction process. The size of the
extracted data varies from hundreds of kilobytes up to gigabytes, depending on the
source system and the business situation. The same is true for the time delta
between two (logically) identical extractions: the time span may vary from days or
hours down to minutes or near real time. Web server log files, for example, can
easily become hundreds of megabytes in a very short period of time.
ETL Tools
Designing and maintaining the ETL process is often considered one of the most
difficult and resource-intensive portions of a data warehouse project. Many data
warehousing projects use ETL tools to manage this process. Oracle Warehouse
Builder (OWB), for example, provides ETL capabilities and takes advantage of
inherent database abilities. Other data warehouse builders create their own ETL
tools and processes, either inside or outside the database.
Besides the support of extraction, transformation, and loading, there are some other
tasks that are important for a successful ETL implementation as part of the daily
operations of the data warehouse and its support for further enhancements. Besides
the support for designing a data warehouse and the data flow, these tasks are
typically addressed by ETL tools such as OWB.
Oracle9i is not an ETL tool and does not provide a complete solution for ETL.
However, Oracle9i does provide a rich set of capabilities that can be used by both
ETL tools and customized ETL solutions. Oracle9i offers techniques for transporting
data between Oracle databases, for transforming large volumes of data, and for
quickly loading new data into a data warehouse.
Daily Operations
The successive loads and transformations must be scheduled and processed in a
specific order. Depending on the success or failure of the operation or parts of it, the
result must be tracked and subsequent, alternative processes might be started. The
control of the progress as well as the definition of a business workflow of the
operations are typically addressed by ETL tools such as OWB.
This chapter discusses extraction, which is the process of taking data from an
operational system and moving it to your warehouse or staging system. The chapter
discusses:
■ Overview of Extraction in Data Warehouses
■ Understanding Extraction Methods in Data Warehouses
■ Data Warehousing Extraction Examples
Full Extraction
The data is extracted completely from the source system. Since this extraction
reflects all the data currently available on the source system, there’s no need to keep
track of changes to the data source since the last successful extraction. The source
data will be provided as-is and no additional logical information (for example,
timestamps) is necessary on the source site. An example for a full extraction may be
an export file of a distinct table or a remote SQL statement scanning the complete
source table.
Incremental Extraction
At a specific point in time, only the data that has changed since a well-defined event
back in history will be extracted. This event may be the last time of extraction or a
more complex business event such as the last booking day of a fiscal period. To identify
this delta change, it must be possible to identify all the information that has changed
since this specific time event. This information can be provided either by the source
data itself, such as an application column reflecting the last-changed timestamp, or by a
change table in which an appropriate additional mechanism keeps track of the
changes alongside the originating transactions. In most cases, using the latter method
means adding extraction logic to the source system.
Many data warehouses do not use any change-capture techniques as part of the
extraction process. Instead, entire tables from the source systems are extracted to the
data warehouse or staging area, and these tables are compared with a previous
extract from the source system to identify the changed data. This approach may not
have significant impact on the source systems, but it clearly can place a considerable
burden on the data warehouse processes, particularly if the data volumes are large.
Oracle’s Change Data Capture mechanism can extract and maintain such delta
information.
See Also: Chapter 15, "Change Data Capture" for further details
about the Change Data Capture framework
Online Extraction
The data is extracted directly from the source system itself. The extraction process
can connect directly to the source system to access the source tables themselves or to
an intermediate system that stores the data in a preconfigured manner (for example,
snapshot logs or change tables). Note that the intermediate system is not necessarily
physically different from the source system.
With online extractions, you need to consider whether the distributed transactions
are using original source objects or prepared source objects.
Offline Extraction
The data is not extracted directly from the source system but is staged explicitly
outside the original source system. The data already has an existing structure (for
example, redo logs, archive logs or transportable tablespaces) or was created by an
extraction routine.
Because change data capture is often desirable as part of the extraction process and
it might not be possible to use Oracle’s change data capture mechanism, this section
describes several techniques for implementing a self-developed change capture on
Oracle source systems:
■ Timestamps
■ Partitioning
■ Triggers
These techniques are based upon the characteristics of the source systems, or may
require modifications to the source systems. Thus, each of these techniques must be
carefully evaluated by the owners of the source system prior to implementation.
Each of these techniques can work in conjunction with the data extraction technique
discussed above. For example, timestamps can be used whether the data is being
unloaded to a file or accessed through a distributed query.
See Also: Chapter 15, "Change Data Capture" for further details
about the Change Data Capture framework
Timestamps
The tables in some operational systems have timestamp columns. The timestamp
specifies the time and date that a given row was last modified. If the tables in an
operational system have columns containing timestamps, then the latest data can
easily be identified using the timestamp columns. For example, the following query
might be useful for extracting today's data from an orders table:
SELECT * FROM orders
WHERE TRUNC(CAST(order_date AS DATE), 'dd') = TRUNC(SYSDATE, 'dd');
Partitioning
Some source systems might use Oracle range partitioning, such that the source
tables are partitioned along a date key, which allows for easy identification of new
data. For example, if you are extracting from an orders table, and the orders
table is partitioned by week, then it is easy to identify the current week's data.
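For example, assuming weekly partitions with names such as orders_week_05 (a hypothetical naming scheme), the current week's data can be read directly from its partition:
SELECT * FROM orders PARTITION (orders_week_05);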
Triggers
Triggers can be created in operational systems to keep track of recently updated
records. They can then be used in conjunction with timestamp columns to identify
the exact time and date when a given row was last modified. You do this by creating
a trigger on each source table that requires change data capture. Following each
DML statement that is executed on the source table, this trigger updates the
timestamp column with the current time. Thus, the timestamp column provides the
exact time and date when a given row was last modified.
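A minimal sketch of such a trigger, assuming the source table orders has a timestamp column named last_modified (both names are illustrative), is:
CREATE OR REPLACE TRIGGER orders_set_timestamp
BEFORE INSERT OR UPDATE ON orders
FOR EACH ROW
BEGIN
-- record the time of the change in the row itself
:NEW.last_modified := SYSDATE;
END;
/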
A similar internalized trigger-based technique is used for Oracle materialized view
logs. These logs are used by materialized views to identify changed data, and these
logs are accessible to end users. A materialized view log can be created on each
source table requiring change data capture. Then, whenever any modifications are
made to the source table, a record is inserted into the materialized view log
indicating which rows were modified. If you want to use a trigger-based
mechanism, use change data capture.
Materialized view logs rely on triggers, but they provide an advantage in that the
creation and maintenance of this change-data system is largely managed by Oracle.
However, Oracle recommends the use of synchronous Oracle Change Data
Capture for trigger-based change capture, because Change Data Capture provides an
externalized interface for accessing the change information and provides a framework
for maintaining the distribution of this information to various clients.
Trigger-based techniques affect performance on the source systems, and this impact
should be carefully considered prior to implementation on a production source
system.
The exact format of the output file can be specified using SQL*Plus system
variables.
This extraction technique offers the advantage of being able to extract the output of
any SQL statement. The example above extracts the results of a join.
This extraction technique can be parallelized by initiating multiple, concurrent
SQL*Plus sessions, each session running a separate query representing a different
portion of the data to be extracted. For example, suppose that you wish to extract
data from an orders table, and that the orders table has been range partitioned
by month, with partitions orders_jan1998, orders_feb1998, and so on. To
extract a single year of data from the orders table, you could initiate 12 concurrent
SQL*Plus sessions, each extracting a single partition. The SQL script for one such
session could be:
SPOOL order_jan.dat
SELECT * FROM orders PARTITION (orders_jan1998);
SPOOL OFF
The physical method is based on a range of values. By viewing the data dictionary,
it is possible to identify the Oracle data blocks that make up the orders table.
Using this information, you could then derive a set of rowid-range queries for
extracting data from the orders table:
SELECT * FROM orders WHERE rowid BETWEEN <value1> and <value2>;
Note: All parallel techniques can use considerably more CPU and
I/O resources on the source system, and the impact on the source
system should be evaluated before parallelizing any extraction
technique.
Oracle provides a direct-path export, which is quite efficient for extracting data.
However, in Oracle8i, there is no direct-path import, which should be considered
when evaluating the overall performance of an export-based extraction strategy.
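A distributed query for such an extraction might look like the following sketch (the database link name source_db and the column names are assumptions):
CREATE TABLE country_city AS
SELECT DISTINCT t1.country_name, t2.cust_city
FROM countries@source_db t1, customers@source_db t2
WHERE t1.country_id = t2.country_id;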
This statement creates a local table in a data mart, country_city, and populates it
with data from the countries and customers tables on the source system.
This technique is ideal for moving small volumes of data. However, the data is
transported from the source system to the data warehouse through a single Oracle
Net connection. Thus, the scalability of this technique is limited. For larger data
volumes, file-based data extraction and transportation techniques are often more
scalable and thus more appropriate.
The following topics provide information about transporting data into a data
warehouse:
■ Overview of Transportation in Data Warehouses
■ Understanding Transportation Mechanisms in Data Warehouses
Step 1: Place the Data to be Transported into its own Tablespace The current month's data
must be placed into a separate tablespace in order to be transported. In this
example, you have a tablespace ts_temp_sales, which will hold a copy of the
current month's data. Using the CREATE TABLE ... AS SELECT statement, the
current month's data can be efficiently copied to this tablespace:
CREATE TABLE temp_sales_jan
NOLOGGING
TABLESPACE ts_temp_sales
AS
SELECT * FROM sales
WHERE time_id BETWEEN '31-DEC-1999' AND '01-FEB-2000';
See Also: Oracle9i Supplied PL/SQL Packages and Types Reference for
detailed information about the DBMS_TTS package
In this step, we have copied the January sales data into a separate tablespace;
however, in some cases, it may be possible to leverage the transportable tablespace
feature without even moving data to a separate tablespace. If the sales table has
been partitioned by month in the data warehouse and if each partition is in its own
tablespace, then it may be possible to directly transport the tablespace containing
the January data. Suppose the January partition, sales_jan2000, is located in the
tablespace ts_sales_jan2000. Then the tablespace ts_sales_jan2000 could
potentially be transported, rather than creating a temporary copy of the January
sales data in the ts_temp_sales.
However, the same conditions must be satisfied in order to transport the tablespace
ts_sales_jan2000 as are required for the specially created tablespace. First, this
tablespace must be set to READ ONLY. Second, because a single partition of a
partitioned table cannot be transported without the remainder of the partitioned
table also being transported, it is necessary to exchange the January partition into a
separate table (using the ALTER TABLE statement) to transport the January data.
The EXCHANGE operation is very quick, but the January data will no longer be a
part of the underlying sales table, and thus may be unavailable to users until this
data is exchanged back into the sales table after the export of the metadata. The
January data can be exchanged back into the sales table after you complete step 3.
Step 2: Export the Metadata The Export utility is used to export the metadata
describing the objects contained in the transported tablespace. For our example
scenario, the Export command could be:
EXP TRANSPORT_TABLESPACE=y
TABLESPACES=ts_temp_sales
FILE=jan_sales.dmp
This operation will generate an export file, jan_sales.dmp. The export file will be
small, because it contains only metadata. In this case, the export file will contain
information describing the table temp_sales_jan, such as the column names,
column datatype, and all other information that the target Oracle database will
need in order to access the objects in ts_temp_sales.
Step 3: Copy the Datafiles and Export File to the Target System Copy the data files that
make up ts_temp_sales, as well as the export file jan_sales.dmp to the data
mart platform, using any transportation mechanism for flat files.
Once the datafiles have been copied, the tablespace ts_temp_sales can be set to
READ WRITE mode if desired.
Step 4: Import the Metadata Once the files have been copied to the data mart, the
metadata should be imported into the data mart:
IMP TRANSPORT_TABLESPACE=y DATAFILES='/db/tempjan.f'
TABLESPACES=ts_temp_sales
FILE=jan_sales.dmp
At this point, the tablespace ts_temp_sales and the table temp_sales_jan are
accessible in the data mart. You can incorporate this new data into the data mart's
tables.
You can insert the data from the temp_sales_jan table into the data mart's sales
table in one of two ways:
INSERT /*+ APPEND */ INTO sales SELECT * FROM temp_sales_jan;
Following this operation, you can delete the temp_sales_jan table (and even the
entire ts_temp_sales tablespace).
Alternatively, if the data mart's sales table is partitioned by month, then the new
transported tablespace and the temp_sales_jan table can become a permanent
part of the data mart. The temp_sales_jan table can become a partition of the
data mart's sales table:
ALTER TABLE sales ADD PARTITION sales_00jan VALUES
LESS THAN (TO_DATE('01-feb-2000','dd-mon-yyyy'));
ALTER TABLE sales EXCHANGE PARTITION sales_00jan
WITH TABLE temp_sales_jan
INCLUDING INDEXES WITH VALIDATION;
This chapter helps you create and manage a data warehouse, and discusses:
■ Overview of Loading and Transformation in Data Warehouses
■ Loading Mechanisms
■ Transformation Mechanisms
■ Loading and Transformation Scenarios
Transformation Flow
From an architectural perspective, you can transform your data in two ways:
■ Multistage Data Transformation
■ Pipelined Data Transformation
environments. The ETL process flow can be changed dramatically and the database
becomes an integral part of the ETL solution.
The new functionality renders some of the formerly necessary process steps obsolete,
while others can be remodeled to make the data flow and the data
transformation more scalable and non-interruptive. The task shifts from a
serial transform-then-load process (with most of the tasks done outside the
database) or a load-then-transform process to an enhanced transform-while-loading process.
Oracle9i offers a wide variety of new capabilities to address all the issues and tasks
relevant in an ETL scenario. It is important to understand that the database offers
toolkit functionality rather than trying to address a one-size-fits-all solution. The
underlying database has to enable the most appropriate ETL process flow for a
specific customer need, and not dictate or constrain it from a technical perspective.
Figure 13–2 illustrates the new functionality, which is discussed throughout later
sections.
(Figure 13–2 shows flat files feeding the sales warehouse table directly, with the transformation pipelined into the insert.)
Loading Mechanisms
You can use the following mechanisms for loading a warehouse:
■ SQL*Loader
■ External Tables
■ OCI and Direct-path APIs
■ Export/Import
SQL*Loader
Before any data transformations can occur within the database, the raw data must
become accessible to the database. One approach is to load it into the database.
Chapter 12, "Transportation in Data Warehouses", discusses several techniques for
transporting data to an Oracle data warehouse. Perhaps the most common
technique for transporting data is by way of flat files.
SQL*Loader is used to move data from flat files into an Oracle data warehouse.
During this data load, SQL*Loader can also be used to implement basic data
transformations. When using direct-path SQL*Loader, basic data manipulation,
such as datatype conversion and simple NULL handling, can be automatically
resolved during the data load. Most data warehouses use direct-path loading for
performance reasons.
Oracle's conventional-path loader provides broader capabilities for data
transformation than a direct-path loader: SQL functions can be applied to any
column as those values are being loaded. This provides a rich capability for
transformations during the data load. However, the conventional-path loader is
slower than direct-path loader. For these reasons, the conventional-path loader
should be considered primarily for loading and transforming smaller amounts of
data.
The following is a simple example of a SQL*Loader control file to load data into the
sales fact table of the Sales History schema from an external file sh_
sales.dat. The external flat file sh_sales.dat consists of sales transaction data,
aggregated on a daily level. Not all columns of this external file are loaded into the
sales table. This external file will also be used as a source for loading the second fact
table, costs, of the Sales History schema, which is done using an external table:
The following shows the control file (sh_sales.ctl) used to load the sales table:
LOAD DATA
INFILE sh_sales.dat
APPEND INTO TABLE sales
FIELDS TERMINATED BY "|"
( PROD_ID, CUST_ID, TIME_ID, CHANNEL_ID, PROMO_ID,
QUANTITY_SOLD, AMOUNT_SOLD)
External Tables
Another approach for handling external data sources is using external tables.
Oracle9i's external table feature enables you to use external data as a virtual table
that can be queried and joined directly and in parallel without requiring the
external data to be first loaded in the database. You can then use SQL, PL/SQL, and
Java to access the external data.
External tables enable the pipelining of the loading phase with the transformation
phase. The transformation process can be merged with the loading process without
any interruption of the data streaming. It is no longer necessary to stage the data
inside the database for further processing inside the database, such as comparison
or transformation. For example, the conversion functionality of a conventional load
can be used for a direct-path INSERT AS SELECT statement in conjunction with the
SELECT from an external table.
The main difference between external tables and regular tables is that externally
organized tables are read-only. No DML operations (UPDATE/INSERT/DELETE)
are possible and no indexes can be created on them.
Oracle9i’s external tables are a complement to the existing SQL*Loader
functionality, and are especially useful for environments where the complete
external source has to be joined with existing database objects and transformed in a
complex manner, or where the external data volume is large and used only once.
SQL*Loader, on the other hand, might still be the better choice for loading of data
where additional indexing of the staging table is necessary. This is true for
operations where the data is used in independent complex transformations or the
data is only partially used in further processing.
We cannot load the data into the costs fact table without applying the previously
mentioned aggregation of the detailed information, because some of the dimensions
are suppressed.
Oracle’s external table framework offers a solution to this problem. Unlike
SQL*Loader, where we would have to load the data before applying the
aggregation, we can combine the loading and transformation within a single SQL
DML statement, as shown below. We do not have to stage the data temporarily
before inserting into the target table.
The Oracle directory objects must already exist, and point to the directory
containing the sh_sales.dat file as well as the directory containing the bad and
log files.
CREATE TABLE sales_transactions_ext
(
PROD_ID NUMBER(6),
CUST_ID NUMBER,
TIME_ID DATE,
CHANNEL_ID CHAR(1),
PROMO_ID NUMBER(6),
QUANTITY_SOLD NUMBER(3),
AMOUNT_SOLD NUMBER(10,2),
UNIT_COST NUMBER(10,2),
UNIT_PRICE NUMBER(10,2)
)
ORGANIZATION external
(
TYPE oracle_loader
DEFAULT DIRECTORY data_file_dir
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
BADFILE log_file_dir:'sh_sales.bad_xt'
LOGFILE log_file_dir:'sh_sales.log_xt'
FIELDS TERMINATED BY "|" LDRTRIM
)
location
(
'sh_sales.dat'
)
)REJECT LIMIT UNLIMITED;
The external table can now be used from within the database, accessing some
columns of the external data only, grouping the data, and inserting it into the
costs fact table:
INSERT /*+ APPEND */ INTO COSTS
(
TIME_ID,
PROD_ID,
UNIT_COST,
UNIT_PRICE
)
SELECT
TIME_ID,
PROD_ID,
SUM(UNIT_COST),
SUM(UNIT_PRICE)
FROM sales_transactions_ext
GROUP BY time_id, prod_id;
Export/Import
Export and import are used when the data is inserted as is into the target system.
No large volumes of data should be handled and no complex extractions are
possible.
Transformation Mechanisms
You have the following choices for transforming data inside the database:
■ Transformation Using SQL
■ Transformation Using PL/SQL
■ Transformation Using Table Functions
Merge Examples
The following discusses various implementations of a merge. The examples assume
that new data for the dimension table products is propagated to the data warehouse
and has to be either inserted or updated. The table products_delta has the same
structure as products.
The advantage of this approach is its simplicity and lack of new language
extensions. The disadvantage is its performance. It requires an extra scan and a join
of both the products_delta and the products tables.
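Without the MERGE extension, the approach described above could be implemented roughly as a correlated UPDATE followed by an INSERT of the rows that are not yet present (a sketch only; only two columns are updated here for brevity):
UPDATE products p
SET (prod_list_price, prod_min_price) =
(SELECT d.prod_list_price, d.prod_min_price
FROM products_delta d
WHERE d.prod_id = p.prod_id)
WHERE EXISTS (SELECT 1 FROM products_delta d WHERE d.prod_id = p.prod_id);
INSERT INTO products
SELECT * FROM products_delta d
WHERE NOT EXISTS (SELECT 1 FROM products p WHERE p.prod_id = d.prod_id);
The PL/SQL fragment that follows is part of yet another implementation, which loops over the delta rows with a cursor, updates each matching product, and inserts the row if no match is found.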
prod_list_price = crec.prod_list_price,
prod_min_price = crec.prod_min_price
WHERE crec.prod_id = prod_id;
IF SQL%notfound THEN
INSERT INTO products
(prod_id, prod_name, prod_desc, prod_subcategory,
prod_subcat_desc, prod_category,
prod_cat_desc, prod_status, prod_list_price, prod_min_price)
VALUES
(crec.prod_id, crec.prod_name, crec.prod_desc, crec.prod_subcategory,
crec.prod_subcat_desc, crec.prod_category,
crec.prod_cat_desc, crec.prod_status, crec.prod_list_price, crec.prod_min_
price);
END IF;
END LOOP;
CLOSE cur;
END merge_proc;
/
Figure 13–3 illustrates a typical aggregation where you input a set of rows and
output a set of rows, in this case after performing a SUM operation:
In (Region, Sales): North 10, South 20, North 25, East 5, West 10, South 10, ...
Out (Region, Sum of Sales): North 35, South 30, West 10, East 5
(The rows pass through a table function that performs the SUM.)
The table function takes the result of the SELECT on In as input and delivers a set
of records in a different format as output for a direct insertion into Out.
Additionally, a table function can fan out data within the scope of an atomic
transaction. This can be used in many situations, such as an efficient logging
mechanism or a fan-out to other independent transformations. In such a scenario, a
single staging table will be needed.
(Figure: data flows from Source through table functions tf1 and tf2 into Target; tf1 also fans out into Stage Table 1, which is processed by tf3.)
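The first statement of such a fan-out might look like the following sketch (function and table names as in the figure; this is not the exact statement from the original example):
INSERT INTO target
SELECT * FROM TABLE(tf2(CURSOR(SELECT * FROM TABLE(tf1(CURSOR(SELECT * FROM source))))));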
This will insert into target and, as part of tf1, into Stage Table 1 within the
scope of an atomic transaction.
INSERT INTO target SELECT * FROM TABLE(tf3(CURSOR(SELECT * FROM stage_table1)));
prod_list_price NUMBER(8,2),
prod_min_price NUMBER(8,2)
);
/
CREATE TYPE product_t_table AS TABLE OF product_t;
/
COMMIT;
prod_desc VARCHAR2(4000);
prod_subcategory VARCHAR2(50);
prod_subcat_desc VARCHAR2(2000);
prod_category VARCHAR2(50);
prod_cat_desc VARCHAR2(2000);
prod_weight_class NUMBER(2);
prod_unit_of_measure VARCHAR2(20);
prod_pack_size VARCHAR2(30);
supplier_id NUMBER(6);
prod_status VARCHAR2(20);
prod_list_price NUMBER(8,2);
prod_min_price NUMBER(8,2);
sales NUMBER:=0;
objset product_t_table := product_t_table();
i NUMBER := 0;
BEGIN
LOOP
-- Fetch from cursor variable
FETCH cur INTO prod_id, prod_name, prod_desc, prod_subcategory,
prod_subcat_desc, prod_category, prod_cat_desc, prod_weight_class,
prod_unit_of_measure, prod_pack_size, supplier_id, prod_status,
prod_list_price, prod_min_price;
EXIT WHEN cur%NOTFOUND; -- exit when last row is fetched
IF prod_status='obsolete' AND prod_category != 'Boys' THEN
-- append to collection
i:=i+1;
objset.extend;
objset(i) := product_t(prod_id, prod_name, prod_desc, prod_subcategory,
prod_subcat_desc, prod_category, prod_cat_desc, prod_weight_class,
prod_unit_of_measure, prod_pack_size, supplier_id, prod_status,
prod_list_price, prod_min_price);
END IF;
END LOOP;
CLOSE cur;
RETURN objset;
END;
/
You can use the table function in a SQL statement to show the results. Here we use
additional SQL functionality for the output.
SELECT DISTINCT UPPER(prod_category), prod_status
FROM TABLE(obsolete_products(CURSOR(SELECT * FROM products)));
UPPER(PROD_CATEGORY) PROD_STATUS
-------------------- -----------
GIRLS obsolete
MEN obsolete
2 rows selected.
The following example implements the same filtering as the first one. The main
differences between the two are:
■ This example uses a strongly typed REF cursor as input and can be parallelized
based on the objects of the strongly typed cursor, as shown in one of the following
examples.
■ The table function returns the result set incrementally as soon as records are
created.
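The examples that follow reference a REF CURSOR type declared in a helper
package. A minimal sketch of such a package is shown here for orientation; the
exact declaration used by the examples may differ, and the %ROWTYPE shape is an
assumption based on the columns fetched in the examples:
CREATE OR REPLACE PACKAGE cursor_pkg AS
  -- Strongly typed REF CURSOR: the return row shape is fixed at compile time,
  -- which is what allows PARALLEL_ENABLE (PARTITION cur BY ANY) below.
  TYPE strong_refcur_t IS REF CURSOR RETURN products%ROWTYPE;
  -- Weakly typed REF CURSOR, usable where no parallelization is needed.
  TYPE refcur_t IS REF CURSOR;
END cursor_pkg;
/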
REM Same example, pipelined implementation
REM strong ref cursor (input type is defined)
REM a table function without a strongly typed input ref cursor cannot be parallelized
REM
CREATE OR REPLACE FUNCTION obsolete_products_pipe(cur cursor_pkg.strong_refcur_t)
RETURN product_t_table
PIPELINED
PARALLEL_ENABLE (PARTITION cur BY ANY) IS
prod_id NUMBER(6);
prod_name VARCHAR2(50);
prod_desc VARCHAR2(4000);
prod_subcategory VARCHAR2(50);
prod_subcat_desc VARCHAR2(2000);
prod_category VARCHAR2(50);
prod_cat_desc VARCHAR2(2000);
prod_weight_class NUMBER(2);
prod_unit_of_measure VARCHAR2(20);
prod_pack_size VARCHAR2(30);
supplier_id NUMBER(6);
prod_status VARCHAR2(20);
prod_list_price NUMBER(8,2);
prod_min_price NUMBER(8,2);
sales NUMBER:=0;
BEGIN
LOOP
-- Fetch from cursor variable
FETCH cur INTO prod_id, prod_name, prod_desc, prod_subcategory,
prod_subcat_desc, prod_category, prod_cat_desc, prod_weight_class,
prod_unit_of_measure, prod_pack_size, supplier_id, prod_status,
prod_list_price, prod_min_price;
EXIT WHEN cur%NOTFOUND; -- exit when last row is fetched
IF prod_status='obsolete' AND prod_category !='Boys' THEN
PIPE ROW (product_t(prod_id, prod_name, prod_desc, prod_subcategory,
prod_subcat_desc, prod_category, prod_cat_desc, prod_weight_class,
prod_unit_of_measure, prod_pack_size, supplier_id, prod_status,
prod_list_price, prod_min_price));
END IF;
END LOOP;
CLOSE cur;
RETURN;
END;
/
Querying the pipelined table function, with a DECODE applied to prod_status,
returns output resembling the following:
PROD_CATEGORY DECODE(PROD_STATUS,
------------- -------------------
Girls NO LONGER AVAILABLE
Men NO LONGER AVAILABLE
2 rows selected.
We now change the degree of parallelism for the input table products and issue the
same statement again:
ALTER TABLE products PARALLEL 4;
The session statistics show that the statement has been parallelized:
SELECT * FROM V$PQ_SESSTAT WHERE statistic='Queries Parallelized';
1 row selected.
Table functions are also capable of fanning out results into persistent table
structures, as demonstrated in the next example. The function filters and returns all
obsolete products except those of a specific prod_category (the default is Men),
which was set to status obsolete in error. The prod_id values detected as incorrect
are stored in a separate table structure. The function's result set consists of all other
obsolete product categories. The example also demonstrates how normal variables
can be used in conjunction with table functions:
CREATE OR REPLACE FUNCTION obsolete_products_dml(cur cursor_pkg.strong_refcur_t,
prod_cat VARCHAR2 DEFAULT 'Men') RETURN product_t_table
PIPELINED
PARALLEL_ENABLE (PARTITION cur BY ANY) IS
PRAGMA AUTONOMOUS_TRANSACTION;
prod_id NUMBER(6);
prod_name VARCHAR2(50);
prod_desc VARCHAR2(4000);
prod_subcategory VARCHAR2(50);
prod_subcat_desc VARCHAR2(2000);
prod_category VARCHAR2(50);
prod_cat_desc VARCHAR2(2000);
prod_weight_class NUMBER(2);
prod_unit_of_measure VARCHAR2(20);
prod_pack_size VARCHAR2(30);
supplier_id NUMBER(6);
prod_status VARCHAR2(20);
prod_list_price NUMBER(8,2);
prod_min_price NUMBER(8,2);
sales NUMBER:=0;
BEGIN
LOOP
-- Fetch from cursor variable
FETCH cur INTO prod_id, prod_name, prod_desc, prod_subcategory,
prod_subcat_desc, prod_category, prod_cat_desc, prod_weight_class,
prod_unit_of_measure, prod_pack_size, supplier_id, prod_status,
prod_list_price, prod_min_price;
The following query shows all obsolete product groups except the prod_category
Men, which was wrongly set to status obsolete.
SELECT DISTINCT prod_category, prod_status
FROM TABLE(obsolete_products_dml(CURSOR(SELECT * FROM products)));
PROD_CATEGORY PROD_STATUS
------------- -----------
Boys obsolete
Girls obsolete
2 rows selected.
As you can see, there are some products of the prod_category Men that were
obsoleted by accident:
SELECT DISTINCT msg FROM obsolete_products_errors;
MSG
----------------------------------------
correction: category MEN still available
1 row selected.
Taking advantage of the second input variable changes the result set as follows:
SELECT DISTINCT prod_category, prod_status
FROM TABLE(obsolete_products_dml(CURSOR(SELECT * FROM products), 'Boys'));
PROD_CATEGORY PROD_STATUS
------------- -----------
Girls obsolete
Men obsolete
2 rows selected.
MSG
-----------------------------------------
correction: category BOYS still available
1 row selected.
Since table functions can be used like a normal table, they can be nested, as shown
in the next example:
SELECT DISTINCT prod_category, prod_status
FROM TABLE(obsolete_products_dml(CURSOR(SELECT *
FROM TABLE(obsolete_products_pipe(CURSOR(SELECT * FROM products))))));
PROD_CATEGORY PROD_STATUS
------------- -----------
Girls obsolete
1 row selected.
The biggest advantage of Oracle9i ETL is its toolkit functionality: you can combine
any of the previously discussed capabilities to improve and speed up your ETL
processing. For example, you can take an external table as input, join it with an
existing table, and use the result as input to a parallelized table function that
processes complex business logic. That table function can in turn be used as the
input source for a MERGE operation, streaming new information provided in a flat
file through the complete ETL process into the data warehouse within one single
statement.
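As an illustration of such a pipeline, the sketch below merges the result of a table
function, fed from the external table sales_transactions_ext joined to products, into
the sales table. The table function sales_transform and the sales column names
used here are assumptions made for the sketch only:
MERGE INTO sales s
USING (SELECT *
       FROM TABLE(sales_transform(              -- hypothetical table function
              CURSOR(SELECT t.*
                     FROM sales_transactions_ext t, products p
                     WHERE t.prod_id = p.prod_id)))) n
ON (s.sales_transaction_id = n.sales_transaction_id)
WHEN MATCHED THEN UPDATE SET
  amount_sold = n.amount_sold
WHEN NOT MATCHED THEN INSERT
  (sales_transaction_id, prod_id, time_id, amount_sold)
  VALUES (n.sales_transaction_id, n.prod_id, n.time_id, n.amount_sold);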
■ Each disk has been further subdivided using an operating system utility into 4
operating system files with names like /dev/D1.1, /dev/D1.2, ... ,
/dev/D30.4.
■ Four tablespaces are allocated on each group of 10 disks. To better balance I/O
and parallelize table space creation (because Oracle writes each block in a
datafile when it is added to a tablespace), it is best if each of the four tablespaces
on each group of 10 disks has its first datafile on a different disk. Thus the first
tablespace has /dev/D1.1 as its first datafile, the second tablespace has
/dev/D4.2 as its first datafile, and so on, as illustrated in Figure 13–5.
Figure 13–5 Datafile Layout for Parallel Load Example
TSfacts1 /dev/D1.1 /dev/D2.1 ... /dev/D10.1
TSfacts2 /dev/D1.2 /dev/D2.2 ... /dev/D10.2
TSfacts3 /dev/D1.3 /dev/D2.3 ... /dev/D10.3
TSfacts4 /dev/D1.4 /dev/D2.4 ... /dev/D10.4
TSfacts8 /dev/D11.4 /dev/D12.4 ... /dev/D20.4
Extent sizes in the STORAGE clause should be multiples of the multiblock read size,
where multiblock read size = DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT.
INITIAL and NEXT should normally be set to the same value. In the case of parallel
load, make the extent size large enough to keep the number of extents reasonable,
and to avoid excessive overhead and serialization due to bottlenecks in the data
dictionary. When PARALLEL=TRUE is used for parallel loader, the INITIAL extent
is not used. In this case you can override the INITIAL extent size specified in the
tablespace default storage clause with the value specified in the loader control file,
for example, 64KB.
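For instance, one of the tablespaces shown in Figure 13–5 might be created with
equal-sized extents along the following lines (a sketch only; the datafile and extent
sizes are illustrative):
CREATE TABLESPACE TSfacts1
  DATAFILE '/dev/D1.1' SIZE 1024M REUSE
  DEFAULT STORAGE (INITIAL 100M NEXT 100M PCTINCREASE 0 MAXEXTENTS UNLIMITED);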
Tables or indexes can have an unlimited number of extents, provided you have set
the COMPATIBLE initialization parameter to match the current release number and
use MAXEXTENTS UNLIMITED on the CREATE or ALTER statement for the
tablespace or object. In practice, however, a limit of 10,000 extents per object is
reasonable. If a table or index has an unlimited number of extents, set
PCTINCREASE to zero so that all extents are of equal size.
In the example, the keyword PARALLEL=TRUE is not set. A separate control file per
partition is necessary because the control file must specify the partition into which
the loading should be done. It contains a statement such as:
LOAD INTO facts partition(jan95)
The advantage of this approach is that local indexes are maintained by SQL*Loader.
You still get parallel loading, but on a partition level—without the restrictions of the
PARALLEL keyword.
A disadvantage is that you must manually partition the input data prior to loading.
Oracle partitions the input data so that it goes into the correct partitions. In this case
all the loader sessions can share the same control file, so there is no need to mention
it in the statement.
The keyword PARALLEL=TRUE must be used, because each of the seven loader
sessions can write into every partition. In Case 1, every loader session would write
into only one partition, because the data was partitioned prior to loading. Hence all
the PARALLEL keyword restrictions are in effect.
In this case, Oracle attempts to spread the data evenly across all the files in each of
the 12 tablespaces—however an even spread of data is not guaranteed. Moreover,
there could be I/O contention during the load when the loader processes are
attempting to write to the same device simultaneously.
For Oracle Real Application Clusters, divide the loader sessions evenly among the
nodes. The datafile being read should always reside on the same node as the loader
session.
The keyword PARALLEL=TRUE must be used, because multiple loader sessions can
write into the same partition. Hence all the restrictions entailed by the PARALLEL
keyword are in effect. An advantage of this approach, however, is that it guarantees
that all of the data is precisely balanced, exactly reflecting your partitioning.
The advantage of this approach is that as in Case 3, you have control over the exact
placement of datafiles because you use the FILE keyword. However, you are not
required to partition the input data by value because Oracle does that for you.
A disadvantage is that this approach requires all the partitions to be in the same
tablespace, which reduces availability.
In the above example, stage_dir is a directory where the external flat files reside.
Note that loading data in parallel can also be performed in Oracle9i by using
SQL*Loader. However, external tables are probably easier to use, and the parallel
load is automatically coordinated. Unlike SQL*Loader, external tables also provide
dynamic load balancing between parallel execution servers because they support
intra-file parallelism, which means you do not have to manually split input files
before starting the parallel load; this is accomplished dynamically.
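An external table definition of the kind referred to above might look like the
following sketch; the directory path, file names, and column list are illustrative, with
the columns taken from the earlier query against sales_transactions_ext:
CREATE OR REPLACE DIRECTORY stage_dir AS '/staging/data';

CREATE TABLE sales_transactions_ext (
  time_id     DATE,
  prod_id     NUMBER,
  unit_price  NUMBER(10,2)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY stage_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '|'
  )
  LOCATION ('sales_1.dat', 'sales_2.dat')
)
PARALLEL
REJECT LIMIT UNLIMITED;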
This CTAS statement will convert each valid UPC code to a valid product_id
value. If the ETL process has guaranteed that each UPC code is valid, then this
statement alone may be sufficient to implement the entire transformation.
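Such an outer-join CTAS might look like the following sketch. The staging table
temp_sales_step1 is the one referred to later in this section; the temp_sales_step2
name, the upc_code join columns, and the selected column list are assumptions
made for the sketch:
CREATE TABLE temp_sales_step2 NOLOGGING PARALLEL AS
SELECT s.sales_transaction_id,
       p.product_id sales_product_id,
       s.sales_customer_id,
       s.sales_time_id,
       s.sales_quantity_sold,
       s.sales_dollar_amount
FROM   temp_sales_step1 s, product p
WHERE  s.upc_code = p.upc_code (+);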
Using this outer join, the sales transactions that originally contained invalidated
UPC codes will be assigned a product_id of NULL. These transactions can be
handled later.
Additional approaches to handling invalid UPC codes exist. Some data warehouses
may choose to insert null-valued product_id values into their sales table, while
other data warehouses may not allow any new data from the entire batch to be
inserted into the sales table until all invalid UPC codes have been addressed. The
correct approach is determined by the business requirements of the data warehouse.
Regardless of the specific requirements, exception handling can be addressed by the
same basic SQL techniques as transformations.
Pivoting Scenarios
A data warehouse can receive data from many different sources. Some of these
source systems may not be relational databases and may store data in very different
formats from the data warehouse. For example, suppose that you receive a set of
sales records from a nonrelational database having the form:
product_id, customer_id, weekly_start_date, sales_sun, sales_mon, sales_tue,
sales_wed, sales_thu, sales_fri, sales_sat
PRODUCT_ID CUSTOMER_ID WEEKLY_ST SALES_SUN SALES_MON SALES_TUE SALES_WED SALES_THU SALES_FRI SALES_SAT
---------- ----------- --------- --------- --------- --------- --------- --------- --------- ---------
       111         222 01-OCT-00       100       200       300       400       500       600       700
       222         333 08-OCT-00       200       300       400       500       600       700       800
       333         444 15-OCT-00       300       400       500       600       700       800       900
In your data warehouse, you would want to store the records in a more typical
relational form in a fact table sales of the Sales History schema:
prod_id, cust_id, time_id, amount_sold
Thus, you need to build a transformation such that each record in the input stream
is converted into seven records for the data warehouse's sales table. This
operation is commonly referred to as pivoting, and Oracle offers several ways to do
this.
The result of the above will resemble the following:
SELECT prod_id, cust_id, time_id, amount_sold FROM sales;
Pre-Oracle9i Pivoting
The pre-Oracle9i way of doing this involved using CTAS (or parallel INSERT AS
SELECT) or PL/SQL, as shown in Example 13–14 and Example 13–15.
Like all CTAS operations, this operation can be fully parallelized. However, the
CTAS approach also requires seven separate scans of the data, one for each day of
the week. Even with parallelism, the CTAS approach can be time-consuming.
This PL/SQL procedure can be modified to provide even better performance. Array
inserts can accelerate the insertion phase of the procedure. Further performance can
be gained by parallelizing this transformation operation, particularly if the temp_
sales_step1 table is partitioned, using techniques similar to the parallelization of
data unloading described in Chapter 11, "Extraction in Data Warehouses". The
primary advantage of this PL/SQL procedure over a CTAS approach is that it
requires only a single scan of the data.
Oracle9i Pivoting
Oracle9i offers a faster way of pivoting your data by using a multitable insert, as in
Example 13–16.
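A minimal sketch of such a multitable insert is shown here; the name of the staging
table (sales_input_table) is an assumption, and the column names follow the record
layout shown above:
INSERT ALL
  INTO sales (prod_id, cust_id, time_id, amount_sold)
    VALUES (product_id, customer_id, weekly_start_date,     sales_sun)
  INTO sales (prod_id, cust_id, time_id, amount_sold)
    VALUES (product_id, customer_id, weekly_start_date + 1, sales_mon)
  INTO sales (prod_id, cust_id, time_id, amount_sold)
    VALUES (product_id, customer_id, weekly_start_date + 2, sales_tue)
  INTO sales (prod_id, cust_id, time_id, amount_sold)
    VALUES (product_id, customer_id, weekly_start_date + 3, sales_wed)
  INTO sales (prod_id, cust_id, time_id, amount_sold)
    VALUES (product_id, customer_id, weekly_start_date + 4, sales_thu)
  INTO sales (prod_id, cust_id, time_id, amount_sold)
    VALUES (product_id, customer_id, weekly_start_date + 5, sales_fri)
  INTO sales (prod_id, cust_id, time_id, amount_sold)
    VALUES (product_id, customer_id, weekly_start_date + 6, sales_sat)
SELECT product_id, customer_id, weekly_start_date, sales_sun, sales_mon,
       sales_tue, sales_wed, sales_thu, sales_fri, sales_sat
FROM sales_input_table;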
The above statement only scans the source table once and then inserts the
appropriate data for each day.
This chapter discusses how to load and refresh a data warehouse, and discusses:
■ Using Partitioning to Improve Data Warehouse Refresh
■ Optimizing DML Operations During Refresh
■ Refreshing Materialized Views
■ Using Materialized Views With Partitioned Tables
Apply all constraints to the sales_01_2001 table that are present on the
sales table. This includes referential integrity constraints. A typical constraint
would be:
ALTER TABLE sales_01_2001 ADD CONSTRAINT sales_customer_id
FOREIGN KEY (sales_customer_id) REFERENCES customer(customer_id) ENABLE NOVALIDATE;
If the partitioned table sales has a primary or unique key that is enforced with
a global index structure, ensure that the constraint on sales_01_2001 is
validated without the creation of an index structure, as follows:
ALTER TABLE sales_01_2001 ADD CONSTRAINT sales_pk_jan01
PRIMARY KEY (sales_transaction_id) DISABLE VALIDATE;
The creation of the constraint with ENABLE clause would cause the creation of a
unique index, which does not match a local index structure of the partitioned
table. The exchange command would fail.
3. Add the sales_01_2001 table to the sales table.
In order to add this new data to the sales table, we need to do two things.
First, we need to add a new partition to the sales table. We will use the ALTER
TABLE ... ADD PARTITION statement. This will add an empty partition to the
sales table:
ALTER TABLE sales ADD PARTITION sales_01_2001
VALUES LESS THAN (TO_DATE('01-FEB-2001', 'DD-MON-YYYY'));
Then, we can add our newly created table to this partition using the EXCHANGE
PARTITION operation. This will exchange the new, empty partition with the
newly loaded table.
ALTER TABLE sales EXCHANGE PARTITION sales_01_2001 WITH TABLE sales_01_2001
INCLUDING INDEXES WITHOUT VALIDATION UPDATE GLOBAL INDEXES;
The EXCHANGE operation will preserve the indexes and constraints that were
already present on the sales_01_2001 table. For unique constraints (such as
the unique constraint on sales_transaction_id), you can use the UPDATE
GLOBAL INDEXES clause, as shown above. This will automatically maintain
your global index structures as part of the partition maintenance operation and
keep them accessible throughout the whole process. If there were only
foreign-key constraints, the exchange operation would be instantaneous.
The benefits of this partitioning technique are significant. First, the new data is
loaded with minimal resource utilization. The new data is loaded into an entirely
separate table, and the index processing and constraint processing are applied only
to the new partition. If the sales table was 50 GB and had 12 partitions, then a new
month's worth of data contains approximately 4 GB. Only the new month's worth of
data needs to be indexed. None of the indexes on the remaining 46 GB of data needs
to be modified at all. This partitioning scheme additionally ensures that the load
processing time is directly proportional to the amount of new data being loaded,
not to the total size of the sales table.
Second, the new data is loaded with minimal impact on concurrent queries. All of
the operations associated with data loading are occurring on a separate sales_01_
2001 table. Therefore, none of the existing data or indexes of the sales table is
affected during this data refresh process. The sales table and its indexes remain
entirely untouched throughout this refresh process.
Third, in case of the existence of any global indexes, those are incrementally
maintained as part of the exchange command. This maintenance does not affect the
availability of the existing global index structures.
The exchange operation can be viewed as a publishing mechanism. Until the data
warehouse administrator exchanges the sales_01_2001 table into the sales
table, end users cannot see the new data. Once the exchange has occurred, then any
end user query accessing the sales table will immediately be able to see the
sales_01_2001 data.
Partitioning is useful not only for adding new data but also for removing data.
Many data warehouses maintain a rolling window of data. For example, the data
warehouse stores the most recent 36 months of sales data. Just as a new partition
can be added to the sales table (as described above), an old partition can be
quickly (and independently) removed from the sales table. The above two
benefits (reduced resources utilization and minimal end-user impact) are just as
pertinent to removing a partition as they are to adding a partition.
This example is a simplification of the data warehouse load scenario. Real-world
data warehouse refresh characteristics are always more complex. However, the
advantages of this rolling window approach are not diminished in more complex
scenarios.
Consider two typical scenarios:
1. Data is loaded daily. However, the data warehouse contains two years of data,
so that partitioning by day might not be desired.
Solution: Partition by week or month (as appropriate). Use INSERT to add the
new data to an existing partition. The INSERT operation only affects a single
partition, so the benefits described above remain intact. The INSERT operation
could occur while the partition remains a part of the table. Inserts into a single
partition can be parallelized:
INSERT INTO sales PARTITION (sales_01_2001) SELECT * FROM new_sales;
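To actually run the preceding INSERT in parallel, parallel DML must be enabled in
the session; a sketch follows (the degree of 4 is illustrative):
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ PARALLEL(sales, 4) */ INTO sales PARTITION (sales_01_2001)
SELECT * FROM new_sales;
COMMIT;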
Suppose that most of the data extracted from the OLTP systems will be new sales
transactions. These records will be inserted into the warehouse's sales table, but
some records may reflect modifications of previous transactions, such as returned
merchandise or transactions that were incomplete or incorrect when initially loaded
into the data warehouse. These records require updates to the sales table.
As a typical scenario, suppose that there is a table called new_sales that contains
both inserts and updates that will be applied to the sales table. When designing
the entire data warehouse load process, it was determined that the new_sales
table would contain records with the following semantics:
■ If a given sales_transaction_id of a record in new_sales already exists
in sales, then update the sales table by adding the sales_dollar_amount
and sales_quantity_sold values from the new_sales table to the existing
row in the sales table.
■ Otherwise, insert the entire new record from the new_sales table into the
sales table.
This UPDATE-ELSE-INSERT operation is often called an upsert or merge. A merge
can be executed using one SQL statement in Oracle9i, though it required two
statements in earlier releases.
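A sketch of such a MERGE statement for this scenario follows; only the columns
named above are shown, and the remaining columns of the sales table would be
added to the INSERT clause as appropriate:
MERGE INTO sales s
USING new_sales n
ON (s.sales_transaction_id = n.sales_transaction_id)
WHEN MATCHED THEN UPDATE SET
  sales_dollar_amount = s.sales_dollar_amount + n.sales_dollar_amount,
  sales_quantity_sold = s.sales_quantity_sold + n.sales_quantity_sold
WHEN NOT MATCHED THEN INSERT
  (sales_transaction_id, sales_dollar_amount, sales_quantity_sold)
  VALUES (n.sales_transaction_id, n.sales_dollar_amount, n.sales_quantity_sold);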
Purging Data
Occasionally, it is necessary to remove large amounts of data from a data
warehouse. A very common scenario is the rolling window discussed previously, in
which older data is rolled out of the data warehouse to make room for new data.
However, sometimes other data might need to be removed from a data warehouse.
Suppose that a retail company has previously sold products from MS Software,
and that MS Software has subsequently gone out of business. The business users
of the warehouse may decide that they are no longer interested in seeing any data
related to MS Software, so this data should be deleted.
One approach to removing a large volume of data is to use parallel delete:
DELETE FROM sales WHERE sales_product_id IN
(SELECT product_id
FROM product WHERE product_category = 'MS Software');
This SQL statement will spawn one parallel process per partition. This approach
will be much more efficient than a serial DELETE statement, and none of the data in
the sales table will need to be moved.
However, this approach also has some disadvantages. When removing a large
percentage of rows, the DELETE statement will leave many empty row-slots in the
existing partitions. If new data is being loaded using a rolling window technique (or
is being loaded using direct-path INSERT or load), then this storage space will not
be reclaimed. Moreover, even though the DELETE statement is parallelized, there
might be more efficient methods. An alternative method is to re-create the entire
sales table, keeping the data for all product categories except MS Software.
CREATE TABLE sales2 NOLOGGING
-- PARTITION BY ...
PARALLEL (DEGREE 8)
AS SELECT sales.*
FROM sales, product
WHERE sales.sales_product_id = product.product_id
AND product_category <> 'MS Software';
-- create indexes, constraints, and so on
DROP TABLE sales;
RENAME sales2 TO sales;
This approach may be more efficient than a parallel delete. However, it is also costly
in terms of the amount of disk space, because the sales table must effectively be
instantiated twice.
An alternative method to utilize less space is to re-create the sales table one
partition at a time:
CREATE TABLE sales_temp AS SELECT * FROM sales WHERE 1=0;
INSERT INTO sales_temp
SELECT sales.* FROM sales PARTITION (sales_99jan), product
WHERE sales.sales_product_id = product.product_id
AND product_category <> 'MS Software';
<create appropriate indexes and constraints on sales_temp>
ALTER TABLE sales EXCHANGE PARTITION sales_99jan WITH TABLE sales_temp;
Performing a refresh operation requires temporary space to rebuild the indexes and
can require additional space for performing the refresh operation itself. Some sites
might prefer not to refresh all of their materialized views at the same time: as soon
as some underlying detail data has been updated, all materialized views using this
data will become stale. Therefore, if you defer refreshing your materialized views,
you can either rely on your chosen rewrite integrity level to determine whether or
not a stale materialized view can be used for query rewrite, or you can temporarily
disable query rewrite with an ALTER SYSTEM SET QUERY_REWRITE_ENABLED = FALSE
statement. After refreshing the materialized views, you can reenable query rewrite
as the default for all sessions in the current database instance by specifying ALTER
SYSTEM SET QUERY_REWRITE_ENABLED = TRUE. Refreshing a materialized view
automatically updates all of its indexes. In the case of full refresh, this requires
temporary sort space to rebuild all indexes during refresh. This is because the full
refresh truncates or deletes the table before inserting the new full data volume. If
insufficient temporary space is available to rebuild the indexes, then you must
explicitly drop each index or mark it UNUSABLE prior to performing the refresh
operation.
If you anticipate performing insert, update or delete operations on tables referenced
by a materialized view concurrently with the refresh of that materialized view, and
that materialized view includes joins and aggregation, Oracle recommends you use
ON COMMIT fast refresh rather than ON DEMAND fast refresh.
Complete Refresh
A complete refresh occurs when the materialized view is initially defined as BUILD
IMMEDIATE, unless the materialized view references a prebuilt table. For
materialized views using BUILD DEFERRED, a complete refresh must be requested
before it can be used for the first time. A complete refresh may be requested at any
time during the life of any materialized view. The refresh involves reading the detail
tables to compute the results for the materialized view. This can be a very
time-consuming process, especially if there are huge amounts of data to be read and
processed. Therefore, you should always consider the time required to process a
complete refresh before requesting it.
However, there are cases when the only refresh method available for an already
built materialized view is complete refresh because the materialized view does not
satisfy the conditions specified in the following section for a fast refresh.
Fast Refresh
Most data warehouses have periodic incremental updates to their detail data. As
described in "Schema Design Guidelines for Materialized Views" on page 8-8, you
can use SQL*Loader or any bulk load utility to perform incremental loads of
detail data. Fast refresh of your materialized views is usually efficient, because
instead of having to recompute the entire materialized view, the changes are
applied to the existing data. Thus, processing only the changes can result in a very
fast refresh time.
ON COMMIT Refresh
A materialized view can be refreshed automatically using the ON COMMIT method.
Whenever a transaction that has updated the tables on which a materialized view is
defined commits, those changes are automatically reflected in the materialized view.
The advantage of using this approach is that you never have to remember to refresh
the materialized view. The only disadvantage is that the time required to complete
the commit will be slightly longer because of the extra processing involved.
However, in a data warehouse, this should not be an issue because there are
unlikely to be concurrent processes trying to update the same table.
Three refresh procedures are available in the DBMS_MVIEW package for performing
ON DEMAND refresh. Each has its own unique set of parameters.
See Also: Oracle9i Supplied PL/SQL Packages and Types Reference for
detailed information about the DBMS_MVIEW package and Oracle9i
Replication explains how to use it in a replication environment
Multiple materialized views can be refreshed at the same time, and they do not all
have to use the same refresh method. To give them different refresh methods,
specify multiple method codes in the same order as the list of materialized views
(without commas). For example, the following specifies that cal_month_sales_
mv be completely refreshed and fweek_pscat_sales_mv receive a fast refresh.
DBMS_MVIEW.REFRESH('CAL_MONTH_SALES_MV, FWEEK_PSCAT_SALES_MV', 'CF', '',
TRUE, FALSE, 0,0,0, FALSE);
If the refresh method is not specified, the default refresh method as specified in the
materialized view definition will be used.
To obtain the list of materialized views that are directly dependent on a given object
(table or materialized view), use the procedure DBMS_MVIEW.GET_MV_
DEPENDENCIES to determine the dependent materialized views for a given table,
or for deciding the order to refresh nested materialized views.
DBMS_MVIEW.GET_MV_DEPENDENCIES(mvlist IN VARCHAR2, deplist OUT VARCHAR2)
The input to the above function is the name or names of the materialized view. The
output is a comma separated list of the materialized views that are defined on it.
For example:
GET_MV_DEPENDENCIES("JOHN.SALES_REG, SCOTT.PROD_TIME", deplist)
populates deplist with the list of materialized views defined on the input
arguments. For example:
deplist <= "JOHN.SUM_SALES_WEST, JOHN.SUM_SALES_EAST, SCOTT.SUM_PROD_MONTH".
Monitoring a Refresh
While a job is running, you can query the V$SESSION_LONGOPS view to tell you
the progress of each materialized view being refreshed.
SELECT * FROM V$SESSION_LONGOPS;
to revalidate the materialized view and then reissue the SELECT statement.
or if the specific update scenarios are unknown, make sure the SEQUENCE
clause is included.
2. Use Oracle's bulk loader utility or direct-path INSERT (INSERT with the
APPEND hint for loads).
This is a lot more efficient than conventional insert. During loading, disable all
constraints and re-enable when finished loading. Note that materialized view
logs are required regardless of whether you use direct load or conventional
DML.
Try to optimize the sequence of conventional mixed DML operations,
direct-path INSERT and the fast refresh of materialized views. You can use fast
refresh with a mixture of conventional DML and direct loads. Fast refresh can
perform significant optimizations if it finds that only direct loads have
occurred, as shown below:
1. Direct-path INSERT (SQL*Loader or INSERT /*+ APPEND */) into the
detail table
2. Refresh materialized view
3. Conventional mixed DML
4. Refresh materialized view
You can use fast refresh with conventional mixed DML (INSERT, UPDATE, and
DELETE) to the detail tables. However, fast refresh will be able to perform
significant optimizations in its processing if it detects that only inserts or deletes
have been done to the tables, such as:
■ DML INSERT or DELETE to the detail table
■ Refresh materialized views
■ DML Update to the detail table
■ Refresh materialized view
Even more optimal is the separation of INSERT and DELETE.
If possible, refresh should be performed after each type of data change (as
shown above) rather than issuing only one refresh at the end. If that is not
possible, restrict the conventional DML to the table to inserts only, to get much
better refresh performance. Avoid mixing deletes and direct loads.
Furthermore, for refresh ON COMMIT, Oracle keeps track of the type of DML
done in the committed transaction. Therefore, do not perform direct-path
INSERT and DML to other tables in the same transaction, as Oracle may not be
able to optimize the refresh phase.
For ON COMMIT materialized views, where refreshes automatically occur at the
end of each transaction, it may not be possible to isolate the DML statements, in
which case keeping the transactions short will help. However, if you plan to
make numerous modifications to the detail table, it may be better to perform
them in one transaction, so that refresh of the materialized view will be
performed just once at commit time rather than after each update.
3. Oracle recommends partitioning the tables because it enables you to use:
■ Parallel DML
For large loads or refresh, enabling parallel DML will help shorten the
length of time for the operation.
■ Partition Change Tracking (PCT) fast refresh
You can refresh your materialized views fast after partition maintenance
operations on the detail tables. See "Partition Change Tracking" on page 8-34 for
details on enabling PCT for materialized views.
Partitioning the materialized view will also help refresh performance as refresh
can update the materialized view using parallel DML. For example, assume
that the detail tables and materialized view are partitioned and have a parallel
clause. The following sequence would enable Oracle to parallelize the refresh of
the materialized view.
1. Bulk load into the detail table
2. Enable parallel DML with an ALTER SESSION ENABLE PARALLEL DML
statement
3. Refresh the materialized view
If job queues are enabled and there are many materialized views to refresh, it is
faster to refresh all of them in a single command than to call them individually.
6. Use REFRESH FORCE to ensure getting a refreshed materialized view that can
definitely be used for query rewrite. If a fast refresh cannot be done, a complete
refresh will be performed.
In other words, Oracle builds a partially ordered set of materialized views and
refreshes them such that, after the successful completion of the refresh, all the
materialized views are fresh. The status of the materialized views can be checked by
querying the appropriate USER_, DBA_, or ALL_MVIEWS view.
If any of the materialized views are defined as ON DEMAND refresh (irrespective of
whether the refresh method is FAST, FORCE, or COMPLETE), you will need to refresh
them in the correct order (taking into account the dependencies between the
materialized views) because the nested materialized view will be refreshed with
respect to the current state of the other materialized views (whether fresh or not).
If a refresh fails during commit time, the list of materialized views that has not been
refreshed is written to the alert log, and you must manually refresh them along with
all their dependent materialized views.
Use the same DBMS_MVIEW procedures on nested materialized views that you use
on regular materialized views.
These procedures have the following behavior when used with nested materialized
views:
■ If REFRESH is applied to a materialized view my_mv that is built on other
materialized views, then my_mv will be refreshed with respect to the current
state of the other materialized views (that is, they will not be made fresh first).
■ If REFRESH_DEPENDENT is applied to materialized view my_mv, then only
materialized views that directly depend on my_mv will be refreshed (that is, a
materialized view that depends on a materialized view that depends on my_mv
will not be refreshed).
■ If REFRESH_ALL_MVIEWS is used, the order in which the materialized views
will be refreshed is not guaranteed.
■ GET_MV_DEPENDENCIES provides a list of the immediate (or direct)
materialized view dependencies for an object.
reconstructed from the detail tables, it might be preferable to disable logging on the
materialized view. To disable logging and run incremental refresh non-recoverably,
use the ALTER MATERIALIZED VIEW ... NOLOGGING statement prior to refreshing.
If the materialized view is being refreshed using the ON COMMIT method, then,
following refresh operations, consult the alert log alert_<SID>.log and the trace
file ora_<SID>_number.trc to check that no errors have occurred.
The following examples will illustrate the use of this feature. In "PCT Fast Refresh
Scenario 1", assume sales is a partitioned table using the time_id column and
products is partitioned by the prod_category column. The table times is not a
partitioned table.
As can be seen from the partial sample output from EXPLAIN_MVIEW, any
partition maintenance operation performed on the sales table will allow PCT
fast refresh. However, PCT is not possible after partition maintenance
operations or updates to the products table as there is insufficient information
contained in cust_mth_sales_mv for PCT refresh to be possible. Note that
the times table is not partitioned and hence can never allow for PCT refresh.
Oracle will apply PCT refresh if it can determine that the materialized view has
sufficient information to support PCT for all the updated tables.
4. Suppose at some later point, a SPLIT operation of one partition in the sales
table becomes necessary.
ALTER TABLE SALES
SPLIT PARTITION month3 AT (TO_DATE('05-02-1998', 'DD-MM-YYYY'))
INTO (
PARTITION month3_1
TABLESPACE summ,
PARTITION month3
TABLESPACE summ
);
Fast refresh will automatically do a PCT refresh as it is the only fast refresh
possible in this scenario. However, fast refresh will not occur if a partition
maintenance operation occurs when any update has taken place to a table on
which PCT is not enabled. This is shown in "PCT Fast Refresh Scenario 2".
"PCT Fast Refresh Scenario 1" would also be appropriate if the materialized view
was created using the PMARKER clause as illustrated below.
CREATE MATERIALIZED VIEW cust_sales_marker_mv
BUILD IMMEDIATE
REFRESH FAST ON DEMAND
ENABLE QUERY REWRITE
AS
SELECT DBMS_MVIEW.PMARKER(s.rowid) s_marker,
SUM(s.quantity_sold), SUM(s.amount_sold),
p.prod_name, t.calendar_month_name, COUNT(*),
COUNT(s.quantity_sold), COUNT(s.amount_sold)
FROM sales s, products p, times t
WHERE s.time_id = t.time_id AND
s.prod_id = p.prod_id
GROUP BY DBMS_MVIEW.PMARKER(s.rowid),
p.prod_name, t.calendar_month_name;
6. Refresh cust_mth_sales_mv.
EXECUTE DBMS_MVIEW.REFRESH('CUST_MTH_SALES_MV', 'F',
'',TRUE,FALSE,0,0,0,FALSE);
ORA-12052: cannot fast refresh materialized view SH.CUST_MTH_SALES_MV
The materialized view is not fast refreshable because DML has occurred to a table
on which PCT fast refresh is not possible. To avoid this occurring, Oracle
recommends performing a fast refresh immediately after any partition maintenance
operation on detail tables for which partition tracking fast refresh is available.
If the situation in "PCT Fast Refresh Scenario 2" occurs, there are two possibilities:
perform a complete refresh or, if suitable, switch to the CONSIDER FRESH option
outlined below. However, note that CONSIDER FRESH and partition change
tracking fast refresh are not compatible. Once the ALTER MATERIALIZED
VIEW cust_mth_sales_mv CONSIDER FRESH statement has been issued, PCT
refresh will no longer be applied to this materialized view until a complete refresh
is done.
A common situation in a warehouse is the use of rolling windows of data. In this
case, the detail table and the materialized view may contain say the last 12 months
of data. Every month, new data for a month is added to the table and the oldest
month is deleted (or maybe archived). PCT refresh provides a very efficient
mechanism to maintain the materialized view in this case.
3. Now, assuming the materialized view satisfies all conditions for PCT refresh, refresh it.
EXECUTE DBMS_MVIEW.REFRESH('CUST_MTH_SALES_MV', 'F', '',
TRUE, FALSE,0,0,0,FALSE);
Fast refresh will automatically detect that PCT is available and perform a PCT
refresh.
3. Use CONSIDER FRESH to declare that the materialized view has been refreshed.
ALTER MATERIALIZED VIEW cust_mth_sales_mv CONSIDER FRESH;
The materialized view is now considered stale and requires a refresh because
of the partition operation. However, as the detail table no longer contains all
the data associated with the partition, fast refresh cannot be attempted.
2. Therefore, alter the materialized view to tell Oracle to consider it fresh.
ALTER MATERIALIZED VIEW cust_mth_sales_mv CONSIDER FRESH;
Because the fast refresh detects that only INSERT statements occurred against
the sales table, it will update the materialized view with the new data.
However, the status of the materialized view will remain UNKNOWN. The only
way to return the materialized view to FRESH status is with a complete refresh,
which will also remove the historical data from the materialized view.
Oracle Change Data Capture efficiently identifies and captures data that has been
added to, updated, or removed from, Oracle relational tables, and makes the change
data available for use by applications. Change Data Capture is provided as an
Oracle database server component with Oracle9i.
This chapter introduces Change Data Capture in the following sections:
■ About Oracle Change Data Capture
■ Installation and Implementation
■ Security
■ Columns in a Change Table
■ Views
■ Synchronous Mode of Data Capture
■ Publishing Change Data
■ Subscribing to Change Data
■ Export and Import Considerations
See Also: Oracle9i Supplied PL/SQL Packages and Types Reference for
more information about the Change Data Capture publish and
subscribe PL/SQL packages.
Table 15–1 Database Extraction With and Without Change Data Capture

Extraction
  With Change Data Capture: Database extraction from INSERT, UPDATE, and
  DELETE operations occurs immediately, at the same time the changes occur to
  the source tables.
  Without Change Data Capture: Database extraction is marginal at best for
  INSERT operations, and problematic for UPDATE and DELETE operations,
  because the data is no longer in the table.

Staging
  With Change Data Capture: Stages data directly to relational tables; there is no
  need to use flat files.
  Without Change Data Capture: The entire contents of tables are moved into
  flat files.

Interface
  With Change Data Capture: Provides an easy-to-use publish and subscribe
  interface using the DBMS_LOGMNR_CDC_PUBLISH and
  DBMS_LOGMNR_CDC_SUBSCRIBE packages.
  Without Change Data Capture: Error prone and manpower intensive to
  administer.

Cost
  With Change Data Capture: Supplied with the Oracle9i (and later) database
  server. Reduces overhead cost by simplifying the extraction of change data.
  Without Change Data Capture: Expensive because you must write and maintain
  the capture software yourself, or purchase it from a third-party vendor.
Publisher
The publisher is usually a database administrator (DBA) who is in charge of
creating and maintaining schema objects that make up the Change Data Capture
system. The publisher performs these tasks:
■ Determines the relational tables (called source tables) from which the data
warehouse application is interested in capturing change data.
■ Uses the Oracle supplied package, DBMS_LOGMNR_CDC_PUBLISH, to set up the
system to capture data from one or more source tables.
■ Publishes the change data in the form of change tables.
■ Allows controlled access to subscribers by using the SQL GRANT and REVOKE
statements to grant and revoke the SELECT privilege on change tables for users
and roles.
Subscribers
The subscribers, usually applications, are consumers of the published change data.
Subscribers subscribe to one or more sets of columns in source tables. Subscribers
perform the following tasks:
■ Use the Oracle supplied package, DBMS_LOGMNR_CDC_SUBSCRIBE, to
subscribe to source tables for controlled access to the published change data for
analysis.
■ Extend the subscription window and create a new subscriber view when the
subscriber is ready to receive a set of change data.
■ Use SELECT statements to retrieve change data from the subscriber views.
■ Drop the subscriber view and purge the subscription window when finished
processing a block of changes.
■ Drop the subscription when the subscriber no longer needs its change data.
Figure 15–1 Publish and Subscribe Model in a Change Data Capture System
For example, assume that the change tables in Figure 15–1 contain all of the
changes that occurred between Monday and Friday, and also assume that:
■ Subscriber 1 is viewing and processing data from Tuesday.
■ Subscriber 2 is viewing and processing data from Wednesday to Thursday.
Subscribers 1 and 2 each have a unique subscription window that contains a block
of transactions. Oracle Change Data Capture manages the subscription window for
each subscriber by creating a subscriber view that returns a range of transactions of
interest to that subscriber. The subscriber accesses the change data by performing
SELECT statements on the subscriber view that was generated by Change Data
Capture.
When a subscriber needs to read additional change data, the subscriber makes
procedure calls to extend the window and to create a new subscriber view. Each
subscriber can walk through the data at its own pace, while Oracle Change Data
Capture manages the data storage. As each subscriber finishes processing the data
in its subscription window, it calls procedures to drop the subscriber view and purge
the contents of the subscription window. Extending and purging windows is
necessary to prevent the change table from growing indefinitely, and to prevent the
subscriber from seeing the same data again.
Thus, Oracle Change Data Capture provides the following benefits for subscribers:
■ Guarantees that each subscriber sees all of the changes, does not miss any
changes, and does not see the same change data more than once.
■ Keeps track of multiple subscribers and gives each subscriber shared access to
change data.
■ Handles all of the storage management, automatically removing data from
change tables when it is no longer required by any of the subscribers.
[Figure: a Change Data Capture system. Source tables in the operational databases
feed the Change Data Capture system, organized under the SYNC_SOURCE change
source and the SYNC_SET change set; change tables hold the captured columns for
each source table, and subscriber views (View 1 and View 2) expose subsets of those
columns to individual subscribers.]
Source System
A source system is a production database that contains source tables for which
Change Data Capture will capture changes.
Source Table
A source table is a database table that resides on the source system that contains the
data you want to capture. Changes made to the source table are immediately
reflected in the change table.
Change Source
A change source represents a source system. There is a system-generated change
source named SYNC_SOURCE.
Change Set
A change set represents the collection of change tables. There is a system-generated
change set named SYNC_SET.
Change Table
A change table contains the change data resulting from DML statements made to a
single source table. A change table consists of two things: the change data itself,
which is stored in a database table, and the system metadata necessary to maintain
the change table. A given change table can capture changes from only one source
table. In addition to published columns, the change table contains control columns
that are managed by Change Data Capture. See the "Columns in a Change Table"
section for more information.
Publication
A publication provides a way for publishers to publish multiple change tables on
the same source table, and control subscriber access to the published change data.
For example, Publication A consists of a change table that contains all the columns
from the EMPLOYEE source table, while Publication B contains all the columns
except the salary column from the EMPLOYEE source table. Because each change
table is a separate publication, the publisher can implement security on the salary
column by allowing only selected subscribers to access Publication A.
Subscriber View
A subscriber view is a view created by Change Data Capture that returns all of the
rows in the subscription window. In Figure 15–2, the subscribers have created two
views: one on columns 7 and 8 of Source Table 3 and one on columns 4, 6, and 8 of
Source Table 4. The columns included in the view are based on the actual columns
that the subscribers subscribed to in the source table.
Subscription Window
A subscription window defines the time range of change rows that the subscriber
can currently see. The oldest row in the window is the low watermark; the newest
row in the window is the high watermark. Each subscriber has a subscription
window.
Security
You grant privileges for a change table separately from the privileges you grant for
a source table. For example, a subscriber that has privileges to perform a SELECT
operation on a source table might not have privileges to perform a SELECT
operation on a change table.
The publisher controls subscribers' access to change data by using the SQL GRANT
and REVOKE statements to grant and revoke the SELECT privilege on change tables
for users and roles. The publisher must grant the SELECT privilege before a user or
application can subscribe to the change table.
The publisher must not grant any DML access (using either the INSERT, UPDATE, or
DELETE statements) to the subscribers on the change tables because of the risk that
a subscriber might inadvertently change the data in the change table, making it
inconsistent with its source. Furthermore, the publisher should avoid creating
change tables in schemas to which users have DML access.
Views
Information about the Change Data Capture environment is provided in the views
described in Table 15–3.
Step 1 Decide which Oracle instance will be the source system that will
provide the change data.
The publisher needs to gather requirements from the subscribers and determine
which source system contains the relevant source tables.
Step 2 Create the change tables that will contain the changes to individual
source tables.
Use the DBMS_LOGMNR_CDC_PUBLISH.CREATE_CHANGE_TABLE procedure to
create change tables.
Create a change table for each source table to be published, and decide which
columns should be included. For update operations, decide whether to capture old
values, new values, or both.
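A call of this kind might look like the following sketch; the owner cdc and the
column type list are illustrative, and the parameter names follow the
DBMS_LOGMNR_CDC_PUBLISH documentation:
EXECUTE SYS.DBMS_LOGMNR_CDC_PUBLISH.CREATE_CHANGE_TABLE (\
OWNER => 'cdc', \
CHANGE_TABLE_NAME => 'emp_ct', \
CHANGE_SET_NAME => 'SYNC_SET', \
SOURCE_SCHEMA => 'scott', \
SOURCE_TABLE => 'emp', \
COLUMN_TYPE_LIST => 'empno NUMBER, ename VARCHAR2(10), hiredate DATE', \
CAPTURE_VALUES => 'both', \
RS_ID => 'y', \
ROW_ID => 'n', \
USER_ID => 'n', \
TIMESTAMP => 'n', \
OBJECT_ID => 'n', \
SOURCE_COLMAP => 'y', \
TARGET_COLMAP => 'y', \
OPTIONS_STRING => NULL);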
This statement creates a change table named emp_ct within the change set SYNC_
SET. The column_type_list parameter identifies the columns captured by the
change table. The source_schema and source_table parameters identify the
schema and source table that reside on the production system.
The capture_values setting in the example indicates that for UPDATE operations,
the change data will contain two separate rows for each row that changed: one row
will contain the row values before the update occurred, and the other row will
contain the row values after the update occurred.
Step 1 Find the source tables for which the subscriber has access privileges.
Query the ALL_SOURCE_TABLES view to see all of the published source tables for
which the subscriber has access privileges.
set. To see all of the published source table columns for which the subscriber has
privileges, query the ALL_PUBLISHED_COLUMNS view.
In the following example, the subscriber wants to see only one source table.
EXECUTE SYS.DBMS_LOGMNR_CDC_SUBSCRIBE.SUBSCRIBE (\
SUBSCRIPTION_HANDLE => :subhandle, \
SOURCE_SCHEMA => 'scott', \
SOURCE_TABLE => 'emp', \
COLUMN_LIST => 'empno, ename, hiredate');
At this point, the subscriber has created a new window that begins where the
previous window ends. The new window contains any data that was added to the
change table. If no new data has been added, the EXTEND_WINDOW procedure has
no effect. To access the new change data, the subscriber must call the CREATE_
SUBSCRIBER_VIEW procedure, and select from the new subscriber view that is
generated by Change Data Capture.
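In SQL*Plus terms, the sequence just described might look like the following sketch;
the parameter names follow the DBMS_LOGMNR_CDC_SUBSCRIBE documentation,
and the :viewname bind variable is hypothetical:
EXECUTE SYS.DBMS_LOGMNR_CDC_SUBSCRIBE.EXTEND_WINDOW (\
SUBSCRIPTION_HANDLE => :subhandle);
EXECUTE SYS.DBMS_LOGMNR_CDC_SUBSCRIBE.CREATE_SUBSCRIBER_VIEW (\
SUBSCRIPTION_HANDLE => :subhandle, \
SOURCE_SCHEMA => 'scott', \
SOURCE_TABLE => 'emp', \
VIEW_NAME => :viewname);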
■ If the publisher adds a user column to a change table and a new subscription
does not include this newly added column, then the subscription window starts
at the low-water mark for the change table thus enabling the subscriber to see
the entire table.
■ If the publisher adds a user column to a change table, and old subscriptions
exist, then the subscription windows remain unchanged.
■ Subscribers subscribe to source columns and never to control columns. They
can see the control columns that were present at the time of the subscription.
■ If the publisher adds a control column to a change table and there is a new
subscription, then the subscription window starts at the low-water mark for the
change table. The subscription can see the control column immediately. All
rows that existed in the change table prior to adding the control column will
have the value NULL for the newly added control column field.
■ If the publisher adds a control column to a change table, then any existing
subscriptions can see the new control column when the window is extended
(DBMS_LOGMNR_CDC_PUBLISH.EXTEND_WINDOW procedure) such that the
low watermark for the window crosses over the point when the control column
was added.
– Take out a dummy subscription to preserve the change table data until real
subscriptions appear. Then, you can drop the dummy subscription.
■ When importing data into a source table for which a change table already exists,
the imported data is also recorded in any associated change tables.
Assume that you have a source table Employees that has an associated change
table "CT_Employees." When you import data into Employees, that data is also
recorded in CT_Employees.
■ When importing a source table and its change table to a database where the
tables did not previously exist, Change Data Capture for that source table will
not be established until the import process completes. This protects you from
duplicating activity in the change table.
■ When exporting a source table and its associated change table, and then
importing them into a new instance, the imported source table data is not
recorded in the change table because it is already in the change table.
■ When importing a change table having the optional control ROW_ID column,
the ROW_ID columns stored in the change table have meaning only if the
associated source table has not been imported. If a source table is re-created or
imported, each row will have a new ROW_ID that is unrelated to the ROW_ID
that was previously recorded in a change table.
■ Any time a table is exported from one database and imported to another, there
is a risk that the import target already has tables or objects with the same name.
Moving a change table to a different database where a table exists that has the
same name as the source table may result in import errors.
■ If you need to move a synchronous change table or its source table, then move
both tables together and check the import log for error messages.
This chapter illustrates how to use the Summary Advisor, a tool for choosing and
understanding materialized views. The chapter contains:
■ Overview of the Summary Advisor in the DBMS_OLAP Package
■ Using the Summary Advisor
■ Estimating Materialized View Size
■ Is a Materialized View Being Used?
[Figure: Workload Collection (optional). A workload can be gathered from the
Oracle9i SQL cache, from an Oracle Trace log (through the Oracle Trace Manager
and its formatting step), or supplied by the user from the warehouse, and is used
together with the existing materialized views and dimensions.]
To use the Summary Advisor with a workload, you have to first load the workload
and then generate the set of recommendations.
Before you can use any of these procedures, you must create a unique identifier for
the data they are about to create. This number is obtained by calling the procedure
CREATE_ID and the unique number is known subsequently as a run ID, workload
ID or filter ID depending on the procedure it is given.
The identifier is used to store the Advisor artifacts in the repository. Each activity in
the Advisor requires a unique identifier to distinguish it from other objects. For
example, when you add a filter item, you associate the item with a filter ID. When
you load a workload, the data gets stored using the unique workload ID. In
addition, when you run RECOMMEND_MVIEW_STRATEGY or EVALUATE_MVIEW_
STRATEGY, a unique ID is associated with the run.
Because the ID is just a unique number, Oracle uses the same CREATE_ID function
to acquire the value. It is only when a specific operation is performed (such as a
load workload) that the ID is identified as a workload ID.
You can use the Summary Advisor with or without a workload, but better results
are achieved if a workload is provided. This can be supplied by:
■ The user
■ Oracle Trace
■ The current SQL cache contents
Once the workload is loaded into the Advisor workload repository or at the time
the materialized view recommendations are generated, a filter can be applied to the
workload to restrict what is analyzed. This provides the ability to generate different
sets of recommendations based on different workload scenarios.
These filters are created using the procedure ADD_FILTER_ITEM. You can create
any number of filters, and use more than one at a time to filter a workload. See
"Using Filters with the Summary Advisor" on page 16-18 for further details.
The Summary Advisor uses four types of schema objects, some of which are defined
in the user's schema and some of which are in the system schema:
■ User Schema
For both V-tables and workload tables, before the workload is available to the
recommendation process, it must be loaded into the advisor workload
repository.
■ V-tables
The advisory functions and procedures of the DBMS_OLAP package require you to
gather structural statistics about fact and dimension table cardinalities, and the
distinct cardinalities of every dimension level column, JOIN KEY column, and fact
table key column. You do this by loading your data warehouse, then gathering
either exact or estimated statistics with the DBMS_STATS package or the ANALYZE
TABLE statement. Because gathering statistics is time-consuming and extreme
statistical accuracy is not required, it is generally preferable to estimate statistics.
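For example, estimated statistics for one of the fact tables might be gathered with
DBMS_STATS as follows (the schema, table, and 20 percent sample size are
illustrative):
BEGIN
  -- Gather estimated statistics on the SH.SALES fact table and,
  -- through CASCADE, on its indexes as well.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SH',
    tabname          => 'SALES',
    estimate_percent => 20,
    cascade          => TRUE);
END;
/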
Using information from the system workload table, schema metadata and statistical
information generated by the DBMS_STATS package, the Advisor engine generates
summary recommendations and summary usage evaluations and stores the results
in result tables.
To use the Summary Advisor with a workload, some or all of the following steps
must be followed:
1. Optionally obtain an identifier number as a filter ID and define one or more
filter items.
2. Obtain an identifier number as a workload ID and load the workload. If a filter
was defined in step 1, then it can be used during this operation to refine the SQL
statements as they are collected from the workload source.
3. Call the procedure RECOMMEND_MVIEW_STRATEGY to generate the
recommendations.
These steps can be repeated several times with different workloads to see the effect
on the materialized views.
Identifier Numbers
Most of the DBMS_OLAP procedures require a unique identifier as one of their
parameters. You obtain this by calling the procedure CREATE_ID, which is shown
below.
DBMS_OLAP.CREATE_ID Procedure
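For example, the following SQL*Plus fragment obtains an identifier into a bind
variable (a minimal sketch; the variable name is arbitrary):

VARIABLE MY_ID NUMBER;
EXECUTE DBMS_OLAP.CREATE_ID(:MY_ID);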
Workload Management
The Advisor performs best when a workload based on usage is available. The
Advisor Workload Repository is capable of storing multiple workloads, so that the
different uses of a real-world data warehousing environment can be viewed over a
long period of time and across the life cycle of database instance startup and
shutdown.
To facilitate wider use of the Summary Advisor, three types of workload are
supported:
■ Current contents of the SQL cache
■ Oracle Trace collection
■ User-specified Workload
When the workload is loaded using the appropriate load_workload procedure, it
is stored in a new workload repository in the SYSTEM schema called MVIEW_
WORKLOAD whose format is shown in Table 16–2. A specific workload can be
removed by calling the PURGE_WORKLOAD routine and passing it a valid workload
ID. To remove all workloads for the current user, call PURGE_WORKLOAD and pass
the constant value DBMS_OLAP.WORKLOAD_ALL.
Once the workload has been collected using the appropriate LOAD_WORKLOAD
routine, a filter mechanism may also be applied; this lets you specify the portion of
the workload that is to be loaded into the repository. You can also use the
same filter mechanism to restrict workload-based summary recommendation and
evaluation to a subset of the queries contained in the workload repository. Once the
workload has been loaded, the Summary Advisor is run by calling the procedure
RECOMMEND_MVIEW_STRATEGY. A major benefit of this approach is that it is easy to
model different workloads by simply modifying the frequency column, removing
some SQL queries, or adding new queries.
Summary Advisor can retrieve workload information from the SQL cache as well as
Oracle Trace. If the collected data was retrieved from a server with the instance
parameter cursor_sharing set to SIMILAR or FORCE, then user queries with
embedded literal values will be converted to a statement that contains
system-generated bind variables.
In Oracle9i, it is not possible to retrieve the bind-variable data in order to
reconstruct the statement in the form originally submitted by the user. This will, in
turn, cause Summary Advisor to not consider the query for rewrite and potentially
miss a critical statement in the user's workload. As a work-around, if the Advisor
will be used to recommend materialized views, then the server should set the
instance parameter CURSOR_SHARING to EXACT.
DBMS_OLAP.LOAD_WORKLOAD_USER Procedure
The actual workload is defined in a separate table and the two parameters owner_
name and table_name describe where it is stored. There is no restriction on which
schema the workload resides in, the name for the table, or how many of these
user-defined tables exist. The only restriction is that the format of the user table
must correspond to the USER_WORKLOAD table, as described in Table 16–4 below:
Oracle Trace can also be used to record materialized view usage. Doing so enables you to see which materialized views are
in use. It also lets the Advisor detect any unusual query requests from users that
would result in recommending some different materialized views.
A workload collected by Oracle Trace is loaded using the procedure LOAD_
WORKLOAD_TRACE described below. You obtain workload_id by calling the
procedure CREATE_ID. The value of the flags parameter will determine whether
the workload is considered new, should be used to overwrite an existing workload
or should be appended to an existing workload. The optional filter ID can be
supplied to specify the filter that is to be used against this workload. In addition,
you can specify an application name to describe this workload and give every
query a default priority. The application name is simply a tag that enables you to
classify the workload query. The name can later be used to filter the workload
during a RECOMMEND_MVIEW_STRATEGY or EVALUATE_MVIEW_STRATEGY
operation.
The priority is an important piece of information. It tells the Advisor how important
the query is to the business. When recommendations are formed, the priority
determines how much weight a query carries and causes the Advisor to make
decisions that favor higher-priority queries.
If the owner_name parameter is not defined, then the procedure will expect to find
the formatted trace tables in the schema for the current user.
DBMS_OLAP.LOAD_WORKLOAD_TRACE Procedure
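As an illustration only, a call might look like the following sketch. It assumes the
parameter order described above (workload ID, flags, filter ID, application name,
priority, owner name); the application name, priority, and owner shown here are
placeholders.

DECLARE
  workload_id NUMBER;
BEGIN
  DBMS_OLAP.CREATE_ID(workload_id);
  -- Load the formatted Oracle Trace data owned by SH as a new workload
  DBMS_OLAP.LOAD_WORKLOAD_TRACE(workload_id, DBMS_OLAP.WORKLOAD_NEW,
    DBMS_OLAP.FILTER_NONE, 'MY_APP', 7, 'SH');
END;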
Oracle Trace collects two types of data. One is a duration event which causes a data
item to be collected twice: once at the start of the operation and once at the end of
the operation. The duration of the data item is the difference between the start and
end of the operation. For example, execution time is collected as a duration event. It
first collects the clock time when the operation starts. Then it collects the clock time
when the operation ends. Execution time is calculated by subtracting the start time
from the end time.
A point event is a static data item that doesn't change over time. For example, an
owner name is a static data item that would be the same at the start and the end of
an operation.
To collect, analyze and load the summary event set, you must do the following:
1. Set six initialization parameters to collect data using Oracle Trace. Enabling
these parameters incurs some additional overhead at database connection, but
is otherwise transparent.
■ ORACLE_TRACE_COLLECTION_NAME = oraclesm or oraclee
ORACLEE is the Oracle Expert collection which contains Summary Advisor
data and additional data that is only used by Oracle Expert.
ORACLESM is the Summary Advisor collection that contains only Summary
Advisor data and is the preferred collection type.
■ ORACLE_TRACE_COLLECTION_PATH = <location of collection files>
■ ORACLE_TRACE_COLLECTION_SIZE = 0
■ ORACLE_TRACE_ENABLE = TRUE
■ ORACLE_TRACE_FACILITY_NAME = oraclesm or oraclee
■ ORACLE_TRACE_FACILITY_PATH = <location of trace facility files>
See Also: Oracle Enterprise Manager Oracle Trace User’s Guide for
further information regarding these parameters
2. Run the Oracle Trace Manager, specify a collection name, and select the
SUMMARY_EVENT set. Oracle Trace Manager reads information from the
associated configuration file and registers events to be logged with Oracle.
While collection is enabled, the workload information defined in the event set
gets written to a flat log file.
3. When collection is complete, Oracle Trace automatically formats the Oracle
Trace log file into a set of relations, which have the predefined synonyms
beginning with V_192216243_. Alternatively, the collection file, which usually
has an extension of .CDF, can be formatted manually using the otrcfmt utility,
as shown in this example:
otrcfmt collection_name.cdf user/password@database
DBMS_OLAP.LOAD_WORKLOAD_CACHE Procedure
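For illustration, the following sketch loads the current contents of the SQL cache as
a new workload without filtering; the application name and priority arguments are
left NULL, following the examples later in this chapter.

DECLARE
  workload_id NUMBER;
BEGIN
  DBMS_OLAP.CREATE_ID(workload_id);
  -- Take a snapshot of the SQL cache into a new workload collection
  DBMS_OLAP.LOAD_WORKLOAD_CACHE(workload_id, DBMS_OLAP.WORKLOAD_NEW,
    DBMS_OLAP.FILTER_NONE, NULL, NULL);
END;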
Validating a Workload
Prior to loading a workload, one of the three VALIDATE_WORKLOAD procedures:
■ VALIDATE_WORKLOAD_USER
■ VALIDATE_WORKLOAD_CACHE
■ VALIDATE_WORKLOAD_TRACE
may be called to check that the workload exists. This procedure does not check that
the contents of the workload are valid, it merely checks that the workload exists.
The following are examples of validating the three types of workload:
DECLARE
isitgood NUMBER;
err_text VARCHAR2(200);
BEGIN
DBMS_OLAP.VALIDATE_WORKLOAD_CACHE (isitgood, err_text);
END;
DECLARE
isitgood NUMBER;
err_text VARCHAR2(200);
BEGIN
DBMS_OLAP.VALIDATE_WORKLOAD_TRACE ('SH', isitgood, err_text);
END;
DECLARE
isitgood NUMBER;
err_text VARCHAR2(200);
BEGIN
DBMS_OLAP.VALIDATE_WORKLOAD_USER ('SH', 'USER_WORKLOAD', isitgood, err_text);
END;
Removing a Workload
When workloads are no longer needed, they can be removed using the procedure
PURGE_WORKLOAD. You can delete all workloads or a specific collection.
DBMS_OLAP.PURGE_WORKLOAD Procedure
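DBMS_OLAP.PURGE_WORKLOAD Example
The following sketch removes a specific workload and then all workloads for the
current user (the bind variable holds a previously obtained workload ID):

VARIABLE MY_WORKLOAD_ID NUMBER;
CALL DBMS_OLAP.PURGE_WORKLOAD(:MY_WORKLOAD_ID);
CALL DBMS_OLAP.PURGE_WORKLOAD(DBMS_OLAP.WORKLOAD_ALL);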
DBMS_OLAP.ADD_FILTER_ITEM Procedure
The Advisor supports nine different filter item types. For each filter item, Oracle
stores an attribute that tells Advisor how to apply the selection rule. For example,
an APPLICATION item requires a string attribute that can be either a single name as
in GREG, or it can be a comma-separated list of names like GREG, ROSE, KALLIE,
HANNAH. For a single name, the Advisor takes the value and only accepts the
workload query if the application name exactly matches the supplied name. For a
list of names, the query's application name must appear in the list. Referring to this
example, a query whose application name is GREG would match either a single
application filter item containing GREG or the list GREG, ROSE, KALLIE, HANNAH.
Conversely, a query whose application name is KALLIE will only match the filter item
list GREG, ROSE, KALLIE, HANNAH.
For numeric filter items such as CARDINALITY, the attribute represents a range of
values. The Advisor determines whether the filter item represents a bounded range,
such as 500 to 1000000, or an exact match, such as 1000 to 1000. When a range value
is specified as NULL, that bound is treated as infinitely small or large, depending
upon which attribute is set.
Date filters, such as LASTUSE, behave like numeric filters except that the Advisor
treats the range test as two dates. A NULL value indicates infinity.
You can define a number of different types of filter as shown in Table 16–9:
When dealing with a workload, the client can optionally attach a filter to reduce or
refine the set of target SQL statements. If no filter is attached, then all target SQL
statements will be collected or used.
A new filter can be created with the CREATE_ID call. Filter items can be added to
the filter by using the ADD_FILTER_ITEM call. When a filter is created, an entry is
stored in the read-only view SYSTEM.MVIEW_FILTER.
Below is an example illustrating how to add three different types of filter items.
1. Declare an output variable to receive the new identifier:
VARIABLE MY_ID NUMBER;
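2. Obtain the identifier and add the three filter items to it. The following calls are a
sketch of the single-filter version; they use the same arguments as the
separate-filter variant shown below:
CALL DBMS_OLAP.CREATE_ID(:MY_ID);
CALL DBMS_OLAP.ADD_FILTER_ITEM(:MY_ID, 'BASETABLE', 'SCOTT.EMP', NULL, NULL, NULL, NULL);
CALL DBMS_OLAP.ADD_FILTER_ITEM(:MY_ID, 'OWNER', 'SCOTT, PAYROLL, PERSONNEL', NULL, NULL, NULL, NULL);
CALL DBMS_OLAP.ADD_FILTER_ITEM(:MY_ID, 'FREQUENCY', NULL, 500, NULL, NULL, NULL);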
The above example defines a filter with three filter items. The first filter will only
allow queries that reference the table SCOTT.EMP. The second item will accept
queries that were executed by one of the users SCOTT, PAYROLL or PERSONNEL.
Finally, the third filter item accepts queries that execute at least 500 times.
Note that all filter items must match for a single query to be accepted. If any of the
items fail to match, then the query is not accepted.
In the previous example, the three filter items are applied together as a single filter.
However, each filter item could have been created with its own unique filter ID,
thus creating three different filters, as shown below:
VARIABLE MY_ID NUMBER;
CALL DBMS_OLAP.CREATE_ID(:MY_ID);
CALL DBMS_OLAP.ADD_FILTER_ITEM(:MY_ID,'BASETABLE',
'SCOTT.EMP', NULL, NULL, NULL, NULL);
CALL DBMS_OLAP.CREATE_ID(:MY_ID);
CALL DBMS_OLAP.ADD_FILTER_ITEM(:MY_ID, 'OWNER',
'SCOTT, PAYROLL, PERSONNEL', NULL, NULL, NULL, NULL);
CALL DBMS_OLAP.CREATE_ID(:MY_ID);
CALL DBMS_OLAP.ADD_FILTER_ITEM(:MY_ID, 'FREQUENCY', NULL, 500, NULL, NULL, NULL);
Removing a Filter
A filter can be removed at any time by calling the procedure PURGE_FILTER, which
is described below. You can delete a specific filter or all filters. You can remove all
filters using the purge_filter call by specifying DBMS_OLAP.FILTER_ALL as
the filter ID.
DBMS_OLAP.PURGE_FILTER Procedure
DBMS_OLAP.PURGE_FILTER Example
VARIABLE MY_FILTER_ID NUMBER;
CALL DBMS_OLAP.PURGE_FILTER(:MY_FILTER_ID);
CALL DBMS_OLAP.PURGE_FILTER(DBMS_OLAP.FILTER_ALL);
See Also: Oracle9i Supplied PL/SQL Packages and Types Reference for
detailed information about the DBMS_OLAP package
The results from calling this package are put in the table SYSTEM.MVIEW_
RECOMMENDATIONS shown in Table 16–12. The output can be queried directly using
the SYSTEM.MVIEW_RECOMMENDATIONS table, or a structured report can be generated using
the DBMS_OLAP.GENERATE_MVIEW_REPORT procedure.
Below are several examples of how you can use the Advisor recommendation
process:
In this example, a workload is loaded from the table USER_WORKLOAD and no
filtering is applied to the workload. The fact table is called sales.
DECLARE
workload_id NUMBER;
run_id NUMBER;
BEGIN
-- load the workload
DBMS_OLAP.CREATE_ID (workload_id);
DBMS_OLAP.LOAD_WORKLOAD_USER(workload_id, DBMS_OLAP.WORKLOAD_NEW,
DBMS_OLAP.FILTER_NONE,'SH','USER_WORKLOAD' );
-- run recommend_mv
DBMS_OLAP.CREATE_ID (run_id);
DBMS_OLAP.RECOMMEND_MVIEW_STRATEGY(run_id, workload_id, NULL, 1000000, 100,
NULL, 'sales');
END;
In this example, the workload is derived from the current contents of the SQL cache
and then filtered for only the application called sales_hist:
DECLARE
workload_id NUMBER;
filter_id NUMBER;
run_id NUMBER;
BEGIN
-- add a filter for application sales_hist
DBMS_OLAP.CREATE_ID(filter_id);
DBMS_OLAP.ADD_FILTER_ITEM(filter_id, 'APPLICATION', 'sales_hist', NULL, NULL,
NULL, NULL);
-- load the workload
DBMS_OLAP.CREATE_ID(workload_id);
DBMS_OLAP.LOAD_WORKLOAD_CACHE (workload_id, DBMS_OLAP.WORKLOAD_NEW,
   DBMS_OLAP.FILTER_NONE, NULL, NULL);
-- run recommend_mv
DBMS_OLAP.CREATE_ID (run_id );
DBMS_OLAP.RECOMMEND_MVIEW_STRATEGY(run_id, workload_id, NULL, 1000000, 100,
NULL, 'sales');
END;
■ filename
Contains the fully-specified output file name
■ id
Contains the Advisor run ID for which the script will be created
■ tablespace_name
Contains an optional tablespace in which new materialized views will be
placed.
The resulting script is an executable SQL file that can contain DROP and CREATE
statements for materialized views. For new materialized views, the names of the
materialized views are auto-generated by combining the user-specified ID and the
Rank value of the materialized views. It is recommended that you review the
generated SQL script before attempting to execute it.
The filename specification requires the same security model as described in the
GENERATE_MVIEW_REPORT routine.
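For illustration, and assuming this is the DBMS_OLAP.GENERATE_MVIEW_SCRIPT
procedure described by the parameter list above, a call might take the following
form; the file path and tablespace name are placeholders:

EXECUTE DBMS_OLAP.GENERATE_MVIEW_SCRIPT('/tmp/mview_build.sql', :run_id, 'SUMMARY_TS');

The generated script contains a comment banner for each recommendation, for
example: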
/*****************************************************************************
** Rank 2
** Storage 6,000 bytes
** Gain 13.00%
** Benefit Ratio 874.00
*****************************************************************************/
/*****************************************************************************
** Rank 3
** Storage 6,000 bytes
** Gain 76.00%
** Benefit Ratio 8,744.00
**
** SELECT COUNT(*), MAX(dollar_cost), MIN(dollar_cost)
** FROM sh.sales
** WHERE store_key IN (10, 23)
** AND unit_sales > 5000
** GROUP BY store_key, promotion_key
*****************************************************************************/
Parameters
■ file_name
A valid output file specification. Note that Oracle9i restricts file access within
Oracle stored procedures. This means that file locations and names must adhere
to the known file permissions in the Policy Table. See the Security and
Performance section of the Oracle9i Java Developer’s Guide for more information
on file permissions.
■ id
The Advisor ID number used to collect or analyze data. NULL indicates all data
for the requested section.
■ flags
Report flags to indicate required detail sections. Multiple sections can be
selected by referencing the following constants.
RPT_ALL
RPT_ACTIVITY
RPT_JOURNAL
RPT_RECOMMENDATION
RPT_USAGE
RPT_WORKLOAD_DETAIL
RPT_WORKLOAD_FILTER
RPT_WORKLOAD_QUERY
Because of the Oracle security model, report output file directories must be granted
read and write permission prior to executing this call. The call is described in the
Oracle9i Java Developer’s Guide and is as follows:
CALL DBMS_JAVA.GRANT_PERMISSION('Oracle-user-goes-here',
'java.io.FilePermission', 'directory-spec-goes-here/*', 'read, write');
DBMS_OLAP.PURGE_RESULTS Procedure
DBMS_OLAP.SET_CANCELLED Procedure
Sample Sessions
Here are some complete examples of how to use the Summary Advisor.
REM===============================================================
REM Setup for demos
REM===============================================================
CONNECT system/manager
GRANT SELECT ON mview_recommendations to sh;
GRANT SELECT ON mview_workload to sh;
GRANT SELECT ON mview_filter to sh;
DISCONNECT
REM***************************************************************
REM * Demo 1: Materialized View Recommendation With User Workload*
REM***************************************************************
REM===============================================================
REM Step 1. Define user workload table and add artificial workload queries.
REM===============================================================
CONNECT sh/sh
CREATE TABLE user_workload(
query VARCHAR2(40),
owner VARCHAR2(40),
application VARCHAR2(30),
frequency NUMBER,
lastuse DATE,
priority NUMBER,
responsetime NUMBER,
resultsize NUMBER
)
/
INSERT INTO user_workload values
(
'SELECT SUM(s.quantity_sold)
FROM sales s, products p
WHERE s.prod_id = p.prod_id and p.prod_category = ''Boys''
GROUP BY p.prod_category', 'SH', 'app1', 10, NULL, 5, NULL, NULL
)
/
INSERT INTO user_workload values
(
'SELECT SUM(s.amount)
FROM sales s, products p
WHERE s.prod_id = p.prod_id AND
p.prod_category = ''Girls''
GROUP BY p.prod_category',
'SH', 'app1', 10, NULL, 6, NULL, NULL
)
/
INSERT INTO user_workload values
(
'SELECT SUM(quantity_sold)
FROM sales s, products p
WHERE s.prod_id = p.prod_id and
p.prod_category = ''Men''
GROUP BY p.prod_category
',
'SH', 'app1', 11, NULL, 3, NULL, NULL
)
/
INSERT INTO user_workload VALUES
(
'SELECT SUM(quantity_sold)
FROM sales s, products p
WHERE s.prod_id = p.prod_id and
p.prod_category in (''Women'', ''Men'')
GROUP BY p.prod_category ', 'SH', 'app1', 1, NULL, 8, NULL, NULL
)
/
REM===================================================================
REM Step 2. Create a new identifier to identify a new collection in the
REM internal repository and load the user-defined workload into the
REM workload collection without filtering the workload.
REM =====================================================================
VARIABLE WORKLOAD_ID NUMBER;
EXECUTE DBMS_OLAP.CREATE_ID(:workload_id);
EXECUTE DBMS_OLAP.LOAD_WORKLOAD_USER(:workload_id, -
  DBMS_OLAP.WORKLOAD_NEW, -
  DBMS_OLAP.FILTER_NONE, 'SH', 'USER_WORKLOAD');
SELECT COUNT(*) FROM SYSTEM.MVIEW_WORKLOAD
WHERE workloadid = :workload_id;
REM====================================================================
REM Step 3. Create a new identifier to identify a new filter object. Add
REM two filter items such that the filter can filter out workload
REM queries with priority >= 5 and frequency <= 10.
REM=====================================================================
VARIABLE filter_id NUMBER;
EXECUTE DBMS_OLAP.CREATE_ID(:filter_id);
EXECUTE DBMS_OLAP.ADD_FILTER_ITEM(:filter_id, 'PRIORITY', -
  NULL, 5, NULL, NULL, NULL);
EXECUTE DBMS_OLAP.ADD_FILTER_ITEM(:filter_id, 'FREQUENCY', NULL, -
  NULL, 10, NULL, NULL);
SELECT COUNT(*) FROM SYSTEM.MVIEW_FILTER
WHERE filterid = :filter_id;
REM=====================================================================
REM Step 4. Recommend materialized views with part of the previous workload
REM collection that satisfy the filter conditions. Create a new
REM identifier to identify the recommendation output.
REM===================================================================
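REM (The EXECUTE commands for this step were omitted above; the following is a
REM  sketch patterned on the RECOMMEND_MVIEW_STRATEGY calls elsewhere in this
REM  chapter.)
VARIABLE RUN_ID NUMBER;
EXECUTE DBMS_OLAP.CREATE_ID(:run_id);
EXECUTE DBMS_OLAP.RECOMMEND_MVIEW_STRATEGY(:run_id, :workload_id, -
  :filter_id, 1000000, 100, NULL, NULL);
SELECT COUNT(*) FROM SYSTEM.MVIEW_RECOMMENDATIONS;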
REM===================================================================
REM Step 5. Generate HTML reports on the output.
REM===================================================================
EXECUTE DBMS_OLAP.GENERATE_MVIEW_REPORT('/tmp/output1.html', :run_id, -
  DBMS_OLAP.RPT_RECOMMENDATION);
REM====================================================================
REM Step 6. Cleanup current output, filter and workload collection
REM FROM the internal repository, truncate the user workload table
REM for new user workloads.
REM====================================================================
EXECUTE DBMS_OLAP.PURGE_RESULTS(:run_id);
EXECUTE DBMS_OLAP.PURGE_FILTER(:filter_id);
EXECUTE DBMS_OLAP.PURGE_WORKLOAD(:workload_id);
SELECT COUNT(*) FROM SYSTEM.MVIEW_WORKLOAD
WHERE workloadid = :WORKLOAD_ID;
TRUNCATE TABLE user_workload;
REM*******************************************************************
REM * Demo 2: Materialized View Recommendation With SQL Cache. *
REM*******************************************************************
CONNECT sh/sh
REM===================================================================
REM Step 1. Run some applications or some SQL queries, so that the
REM Oracle SQL Cache is populated with target queries.
REM===================================================================
REM Clear Pool of SQL queries
SELECT SUM(s.quantity_sold)
FROM sales s, products p
WHERE s.prod_id = p.prod_id
GROUP BY p.prod_category;
SELECT SUM(s.amount)
FROM sales s, products p
WHERE s.prod_id = p.prod_id
GROUP BY p.prod_category;
REM====================================================================
REM Step 2. Create a new identifier to identify a new collection in the
REM internal repository and grab a snapshot of the Oracle SQL cache
REM into the new collection.
REM====================================================================
EXECUTE DBMS_OLAP.CREATE_ID(:WORKLOAD_ID);
EXECUTE DBMS_OLAP.LOAD_WORKLOAD_CACHE(:WORKLOAD_ID, -
  DBMS_OLAP.WORKLOAD_NEW, DBMS_OLAP.FILTER_NONE, NULL, 1);
SELECT COUNT(*) FROM SYSTEM.MVIEW_WORKLOAD
WHERE workloadid = :WORKLOAD_ID;
REM====================================================================
REM Step 3. Recommend materialized views with all of the workload
REM and no filtering.
REM=====================================================================
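REM (A new run identifier is needed before generating recommendations; this
REM  CREATE_ID call is assumed here.)
EXECUTE DBMS_OLAP.CREATE_ID(:run_id);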
EXECUTE DBMS_OLAP.RECOMMEND_MVIEW_STRATEGY(:run_id, :workload_id, -
  DBMS_OLAP.FILTER_NONE, 10000000, 100, NULL, NULL);
SELECT COUNT(*) FROM SYSTEM.MVIEW_RECOMMENDATIONS;
REM===================================================================
REM Step 4. Generate HTML reports on the output.
REM====================================================================
EXECUTE DBMS_OLAP.GENERATE_MVIEW_REPORT('/tmp/output2.html', :run_id, -
  DBMS_OLAP.RPT_RECOMMENDATION);
REM====================================================================
REM Step 5. Evaluate materialized views.
REM====================================================================
EXECUTE DBMS_OLAP.CREATE_ID(:run_id);
EXECUTE DBMS_OLAP.EVALUATE_MVIEW_STRATEGY(:run_id, :workload_id, -
  DBMS_OLAP.FILTER_NONE);
REM==================================================================
REM Step 6. Cleanup current output and workload collection
REM FROM the internal repository.
REM===================================================================
EXECUTE DBMS_OLAP.PURGE_RESULTS(:run_id);
EXECUTE DBMS_OLAP.PURGE_WORKLOAD(:workload_id);
DISCONNECT
REM===================================================================
REM Cleanup for demos.
REM===================================================================
CONNECT system/manager
REVOKE SELECT ON MVIEW_RECOMMENDATIONS FROM sh;
REVOKE SELECT ON MVIEW_WORKLOAD FROM sh;
REVOKE SELECT ON MVIEW_FILTER FROM sh;
DISCONNECT
ESTIMATE_MVIEW_SIZE Parameters
Table 16–15 ESTIMATE_MVIEW_SIZE Procedure Parameters
Parameter Description
stmt_id Arbitrary string used to identify the statement in an EXPLAIN
PLAN.
select_clause The SELECT statement to be analyzed.
num_rows Estimated cardinality.
num_bytes Estimated number of bytes.
ESTIMATE_SUMMARY_SIZE returns:
■ The number of rows it expects in the materialized view
■ The size of the materialized view in bytes
In the example shown below, the query specified in the materialized view is passed
into the ESTIMATE_SUMMARY_SIZE procedure. Note that the SQL statement is
passed in without a semicolon at the end.
DBMS_OLAP.ESTIMATE_SUMMARY_SIZE ('simple_store',
'SELECT product_key1, product_key2,
SUM(dollar_sales) AS sum_dollar_sales,
SUM(unit_sales) AS sum_unit_sales,
SUM(dollar_cost) AS sum_dollar_cost,
SUM(customer_count) AS no_of_customers
FROM fact GROUP BY product_key1, product_key2', no_of_rows, mv_size );
The procedure returns two values: an estimate for the number of rows, and the size
of the materialized view in bytes, as shown below.
No of Rows: 17284
Size of Materialized view (bytes): 2281488
DBMS_OLAP.EVALUATE_MVIEW_STRATEGY Procedure
Table 16–16 DBMS_OLAP.EVALUATE_MVIEW_STRATEGY Procedure Parameters
Parameter Datatype Description
run_id NUMBER The Advisor-assigned id for the current session
workload_id NUMBER An optional workload id that maps to a user-supplied
workload
filter_id NUMBER The optional filter id is used to identify a filter against
the target workload
In the example below, the utilization of materialized views is analyzed and the
results are displayed.
DBMS_OLAP.EVALUATE_MVIEW_STRATEGY(:run_id, NULL, DBMS_OLAP.FILTER_NONE);
This section deals with ways to improve your data warehouse’s performance, and
contains the following chapters:
■ Schema Modeling Techniques
■ SQL for Aggregation in Data Warehouses
■ SQL for Analysis in Data Warehouses
■ Advanced Analytic Services
■ Using Parallel Execution
■ Query Rewrite
17
Schema Modeling Techniques
Star Schemas
The star schema is the simplest data warehouse schema. It is called a star schema
because the entity-relationship diagram of this schema resembles a star, with points
radiating from a central table. The center of the star consists of one or more fact
tables and the points of the star are the dimension tables.
A star schema is characterized by one or more very large fact tables that contain the
primary information in the data warehouse and a number of much smaller
dimension tables (or lookup tables), each of which contains information about the
entries for a particular attribute in the fact table.
A star query is a join between a fact table and a number of dimension tables. Each
dimension table is joined to the fact table using a primary key to foreign key join,
but the dimension tables are not joined to each other.
The cost-based optimizer recognizes star queries and generates efficient execution
plans for them. Star queries are not recognized by the rule-based optimizer.
A typical fact table contains keys and measures. For example, in the Sales
History schema, the fact table, sales, contains the measures quantity_sold,
amount, and cost, and the keys cust_id, time_id, prod_id, channel_id, and
promo_id. The dimension tables are customers, times, products, channels,
and promotions. The product dimension table, for example, contains
information about each product number that appears in the fact table.
A star join is a primary key to foreign key join of the dimension tables to a fact
table.
The main advantages of star schemas are that they:
■ Provide a direct and intuitive mapping between the business entities being
analyzed by end users and the schema design.
■ Provide highly optimized performance for typical data warehouse queries.
Figure 17–1 presents a graphical representation of a star schema.
[Figure 17–1: Star schema — the sales fact table (amount_sold, quantity_sold) at the center, with the products, times, customers, and channels dimension tables as the points of the star.]
Snowflake Schemas
The snowflake schema is a more complex data warehouse model than a star
schema, and is a type of star schema. It is called a snowflake schema because the
diagram of the schema resembles a snowflake.
Snowflake schemas normalize dimensions to eliminate redundancy. That is, the
dimension data has been grouped into multiple tables instead of one large table. For
example, a product dimension table in a star schema might be normalized into a
products table, a product_category table, and a product_manufacturer
table in a snowflake schema. While this saves space, it increases the number of
dimension tables and requires more foreign key joins. The result is more complex
queries and reduced query performance. Figure 17–2 presents a graphical
representation of a snowflake schema.
[Figure 17–2: Snowflake schema — the sales fact table (amount_sold, quantity_sold) joined to the products, times, customers, and channels dimension tables, with products further normalized to suppliers and customers to countries.]
When a data warehouse satisfies these conditions, the majority of the star queries
running in the data warehouse will use a query execution strategy known as the
star transformation. The star transformation provides very efficient query
performance for star queries.
Note: Bitmap indexes are available only if you have purchased the
Oracle9i Enterprise Edition. In Oracle9i Standard Edition, bitmap
indexes and star transformation are not available.
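The discussion that follows refers to a star query of roughly the following form.
This is a sketch: the select list and grouping columns are illustrative, while the
constraining predicates are the ones that reappear in the subqueries below.

SELECT ch.channel_desc, t.calendar_quarter_desc, SUM(s.amount_sold) sales_amount
FROM   sales s, times t, customers c, channels ch
WHERE  s.time_id = t.time_id
AND    s.cust_id = c.cust_id
AND    s.channel_id = ch.channel_id
AND    c.cust_state_province = 'CA'
AND    ch.channel_desc IN ('Internet', 'Catalog')
AND    t.calendar_quarter_desc IN ('1999-Q1', '1999-Q2')
GROUP BY ch.channel_desc, t.calendar_quarter_desc;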
Oracle processes this query in two phases. In the first phase, Oracle uses the bitmap
indexes on the foreign key columns of the fact table to identify and retrieve only the
necessary rows from the fact table. That is, Oracle will retrieve the result set from
the fact table using essentially the following query:
SELECT ... FROM sales
WHERE time_id IN
(SELECT time_id FROM times
WHERE calendar_quarter_desc IN('1999-Q1','1999-Q2'))
AND cust_id IN
(SELECT cust_id FROM customers WHERE cust_state_province='CA')
AND channel_id IN
(SELECT channel_id FROM channels WHERE channel_desc IN('Internet','Catalog'));
This is the transformation step of the algorithm, because the original star query has
been transformed into this subquery representation. This method of accessing the
fact table leverages the strengths of Oracle's bitmap indexes. Intuitively, bitmap
indexes provide a set-based processing scheme within a relational database. Oracle
has implemented very fast methods for doing set operations such as AND (an
intersection in standard set-based terminology), OR (a set-based union), MINUS, and
COUNT.
In this star query, a bitmap index on time_id is used to identify the set of all rows
in the fact table corresponding to sales in 1999-Q1. This set is represented as a
bitmap (a string of 1's and 0's that indicates which rows of the fact table are
members of the set).
A similar bitmap is retrieved for the fact table rows corresponding to the sale from
1999-Q2. The bitmap OR operation is used to combine this set of Q1 sales with the
set of Q2 sales.
Additional set operations will be done for the customer dimension and the
product dimension. At this point in the star query processing, there are three
bitmaps. Each bitmap corresponds to a separate dimension table, and each bitmap
represents the set of rows of the fact table that satisfy that individual dimension's
constraints.
These three bitmaps are combined into a single bitmap using the bitmap AND
operation. This final bitmap represents the set of rows in the fact table that satisfy
all of the constraints on the dimension table. This is the result set, the exact set of
rows from the fact table needed to evaluate the query. Note that none of the actual
data in the fact table has been accessed. All of these operations rely solely on the
bitmap indexes and the dimension tables. Because of the bitmap indexes'
compressed data representations, the bitmap set-based operations are extremely
efficient.
Once the result set is identified, the bitmap is used to access the actual data from the
sales table. Only those rows that are required for the end user's query are retrieved
from the fact table. At this point, Oracle has effectively joined all of the dimension
tables to the fact table using bitmap indexes. This technique provides excellent
performance because Oracle is joining all of the dimension tables to the fact table
with one logical join operation, rather than joining each dimension table to the fact
table independently.
The second phase of this query is to join these rows from the fact table (the result
set) to the dimension tables. Oracle will use the most efficient method for accessing
and joining the dimension tables. Many dimension tables are very small, and table scans
are typically the most efficient access method for these dimension tables. For large
dimension tables, table scans may not be the most efficient access method. In the
example above, a bitmap index on product.department can be used to quickly
identify all of those products in the grocery department. Oracle's cost-based
optimizer automatically determines which access method is most appropriate for a
given dimension table, based upon the cost-based optimizer's knowledge about the
sizes and data distributions of each dimension table.
The specific join method (as well as indexing method) for each dimension table will
likewise be intelligently determined by the cost-based optimizer. A hash join is
often the most efficient algorithm for joining the dimension tables. The final answer
is returned to the user once all of the dimension tables have been joined. The query
technique of retrieving only the matching rows from one table and then joining to
another table is commonly known as a semi-join.
HASH JOIN
TABLE ACCESS FULL TIMES
PARTITION RANGE ITERATOR
TABLE ACCESS BY LOCAL INDEX ROWID SALES
BITMAP CONVERSION TO ROWIDS
BITMAP AND
BITMAP MERGE
BITMAP KEY ITERATION
BUFFER SORT
TABLE ACCESS FULL CUSTOMERS
BITMAP INDEX RANGE SCAN SALES_CUST_BIX
BITMAP MERGE
BITMAP KEY ITERATION
BUFFER SORT
TABLE ACCESS FULL CHANNELS
BITMAP INDEX RANGE SCAN SALES_CHANNEL_BIX
BITMAP MERGE
BITMAP KEY ITERATION
BUFFER SORT
TABLE ACCESS FULL TIMES
BITMAP INDEX RANGE SCAN SALES_TIME_BIX
In this plan, the fact table is accessed through a bitmap access path based on a
bitmap AND, of three merged bitmaps. The three bitmaps are generated by the
BITMAP MERGE row source being fed bitmaps from row source trees underneath it.
Each such row source tree consists of a BITMAP KEY ITERATION row source which
fetches values from the subquery row source tree, which in this example is a full
table access. For each such value, the BITMAP KEY ITERATION row source retrieves
the bitmap from the bitmap index. After the relevant fact table rows have been
retrieved using this access path, they are joined with the dimension tables and
temporary tables to produce the answer to the query.
The processing of the same star query using the bitmap join index is similar to the
previous example. The only difference is that Oracle will utilize the join index,
instead of a single-table bitmap index, to access the customer data in the first phase
of the star query.
The difference between this plan as compared to the previous one is that the inner
part of the bitmap index scan for the customer dimension has no subselect. This is
because the join predicate information on customer.cust_state_province
can be satisfied with the bitmap join index sales_c_state_bjix.
After computing the best plans for both the transformed and untransformed
versions of the query, the optimizer then decides whether to use the best plan for
the transformed or untransformed version.
If the query requires accessing a large percentage of the rows in the fact table, it
might be better to use a full table scan and not use the transformations. However, if
the constraining predicates on the dimension tables are sufficiently selective that
only a small portion of the fact table needs to be retrieved, the plan based on the
transformation will probably be superior.
Note that the optimizer generates a subquery for a dimension table only if it decides
that it is reasonable to do so based on a number of criteria. There is no guarantee
that subqueries will be generated for all dimension tables. The optimizer may also
decide, based on the properties of the tables and the query, that the transformation
does not merit being applied to a particular query. In this case the best regular plan
will be used.
[Figure: A cube of data with Product, Market, and Time dimensions.]
You can retrieve slices of data from the cube. These correspond to cross-tabular
reports such as the one shown in Table 18–1. Regional managers might study the
data by comparing slices of the cube applicable to different markets. In contrast,
product managers might compare slices that apply to different products. An ad hoc
user might work with a wide variety of constraints, working in a subset cube.
Answering multidimensional questions often involves accessing and querying huge
quantities of data, sometimes in millions of rows. Because the flood of detailed data
generated by large organizations cannot be interpreted at the lowest level,
aggregated views of the information are essential. Aggregations, such as sums and
counts, across many dimensions are vital to multidimensional analyses. Therefore,
analytical tasks require convenient and efficient data aggregation.
Optimized Performance
Not only multidimensional issues, but all types of processing can benefit from
enhanced aggregation facilities. Transaction processing, financial and
manufacturing systems—all of these generate large numbers of production reports
An Aggregate Scenario
To illustrate the use of the GROUP BY extension, this chapter uses the Sales
History data of the common schema. All the examples refer to data from this
scenario. The hypothetical company has sales across the world and tracks sales
both in dollars and in quantities sold. Because there are many rows of data, the
queries shown here typically have tight constraints on their WHERE clauses to limit
the results to a small number of rows.
[Table 18–1: Cross-tabular report of sales by channel for the UK and US, including channel subtotals, country subtotals, and a grand total.]
Consider that even a simple report like Example 18–1, with just nine values in its
grid, generates four subtotals and a grand total. The subtotals are the shaded
numbers. Half of the values needed for this report would not be calculated with a
query that requested SUM(amount_sold) and did a GROUP BY(channel_desc,
country_id). To get the higher-level aggregates would require additional queries.
Database commands that offer improved calculation of subtotals bring major
benefits to querying, reporting, and analytical operations.
SELECT channel_desc, country_id,
TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
FROM sales, customers, times, channels
WHERE sales.time_id=times.time_id AND
sales.cust_id=customers.cust_id AND
sales.channel_id= channels.channel_id AND
channels.channel_desc IN ('Direct Sales', 'Internet') AND
times.calendar_month_desc='2000-09'
AND country_id IN ('UK', 'US')
GROUP BY CUBE(channel_desc, country_id);
CHANNEL_DESC CO SALES$
-------------------- -- --------------
Direct Sales UK 1,378,126
Direct Sales US 2,835,557
Direct Sales 4,213,683
Internet UK 911,739
Internet US 1,732,240
Internet 2,643,979
UK 2,289,865
US 4,567,797
6,857,662
ROLLUP Syntax
ROLLUP appears in the GROUP BY clause in a SELECT statement. Its form is:
SELECT … GROUP BY ROLLUP(grouping_column_reference_list)
Partial Rollup
You can also roll up so that only some of the sub-totals will be included. This partial
rollup uses the following syntax:
GROUP BY expr1, ROLLUP(expr2, expr3);
In this case, the GROUP BY clause creates subtotals at (2+1=3) aggregation levels.
That is, at level (expr1, expr2, expr3), (expr1, expr2), and (expr1).
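For example, using the same tables as the earlier CUBE query, a partial rollup could
be written as follows. This is an illustrative sketch; it produces subtotals at the
(channel_desc, calendar_month_desc, country_id), (channel_desc,
calendar_month_desc), and (channel_desc) levels.

SELECT channel_desc, calendar_month_desc, country_id,
       TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
FROM   sales, customers, times, channels
WHERE  sales.time_id=times.time_id AND
       sales.cust_id=customers.cust_id AND
       sales.channel_id=channels.channel_id AND
       channels.channel_desc IN ('Direct Sales', 'Internet') AND
       times.calendar_month_desc IN ('2000-09', '2000-10') AND
       country_id IN ('UK', 'US')
GROUP BY channel_desc, ROLLUP(calendar_month_desc, country_id);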
CUBE is typically most suitable in queries that use columns from multiple
dimensions rather than columns representing different levels of a single dimension.
For instance, a commonly requested cross-tabulation might need subtotals for all
the combinations of month, state, and product. These are three independent
dimensions, and analysis of all possible subtotal combinations is commonplace. In
contrast, a cross-tabulation showing all possible combinations of year, month, and
day would have several values of limited interest, because there is a natural
hierarchy in the time dimension. Subtotals such as profit by day of month summed
across year would be unnecessary in most analyses. Relatively few users need to
ask "What were the total sales for the 16th of each month across the year?" See
"Hierarchy Handling in ROLLUP and CUBE" on page 18-28 for an example of
handling rollup calculations efficiently.
CUBE Syntax
CUBE appears in the GROUP BY clause in a SELECT statement. Its form is:
SELECT … GROUP BY CUBE (grouping_column_reference_list)
Partial CUBE
Partial CUBE resembles partial ROLLUP in that you can limit it to certain dimensions
and precede it with columns outside the CUBE operator. In this case, subtotals of all
possible combinations are limited to the dimensions within the cube list (in
parentheses), and they are combined with the preceding items in the GROUP BY list.
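For instance (an illustrative sketch following the same pattern), the clause:

GROUP BY channel_desc, CUBE(calendar_month_desc, country_id)

produces subtotals at the levels (channel_desc, calendar_month_desc, country_id),
(channel_desc, calendar_month_desc), (channel_desc, country_id), and
(channel_desc).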
GROUPING Functions
Two challenges arise with the use of ROLLUP and CUBE. First, how can you
programmatically determine which result set rows are subtotals, and how do you
find the exact level of aggregation for a given subtotal? You often need to use
subtotals in calculations such as percent-of-totals, so you need an easy way to
determine which rows are the subtotals. Second, what happens if query results
contain both stored NULL values and "NULL" values created by a ROLLUP or CUBE?
How can you differentiate between the two?
GROUPING Function
GROUPING handles these problems. Using a single column as its argument,
GROUPING returns 1 when it encounters a NULL value created by a ROLLUP or CUBE
operation. That is, if the NULL indicates the row is a subtotal, GROUPING returns a 1.
Any other type of value, including a stored NULL, returns a 0.
GROUPING Syntax
GROUPING appears in the selection list portion of a SELECT statement. Its form is:
SELECT … [GROUPING(dimension_column)…] …
GROUP BY … {CUBE | ROLLUP} (dimension_column)
A program can easily identify the detail rows above by a mask of "0 0 0" on the T, R,
and D columns. The first level subtotal rows have a mask of "0 0 1", the second level
subtotal rows have a mask of "0 1 1", and the overall total row has a mask of "1 1 1".
You can resolve ambiguity in result sets by using the GROUPING and other functions
as shown in the code below.
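A statement of the following general form produces such a result. This sketch is
patterned on the earlier CUBE query and on the column specification quoted below;
its join and filter conditions are illustrative.

SELECT DECODE(GROUPING(channel_desc), 1, 'All Channels', channel_desc) AS Channel,
       DECODE(GROUPING(country_id), 1, 'All Countries', country_id) AS Country,
       TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
FROM   sales, customers, times, channels
WHERE  sales.time_id=times.time_id AND
       sales.cust_id=customers.cust_id AND
       sales.channel_id=channels.channel_id AND
       channels.channel_desc IN ('Direct Sales', 'Internet') AND
       times.calendar_month_desc='2000-09' AND
       country_id IN ('UK', 'US')
GROUP BY CUBE(channel_desc, country_id);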
These results include text values clarifying which rows have aggregations; the
GROUPING function is used to differentiate aggregate-based "NULL" values from
stored NULL values.
To understand the SQL statement above, note its first column specification, which
handles the channel_desc column. In the first line of the SQL code above:
SELECT DECODE(GROUPING(channel_desc), 1, 'All Channels', channel_desc)
AS Channel,
CHANNEL_DESC C CO SALES$ CH MO CO
-------------------- - -- -------------- --------- --------- ---------
UK 4,554,487 1 1 0
US 9,370,256 1 1 0
Direct Sales 8,510,440 0 1 1
Internet 5,414,303 0 1 1
13,924,743 1 1 1
Compare the result set of Example 18–8 with that in Example 18–3 on page 18-9 to
see how Example 18–8 is a precisely specified group: it contains only the yearly
totals, regional totals aggregated over time and department, and the grand total.
GROUPING_ID Function
To find the GROUP BY level of a particular row, a query must return GROUPING
function information for each of the GROUP BY columns. If we do this using the
GROUPING function, every GROUP BY column requires another column using the
GROUPING function. For instance, a four-column GROUP BY clause needs to be
analyzed with four GROUPING functions. This is inconvenient to write in SQL and
increases the number of columns required in the query. When you want to store the
query result sets in tables, as with materialized views, the extra columns waste
storage space.
To address these problems, Oracle9i introduces the GROUPING_ID function.
GROUPING_ID returns a single number that enables you to determine the exact
GROUP BY level. For each row, GROUPING_ID takes the set of 1’s and 0’s that would
be generated if you used the appropriate GROUPING functions and concatenates
them, forming a bit vector. The bit vector is treated as a binary number, and the
number’s base-10 value is returned by the GROUPING_ID function. For instance, if
you group with the expression CUBE(a, b), the possible values are as shown in
Table 18–2:
Aggregation Level     Bit Vector     GROUPING_ID
a, b                  0 0            0
a                     0 1            1
b                     1 0            2
Grand Total           1 1            3
GROUP_ID Function
While the extensions to GROUP BY offer power and flexibility, they also allow
complex result sets that can include duplicate groupings. The GROUP_ID function
lets you distinguish among duplicate groupings. If there are multiple sets of rows
calculated for a given level, GROUP_ID assigns the value of 0 to all the rows in the
first set. All other sets of duplicate rows for a particular grouping are assigned
higher values, starting with 1. For example, consider the following query, which
generates a duplicate grouping:
The above statement computes all the 8 (2 *2 *2) groupings, though only the above 3
groups are of interest to you.
Another alternative is the following statement, which is lengthy due to several
unions. This statement requires three scans of the base table, making it inefficient.
CUBE and ROLLUP can be thought of as grouping sets with very specific semantics.
The following equivalences show this fact:
CUBE(a, b, c)
is equivalent to
GROUPING SETS ((a, b, c), (a, b), (a, c), (b, c), (a), (b), (c), ())
ROLLUP(a, b, c)
is equivalent to
GROUPING SETS ((a, b, c), (a, b), (a), ())
Similarly,
GROUP BY GROUPING SETS(channel_desc, calendar_month_desc, country_id)
is equivalent to:
GROUP BY channel_desc UNION ALL
GROUP BY calendar_month_desc UNION ALL
GROUP BY country_id
In the absence of an optimizer that looks across query blocks to generate the
execution plan, a query based on UNION would need multiple scans of the base
table, sales. This could be very inefficient as fact tables will normally be huge. Using
GROUPING SETS statements, all the groupings of interest are available in the same
query block.
Composite Columns
A composite column is a collection of columns that are treated as a unit during the
computation of groupings. You specify the columns in parentheses as in the
following statement:
ROLLUP (year, (quarter, month), day)
In this statement, quarter and month are treated as a single unit, so no separate
(year, quarter) subtotal is produced; the statement is instead equivalent to the
following groupings of a UNION ALL:
■ (year, quarter, month, day),
■ (year, quarter, month),
■ (year)
■ ()
Here, (quarter, month) form a composite column and are treated as a unit. In
general, composite columns are useful in ROLLUP, CUBE, GROUPING SETS, and
concatenated groupings. For example, in CUBE or ROLLUP, composite columns
would mean skipping aggregation across certain levels. That is,
GROUP BY ROLLUP(a, (b, c))
is equivalent to
GROUP BY a, b, c UNION ALL
GROUP BY a UNION ALL
GROUP BY ()
Here, (b, c) are treated as a unit and rollup will not be applied across (b, c). It is
as if you have an alias, for example z, for (b, c) and the GROUP BY expression
reduces to GROUP BY ROLLUP(a, z). Compare this with the normal rollup as in:
GROUP BY ROLLUP(a, b, c)
which would be
GROUP BY a, b, c UNION ALL
GROUP BY a, b UNION ALL
GROUP BY a UNION ALL
GROUP BY ().
Similarly,
GROUP BY CUBE((a, b), c)
would be equivalent to
GROUP BY a, b, c UNION ALL
GROUP BY a, b UNION ALL
GROUP BY c UNION ALL
GROUP BY ()
Concatenated Groupings
Concatenated groupings offer a concise way to generate useful combinations of
groupings. Groupings specified with concatenated groupings yield the
cross-product of groupings from each grouping set. The cross-product operation
enables even a small number of concatenated groupings to generate a large number
of final groups. The concatenated groupings are specified simply by listing multiple
grouping sets, cubes, and rollups, and separating them with commas. Here is an
example of concatenated grouping sets:
GROUP BY GROUPING SETS(a, b), GROUPING SETS(c, d)
This statement computes the cross-product of the two grouping sets, producing the
groupings (a, c), (a, d), (b, c), and (b, d).
The ROLLUPs in the GROUP BY specification above generate the following groups,
four for each dimension:
The concatenated grouping sets specified in the SQL above will take the ROLLUP
aggregations listed in the table and perform a cross-product on them. The
cross-product will create the 96 (4x4x6) aggregate groups needed for a hierarchical
cube of the data. There are major advantages in using three ROLLUP expressions to
replace what would otherwise require 96 grouping set expressions: the concise SQL
is far less error-prone to develop and far easier to maintain, and it enables much
better query optimization. You can picture how a cube with more dimensions and
more levels would make the use of concatenated groupings even more
advantageous.
CHANNEL_DESC CHANNEL_TOTAL
-------------------- -------------
Direct Sales 312829530
Note that the example above could also be performed efficiently using the reporting
aggregate functions described in Chapter 19, "SQL for Analysis in Data
Warehouses".
The following topics provide information about how to improve analytical SQL
queries in a data warehouse:
■ Overview of SQL for Analysis in Data Warehouses
■ Ranking Functions
■ Windowing Aggregate Functions
■ Reporting Aggregate Functions
■ LAG/LEAD Functions
■ FIRST/LAST Functions
■ Linear Regression Functions
■ Inverse Percentile Functions
■ Hypothetical Rank and Distribution Functions
■ WIDTH_BUCKET Function
■ User-Defined Aggregate Functions
■ CASE Expressions
To perform these operations, the analytic functions add several new elements to
SQL processing. These elements build on existing SQL to allow flexible and
powerful calculation expressions. With just a few exceptions, the analytic functions
have these new elements. The processing flow is represented in Figure 19–1.
[Figure 19–1: Processing order — joins, WHERE, GROUP BY, and HAVING clauses are applied first; partitions are then created and analytic functions are applied to each row in each partition; the final ORDER BY is processed last.]
■ Result Set Partitions - The analytic functions allow users to divide query result
sets into groups of rows called partitions. Note that the term partitions used
with analytic functions is unrelated to Oracle's table partitions feature.
Throughout this chapter, the term partitions refers to only the meaning related
to analytic functions. Partitions are created after the groups defined with GROUP
BY clauses, so they are available to any aggregate results such as sums and
averages. Partition divisions may be based upon any desired columns or
expressions. A query result set may be partitioned into just one partition
holding all the rows, a few large partitions, or many small partitions holding
just a few rows each.
■ Window - For each row in a partition, you can define a sliding window of data.
This window determines the range of rows used to perform the calculations for
the current row. Window sizes can be based on either a physical number of
rows or a logical interval such as time. The window has a starting row and an
ending row. Depending on its definition, the window may move at one or both
ends. For instance, a window defined for a cumulative sum function would
have its starting row fixed at the first row of its partition, and its ending row
would slide from the starting point all the way to the last row of the partition.
In contrast, a window defined for a moving average would have both its
starting and end points slide so that they maintain a constant physical or logical
range.
A window can be set as large as all the rows in a partition or just a sliding
window of one row within a partition. When a window is near a border, the
function returns results for only the available rows, rather than warning you
that the results are not what you want.
When using window functions, the current row is included during calculations,
so you should only specify (n-1) when you are dealing with n items.
■ Current Row - Each calculation performed with an analytic function is based on
a current row within a partition. The current row serves as the reference point
determining the start and end of the window. For instance, a centered moving
average calculation could be defined with a window that holds the current row,
the six preceding rows, and the following six rows. This would create a sliding
window of 13 rows, as shown in Figure 19–2.
[Figure 19–2: Sliding window with a window start and window finish positioned around the current row.]
Ranking Functions
A ranking function computes the rank of a record compared to other records in the
dataset based on the values of a set of measures. The types of ranking function are:
■ RANK and DENSE_RANK
■ CUME_DIST and PERCENT_RANK
■ NTILE
■ ROW_NUMBER
DENSE_RANK() OVER (
[PARTITION BY <value expression1> [, ...]]
ORDER BY <value expression2> [collate clause] [ASC|DESC]
[NULLS FIRST|NULLS LAST] [, ...]
)
The difference between RANK and DENSE_RANK is that DENSE_RANK leaves no gaps
in ranking sequence when there are ties. That is, if you were ranking a competition
using DENSE_RANK and had three people tie for second place, you would say that
all three were in second place and that the next person came in third. The RANK
function would also give three people in second place, but the next person would
be in fifth place.
The following are some relevant points about RANK:
■ Ascending is the default sort order, which you may want to change to descending.
■ The expressions in the optional PARTITION BY clause divide the query result set
into groups within which the RANK function operates. That is, RANK gets reset
whenever the group changes. In effect, the value expressions of the PARTITION BY
clause define the reset boundaries.
■ If the PARTITION BY clause is missing, then ranks are computed over the entire
query result set.
■ value_expression1 can be any valid expression involving column references,
constants, aggregates, or expressions invoking these items.
■ The ORDER BY clause specifies the measures (<value expression>s) on which
ranking is done and defines the order in which rows are sorted in each group
(or partition). Once the data is sorted within each partition, ranks are given to
each row starting from 1.
■ value_expression2 can be any valid expression involving column references,
aggregates, or expressions invoking these items.
■ The NULLS FIRST | NULLS LAST clause indicates the position of NULLs in the
ordered sequence, either first or last in the sequence. The order of the sequence
would make NULLs compare either high or low with respect to non-NULL
values. If the sequence were in ascending order, then NULLS FIRST implies that
NULLs are smaller than all other non-NULL values and NULLS LAST implies
they are larger than non-NULL values. It is the opposite for descending order.
See the example in "Treatment of NULLs" on page 19-11.
■ If the NULLS FIRST | NULLS LAST clause is omitted, then the ordering of the
null values depends on the ASC or DESC arguments. Null values are considered
larger than any other values. If the ordering sequence is ASC, then nulls will
appear last; nulls will appear first otherwise. Nulls are considered equal to
other nulls and, therefore, the order in which nulls are presented is
non-deterministic.
Ranking Order
The following example shows how the [ASC | DESC] option changes the ranking
order.
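The following sketch (illustrative; the month and country constraints are arbitrary)
computes both a default ascending rank and a descending rank on the same measure:

SELECT channel_desc,
       TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$,
       RANK() OVER (ORDER BY SUM(amount_sold)) AS default_rank,
       RANK() OVER (ORDER BY SUM(amount_sold) DESC NULLS LAST) AS custom_rank
FROM   sales, customers, times, channels
WHERE  sales.time_id=times.time_id AND
       sales.cust_id=customers.cust_id AND
       sales.channel_id=channels.channel_id AND
       times.calendar_month_desc IN ('2000-09', '2000-10') AND
       country_id='UK'
GROUP BY channel_desc;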
While the data in this result is ordered on the measure SALES$, in general, it is not
guaranteed by the RANK function that the data will be sorted on the measures. If
you want the data to be sorted on SALES$ in your result, you must specify it
explicitly with an ORDER BY clause, at the end of the SELECT statement.
The sales_count column breaks the ties for three pairs of values.
Note that, in the case of DENSE_RANK, the largest rank value gives the number of
distinct values in the dataset.
A single query block can contain more than one ranking function, each partitioning
the data into different groups (that is, reset on different boundaries). The groups can
be mutually exclusive. For example, one query could rank products by their dollar
sales within each month and, with a second ranking function, within each channel.
Treatment of NULLs
NULLs are treated like normal values. Also, for rank computation, a NULL value is
assumed to be equal to another NULL value. Depending on the ASC | DESC options
provided for measures and the NULLS FIRST | NULLS LAST clause, NULLs will
either sort low or high and hence, are given ranks appropriately. The following
example shows how NULLs are ranked in different cases:
If the value for two rows is NULL, the next group expression is used to resolve the
tie. If the tie cannot be resolved even then, the next expression is used, and so on
until the tie is resolved or all expressions are exhausted, in which case the two rows
are given the same rank.
Top N Ranking
You can easily obtain top N ranks by enclosing the RANK function in a subquery and
then applying a filter condition outside the subquery. For example, to obtain the top
five countries in sales for a specific month, you can issue a query of the following
form (the inner query computes the rank):
SELECT * FROM
  (SELECT country_id, TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$,
          RANK() OVER (ORDER BY SUM(amount_sold) DESC) AS COUNTRY_RANK
   FROM sales, customers, times, channels
   WHERE sales.cust_id=customers.cust_id AND
         sales.time_id=times.time_id AND
         sales.channel_id=channels.channel_id AND
         times.calendar_month_desc='2000-09'
   GROUP BY country_id)
WHERE COUNTRY_RANK <= 5;
CO SALES$ COUNTRY_RANK
-- -------------- ------------
US 6,517,786 1
NL 3,447,121 2
UK 3,207,243 3
DE 3,194,765 4
FR 2,125,572 5
Bottom N Ranking
Bottom N is similar to top N except for the ordering sequence within the rank
expression. Using the previous example, you can order SUM(s_amount) ascending
instead of descending.
CUME_DIST
The CUME_DIST function (defined as the inverse of percentile in some statistical
books) computes the position of a specified value relative to a set of values. The
order can be ascending or descending. Ascending is the default. The range of values
for CUME_DIST is from greater than 0 to 1. To compute the CUME_DIST of a value x
in a set S of size N, you use the formula:
CUME_DIST(x) = (number of values in S coming before and including x
                in the specified order) / N
The semantics of various options in the CUME_DIST function are similar to those in
the RANK function. The default order is ascending, implying that the lowest value
gets the lowest CUME_DIST (as all other values come later than this value in the
order). NULLs are treated the same as they are in the RANK function. They are
counted towards both the numerator and the denominator as they are treated like
non-NULL values. The example below finds cumulative distribution of sales by
channel within each month:
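A representative query (the month values are illustrative) is:
SELECT calendar_month_desc AS MONTH, channel_desc,
  TO_CHAR(SUM(amount_sold), '9,999,999,999') AS SALES$,
  CUME_DIST() OVER (PARTITION BY calendar_month_desc
    ORDER BY SUM(amount_sold)) AS CUME_DIST_BY_CHANNEL
FROM sales s, channels ch, times t
WHERE s.channel_id=ch.channel_id AND s.time_id=t.time_id
  AND t.calendar_month_desc IN ('2000-07', '2000-09')
GROUP BY calendar_month_desc, channel_desc;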
PERCENT_RANK
PERCENT_RANK is similar to CUME_DIST, but it uses rank values rather than row
counts in its numerator. Therefore, it returns the percent rank of a value relative to a
group of values. The function is available in many popular spreadsheets. PERCENT_
RANK of a row is calculated as:
(rank of row in its partition - 1) / (number of rows in the partition - 1)
PERCENT_RANK returns values in the range zero to one. The row(s) with a rank of 1
will have a PERCENT_RANK of zero.
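For instance, a query of the following form (sample schema assumed) returns each channel's percent rank by total sales:
SELECT channel_desc,
  TO_CHAR(SUM(amount_sold), '9,999,999,999') AS SALES$,
  PERCENT_RANK() OVER (ORDER BY SUM(amount_sold) DESC) AS PERCENT_RANK
FROM sales s, channels ch
WHERE s.channel_id=ch.channel_id
GROUP BY channel_desc;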
NTILE
NTILE allows easy calculation of tertiles, quartiles, deciles and other common
summary statistics. This function divides an ordered partition into a specified
number of groups called buckets and assigns a bucket number to each row in the
partition. NTILE is a very useful calculation because it lets users divide a data set
into fourths, thirds, and other groupings.
The buckets are calculated so that each bucket has exactly the same number of rows
assigned to it or at most 1 row more than the others. For instance, if you have 100
rows in a partition and ask for an NTILE function with four buckets, 25 rows will be
assigned a value of 1, 25 rows will have value 2, and so on. These buckets are
referred to as equiheight buckets.
If the number of rows in the partition does not divide evenly (without a remainder)
into the number of buckets, then the number of rows assigned per bucket will differ
by one at most. The extra rows will be distributed one per bucket starting from the
lowest bucket number. For instance, if there are 103 rows in a partition which has an
NTILE(5) function, the first 21 rows will be in the first bucket, the next 21 in the
second bucket, the next 21 in the third bucket, the next 20 in the fourth bucket and
the final 20 in the fifth bucket.
The NTILE function has the following syntax:
NTILE(N) OVER
([PARTITION BY <value expression1> [, ...]]
ORDER BY <value expression2> [collate clause] [ASC|DESC]
[NULLS FIRST | NULLS LAST] [, ...])
Here is an example assigning each month's sales total into one of 4 buckets:
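A sketch of such a query (restricted to one calendar year for brevity) is:
SELECT calendar_month_desc AS MONTH,
  TO_CHAR(SUM(amount_sold), '9,999,999,999') AS SALES$,
  NTILE(4) OVER (ORDER BY SUM(amount_sold)) AS TILE4
FROM sales s, times t
WHERE s.time_id=t.time_id AND t.calendar_year=1999
GROUP BY calendar_month_desc;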
ROW_NUMBER
The ROW_NUMBER function assigns a unique number (sequentially, starting from 1,
as defined by ORDER BY) to each row within the partition. It has the following
syntax:
ROW_NUMBER() OVER
([PARTITION BY <value expression1> [, ...]]
ORDER BY <value expression2> [collate clause] [ASC|DESC]
[NULLS FIRST | NULLS LAST] [, ...])
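One possible query (the TRUNC call deliberately rounds the totals so that ties appear; the months are illustrative) is:
SELECT channel_desc, calendar_month_desc,
  TO_CHAR(TRUNC(SUM(amount_sold), -6), '9,999,999,999') AS SALES$,
  ROW_NUMBER() OVER (ORDER BY TRUNC(SUM(amount_sold), -6) DESC) AS ROW_NUMBER
FROM sales s, channels ch, times t
WHERE s.channel_id=ch.channel_id AND s.time_id=t.time_id
  AND t.calendar_month_desc IN ('2000-09', '2000-10')
GROUP BY channel_desc, calendar_month_desc;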
Note that there are three pairs of tie values in these results. Like NTILE, ROW_
NUMBER is a non-deterministic function, so each tied value could have its row
number switched. To ensure deterministic results, you must order on a unique key.
In most cases, that will require adding a new tie-breaker column to the query and
using it in the ORDER BY specification.
Windowing Aggregate Functions
Windowing functions can be used to compute cumulative, moving, and centered
aggregates. They can be used only in the SELECT and ORDER BY clauses of the query. Two other
functions are available: FIRST_VALUE, which returns the first value in the window;
and LAST_VALUE, which returns the last value in the window. These functions
provide access to more than one row of a table without a self-join. The syntax of the
windowing functions is:
{SUM|AVG|MAX|MIN|COUNT|STDDEV|VARIANCE|FIRST_VALUE|LAST_VALUE}
({<value expression1> | *}) OVER
([PARTITION BY <value expression2>[,...]]
ORDER BY <value expression3> [collate clause]
[ASC | DESC] [NULLS FIRST | NULLS LAST] [,...]
ROWS | RANGE
{{UNBOUNDED PRECEDING | <value expression4> PRECEDING}
| BETWEEN
{UNBOUNDED PRECEDING | <value expression4> PRECEDING}
AND {CURRENT ROW | <value expression4> FOLLOWING}})
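For example, a cumulative (running) total of sales per customer can be computed as follows (the customer IDs and year are illustrative):
SELECT c.cust_id, t.calendar_quarter_desc,
  TO_CHAR(SUM(amount_sold), '9,999,999,999') AS Q_SALES,
  TO_CHAR(SUM(SUM(amount_sold)) OVER (PARTITION BY c.cust_id
    ORDER BY c.cust_id, t.calendar_quarter_desc
    ROWS UNBOUNDED PRECEDING), '9,999,999,999') AS CUM_SALES
FROM sales s, times t, customers c
WHERE s.time_id=t.time_id AND s.cust_id=c.cust_id
  AND t.calendar_year=1999 AND c.cust_id IN (6380, 6510)
GROUP BY c.cust_id, t.calendar_quarter_desc
ORDER BY c.cust_id, t.calendar_quarter_desc;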
In this example, the analytic function SUM defines, for each row, a window that
starts at the beginning of the partition (UNBOUNDED PRECEDING) and ends, by
default, at the current row.
Nested SUMs are needed in this example since we are performing a SUM over a value
that is itself a SUM. Nested aggregations are used very often in analytic aggregate
functions.
The following query computes, for one customer, each month's sales total for 1999
and the moving average of that total over the current and preceding two months:
SELECT c.cust_id, t.calendar_month_desc,
  TO_CHAR(SUM(amount_sold), '9,999,999,999') AS SALES,
  TO_CHAR(AVG(SUM(amount_sold)) OVER (ORDER BY c.cust_id,
    t.calendar_month_desc
    ROWS 2 PRECEDING), '9,999,999,999') as MOVING_3_MONTH_AVG
FROM sales s, times t, customers c
WHERE
s.time_id=t.time_id AND
s.cust_id=c.cust_id AND
t.calendar_year=1999 AND
c.cust_id IN (6380)
GROUP BY c.cust_id, t.calendar_month_desc
ORDER BY c.cust_id, t.calendar_month_desc;
Note that the first two rows for the three month moving average calculation in the
data above are based on a smaller interval size than specified because the window
calculation cannot reach past the data retrieved by the query. You need to consider
the different window sizes found at the borders of result sets. In other words, you
may need to modify the query to include exactly what you want.
The starting and ending rows for each product's centered moving average
calculation in the data above are based on just two days, since the window
calculation cannot reach past the data retrieved by the query. Users need to consider
the different window sizes found at the borders of result sets: the query may need
to be adjusted.
R_RKEY P_PKEY S_AMT CURRENT_GROUP_SUM /*Source numbers for the current_group_sum column*/
------ ------ ----- ----------------- /*------- */
EAST 1 130 130 /* 130 */
EAST 2 50 180 /*130+50 */
EAST 3 80 265 /*50+(80+75+60) */
EAST 3 75 265 /*50+(80+75+60) */
EAST 3 60 265 /*50+(80+75+60) */
EAST 4 20 235 /*80+75+60+20 */
instead of just
fn(t_timekey)
to mean the same thing. You can also write a PL/SQL function that returns an
INTERVAL datatype value.
Or it could yield:
TIME_ID INDIV_SALE CUM_SALES
--------- -------------- --------------
11-DEC-99 1,932 2,968
11-DEC-99 588 3,556
11-DEC-99 1,036 1,036
12-DEC-99 504 504
12-DEC-99 1,160 2,093
12-DEC-99 429 933
One way to handle this problem would be to add the prod_id column to the result
set and order on both time_id and prod_id.
Reporting Aggregate Functions
Reporting aggregate functions return the same aggregate value for every row of a
partition. Their syntax is:
{SUM | AVG | MAX | MIN | COUNT | STDDEV | VARIANCE}
([ALL | DISTINCT] {<value expression1> | *}) OVER
([PARTITION BY <value expression2>[,...]])
where
■ An asterisk (*) is only allowed in COUNT(*)
■ DISTINCT is supported only if corresponding aggregate functions allow it
■ <value expression1> and <value expression2> can be any valid
expression involving column references or aggregates.
■ The PARTITION BY clause defines the groups on which the reporting
functions are computed. If the PARTITION BY clause is absent, then the
function is computed over the whole query result set.
Reporting functions can appear only in the SELECT clause or the ORDER BY clause.
The major benefit of reporting functions is their ability to do multiple passes of data
in a single query block and speed up query performance. Queries such as "Count
the number of salesmen with sales more than 10% of city sales" do not require joins
between separate query blocks.
For example, consider the question "For each product category, find the region in
which it had maximum sales". The equivalent SQL query using the MAX reporting
aggregate function would be:
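A sketch of such a query, assuming the sample schema's products table and a countries dimension carrying a country_region column, is:
SELECT prod_category, country_region, sales
FROM (SELECT p.prod_category, co.country_region,
    SUM(s.amount_sold) AS sales,
    MAX(SUM(s.amount_sold)) OVER (PARTITION BY p.prod_category) AS MAX_REG_SALES
  FROM sales s, customers c, countries co, products p
  WHERE s.cust_id=c.cust_id AND c.country_id=co.country_id
    AND s.prod_id=p.prod_id
  GROUP BY p.prod_category, co.country_region)
WHERE sales=MAX_REG_SALES;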
RATIO_TO_REPORT
The RATIO_TO_REPORT function computes the ratio of a value to the sum of a set
of values. If the expression value expression evaluates to NULL, RATIO_TO_
REPORT also evaluates to NULL, but it is treated as zero for computing the sum of
values for the denominator. Its syntax is:
RATIO_TO_REPORT
(<value expression1>) OVER
([PARTITION BY <value expression2>[,...]])
where
■ <value expression1> and <value expression2> can be any valid
expression involving column references or aggregates.
■ The PARTITION BY clause defines the groups on which the RATIO_TO_
REPORT function is to be computed. If the PARTITION BY clause is absent, then
the function is computed over the whole query result set.
SELECT ch.channel_desc,
  TO_CHAR(SUM(amount_sold), '9,999,999') AS SALES,
  TO_CHAR(SUM(SUM(amount_sold)) OVER (), '9,999,999')
  AS TOTAL_SALES,
TO_CHAR(RATIO_TO_REPORT(SUM(amount_sold)) OVER (), '9.999')
AS RATIO_TO_REPORT
FROM sales s, channels ch
WHERE s.channel_id=ch.channel_id AND
s.time_id=to_DATE('11-OCT-2000')
GROUP BY ch.channel_desc ;
LAG/LEAD Functions
The LAG and LEAD functions are useful for comparing values when the relative
positions of rows can be known reliably. They work by specifying the count of rows
which separate the target row from the current row. Since the functions provide
access to more than one row of a table at the same time without a self-join, they can
enhance processing speed. The LAG function provides access to a row at a given
offset prior to the current position, and the LEAD function provides access to a row
at a given offset after the current position.
LAG/LEAD Syntax
The functions have the following syntax:
{LAG | LEAD}
(<value expression1>, [<offset> [, <default>]]) OVER
([PARTITION BY <value expression2>[,...]]
ORDER BY <value expression3> [collate clause]
[ASC | DESC] [NULLS FIRST | NULLS LAST] [,...])
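For example, the following query (the date range is illustrative) shows each day's sales total next to the prior and following day's totals:
SELECT time_id,
  TO_CHAR(SUM(amount_sold), '9,999,999') AS SALES,
  TO_CHAR(LAG(SUM(amount_sold), 1) OVER (ORDER BY time_id), '9,999,999') AS LAG1,
  TO_CHAR(LEAD(SUM(amount_sold), 1) OVER (ORDER BY time_id), '9,999,999') AS LEAD1
FROM sales
WHERE time_id >= TO_DATE('10-OCT-2000') AND time_id <= TO_DATE('14-OCT-2000')
GROUP BY time_id;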
FIRST/LAST Functions
The FIRST/LAST aggregate functions allow you to return the result of an aggregate
applied over a set of rows that rank as the first or last with respect to a given order
specification. FIRST/LAST lets you order on column A but return the result of an
aggregate applied on column B. This is valuable because it avoids the need for a
self-join or subquery, thus improving performance. These functions begin with a
tiebreaker function, which is a regular aggregate function (MIN, MAX, SUM, AVG,
COUNT, VARIANCE, STDDEV) that produces the return value. The tiebreaker
function is performed on the set of rows (one or more rows) that rank as first or last
with respect to the order specification, so that a single value is returned.
To specify the ordering used within each group, the FIRST/LAST functions add a
new clause starting with the word KEEP.
FIRST/LAST Syntax
[MIN | MAX | COUNT | SUM | AVG | STDDEV | VARIANCE ] (<expression>)
KEEP ( DENSE_RANK [FIRST | LAST] ORDER BY <order by expression> [, ...]
[ASC|DESC] [NULLS FIRST| NULLS LAST] )
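One way to use these functions against the sample schema (the year and the choice of metrics are illustrative) is to report, per channel, the dollar sales in its slowest and busiest months as measured by units sold:
SELECT channel_desc,
  MIN(month_dollars) KEEP (DENSE_RANK FIRST ORDER BY month_units) AS SLOW_MONTH_SALES,
  MAX(month_dollars) KEEP (DENSE_RANK LAST ORDER BY month_units) AS BUSY_MONTH_SALES
FROM (SELECT ch.channel_desc, t.calendar_month_desc,
    SUM(s.amount_sold) AS month_dollars,
    SUM(s.quantity_sold) AS month_units
  FROM sales s, channels ch, times t
  WHERE s.channel_id=ch.channel_id AND s.time_id=t.time_id
    AND t.calendar_year=2000
  GROUP BY ch.channel_desc, t.calendar_month_desc)
GROUP BY channel_desc;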
A query like this can be useful for understanding the sales patterns of your different
channels. For instance, the result set here highlights that Telesales sell relatively
small volumes.
Using the FIRST and LAST functions as reporting aggregates makes it easy to
include the results in calculations such as "Salary as a percent of the highest salary."
REGR_COUNT
REGR_COUNT returns the number of non-null number pairs used to fit the
regression line. If applied to an empty set (or if there are no (e1, e2) pairs where
neither of e1 or e2 is null), the function returns 0.
REGR_R2
The REGR_R2 function computes the coefficient of determination (usually called
"R-squared" or "goodness of fit") for the regression line.
REGR_R2 returns values between 0 and 1 when the regression line is defined (slope
of the line is not null), and it returns NULL otherwise. The closer the value is to 1,
the better the regression line fits the data.
PERC_DISC PERC_CONT
--------- ---------
5000 5000
The final result will be: PERCENTILE_CONT(X) = if (CRN = FRN = RN), then
(value of expression from row at RN) else (CRN - RN) * (value of expression for row
at FRN) + (RN -FRN) * (value of expression for row at CRN).
Consider the example query above where we compute PERCENTILE_CONT(0.5).
Here n is 17. The row number RN = (1 + 0.5*(n-1))= 9 for both groups. Putting this
into the formula, (FRN=CRN=9), we return the value from row 9 as the result.
As another example, consider PERCENTILE_CONT(0.66). The computed row
number RN = (1 + 0.66*(n-1)) = (1 + 0.66*16) = 11.56, so FRN = 11 and CRN = 12.
PERCENTILE_CONT(0.66) = (12-11.56)*(value of row 11) + (11.56-11)*(value of row 12).
These results are:
SELECT PERCENTILE_DISC(0.66) WITHIN GROUP
(ORDER BY cust_credit_limit) AS perc_disc,
PERCENTILE_CONT(0.66) WITHIN GROUP
(ORDER BY cust_credit_limit) AS perc_cont
FROM customers WHERE cust_city='Marshal';
PERC_DISC PERC_CONT
--------- ---------
7000 7000
Inverse distribution aggregate functions can appear in the HAVING clause of a query
like other existing aggregate functions.
As Reporting Aggregates
You can also use the aggregate functions PERCENTILE_CONT, PERCENTILE_DISC
as reporting aggregate functions. When used as reporting aggregate functions, the
syntax is similar to those of other reporting aggregates.
[PERCENTILE_CONT | PERCENTILE_DISC]( <constant expression> )
WITHIN GROUP ( ORDER BY <single order by expression>
[ASC|DESC] [NULLS FIRST| NULLS LAST])
OVER ( [PARTITION BY <value expression> [,...]] )
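A minimal sketch, reusing the customers in the city queried earlier, is:
SELECT cust_id, cust_credit_limit,
  PERCENTILE_DISC(0.5) WITHIN GROUP (ORDER BY cust_credit_limit) OVER () AS PERC_DISC,
  PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY cust_credit_limit) OVER () AS PERC_CONT
FROM customers
WHERE cust_city='Marshal';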
This query computes the same thing (the median credit limit for customers in this result
set), but reports the result for every row in the result set, as shown in the output
below.
Unlike the inverse percentile aggregates, the ORDER BY clause in the sort
specification for hypothetical rank and distribution functions may take multiple
expressions. The number of arguments and the expressions in the ORDER BY clause
should be the same and the arguments must be constant expressions of the same or
compatible type to the corresponding ORDER BY expression. Below is an example
using 2 arguments in several hypothetical ranking functions.
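For instance, the following sketch (the constants and the subcategory value are illustrative) computes the hypothetical rank and percent rank of a product with a list price of 2000 and a minimum price of 1000:
SELECT RANK(2000, 1000) WITHIN GROUP
    (ORDER BY prod_list_price DESC, prod_min_price DESC) AS HYP_RANK,
  TO_CHAR(PERCENT_RANK(2000, 1000) WITHIN GROUP
    (ORDER BY prod_list_price DESC, prod_min_price DESC), '9.999') AS HYP_PCT_RANK
FROM products
WHERE prod_subcategory='Cameras';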
These functions can appear in the HAVING clause of a query just like other
aggregate functions. They cannot be used as either reporting aggregate functions or
windowing aggregate functions.
WIDTH_BUCKET Function
For a given expression, the WIDTH_BUCKET function returns the bucket number
that the result of this expression will be assigned after it is evaluated. You can
generate equiwidth histograms with this function. Equiwidth histograms divide
data sets into buckets whose interval size (highest value to lowest value) is equal.
The number of rows held by each bucket will vary. A related function, NTILE,
creates equiheight buckets.
Equiwidth histograms can be generated only for numeric, date or datetime types.
So the first three parameters should be all numeric expressions or all date
expressions. Other types of expressions are not allowed. If the first parameter is
NULL, the result is NULL. If the second or the third parameter is NULL, an error
message is returned, as a NULL value cannot denote any end point (or any point) for
a range in a date or numeric value dimension. The last parameter (number of
buckets) should be a numeric expression that evaluates to a positive integer value;
0, NULL, or a negative value will result in an error.
Buckets are numbered from 0 to (n+1). Bucket 0 holds values less than the
minimum, and bucket (n+1) holds values greater than or equal to the specified
maximum.
WIDTH_BUCKET Syntax
The WIDTH_BUCKET takes four expressions as parameters. The first parameter is the
expression that the equiwidth histogram is for. The second and third parameters are
expressions that denote the end points of the acceptable range for the first
parameter. The fourth parameter denotes the number of buckets.
WIDTH_BUCKET(<expression>, <minval expression>, <maxval expression>,
<num buckets>)
Consider the following data from table customers, that shows the credit limits of
17 customers. This data is gathered in the query shown in Example 19–28 on
page 19-43.
CUST_ID CUST_CREDIT_LIMIT
-------- -----------------
22110 11000
28340 5000
40800 11000
121790 9000
165400 3000
171630 1500
184090 7000
215240 5000
227700 3000
246390 11000
346070 1500
364760 5000
370990 7000
383450 1500
408370 7000
420830 1500
464440 15000
[Figure: the credit limit range from 0 to 20000 divided into four equiwidth buckets (numbered 1 through 4), with an underflow bucket 0 below the minimum and an overflow bucket 5 at or above the maximum]
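A query of the following form (a sketch; the boundary values match the figure) assigns each of these customers to a bucket, in both the forward and the reverse-bound cases:
SELECT cust_id, cust_credit_limit,
  WIDTH_BUCKET(cust_credit_limit, 0, 20000, 4) AS WIDTH_BUCKET_UP,
  WIDTH_BUCKET(cust_credit_limit, 20000, 0, 4) AS WIDTH_BUCKET_DOWN
FROM customers
WHERE cust_city='Marshal';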
You can specify the bounds in the reverse order, for example, WIDTH_BUCKET
(cust_credit_limit, 20000, 0, 4). When the bounds are reversed, the buckets
are open-closed intervals. In this example, bucket number 1 is (15000,20000],
bucket number 2 is (10000,15000], and bucket number 4 is (0,5000]. The
overflow bucket is numbered 0 (20000, +infinity), and the underflow
bucket is numbered 5 (-infinity, 0].
It is an error if the bucket count parameter is 0 or negative.
USERDEF_SKEW
============
0.583891
CASE Expressions
Oracle now supports simple and searched CASE expressions. CASE expressions are
similar in purpose to the Oracle DECODE function, but they offer more flexibility
and logical power. They are also easier to read than traditional DECODE expressions,
and offer better performance as well. They are commonly used when breaking
categories into buckets like age (for example, 20-29, 30-39, and so on). The syntax
for a simple CASE expression is:
CASE <value expression>
WHEN <value expression 1> THEN <result 1>
WHEN <value expression 2> THEN <result 2>
...
ELSE <result n+1>
END
You can specify only 255 arguments and each WHEN ... THEN pair counts as two
arguments. For a workaround to this limit, see Oracle9i SQL Reference.
Consider a query of the following form:
SELECT AVG(foo(e.sal)) FROM emps e;
where foo is a function that returns its input if the input is greater than 2000, and
returns 2000 otherwise. The query has performance implications because it needs to
invoke a function for each row. Writing custom functions can also add to the
development load.
Using CASE expressions in the database without PL/SQL, the above query can be
rewritten as:
SELECT AVG(CASE when e.sal > 2000 THEN e.sal ELSE 2000 end) FROM emps e;
Using a CASE expression lets you avoid developing custom functions and can also
perform faster.
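For instance, a frequency histogram over the credit limits listed earlier can be built with a CASE expression; a sketch (the bucket labels are inferred from the output that follows) is:
SELECT bucket, COUNT(*) AS COUNT_IN_GROUP
FROM (SELECT CASE
      WHEN cust_credit_limit < 4000 THEN ' 0 - 3999'
      WHEN cust_credit_limit < 8000 THEN ' 4000 - 7999'
      WHEN cust_credit_limit < 12000 THEN ' 8000 - 11999'
      ELSE '12000 - 16000'
    END AS bucket
  FROM customers
  WHERE cust_city='Marshal')
GROUP BY bucket
ORDER BY bucket;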
BUCKET COUNT_IN_GROUP
------------- --------------
0 - 3999 6
4000 - 7999 6
8000 - 11999 4
12000 - 16000 1
OLAP
Oracle9i OLAP adds the query performance and calculation capability previously
found only in multidimensional databases to Oracle’s relational platform. In
addition, it provides a Java OLAP API that is appropriate for the development of
internet-ready analytical applications. Unlike other combinations of OLAP and
RDBMS technology, Oracle9i OLAP is not a multidimensional database using
bridges to move data from the relational data store to a multidimensional data
store. Instead, it is truly an OLAP-enabled relational database. As a result, Oracle9i
provides the benefits of a multidimensional database along with the scalability,
accessibility, security, manageability, and high availability of the Oracle9i database.
The Java OLAP API, which is specifically designed for internet-based analytical
applications, offers productive data access.
Scalability
Oracle9i OLAP is highly scalable. In today’s environment, there is tremendous
growth along three dimensions of analytic applications: number of users, size of
data, complexity of analyses. There are more users of analytical applications, and
they need access to more data to perform more sophisticated analysis and target
marketing. For example, a telephone company might want a customer dimension to
include detail such as all telephone numbers as part of an application that is used to
analyze customer turnover. This would require support for multi-million row
dimension tables and very large volumes of fact data. Oracle9i can handle very
large data sets using parallel execution and partitioning, as well as offering support
for advanced hardware and clustering.
Availability
Oracle9i includes many features that support high availability. One of the most
significant is partitioning, which allows management of precise subsets of tables
and indexes, so that management operations affect only small pieces of these data
structures. By partitioning tables and indexes, data management processing time is
reduced, thus minimizing the time data is unavailable. Another feature supporting
high availability is transportable tablespaces. With transportable tablespaces, large
data sets, including tables and indexes, can be added with almost no processing to
other databases. This enables extremely rapid data loading and updates.
Manageability
Oracle enables you to precisely control resource utilization. The Database Resource
Manager, for example, provides a mechanism for allocating the resources of a data
warehouse among different sets of end-users. Consider an environment where the
marketing department and the sales department share an OLAP system. Using the
Database Resource Manager, you could specify that the marketing department
receive at least 60 percent of the CPU resources of the machines, while the sales
department receive 40 percent of the CPU resources. You can also further specify
limits on the total number of active sessions, and the degree of parallelism of
individual queries for each department.
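A sketch of such a configuration using the DBMS_RESOURCE_MANAGER package (the plan, group, and limit names here are illustrative, not prescribed) is:
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('MARKETING_GRP', 'marketing users');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('SALES_GRP', 'sales users');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN('OLAP_PLAN', 'shared OLAP system plan');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(plan => 'OLAP_PLAN',
    group_or_subplan => 'MARKETING_GRP', comment => 'marketing share',
    cpu_p1 => 60, active_sess_pool_p1 => 32, parallel_degree_limit_p1 => 8);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(plan => 'OLAP_PLAN',
    group_or_subplan => 'SALES_GRP', comment => 'sales share',
    cpu_p1 => 40, active_sess_pool_p1 => 16, parallel_degree_limit_p1 => 4);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(plan => 'OLAP_PLAN',
    group_or_subplan => 'OTHER_GROUPS', comment => 'required catch-all directive',
    cpu_p2 => 100);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/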
Another resource management facility is the progress monitor, which gives end users
and administrators the status of long-running operations. Oracle9i maintains
statistics describing the percent-complete of these operations. Oracle Enterprise
Manager enables you to view a bar-graph display of these operations showing what
percent complete they are. Moreover, any other tool or any database administrator
can also retrieve progress information directly from the Oracle data server, using
system views.
Security
Just as the demands of real-world transaction processing required Oracle to develop
robust features for scalability, manageability, and backup and recovery, they led
Oracle to create industry-leading security features. The security features in Oracle
have reached the highest levels of U.S. government certification for database
trustworthiness. Oracle’s fine-grained access control feature enables cell-level
security for OLAP users. Fine-grained access control works with minimal burden on
query processing, and it enables efficient centralized security management.
Data Mining
Oracle enables data mining inside the database for performance and scalability.
Some of the capabilities are:
■ An API that provides programmatic control and application integration
■ Analytical capabilities with OLAP and statistical functions in the database
■ Multiple Algorithms: Naïve Bayes and Association Rules
■ Real-time and Batch Scoring modes
■ Multiple Prediction types
■ Association insights
Data Preparation
Data preparation can create new tables or views of existing data. Both options
perform faster than moving data to an external data mining utility and offer the
programmer the option of snap-shots or real-time updates.
ODM provides utilities for complex, data mining-specific tasks. Binning improves
model build time and model performance, so ODM provides a utility for
user-defined binning. ODM accepts data in either single-record format or
transactional format and performs mining on the transactional format. Single-record
format is most common in applications, so ODM provides a utility for transforming
single-record data into transactional format.
Associated analysis for preparatory data exploration and model evaluation is
extended by Oracle’s statistical functions and OLAP capabilities. Because these also
operate within the database, they can all be incorporated into a seamless application
that shares database objects. This allows for more functional and faster applications.
Model Building
ODM provides Naïve Bayes for prediction and rating. This algorithm can predict
binary outcomes in which the prediction might be either yes or no. It can also
predict multi-class outcomes in which the prediction might be one or more of a set
of possible outcomes; for example, the possible outcomes of a loyalty prediction
might include increase use, remain stable, decrease use, and defect. ODM also
provides Association Rules for market basket analysis and other association
problems. All model building takes place inside the database. Once again, the data
does not need to move and the process is accelerated.
Model Evaluation
Models are stored in the database and directly accessible for evaluation, reporting,
and further analysis by a wide variety of tools and application functions. ODM
provides APIs for calculating traditional confusion matrices and lift charts. It stores
the models, the underlying data, and these analysis results together in the database
to allow further analysis, reporting and application specific model management.
Scoring
ODM provides both batch and real-time scoring. In batch mode, ODM takes a table
as input. It scores every record, and returns a scored table as a result. In real-time
mode, parameters for a single record are passed in and the scores are returned in a
Java object.
In both modes, ODM can deliver a variety of scores. It can return a rating or
probability of a specific outcome. Alternatively it can return a predicted outcome
and the probability of that outcome occurring. Some examples follow.
■ How likely is this event to end in outcome A?
■ Which outcome is most likely to result from this event?
■ What is the probability of each possible outcome for this event?
Java API
The Oracle Data Mining API lets you build analytical models and deliver real-time
predictions in any application that supports Java. The API is based on the emerging
JSR-073 standard.
When a user issues a SQL statement, the optimizer decides whether to execute the
operations in parallel and determines the degree of parallelism (DOP) for each
operation. You can specify the number of parallel execution servers required for an
operation in various ways.
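For example, the degree of parallelism can be requested in any of the following ways (a degree of 8 is used for illustration):
ALTER TABLE sales PARALLEL 8;
SELECT /*+ PARALLEL(s, 8) */ SUM(amount_sold) FROM sales s;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;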
If the optimizer targets the statement for parallel processing, the following sequence
of events takes place:
1. The SQL statement's foreground process becomes a parallel execution
coordinator.
2. The parallel execution coordinator obtains as many parallel execution servers as
needed (determined by the DOP) from the server pool or creates new parallel
execution servers as needed.
3. Oracle executes the statement as a sequence of operations. Each operation is
performed in parallel, if possible.
4. When statement processing is completed, the coordinator returns any resulting
data to the user process that issued the statement and returns the parallel
execution servers to the server pool.
The parallel execution coordinator calls upon the parallel execution servers during
the execution of the SQL statement, not during the parsing of the statement.
Therefore, when parallel execution is used with the shared server, the server process
that processes the EXECUTE call of a user's statement becomes the parallel execution
coordinator for the statement.
See Also:
■ "Minimum Number of Parallel Execution Servers" on
page 21-36 for information about using the initialization
parameter PARALLEL_MIN_PERCENT
■ Oracle9i Database Performance Guide and Reference for
information about monitoring an instance's pool of parallel
execution servers and determining the appropriate values for
the initialization parameters
[Figure 21–1: parallel execution server set 1 and parallel execution server set 2 communicating over connections through message buffers]
When a connection is between two processes on the same instance, the servers
communicate by passing the buffers back and forth. When the connection is
between processes in different instances, the messages are sent using external
high-speed network protocols. In Figure 21–1, the DOP is equal to the number of
parallel execution servers, which in this case is n. Figure 21–1 does not show the
parallel execution coordinator. Each parallel execution server actually has an
additional connection to the parallel execution coordinator.
See Also:
■ "Setting the Degree of Parallelism" on page 21-32
■ "Parallelization Rules for SQL Statements" on page 21-38
Figure 21–2 Data Flow Diagram for a Join of the EMP and DEPT Tables
[figure: the parallel execution coordinator at the top of a data flow tree whose operations include GROUP BY SORT, HASH JOIN, and full scans of the EMP and DEPT tables]
Parent operations can begin consuming rows as soon as the child operations have
produced rows. In the previous example, while the parallel execution servers are
producing rows in the FULL SCAN dept operation, another set of parallel execution
servers can begin to perform the HASH JOIN operation to consume the rows.
Each of the two operations performed concurrently is given its own set of parallel
execution servers. Therefore, both query operations and the data flow tree itself
have parallelism. The parallelism of an individual operation is called intraoperation
parallelism and the parallelism between operations in a data flow tree is called
interoperation parallelism.
Due to the producer-consumer nature of the Oracle server's operations, only two
operations in a given tree need to be performed simultaneously to minimize
execution time.
To illustrate intraoperation and interoperation parallelism, consider the following
statement:
SELECT * FROM emp ORDER BY ename;
The execution plan implements a full scan of the emp table. This operation is
followed by a sorting of the retrieved rows, based on the value of the ename
column. For the sake of this example, assume the ename column is not indexed.
Also assume that the DOP for the query is set to 4, which means that four parallel
execution servers can be active for any given operation.
Figure 21–3 illustrates the parallel execution of the example query.
[Figure 21–3: parallel execution of SELECT * FROM emp ORDER BY ename — four parallel execution servers scan the EMP table and redistribute rows to four other servers that sort the ename ranges A-G, H-M, N-S, and T-Z; the parallel execution coordinator returns the results to the user process]
As you can see from Figure 21–3, there are actually eight parallel execution servers
involved in the query even though the DOP is 4. This is because a parent and child
operator can be performed at the same time (interoperation parallelism).
Also note that all of the parallel execution servers involved in the scan operation
send rows to the appropriate parallel execution server performing the SORT
operation. If a row scanned by a parallel execution server contains a value for the
ename column between A and G, that row gets sent to the first ORDER BY parallel
execution server. When the scan operation is complete, the sorting processes can
return the sorted results to the coordinator, which, in turn, returns the complete
query results to the user.
Types of Parallelism
The following types of parallelism are discussed in this section:
■ Parallel Query
■ Parallel DDL
■ Parallel DML
■ Parallel Execution of Functions
■ Other Types of Parallelism
Parallel Query
You can parallelize queries and subqueries in SELECT statements. You can also
parallelize the query portions of DDL statements and DML statements (INSERT,
UPDATE, and DELETE).
However, you cannot parallelize the query portion of a DDL or DML statement if it
references a remote object. When you issue a parallel DML or DDL statement in
which the query portion references a remote object, the operation is automatically
executed serially.
See Also:
■ "Operations That Can Be Parallelized" on page 21-3 for
information on the query operations that Oracle can parallelize
■ "Parallelizing SQL Statements" on page 21-6 for an explanation
of how the processes perform parallel queries
■ "Distributed Transaction Restrictions" on page 21-27 for
examples of queries that reference a remote object
■ "Rules for Parallelizing Queries" on page 21-38 for information
on the conditions for parallelizing a query and the factors that
determine the DOP
These scan methods can be used for index-organized tables with overflow areas and
for index-organized tables that contain LOBs.
Parallel DDL
This section includes the following topics on parallelism for DDL statements:
■ DDL Statements That Can Be Parallelized
■ CREATE TABLE ... AS SELECT in Parallel
■ Recoverability and Parallel DDL
■ Space Management for Parallel DDL
See Also:
■ Oracle9i SQL Reference for information about the syntax and use
of parallel DDL statements
■ Oracle9i Application Developer’s Guide - Large Objects (LOBs) for
information about LOB restrictions
See Also:
■ Oracle9i SQL Reference for a discussion of the syntax of the
CREATE TABLE statement
■ Oracle9i Database Administrator’s Guide for information about
dictionary-managed tablespaces
■ If the unused space in each temporary segment is larger than the value of the
MINIMUM EXTENT parameter set at the tablespace level, then Oracle trims the
unused space when merging rows from all of the temporary segments into the
table or index. The unused space is returned to the system free space and can be
allocated for new extents, but it cannot be coalesced into a larger segment
because it is not contiguous space (external fragmentation).
■ If the unused space in each temporary segment is smaller than the value of the
MINIMUM EXTENT parameter, then unused space cannot be trimmed when the
rows in the temporary segments are merged. This unused space is not returned
to the system free space; it becomes part of the table or index (internal
fragmentation) and is available only for subsequent inserts or for updates that
require additional space.
For example, if you specify a DOP of 3 for a CREATE TABLE ... AS SELECT
statement, but there is only one datafile in the tablespace, then internal
fragmentation may occur, as shown in Figure 21–5 on page 21-18. The pockets of
free space within the internal table extents of a datafile cannot be coalesced with
other free space and cannot be allocated as extents.
[Figure 21–5: the USERS tablespace with datafile DATA1.ORA — three parallel execution servers running CREATE TABLE emp AS SELECT ... each fill their own extent (EXTENT 1, 2, and 3), leaving unusable free space for INSERTs inside each extent (internal fragmentation)]
Parallel DML
Parallel DML (parallel INSERT, UPDATE, and DELETE) uses parallel execution
mechanisms to speed up or scale up large DML operations against large database
tables and indexes.
running an Oracle Real Application Cluster. You also have to find out about
current resource usage to balance workload across instances.
Parallel DML removes these disadvantages by performing inserts, updates, and
deletes in parallel automatically.
Refreshing Tables in a Data Warehouse System In a data warehouse system, large tables
need to be refreshed (updated) periodically with new or modified data from the
production system. You can do this efficiently by using parallel DML combined
with updatable join views. You can also use the MERGE statement.
The data that needs to be refreshed is generally loaded into a temporary table before
starting the refresh process. This table contains either new rows or rows that have
been updated since the last refresh of the data warehouse. You can use an updatable
join view with parallel UPDATE to refresh the updated rows, and you can use an
anti-hash join with parallel INSERT to refresh the new rows.
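A sketch of such a refresh using MERGE (sales_fact is an illustrative warehouse fact table and new_sales an illustrative staging table; the names and join columns are assumptions):
ALTER SESSION ENABLE PARALLEL DML;
MERGE /*+ PARALLEL(sf, 4) */ INTO sales_fact sf
USING new_sales ns
ON (sf.time_id = ns.time_id AND sf.prod_id = ns.prod_id AND sf.cust_id = ns.cust_id)
WHEN MATCHED THEN UPDATE SET sf.amount_sold = ns.amount_sold
WHEN NOT MATCHED THEN INSERT (time_id, prod_id, cust_id, amount_sold)
  VALUES (ns.time_id, ns.prod_id, ns.cust_id, ns.amount_sold);
COMMIT;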
Creating Intermediate Summary Tables Complex decision support queries frequently
require large intermediate summary tables. These summary tables are often temporary and
frequently do not need to be logged. Parallel DML can speed up the operations
against these large intermediate tables. One benefit is that you can put incremental
results in the intermediate tables and perform parallel update.
In addition, the summary tables may contain cumulative or comparison
information which has to persist beyond application sessions; thus, temporary
tables are not feasible. Parallel DML operations can speed up the changes to these
large summary tables.
Using Scoring Tables Many DSS applications score customers periodically based on a
set of criteria. The scores are usually stored in large DSS tables. The score
information is then used in making a decision, for example, inclusion in a mailing
list.
This scoring activity queries and updates a large number of rows in the large table.
Parallel DML can speed up the operations against these large tables.
Running Batch Jobs Batch jobs executed in an OLTP database during off hours have a
fixed time window in which the jobs must complete. A good way to ensure timely
job completion is to parallelize their operations. As the work load increases, more
machine resources can be added; the scaleup property of parallel operations ensures
that the time constraint can be met.
The default mode of a session is DISABLE PARALLEL DML. When parallel DML is
disabled, no DML will be executed in parallel even if the PARALLEL hint is used.
When parallel DML is enabled in a session, all DML statements in this session will
be considered for parallel execution. However, even if parallel DML is enabled, the
DML operation may still execute serially if there are no parallel hints or no tables
with a parallel attribute or if restrictions on parallel operations are violated.
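A minimal sketch of enabling the mode for a session (the hinted update and its predicate are illustrative):
ALTER SESSION ENABLE PARALLEL DML;
UPDATE /*+ PARALLEL(sales, 4) */ sales
  SET amount_sold = amount_sold * 1.05
  WHERE promo_id = 33;
COMMIT;
ALTER SESSION DISABLE PARALLEL DML;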
The session's PARALLEL DML mode does not influence the parallelism of SELECT
statements, DDL statements, and the query portions of DML statements. Thus, if
this mode is not set, the DML operation is not parallelized, but scans or join
operations within the DML statement may still be parallelized.
See Also:
■ "Space Considerations for Parallel DML" on page 21-24
■ "Lock and Enqueue Resources for Parallel DML" on page 21-24
■ "Restrictions on Parallel DML" on page 21-24
Rollback Segments
Oracle assigns transactions to rollback segments that have the fewest active
transactions. To speed up both forward and undo operations, you should create and
bring online enough rollback segments so that at most two parallel process
transactions are assigned to one rollback segment.
The SET TRANSACTION USE ROLLBACK SEGMENT statement is ignored when
parallel DML is used because parallel DML requires more than one rollback
segment for performance.
You should create the rollback segments in tablespaces that have enough space for
them to extend when necessary. You can then set the MAXEXTENTS storage
parameters for the rollback segments to UNLIMITED. Also, set the OPTIMAL value
for the rollback segments so that after the parallel DML transactions commit, the
rollback segments are shrunk to the OPTIMAL size.
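For example, a rollback segment intended for parallel DML might be created as follows (the segment and tablespace names and the storage values are illustrative):
CREATE ROLLBACK SEGMENT rbs_pdml01
  TABLESPACE rbs_big
  STORAGE (INITIAL 1M NEXT 1M OPTIMAL 20M MAXEXTENTS UNLIMITED);
ALTER ROLLBACK SEGMENT rbs_pdml01 ONLINE;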
See Also: Oracle9i Backup and Recovery Concepts for details about
parallel rollback
System Recovery Recovery from a system failure requires a new startup. Recovery
is performed by the SMON process and any recovery server processes spawned by
SMON. Parallel DML statements may be recovered using parallel rollback. If the
initialization parameter COMPATIBLE is set to 8.1.3 or greater, Fast-Start
On-Demand Rollback enables dead transactions to be recovered, on demand one
block at a time.
Instance Recovery (Oracle Real Application Clusters) Recovery from an instance failure
in an Oracle Real Application Cluster is performed by the recovery processes (that
is, the SMON processes and any recovery server processes they spawn) of other live
instances. Each recovery process of the live instances can recover the parallel
execution coordinator or parallel execution server transactions of the failed instance
independently.
Partitioning Key Restriction You can update the partitioning key of a partitioned
table to a new value only if the update does not cause the row to move to a new
partition. An update that does move the row is possible only if the table is defined
with the row movement clause enabled.
Function Restrictions The function restrictions for parallel DML are the same as those
for parallel DDL and parallel query.
NOT NULL and CHECK These types of integrity constraints are allowed. They are not a
problem for parallel DML because they are enforced on the column and row level,
respectively.
UNIQUE and PRIMARY KEY These types of integrity constraints are allowed.
Delete Cascade Delete on tables having a foreign key with delete cascade is not
parallelized because parallel execution servers will try to delete rows from multiple
partitions (parent and child tables).
Deferrable Integrity Constraints If any deferrable constraints apply to the table being
operated on, the DML operation will not be parallelized.
Trigger Restrictions
A DML operation will not be parallelized if the affected tables contain enabled
triggers that may get fired as a result of the statement. This implies that DML
statements on tables that are being replicated will not be parallelized.
Relevant triggers must be disabled in order to parallelize DML on the table. Note
that, if you enable or disable triggers, the dependent shared cursors are invalidated.
See Also:
■ Oracle9i Application Developer’s Guide - Fundamentals for
information about the PRAGMA RESTRICT_REFERENCES
■ Oracle9i SQL Reference for information about CREATE
FUNCTION
and determine that the code neither reads from nor writes to the database, and
neither reads nor modifies package variables.
For a parallel DML statement, any function call that cannot be executed in parallel
causes the entire DML statement to be executed serially.
For an INSERT ... SELECT or CREATE TABLE ... AS SELECT statement, function calls
in the query portion are parallelized according to the parallel query rules in the
prior paragraph. The query may be parallelized even if the remainder of the
statement must execute serially, or vice versa.
See Also:
■ Oracle9i Database Utilities for information about parallel load
and SQL*Loader
■ Oracle9i User-Managed Backup and Recovery Guide for
information about parallel media recovery
■ Oracle9i Database Performance Guide and Reference for
information about parallel instance recovery
■ Oracle9i Replication for information about parallel propagation
As mentioned, you can manually adjust the parameters shown in Table 21–2, even if
you set PARALLEL_AUTOMATIC_TUNING to TRUE. You might need to do this if you
have a highly customized environment or if your system does not perform
optimally using the completely automated settings.
See Also:
■ Oracle9i Database Reference and Oracle9i Database Performance
Guide and Reference for information about the syntax of the
SELECT and ALTER statements
■ Oracle9i SQL Reference for the syntax of the ALTER SYSTEM
statement
■ "Forcing Parallel Execution for a Session" on page 21-47
See Also:
■ "The Parallel Execution Server Pool" on page 21-3
■ "Parallelism Between Operations" on page 21-8
■ "Default Degree of Parallelism" on page 21-35
■ "Parallelization Rules for SQL Statements" on page 21-38
Hints
You can specify hints in a SQL statement to set the DOP for a table or index and for
the caching behavior of the operation.
■ The PARALLEL hint is used only for operations on tables. You can use it to
parallelize queries and DML statements (INSERT, UPDATE, and DELETE).
■ The PARALLEL_INDEX hint parallelizes an index range scan of a partitioned
index. (In an index operation, the PARALLEL hint is not valid and is ignored.)
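For example, a PARALLEL_INDEX hint might look like this (the index name is illustrative):
SELECT /*+ PARALLEL_INDEX(s, sales_time_idx, 8) */ SUM(amount_sold)
FROM sales s
WHERE time_id BETWEEN TO_DATE('01-JAN-2000') AND TO_DATE('31-MAR-2000');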
The above factors determine the default number of parallel execution servers to use.
However, the actual number of processes used is limited by their availability on the
requested instances during run time. The initialization parameter PARALLEL_MAX_
SERVERS sets an upper limit on the total number of parallel execution servers that
an instance can have.
If a minimum fraction of the desired parallel execution servers is not available
(specified by the initialization parameter PARALLEL_MIN_PERCENT), a user error is
produced. The user can then retry the query with less parallelism.
In general, you cannot assume that the time taken to perform a parallel operation
on a given number of partitions (N) with a given number of parallel execution
servers (P) will be N/P. This formula does not take into account the possibility that
some processes might have to wait while others finish working on the last
partitions. By choosing an appropriate DOP, however, you can minimize the
workload skew and optimize performance.
Degree of Parallelism The DOP for a query is determined by the following rules:
1. The query uses the maximum DOP taken from all of the table declarations
involved in the query and all of the potential indexes that are candidates to
satisfy the query (the reference objects). That is, the table or index that has the
greatest DOP determines the query's DOP (maximum query directive).
2. If a table has both a parallel hint specification in the query and a parallel
declaration in its table specification, the hint specification takes precedence over
parallel declaration specification. See Table 21–3 on page 21-45 for precedence
rules.
Decision to Parallelize The following rule determines whether the UPDATE, MERGE, or
DELETE operation should be parallelized:
The UPDATE or DELETE operation will be parallelized if and only if at least one
of the following is true:
■ The table being updated or deleted has a PARALLEL specification.
■ The PARALLEL hint is specified in the DML statement.
■ An ALTER SESSION FORCE PARALLEL DML statement has been issued
previously during the session.
If the statement contains subqueries or updatable views, then they may have their
own separate parallel hints or clauses. However, these parallel directives do not
affect the decision to parallelize the UPDATE, MERGE, or DELETE.
The parallel hint or clause on the tables is used by both the query and the UPDATE,
MERGE, or DELETE portions to determine parallelism; however, the decision to
parallelize the UPDATE, MERGE, or DELETE portion is made independently of the
query portion, and vice versa.
Degree of Parallelism The DOP is determined by the same rules as for the queries.
Note that in the case of UPDATE and DELETE operations, only the target table to be
modified (the only reference object) is involved. Thus, the UPDATE or DELETE
parallel hint specification takes precedence over the parallel declaration
specification of the target table. In other words, the precedence order is: MERGE,
UPDATE, DELETE hint > Session > Parallel declaration specification of target table
See Table 21–3 on page 21-45 for precedence rules.
The maximum DOP you can achieve is equal to the number of partitions (or
subpartitions in the case of composite subpartitions) in the table. A parallel
execution server can update or merge into, or delete from multiple partitions, but
each partition can only be updated or deleted by one parallel execution server.
If the DOP is less than the number of partitions, then the first process to finish work
on one partition continues working on another partition, and so on until the work is
finished on all partitions. If the DOP is greater than the number of partitions
involved in the operation, then the excess parallel execution servers will have no
work to do.
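The following sketch shows the kind of statements this discussion assumes (tbl_1 and tbl_2 are illustrative partitioned tables with a numeric column c1):
ALTER SESSION ENABLE PARALLEL DML;
UPDATE tbl_1 SET c1 = c1 + 1 WHERE c1 > 100;
UPDATE /*+ PARALLEL(tbl_2, 4) */ tbl_2 SET c1 = c1 + 1;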
If tbl_1 is a partitioned table and its table definition has a parallel clause, then the
update operation is parallelized even if the scan on the table is serial (such as an
index scan), assuming that the table has more than one partition with c1 greater
than 100.
Both the scan and update operations on tbl_2 will be parallelized with degree
four.
Decision to Parallelize The following rule determines whether the INSERT operation
should be parallelized in an INSERT ... SELECT statement:
The INSERT operation will be parallelized if and only if at least one of the
following is true:
■ The PARALLEL hint is specified after the INSERT in the DML statement.
■ The table being inserted into (the reference object) has a PARALLEL
declaration specification.
■ An ALTER SESSION FORCE PARALLEL DML statement has been issued
previously during the session.
The decision to parallelize the INSERT operation is made independently of the
SELECT operation, and vice versa.
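A sketch of a parallel insert-select (sales_archive is an illustrative target table):
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ PARALLEL(sales_archive, 4) */ INTO sales_archive
  SELECT /*+ PARALLEL(s, 4) */ *
  FROM sales s
  WHERE s.time_id < TO_DATE('01-JAN-1999');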
Parallel clauses in CREATE TABLE and ALTER TABLE statements specify table
parallelism. If a parallel clause exists in a table definition, it determines the
parallelism of DDL statements as well as queries. If the DDL statement contains
explicit parallel hints for a table, however, those hints override the effect of parallel
clauses for that table. You can use the ALTER SESSION FORCE PARALLEL DDL
statement to override parallel clauses.
Parallel CREATE INDEX or ALTER INDEX ... REBUILD The CREATE INDEX and ALTER
INDEX ... REBUILD statements can be parallelized only by a PARALLEL clause or an
ALTER SESSION FORCE PARALLEL DDL statement.
ALTER INDEX ... REBUILD can be parallelized only for a nonpartitioned index, but
ALTER INDEX ... REBUILD PARTITION can be parallelized by a PARALLEL clause
or an ALTER SESSION FORCE PARALLEL DDL statement.
The scan operation for ALTER INDEX ... REBUILD (nonpartitioned), ALTER INDEX ...
REBUILD PARTITION, and CREATE INDEX has the same parallelism as the
REBUILD or CREATE operation and uses the same DOP. If the DOP is not specified
for REBUILD or CREATE, the default is the number of CPUs.
Parallel MOVE PARTITION or SPLIT PARTITION The ALTER TABLE ... MOVE PARTITION
and ALTER TABLE ... SPLIT PARTITION statements can be parallelized only by a
PARALLEL clause or an ALTER SESSION FORCE PARALLEL DDL statement. Their
scan operations have the same parallelism as the corresponding MOVE or SPLIT
operations. If the DOP is not specified, the default is the number of CPUs.
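For example (the index, table, and partition names below are illustrative):
CREATE INDEX sales_cust_idx ON sales (cust_id)
  NOLOGGING PARALLEL 8;
ALTER INDEX sales_cust_idx REBUILD PARALLEL 8;
ALTER SESSION FORCE PARALLEL DDL PARALLEL 8;
ALTER TABLE sales SPLIT PARTITION sales_q3_2000 AT (TO_DATE('15-AUG-2000'))
  INTO (PARTITION sales_q3a_2000, PARTITION sales_q3b_2000);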
Decision to Parallelize (Query Part) The query part of a CREATE TABLE ... AS SELECT
statement can be parallelized only if the following conditions are satisfied:
1. The query includes a parallel hint specification (PARALLEL or PARALLEL_
INDEX) or the CREATE part of the statement has a PARALLEL clause
specification or the schema objects referred to in the query have a
PARALLEL declaration associated with them.
2. At least one of the tables specified in the query requires one of the following:
■ A full table scan
■ An index range scan spanning multiple partitions
Degree of Parallelism (Query Part) The DOP for the query part of a CREATE TABLE ...
AS SELECT statement is determined by one of the following rules:
■ The query part uses the values specified in the PARALLEL clause of the CREATE
part.
■ If the PARALLEL clause is not specified, the default DOP is the number of CPUs.
■ If the CREATE is serial, then the DOP is determined by the query.
Note that any values specified in a hint for parallelism are ignored.
Decision to Parallelize (CREATE Part) The CREATE operation of CREATE TABLE ... AS
SELECT can be parallelized only by a PARALLEL clause or an ALTER SESSION
FORCE PARALLEL DDL statement.
When the CREATE operation of CREATE TABLE ... AS SELECT is parallelized, Oracle
also parallelizes the scan operation if possible. The scan operation cannot be
parallelized if, for example:
■ The SELECT clause has a NOPARALLEL hint
■ The operation scans an index of a nonpartitioned table
When the CREATE operation is not parallelized, the SELECT can be parallelized if it
has a PARALLEL hint or if the selected table (or partitioned index) has a parallel
declaration.
Degree of Parallelism (CREATE Part) The DOP for the CREATE operation, and for the
SELECT operation if it is parallelized, is specified by the PARALLEL clause of the
CREATE statement, unless it is overridden by an ALTER SESSION FORCE
PARALLEL DDL statement. If the PARALLEL clause does not specify the DOP, the
default is the number of CPUs.
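For example (the summary table name is illustrative):
CREATE TABLE sales_summary
  NOLOGGING PARALLEL 6
  AS SELECT prod_id, SUM(amount_sold) AS total_sold
     FROM sales
     GROUP BY prod_id;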
■ The priority (1) specification overrides priority (2) and priority (3).
■ The priority (2) specification overrides priority (3).
The adaptive multiuser feature adjusts the DOP based on user load. For example,
you might have a table with a DOP of 5. This DOP might be acceptable with 10
users. However, if 10 more users enter the system and you enable the PARALLEL_
ADAPTIVE_MULTI_USER feature, Oracle reduces the DOP to spread resources more
evenly according to the perceived system load.
Note: Once Oracle determines the DOP for a query, the DOP does
not change for the duration of the query.
It is best to use the parallel adaptive multiuser feature when users process
simultaneous parallel execution operations. If you enable PARALLEL_AUTOMATIC_
TUNING, Oracle automatically sets PARALLEL_ADAPTIVE_MULTI_USER to TRUE.
PARALLEL_MAX_SERVERS
The recommended value for the PARALLEL_MAX_SERVERS parameter is:
2 x DOP x NUMBER_OF_CONCURRENT_USERS
This condition can be verified through the GV$SYSSTAT view by comparing the
statistics for parallel operations not downgraded and parallel operations
downgraded to serial. For example:
SELECT * FROM GV$SYSSTAT WHERE name LIKE 'Parallel operation%';
When Users Have Too Many Processes When concurrent users have too many query
server processes, memory contention (paging), I/O contention, or excessive context
switching can occur. This contention can reduce system throughput to a level lower
than if parallel execution were not used. Increase the PARALLEL_MAX_SERVERS
value only if the system has sufficient memory and I/O bandwidth for the resulting
load.
You can use operating system performance monitoring tools to determine how
much memory, swap space and I/O bandwidth are free. Look at the runq lengths
for both your CPUs and disks, as well as the service time for I/Os on the system.
Verify that sufficient swap space exists on the machine to add more
processes. Limiting the total number of query server processes might restrict the
number of concurrent users who can execute parallel operations, but system
throughput tends to remain stable.
See Also:
■ Oracle9i Database Administrator’s Guide for more information
about managing resources with user profiles
■ Oracle9i Real Application Clusters Administration for more
information on querying GV$ views
PARALLEL_MIN_SERVERS
The recommended value for the PARALLEL_MIN_SERVERS parameter is 0 (zero),
which is the default.
This parameter is used at startup and lets you specify in a single instance the
number of processes to be started and reserved for parallel operations. The syntax
is:
PARALLEL_MIN_SERVERS=n
The n variable is the number of processes you want to start and reserve for parallel
operations.
Setting PARALLEL_MIN_SERVERS balances the startup cost against memory usage.
Processes started using PARALLEL_MIN_SERVERS do not exit until the database is
shut down. This way, when a query is issued the processes are likely to be available.
It is desirable, however, to recycle query server processes periodically since the
memory these processes use can become fragmented and cause the high water mark
to slowly increase. When you do not set PARALLEL_MIN_SERVERS, processes exit
after they are idle for five minutes.
LARGE_POOL_SIZE or SHARED_POOL_SIZE
The following discussion of how to tune the large pool also applies to tuning the
shared pool, except as noted in "SHARED_POOL_SIZE" on page 21-56. You must
also increase the value for this memory setting by the amount you determine.
You should reduce the value for LARGE_POOL_SIZE low enough so your database
starts. After reducing the value of LARGE_POOL_SIZE, you might see the error:
ORA-04031: unable to allocate 16084 bytes of shared memory ("large
pool","unknown object","large pool heap","PX msg pool")
If so, execute the following query to determine why Oracle could not allocate the
16,084 bytes:
SELECT NAME, SUM(BYTES)
FROM V$SGASTAT
WHERE POOL='LARGE POOL'
GROUP BY ROLLUP (NAME);
If you specify LARGE_POOL_SIZE and the amount of memory you need to reserve
is bigger than the pool, Oracle does not allocate all the memory it can get. Instead, it
leaves some space. When the query runs, Oracle tries to get what it needs. Oracle
uses the 560 KB and needs another 16KB when it fails. The error does not report the
cumulative amount that is needed. The best way of determining how much more
memory is needed is to use the formulas in "Adding Memory for Message Buffers"
on page 21-53.
To resolve the problem in the current example, increase the value for LARGE_POOL_
SIZE. As shown in the sample output, the LARGE_POOL_SIZE is about 2 MB.
Depending on the amount of memory available, you could increase the value of
LARGE_POOL_SIZE to 4 MB and attempt to start your database. If Oracle continues
to display an ORA-4031 message, gradually increase the value for LARGE_POOL_
SIZE until startup is successful.
Adding Memory for Message Buffers You must increase the value for the LARGE_POOL_
SIZE or the SHARED_POOL_SIZE parameters to accommodate message buffers.
The message buffers allow query server processes to communicate with each other.
If you enable automatic parallel tuning, Oracle allocates space for the message
buffer from the large pool. Otherwise, Oracle allocates space from the shared pool.
Oracle uses a fixed number of buffers per virtual connection between producer
query servers and consumer query servers. Connections increase as the square of
the DOP increases. For this reason, the maximum amount of memory used by
parallel execution is bound by the highest DOP allowed on your system. You can
control this value by using either the PARALLEL_MAX_SERVERS parameter or by
using policies and profiles.
To calculate the amount of memory required, use one of the following formulas:
Calculating Additional Memory for Cursors Parallel execution plans consume more space
in the SQL area than serial execution plans. You should regularly monitor shared
pool resource use to ensure that the memory used by both messages and cursors
can accommodate your system's processing requirements.
Make sure the amount of memory allocated is neither too large nor too small. To do this, tune the large
and shared pools after examining the size of structures in the large pool, using the
following query:
SELECT POOL, NAME, SUM(BYTES)
FROM V$SGASTAT
WHERE POOL LIKE '%pool%'
GROUP BY ROLLUP (POOL, NAME);
Evaluate the memory used as shown in your output, and alter the setting for
LARGE_POOL_SIZE based on your processing needs.
To obtain more memory usage statistics, execute the following query:
SELECT * FROM V$PX_PROCESS_SYSSTAT WHERE STATISTIC LIKE 'Buffers%';
The amount of memory used appears in the Buffers Current and Buffers HWM
statistics. Calculate a value in bytes by multiplying the number of buffers by the
value for PARALLEL_EXECUTION_MESSAGE_SIZE. Compare the high water mark
to the parallel execution message pool size to determine if you allocated too much
memory. For example, in the first output, the value for large pool as shown in px
msg pool is 38,092,812 or 38 MB. The Buffers HWM from the second output is
3,620, which when multiplied by a parallel execution message size of 4,096 is
14,827,520, or approximately 15 MB. In this case, the high water mark has reached
approximately 40 percent of its capacity.
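Rather than doing this arithmetic by hand, you can combine the two views
mentioned above in a single query. The following is a sketch only; it simply
multiplies each buffer statistic by the current message size:
SELECT st.statistic, st.value AS buffers,
       st.value * TO_NUMBER(p.value) AS bytes
FROM   V$PX_PROCESS_SYSSTAT st, V$PARAMETER p
WHERE  st.statistic LIKE 'Buffers%'
AND    p.name = 'parallel_execution_message_size';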
SHARED_POOL_SIZE
As mentioned earlier, if PARALLEL_AUTOMATIC_TUNING is FALSE, Oracle
allocates parallel execution message buffers from the shared pool. In this case, tune the shared
pool as described under the previous heading for large pool, with the following
exceptions:
■ Allow for other clients of the shared pool, such as shared cursors and stored
procedures
■ Remember that larger values improve performance in multiuser systems, but
smaller values use less memory
You must also take into account that using parallel execution generates more
cursors. Look at statistics in the V$SQLAREA view to determine how often Oracle
recompiles cursors. If the cursor hit ratio is poor, increase the size of the pool. This
happens only when you have a large number of distinct queries.
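For example, a quick sketch of such a check against V$SQLAREA (the column names
are standard; how to interpret the ratio is up to you):
SELECT SUM(executions)    AS executions,
       SUM(loads)         AS loads,
       SUM(invalidations) AS invalidations
FROM   V$SQLAREA;
If loads grows quickly relative to executions, cursors are being reloaded frequently
and the pool may be too small.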
You can then monitor the number of buffers used by parallel execution in the same
way as explained previously, and compare the shared pool PX msg pool to the
current high water mark reported in output from the view V$PX_PROCESS_
SYSSTAT.
PARALLEL_MIN_PERCENT
The recommended value for the PARALLEL_MIN_PERCENT parameter is 0 (zero).
This parameter allows users to wait for an acceptable DOP, depending on the
application in use. Setting this parameter to values other than 0 (zero) causes Oracle
to return an error when the requested DOP cannot be satisfied by the system at a
given time.
For example, if you set PARALLEL_MIN_PERCENT to 50, which translates to 50
percent, and the DOP is reduced by 50 percent or greater because of the adaptive
algorithm or because of a resource limitation, then Oracle returns ORA-12827. For
example:
SELECT /*+ PARALLEL(e, 8, 1) */ d.deptno, SUM(SAL)
FROM emp e, dept d WHERE e.deptno = d.deptno
GROUP BY d.deptno ORDER BY d.deptno;
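For example, the threshold in this scenario could be set at the session level before
running the query:
ALTER SESSION SET PARALLEL_MIN_PERCENT = 50;
With this setting, the preceding statement returns ORA-12827 if fewer than half of
the requested query servers are available.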
CLUSTER_DATABASE_INSTANCES
The CLUSTER_DATABASE_INSTANCES parameter should be set to a value that is
equal to the number of instances in your Real Application Cluster environment.
The CLUSTER_DATABASE_INSTANCES parameter specifies the number of instances
configured in an Oracle Real Application Cluster environment. Oracle uses the value of this parameter to compute default values for parallel execution parameters, such as LARGE_POOL_SIZE, when PARALLEL_AUTOMATIC_TUNING is set to TRUE.
If memory is oversubscribed so that virtual memory use is several times greater
than real memory, the paging rate might increase when the machine is overloaded.
As a general rule for memory sizing, each process requires adequate address space
for hash joins. A dominant factor in high volume data warehousing operations is
the relationship between memory, the number of processes, and the number of hash
join operations. Hash joins and large sorts are memory-intensive operations, so you
might want to configure fewer processes, each with a greater limit on the amount of
memory it can use.
HASH_AREA_SIZE
You can improve hash join performance with a relatively high value for the HASH_
AREA_SIZE parameter. If you use a relatively high value, you will increase your
memory requirements.
Set HASH_AREA_SIZE using one of two approaches. The first approach examines
how much memory remains available after you configure the SGA and calculate the
amount of memory that processes on the system use during normal loads.
The total amount of memory that Oracle processes are allowed to use should be
divided by the number of processes during the normal load. These processes
include parallel execution servers. This number determines the total amount of
working memory per process. This amount then needs to be shared among different
operations in a given query. For example, setting HASH_AREA_SIZE or SORT_
AREA_SIZE to one-half or one-third of this number is reasonable.
Set these parameters to the highest number that does not cause swapping. After
setting these parameters as described, you should watch for swapping and free
memory. If swapping occurs, decrease the values for these parameters. If a
significant amount of free memory remains, you can increase the values for these
parameters.
The second approach to setting HASH_AREA_SIZE requires a thorough
understanding of the types of hash joins you execute and an understanding of the
amount of data you will be querying against. If the queries and query plans you
execute are well understood, this approach is reasonable.
The value for HASH_AREA_SIZE should be approximately half of the square root of
S, where S is the size in megabytes of the smaller of the inputs to the join operation.
In any case, the value for HASH_AREA_SIZE should not be less than 1 MB.
HASH_AREA_SIZE >= SQRT(S) / 2
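As a worked illustration of this rule (the numbers are hypothetical): if the smaller
input to the join is about 400 MB, then SQRT(400) = 20 and half of that is 10,
suggesting a hash area of roughly 10 MB:
ALTER SESSION SET HASH_AREA_SIZE = 10485760;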
SORT_AREA_SIZE
The recommended values for this parameter range from 256 KB to 4 MB.
This parameter specifies the amount of memory to allocate per query server process
for sort operations. If you have a lot of system memory, you can benefit from setting
SORT_AREA_SIZE to a large value. This can dramatically increase the performance
of sort operations because the entire process is more likely to be performed in
memory. However, if memory is a concern for your system, you might want to limit
the amount of memory allocated for sort and hash operations.
If the sort area is too small, an excessive amount of I/O is required to merge a large
number of sort runs. If the sort area size is smaller than the amount of data to sort,
the sort will move to disk, creating sort runs. These must then be merged again
using the sort area. If the sort area size is very small, there will be many runs to
merge, and multiple passes might be necessary. The amount of I/O increases as
SORT_AREA_SIZE decreases.
If the sort area is too large, the operating system paging rate will be excessive. The
cumulative sort area adds up quickly because each query server process can allocate
this amount of memory for each sort. For such situations, monitor the operating
system paging rate to see if too much memory is being requested.
SORT_AREA_SIZE is relevant to parallel execution operations and to the query
portion of DML or DDL statements. All CREATE INDEX statements must do some
sorting to generate the index. Commands that require sorting include:
■ CREATE INDEX
■ Direct-path INSERT (if an index is involved)
■ ALTER INDEX ... REBUILD
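For example, before building an index in parallel, you might size the sort area for
the session (the table and index names are illustrative):
ALTER SESSION SET SORT_AREA_SIZE = 2097152;
CREATE INDEX sales_cust_idx ON sales (cust_id) PARALLEL 8 NOLOGGING;
Here 2 MB is within the recommended range; remember that each query server
process can allocate this amount for each sort.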
PARALLEL_EXECUTION_MESSAGE_SIZE
The recommended value for PARALLEL_EXECUTION_MESSAGE_SIZE is 4 KB. If
PARALLEL_AUTOMATIC_TUNING is TRUE, the default is 4 KB. If PARALLEL_
AUTOMATIC_TUNING is FALSE, the default is slightly greater than 2 KB.
The PARALLEL_EXECUTION_MESSAGE_SIZE parameter specifies the upper limit
for the size of parallel execution messages. The default value is operating system
specific and this value should be adequate for most applications. Larger values for
PARALLEL_EXECUTION_MESSAGE_SIZE require larger values for LARGE_POOL_
SIZE or SHARED_POOL_SIZE, depending on whether you have enabled parallel
automatic tuning.
While you might experience significantly improved response time by increasing the
value for PARALLEL_EXECUTION_MESSAGE_SIZE, memory use also drastically
increases. For example, if you double the value for PARALLEL_EXECUTION_
MESSAGE_SIZE, parallel execution requires a message source pool that is twice as
large.
Therefore, if you set PARALLEL_AUTOMATIC_TUNING to FALSE, you must adjust
the SHARED_POOL_SIZE to accommodate parallel execution messages. If you have
set PARALLEL_AUTOMATIC_TUNING to TRUE, but have set LARGE_POOL_SIZE
manually, then you must adjust the LARGE_POOL_SIZE to accommodate parallel
execution messages.
PARALLEL_BROADCAST_ENABLE
The default value for the PARALLEL_BROADCAST_ENABLE parameter is FALSE.
Set PARALLEL_BROADCAST_ENABLE to TRUE if you are joining a very large join
result set with a very small result set (size being measured in bytes, rather than
number of rows). In this case, the optimizer has the option of broadcasting the small
set's rows to each of the query server processes that are processing the rows of the
larger set. The result is enhanced performance. If the result set is large, the
optimizer will not broadcast, which avoids excessive communication overhead.
Parameters Affecting Resource Consumption for Parallel DML and Parallel DDL
The parameters that affect parallel DML and parallel DDL resource consumption
are:
■ TRANSACTIONS
■ ROLLBACK_SEGMENTS
■ FAST_START_PARALLEL_ROLLBACK
■ LOG_BUFFER
■ DML_LOCKS
■ ENQUEUE_RESOURCES
Parallel inserts, updates, and deletes require more resources than serial DML
operations. Similarly, PARALLEL CREATE TABLE ... AS SELECT and PARALLEL
CREATE INDEX can require more resources. For this reason, you may need to
increase the value of several additional initialization parameters. These parameters
do not affect resources for queries.
TRANSACTIONS For parallel DML and DDL, each query server process starts a
transaction. The parallel coordinator uses the two-phase commit protocol to commit
transactions; therefore, the number of transactions being processed increases by the
DOP. As a result, you might need to increase the value of the TRANSACTIONS
initialization parameter.
The TRANSACTIONS parameter specifies the maximum number of concurrent
transactions. The default assumes no parallelism. For example, if you have a DOP
of 20, you will have 20 more new server transactions (or 40, if you have two server
sets) and 1 coordinator transaction. In this case, you should increase
TRANSACTIONS by 21 (or 41) if the transactions are running in the same instance. If
you do not set this parameter, Oracle sets it to a value equal to 1.1 x SESSIONS.
DML_LOCKS This parameter specifies the maximum number of DML locks. Its value
should equal the total number of locks on all tables referenced by all users. A
parallel DML operation's lock and enqueue resource requirement is very different
from serial DML. Parallel DML holds many more locks, so you should increase the
value of the ENQUEUE_RESOURCES and DML_LOCKS parameters by equal amounts.
Table 21–4 shows the types of locks acquired by coordinator and parallel execution
server processes for different types of parallel DML statements. Using this
information, you can determine the value required for these parameters.
Consider a table with 600 partitions running with a DOP of 100. Assume all
partitions are involved in a parallel UPDATE or DELETE statement with no
row-migrations.
DB_BLOCK_BUFFERS
When you perform parallel updates, merges, and deletes, the buffer cache behavior
is very similar to any OLTP system running a high volume of updates.
DB_BLOCK_SIZE
The recommended value for this parameter is 8 KB or 16 KB.
Set the database block size when you create the database. If you are creating a new
database, use a large block size such as 8 KB or 16 KB.
DB_FILE_MULTIBLOCK_READ_COUNT
The recommended value for this parameter is eight for 8 KB block size, or four for
16 KB block size. The default is 8.
This parameter determines how many database blocks are read with a single
operating system READ call. The upper limit for this parameter is
platform-dependent. If you set DB_FILE_MULTIBLOCK_READ_COUNT to an
excessively high value, your operating system will lower the value to the highest
allowable level when you start your database. In this case, each platform uses the
highest value possible. Maximum values generally range from 64 KB to 1 MB.
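For example, with an 8 KB block size and DB_FILE_MULTIBLOCK_READ_COUNT
set to 8, each multiblock read requests 8 x 8 KB = 64 KB.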
[Figure: synchronous read versus asynchronous read, contrasting how I/O (read block #1, read block #2) overlaps with CPU work (process block #1, process block #2) when reads are asynchronous.]
Asynchronous operations are currently supported for parallel table scans, hash
joins, sorts, and serial table scans. However, this feature can require operating
system specific configuration and may not be supported on all platforms. Check
your Oracle operating system-specific documentation.
serial execution took 10 minutes while parallel execution took 5 minutes). If the
performance does not meet your expectations, consider the following questions:
■ Did the execution plan change?
If so, you should gather statistics and decide whether to use index-only access
and a CREATE TABLE AS SELECT statement. You should use index hints if your
system is CPU-bound.
You should also study the EXPLAIN PLAN output.
■ Did the data set change?
If so, you should gather statistics to evaluate any differences.
■ Is the hardware overtaxed?
If so, you should check CPU, I/O, and swap memory.
After setting your basic goals and answering these questions, you need to consider
the following topics:
■ Is There Regression?
■ Is There a Plan Change?
■ Is There a Parallel Plan?
■ Is There a Serial Plan?
■ Is There Parallel Execution?
■ Is The Workload Evenly Distributed?
Is There Regression?
Does parallel execution's actual performance deviate from what you expected? If
performance is as you expected, could there be an underlying performance
problem? Perhaps you have a desired outcome in mind to which you are comparing
the current outcome. Perhaps you have justifiable performance expectations that the
system does not achieve. You might have achieved this level of performance or a
particular execution plan in the past, but now, with a similar environment and
operation, the system is not meeting this goal.
If performance is not as you expected, can you quantify the deviation? For data
warehousing operations, the execution plan is key. For critical data warehousing
operations, save the EXPLAIN PLAN results. Then, as you analyze and reanalyze the
data, upgrade Oracle, and load new data, over time you can compare new
execution plans with old plans. Take this approach either proactively or reactively.
Alternatively, you might find that plan performance improves if you use hints. You
might want to understand why hints are necessary and determine how to get the
optimizer to generate the desired plan without hints. Try increasing the statistical
sample size: better statistics can give you a better plan.
Note: Using different sample sizes can cause the plan to change.
Generally, the higher the sample size, the better the plan.
V$PX_SESSION
The V$PX_SESSION view shows data about query server sessions, groups, sets, and
server numbers. It also displays real-time data about the processes working on
behalf of parallel execution. This table includes information about the requested
DOP and the actual DOP granted to the operation.
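For example, a sketch of a query that compares the requested and granted DOP for
active parallel sessions:
SELECT qcsid, sid, server_group, server_set, req_degree, degree
FROM   V$PX_SESSION
ORDER  BY qcsid, server_group, server_set;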
V$PX_SESSTAT
The V$PX_SESSTAT view provides a join of the session information from V$PX_
SESSION and the V$SESSTAT table. Thus, all session statistics available to a normal
session are available for all sessions performed using parallel execution.
V$PX_PROCESS
The V$PX_PROCESS view contains information about the parallel processes,
including status, session ID, process ID, and other information.
V$PX_PROCESS_SYSSTAT
The V$PX_PROCESS_SYSSTAT view shows the status of query servers and
provides buffer allocation statistics.
V$PQ_SESSTAT
The V$PQ_SESSTAT view shows the status of all current server groups in the
system such as data about how queries allocate processes and how the multiuser
and load balancing algorithms are affecting the default and hinted values. V$PQ_
SESSTAT will be obsolete in a future release.
You might need to adjust some parameter settings to improve performance after
reviewing data from these views. In this case, refer to the discussion of "Tuning
General Parameters for Parallel Execution" on page 21-48. Query these views
periodically to monitor the progress of long-running parallel operations.
Note: For many dynamic performance views, you must set the
parameter TIMED_STATISTICS to TRUE in order for Oracle to
collect statistics for each view. You can use the ALTER SYSTEM or
ALTER SESSION statements to turn TIMED_STATISTICS on and
off.
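For example:
ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
ALTER SESSION SET TIMED_STATISTICS = TRUE;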
V$FILESTAT
The V$FILESTAT view sums read and write requests, the number of blocks, and
service times for every datafile in every tablespace. Use V$FILESTAT to diagnose
I/O and workload distribution problems.
You can join statistics from V$FILESTAT with statistics in the DBA_DATA_FILES
view to group I/O by tablespace or to find the filename for a given file number.
Using a ratio analysis, you can determine the percentage of the total tablespace
activity used by each file in the tablespace. If you make a practice of putting just one
large, heavily accessed object in a tablespace, you can use this technique to identify
objects that have a poor physical layout.
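For example, a sketch of such a join, listing the raw read and write statistics by
datafile and tablespace:
SELECT d.tablespace_name, d.file_name,
       f.phyrds, f.phywrts, f.readtim, f.writetim
FROM   V$FILESTAT f, DBA_DATA_FILES d
WHERE  f.file# = d.file_id
ORDER  BY d.tablespace_name, d.file_name;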
You can further diagnose disk space allocation problems using the DBA_EXTENTS
view. Ensure that space is allocated evenly from all files in the tablespace.
Monitoring V$FILESTAT during a long-running operation and then correlating I/O
activity to the EXPLAIN PLAN output is a good way to follow progress.
V$PARAMETER
The V$PARAMETER view lists the name, current value, and default value of all
system parameters. In addition, the view shows whether a parameter is a session
parameter that you can modify online with an ALTER SYSTEM or ALTER SESSION
statement.
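For example, to review the parallel execution parameters currently in effect:
SELECT name, value, isdefault, isses_modifiable, issys_modifiable
FROM   V$PARAMETER
WHERE  name LIKE 'parallel%'
ORDER  BY name;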
V$PQ_TQSTAT
As a simple example, consider a hash join between two tables, with a join on a
column with only 2 distinct values. At best, this hash function will have one value
hash to parallel execution server A and the other to parallel execution server B. A
DOP of two is fine, but, if it is 4, then at least 2 parallel execution servers have no
work. To discover this type of skew, use a query similar to the following example:
SELECT dfo_number, tq_id, server_type, process, num_rows
FROM V$PQ_TQSTAT
ORDER BY dfo_number DESC, tq_id, server_type, process;
The best way to resolve this problem might be to choose a different join method; a
nested loop join might be the best option. Alternatively, if one of the join tables is
small relative to the other, it can be broadcast if PARALLEL_BROADCAST_ENABLE
is set to TRUE or a PQ_DISTRIBUTE hint is used.
Now, assume that you have a join key with high cardinality, but one of the values
contains most of the data, for example, lava lamp sales by year. The only year that
had big sales was 1968, and thus, the parallel execution server for the 1968 records
will be overwhelmed. You should use the same corrective actions as described
above.
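For instance, a sketch of requesting a broadcast through a hint (the table and
column names are illustrative; with ORDERED, the second table in the FROM list is
the inner table named in the hint):
SELECT /*+ ORDERED PQ_DISTRIBUTE(d NONE, BROADCAST) */
       d.dim_name, SUM(f.amount)
FROM   big_fact f, small_dim d
WHERE  f.dim_id = d.dim_id
GROUP  BY d.dim_name;
Here NONE, BROADCAST asks that the rows of the small inner table be broadcast to
the query servers working on the larger table.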
The V$PQ_TQSTAT view provides a detailed report of message traffic at the table
queue level. V$PQ_TQSTAT data is valid only when queried from a session that is
executing parallel SQL statements. A table queue is the pipeline between query
server groups, between the parallel coordinator and a query server group, or
between a query server group and the coordinator. Table queues are represented in
EXPLAIN PLAN output by the row labels of PARALLEL_TO_PARALLEL, SERIAL_
TO_PARALLEL, or PARALLEL_TO_SERIAL, respectively.
V$PQ_TQSTAT has a row for each query server process that reads from or writes to
each table queue. A table queue connecting 10 consumer processes to 10
producer processes has 20 rows in the view. Sum the bytes column and group by
TQ_ID, the table queue identifier, to obtain the total number of bytes sent through
each table queue. Compare this with the optimizer estimates; large variations might
indicate a need to analyze the data using a larger sample.
The processes shown in the output from the previous example using
GV$PX_SESSION collaborate to complete the same task. The next example shows
the execution of a join query to determine the progress of these processes in terms
of physical reads. Use this query to track any specific statistic:
SELECT QCSID, SID, INST_ID "Inst",
SERVER_GROUP "Group", SERVER_SET "Set",
NAME "Stat Name", VALUE
FROM GV$PX_SESSTAT A, V$STATNAME B
WHERE A.STATISTIC# = B.STATISTIC#
AND NAME LIKE 'physical reads'
AND VALUE > 0
ORDER BY QCSID, QCINST_ID, SERVER_GROUP, SERVER_SET;
Use the previous type of query to track statistics in V$STATNAME. Repeat this query
as often as required to observe the progress of the query server processes.
The next query uses V$PX_PROCESS to check the status of the query servers.
SELECT * FROM V$PX_PROCESS;
14 rows selected.
An instance has affinity for a tablespace (or a partition of a table or index within a
tablespace) if the instance has affinity for the first file in the tablespace.
Oracle considers affinity when allocating work to parallel execution servers. The
use of affinity for parallel execution of SQL statements is transparent to users.
The total memory requirement is:
sga_size
+ (# low_memory_processes * low_memory_required)
+ (# medium_memory_processes * medium_memory_required)
+ (# high_memory_processes * high_memory_required)
The formula to calculate the maximum number of processes your system can
support (referred to here as max_processes) is:
max_processes = # low_memory_processes
              + # medium_memory_processes
              + # high_memory_processes
In general, if the value for MAX_PROCESSES is much larger than the number of
users, consider using parallel operations. If MAX_PROCESSES is considerably less
than the number of users, consider other alternatives, such as those described in the
following section on "Balancing the Formula".
On average, no more than 5 percent of the time should be spent simply waiting in
the operating system on page faults. A wait time of more than 5 percent indicates
your paging subsystem is I/O-bound. Use your operating system monitor to check
wait time.
If wait time for paging devices exceeds 5 percent, you can reduce memory
requirements in one of the following ways:
■ Reduce the memory required for each class of process.
■ Reduce the number of processes in memory-intensive classes.
■ Add memory.
If the wait time indicates an I/O bottleneck in the paging subsystem, you could
resolve this by striping.
External Fragmentation
External fragmentation is a concern for parallel load, direct-path INSERT, and
PARALLEL CREATE TABLE ... AS SELECT. Free space in a tablespace tends to become
fragmented as extents are allocated and data is inserted and deleted. This can result
in a fair amount of free space that is unusable because it consists of small,
noncontiguous extents.
To reduce external fragmentation on partitioned tables, set all extents to the same
size. Set the value for NEXT equal to the value for INITIAL, and set PCTINCREASE
to 0. The system can handle this well with a few thousand extents per
object. Therefore, set MAXEXTENTS to, for example, 1,000 to 3,000. Never attempt to
use a value for MAXEXTENTS in excess of 10,000. For tables that are not partitioned,
the initial extent should be small. In general, the smaller the extent, the better
utilization of space. The trade-off is that your system will spend more time getting
new extents.
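For example, a sketch of a uniform-extent storage clause along these lines (the table
definition is illustrative):
CREATE TABLE sales_load_stage (
  sale_id NUMBER,
  amount  NUMBER
)
STORAGE (INITIAL 20M NEXT 20M PCTINCREASE 0 MAXEXTENTS 2000);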
Free Space
Schema objects from an OLTP database are often duplicated in the data warehouse.
However, these objects will probably not be subject to the same mix of insert versus
update activity in the data warehouse as in the OLTP environment. The PCTFREE
storage clause can be reduced in the data warehouse environment if the data is
loaded and then very seldom updated. The default value is 10, which reserves 10
percent of each block that is loaded for future updates. An OLTP environment may
use higher values, so care should be taken when importing schema DDL from OLTP
systems.
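For example, a table that is bulk loaded and rarely updated might be created with a
smaller value (the table definition is illustrative):
CREATE TABLE customer_dim (
  cust_id   NUMBER,
  cust_name VARCHAR2(50)
)
PCTFREE 5;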
You can increase the optimizer's ability to generate parallel plans by converting
subqueries, especially correlated subqueries, into joins. Oracle can parallelize joins
more efficiently than subqueries. This also applies to updates.
When combined with the NOLOGGING option, the parallel version of CREATE
TABLE ... AS SELECT provides a very efficient intermediate table facility, for
example:
CREATE TABLE summary PARALLEL NOLOGGING
AS SELECT dim_1, dim_2 ..., SUM (meas_1)
FROM facts
GROUP BY dim_1, dim_2;
These tables can also be incrementally loaded with parallel INSERT. You can take
advantage of intermediate tables using the following techniques:
■ Common subqueries can be computed once and referenced many times. This
can allow some queries against star schemas (in particular, queries without
selective WHERE-clause predicates) to be better parallelized. Note that star
queries with selective WHERE-clause predicates using the star-transformation
technique can be effectively parallelized automatically without any
modification to the SQL.
■ Decompose complex queries into simpler steps in order to provide
application-level checkpoint or restart. For example, a complex multitable join
on a database 1 terabyte in size could run for dozens of hours. A crash during
this query would mean starting over from the beginning. Using CREATE TABLE
... AS SELECT or PARALLEL INSERT AS SELECT, you can rewrite the query as a
sequence of simpler queries that run for a few hours each. If a system failure
occurs, the query can be restarted from the last completed step.
■ Implement manual parallel deletes efficiently by creating a new table that omits
the unwanted rows from the original table, and then dropping the original
table. Alternatively, you can use the convenient parallel delete feature, which
directly deletes rows from the original table.
■ Create summary tables for efficient multidimensional drill-down analysis. For
example, a summary table might store the sum of revenue grouped by month,
brand, region, and salesman.
■ Reorganize tables, eliminating chained rows, compressing free space, and so on,
by copying the old table to a new table. This is much faster than export/import
and easier than reloading.
At the same time, temporary extents should be large enough that processes do not
have to wait for space. Temporary tablespaces use less overhead than permanent
tablespaces when allocating and freeing a new extent. However, obtaining a new
temporary extent still requires the overhead of acquiring a latch and searching
through the SGA structures, as well as SGA space consumption for the sort extent
pool. Also, if extents are too small, SMON might take a long time dropping old sort
segments when new instances start up.
Because the dictionary sees the size as 1 KB, which is less than the extent size, the
corrupt file is not accessed. Eventually, you might wish to re-create the tablespace.
To make your temporary tablespace available for use, enter:
ALTER USER scott TEMPORARY TABLESPACE TStemp;
Besides query performance, you should also monitor parallel load, parallel index
creation, and parallel DML, and look for good utilization of I/O and CPU resources.
There are several ways to optimize the parallel execution of join statements. You can
alter system configuration, adjust parameters as discussed earlier in this chapter, or
use hints, such as the PQ_DISTRIBUTE hint.
The key points when using EXPLAIN PLAN are to:
■ Verify optimizer selectivity estimates. If the optimizer thinks that only one row
will be produced from a query, it tends to favor using a nested loop. This could
be an indication that the tables are not analyzed or that the optimizer has made
an incorrect estimate about the correlation of multiple predicates on the same
table. A hint may be required to force the optimizer to use another join method.
Consequently, if the plan says only one row is produced from any particular
stage and this is incorrect, consider hints or gather statistics.
■ Watch for hash joins on low cardinality join keys. If a join key has few distinct
values, then a hash join may not be optimal. If the number of distinct values is less
than the DOP, then some parallel query servers may be unable to work on the
particular query.
■ Consider data skew. If a join key involves excessive data skew, a hash join may
require some parallel query servers to work more than others. Consider setting
PARALLEL_BROADCAST_ENABLE to TRUE or using a hint to cause a broadcast.
In CREATE INDEX or ALTER INDEX statements, you should set INITRANS, the initial number
of transactions allocated within each data block, to a large value, such as the
maximum DOP against this index. Leave MAXTRANS, the maximum number of
concurrent transactions that can update a data block, at its default value, which is
the maximum your system can support. This value should not exceed 255.
If you run a DOP of 10 against a table with a global index, all 10 server processes
might attempt to change the same global index block. For this reason, you must set
MAXTRANS to at least 10 so all server processes can make the change at the same
time. If MAXTRANS is not large enough, the parallel DML operation fails.
In this case, you should consider increasing the DBWn processes. If there are no
waits for free buffers, the query will not return any rows.
[NO]LOGGING Clause
The [NO]LOGGING clause applies to tables, partitions, tablespaces, and indexes.
Virtually no log is generated for certain operations (such as direct-path INSERT) if
the NOLOGGING clause is used. The NOLOGGING attribute is not specified at the
INSERT statement level but is instead specified when using the ALTER or CREATE
statement for a table, partition, index, or tablespace.
When a table or index has NOLOGGING set, neither parallel nor serial direct-path
INSERT operations generate undo or redo logs. Processes running with the
NOLOGGING option set run faster because no redo is generated. However, after a
NOLOGGING operation against a table, partition, or index, if a media failure occurs
before a backup is taken, then all tables, partitions, and indexes that have been
modified might be corrupted.
When you add or enable a UNIQUE or PRIMARY KEY constraint on a table, you
cannot automatically create the required index in parallel. Instead, manually create
an index on the desired columns, using the CREATE INDEX statement and an
appropriate PARALLEL clause, and then add or enable the constraint. Oracle then
uses the existing index when enabling or adding the constraint.
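A sketch of this sequence (the table, column, and index names are illustrative):
CREATE UNIQUE INDEX customers_pk_idx ON customers (cust_id) PARALLEL 8;
ALTER TABLE customers ADD CONSTRAINT customers_pk PRIMARY KEY (cust_id);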
Multiple constraints on the same table can be enabled concurrently and in parallel if
all the constraints are already in the ENABLE NOVALIDATE state. In the following
example, the ALTER TABLE ... ENABLE CONSTRAINT statement performs the table
scan that checks the constraint in parallel:
CREATE TABLE a (a1 NUMBER CONSTRAINT ach CHECK (a1 > 0) ENABLE NOVALIDATE)
PARALLEL;
INSERT INTO a values (1);
COMMIT;
ALTER TABLE a ENABLE CONSTRAINT ach;
INSERT
Oracle INSERT functionality can be summarized as follows:
If parallel DML is enabled and there is a PARALLEL hint or PARALLEL attribute set
for the table in the data dictionary, then inserts are parallel and appended, unless a
restriction applies. If either the PARALLEL hint or PARALLEL attribute is missing,
the insert is performed serially.
Direct-path INSERT
Append mode is the default during a parallel insert: data is always inserted into a
new block which is allocated to the table. Therefore the APPEND hint is optional.
You should use append mode to increase the speed of INSERT operations, but not
when space utilization needs to be optimized. You can use NOAPPEND to override
append mode.
The APPEND hint applies to both serial and parallel insert: even serial inserts are
faster if you use this hint. APPEND, however, does require more space and locking
overhead.
You can use NOLOGGING with APPEND to make the process even faster. NOLOGGING
means that no redo log is generated for the operation. NOLOGGING is never the
default; use it when you wish to optimize performance. It should not normally be
used when recovery is needed for the table or partition. If recovery is needed, be
sure to take a backup immediately after the operation. Use the ALTER TABLE
[NO]LOGGING statement to set the appropriate value.
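For example, a sketch of a fast bulk insert (the table names are illustrative; take a
backup afterward if recoverability matters):
ALTER TABLE sales NOLOGGING;
INSERT /*+ APPEND */ INTO sales
SELECT * FROM sales_stage;
COMMIT;
ALTER TABLE sales LOGGING;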
Parallelizing INSERT ... SELECT In the INSERT ... SELECT statement you can specify a
PARALLEL hint after the INSERT keyword, in addition to the hint after the SELECT
keyword. The PARALLEL hint after the INSERT keyword applies to the INSERT
operation only, and the PARALLEL hint after the SELECT keyword applies to the
SELECT operation only. Thus, parallelism of the INSERT and SELECT operations
are independent of each other. If one operation cannot be performed in parallel, it
has no effect on whether the other operation can be performed in parallel.
The ability to parallelize inserts causes a change in existing behavior if the user has
explicitly enabled the session for parallel DML and if the table in question has a
PARALLEL attribute set in the data dictionary entry. In that case, existing INSERT ...
SELECT statements that have the select operation parallelized can also have their
insert operation parallelized.
If you query multiple tables, you can specify multiple SELECT PARALLEL hints and
multiple PARALLEL attributes.
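A sketch of such a statement (the table names are illustrative):
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ PARALLEL(emp, 4) */ INTO emp
SELECT /*+ PARALLEL(emp_source, 4) */ * FROM emp_source;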
The APPEND keyword is not required in this example because it is implied by the
PARALLEL hint.
Parallelizing UPDATE and DELETE The PARALLEL hint (placed immediately after the
UPDATE or DELETE keyword) applies not only to the underlying scan operation,
but also to the UPDATE or DELETE operation. Alternatively, you can specify UPDATE
or DELETE parallelism in the PARALLEL clause specified in the definition of the
table to be modified.
If you have explicitly enabled parallel DML for the session or transaction, UPDATE
or DELETE statements that have their query operation parallelized can also have
their UPDATE or DELETE operation parallelized. Any subqueries or updatable views
in the statement can have their own separate PARALLEL hints or clauses, but these
parallel directives do not affect the decision to parallelize the update or delete. If
these operations cannot be performed in parallel, it has no effect on whether the
UPDATE or DELETE portion can be performed in parallel.
Tables must be partitioned in order to support parallel UPDATE and DELETE.
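A sketch of a parallel update (the emp table is illustrative and assumed to be
partitioned):
ALTER SESSION ENABLE PARALLEL DML;
UPDATE /*+ PARALLEL(emp, 4) */ emp
SET    sal = sal * 1.10
WHERE  deptno = 10;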
In such statements, the PARALLEL hint applies to the underlying scan as well as to
the UPDATE or DELETE operation on the target table.
contains either new rows or rows that have been updated since the last refresh of
the data warehouse. In this example, the updated data is shipped from the
production system to the data warehouse system by means of ASCII files. These
files must be loaded into a temporary table, named diff_customer, before
starting the refresh process. You can use SQL*Loader with both the parallel and
direct options to efficiently perform this task.
Once diff_customer is loaded, the refresh process can be started. It can be
performed in two phases or with a newer technique:
■ Updating the Table in Parallel
■ Inserting the New Rows into the Table in Parallel
■ Merging in Parallel
You can then update the customer table with the following SQL statement:
UPDATE /*+ PARALLEL(cust_joinview) */
(SELECT /*+ PARALLEL(customer) PARALLEL(diff_customer) */
CUSTOMER.c_name as c_name, CUSTOMER.c_addr as c_addr,
diff_customer.c_name as c_newname, diff_customer.c_addr as c_newaddr
FROM customer, diff_customer
WHERE customer.c_key = diff_customer.c_key) cust_joinview
SET c_name = c_newname, c_addr = c_newaddr;
The base scans feeding the join view cust_joinview are done in parallel. You can
then parallelize the update to further improve performance, but only if the
customer table is partitioned.
See Also:
■ "Rewriting SQL Statements" on page 21-85
■ Oracle9i Application Developer’s Guide - Fundamentals for
information about key-preserved tables
However, you can guarantee that the subquery is transformed into an anti-hash join
by using the HASH_AJ hint. Doing so enables you to use parallel INSERT to execute
the preceding statement efficiently. Parallel INSERT is applicable even if the table is
not partitioned.
Merging in Parallel
In Oracle9i, you can combine the previous updates and inserts into one statement,
commonly known as an upsert or merge. The following statement achieves the
same result as all of the statements in "Updating the Table in Parallel" on page 21-98
and "Inserting the New Rows into the Table in Parallel" on page 21-99:
MERGE INTO customer USING diff_customer
ON (diff_customer.c_key = customer.c_key)
WHEN MATCHED THEN
UPDATE SET c_name = diff_customer.c_name, c_addr = diff_customer.c_addr
WHEN NOT MATCHED THEN
INSERT VALUES (diff_customer.c_key,diff_customer.c_data);
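To run this merge in parallel, a sketch of the necessary setup (relying on the session
and table-level parallelism described earlier in this chapter):
ALTER SESSION ENABLE PARALLEL DML;
ALTER TABLE customer PARALLEL 4;
-- Re-issue the MERGE statement shown above; its scan, update, and insert
-- portions can then be executed in parallel.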
Use discretion in employing hints. If used, hints should come as a final step in
tuning and only when they demonstrate a necessary and significant performance
advantage. In such cases, begin with the execution plan recommended by
cost-based optimization, and go on to test the effect of hints only after you have
quantified your performance expectations. Remember that hints are powerful. If
you use them and the underlying data changes, you might need to change the hints.
Otherwise, the effectiveness of your execution plans might deteriorate.
Always use cost-based optimization unless you have an existing application that
has been hand-tuned for rule-based optimization. If you must use rule-based
optimization, rewriting a SQL statement can greatly improve application
performance.
Cost-Based Rewrite
Query rewrite is available with cost-based optimization. Oracle optimizes the input
query with and without rewrite and selects the least costly alternative. The
optimizer rewrites a query by rewriting one or more query blocks, one at a time.
If the rewrite logic has a choice between multiple materialized views to rewrite a
query block, it will select one to optimize the ratio of the sum of the cardinality of
the tables in the rewritten query block to that in the original query block. Therefore,
the materialized view selected would be the one which can result in reading in the
least amount of data.
After a materialized view has been picked for a rewrite, the optimizer performs the
rewrite, and then tests whether the rewritten query can be rewritten further with
another materialized view. This process continues until no further rewrites are
possible. Then the rewritten query is optimized and the original query is optimized.
The optimizer compares these two optimizations and selects the least costly
alternative.
Since optimization is based on cost, it is important to collect statistics both on tables
involved in the query and on the tables representing materialized views. Statistics
are fundamental measures, such as the number of rows in a table, that are used to
calculate the cost of a rewritten query. They are created by using the DBMS_STATS
package.
Queries that contain in-line or named views are also candidates for query rewrite.
When a query contains a named view, the view name is used to do the matching
between a materialized view and the query. When a query contains an inline view,
the inline view can be merged into the query before matching between a
materialized view and the query occurs.
In addition, if the inline view's text definition exactly matches with that of an inline
view present in any eligible materialized view, general rewrite may be possible.
This is because, whenever a materialized view contains exactly identical inline view
text to the one present in a query, query rewrite treats such an inline view like a
named view or a table.
The following presents a graphical view of the cost-based approach.
[Figure: the query rewrite process. Oracle9i takes the user's SQL, generates a rewritten plan and a plan for the original query, chooses between them based on cost, and executes the cheaper alternative.]
■ Either all or part of the results requested by the query must be obtainable from
the precomputed result stored in the materialized view.
To determine this, the optimizer may depend on some of the data relationships
declared by the user using constraints and dimensions. Such data relationships
include hierarchies, referential integrity, and uniqueness of key data, and so on.
The query rewrite examples in this chapter mainly refer to the following
materialized views. Note that these materialized views do not necessarily represent
the most efficient implementation for the Sales History business example; rather,
they serve as the basis for demonstrating Oracle's rewrite capabilities.
Further examples demonstrating specific functionality can be found in the specific
context.
Materialized views containing joins and aggregates:
CREATE MATERIALIZED VIEW sum_sales_pscat_week_mv
ENABLE QUERY REWRITE
AS
SELECT p.prod_subcategory, t.week_ending_day,
SUM(s.amount_sold) AS sum_amount_sold
FROM sales s, products p, times t
WHERE s.time_id=t.time_id
AND s.prod_id=p.prod_id
GROUP BY p.prod_subcategory, t.week_ending_day;
SUM(s.amount_sold) AS sum_amount_sold
FROM sales s, products p, times t
WHERE s.time_id=t.time_id
AND s.prod_id=p.prod_id
GROUP BY p.prod_id, t.week_ending_day, s.cust_id;
You must collect statistics on the materialized views so that the optimizer can
determine whether to rewrite the queries. You can do this either on a per-object
basis or for all newly created objects without statistics.
On a per-object basis, this is shown for JOIN_SALES_TIME_PRODUCT_MV:
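A sketch of such a call (assuming the materialized view resides in the SH schema;
the sampling options are illustrative):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SH',
    tabname          => 'JOIN_SALES_TIME_PRODUCT_MV',
    estimate_percent => 20,
    block_sample     => TRUE,
    cascade          => TRUE);
END;
/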
See Also: Oracle9i Supplied PL/SQL Packages and Types Reference for
further information about using the DBMS_STATS package to
maintain statistics
With OPTIMIZER_MODE set to CHOOSE, a query will not be rewritten unless at least
one table referenced by it has been analyzed. This is because the rule-based
optimizer is used when OPTIMIZER_MODE is set to CHOOSE and none of the tables
referenced in a query have been analyzed.
You can set the level of query rewrite for a session, thus allowing different users to
work at different integrity levels. The possible statements are:
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = STALE_TOLERATED;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = ENFORCED;
Rewrite Hints
Hints can be included in SQL statements to control whether query rewrite occurs.
Using the NOREWRITE hint in a query prevents the optimizer from rewriting it.
The REWRITE hint with no argument in a query forces the optimizer to use a
materialized view (if any) to rewrite it regardless of the cost.
The REWRITE(mv1,mv2,...) hint with arguments forces rewrite to select the
most suitable materialized view from the list of names specified.
To prevent a rewrite, you can use the following statement:
SELECT /*+ NOREWRITE */ p.prod_subcategory, SUM(s.amount_sold)
FROM sales s, products p
WHERE s.prod_id=p.prod_id
GROUP BY p.prod_subcategory;
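Conversely, to force a rewrite against a specific materialized view, such as
sum_sales_pscat_week_mv defined earlier, you can use:
SELECT /*+ REWRITE(sum_sales_pscat_week_mv) */
       p.prod_subcategory, t.week_ending_day, SUM(s.amount_sold)
FROM   sales s, products p, times t
WHERE  s.time_id = t.time_id
AND    s.prod_id = p.prod_id
GROUP  BY p.prod_subcategory, t.week_ending_day;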
Note that the scope of a rewrite hint is a query block. If a SQL statement consists of
several query blocks (SELECT clauses), you might need to specify a rewrite hint on
each query block to control the rewrite for the entire statement.
The system privilege QUERY REWRITE lets you enable materialized views in your
own schema for query rewrite only if all tables directly referenced by the
materialized view are in that schema. The GLOBAL QUERY REWRITE privilege
allows you to enable materialized views for query rewrite even if the materialized
view references objects in other schemas.
The privileges for using materialized views for query rewrite are similar to those for
definer-rights procedures.
several situations where the output with rewrite can be different from that without
it.
1. A materialized view can be out of synchronization with the master copy of the
data. This generally happens because the materialized view refresh procedure is
pending following bulk load or DML operations to one or more detail tables of
a materialized view. At some data warehouse sites, this situation is desirable
because it is not uncommon for some materialized views to be refreshed at
certain time intervals.
2. The relationships implied by the dimension objects are invalid. For example,
values at a certain level in a hierarchy do not roll up to exactly one parent value.
3. The values stored in a prebuilt materialized view table might be incorrect.
4. Partition operations such as DROP and MOVE PARTITION on the detail table
could affect the results of the materialized view.
5. A wrong answer can occur because of bad data relationships defined by
unenforced table or view constraints.
When full text match fails, the optimizer then attempts a partial text match. In this
method, the text starting from the FROM clause of a query is compared against the
text starting with the FROM clause of a materialized view definition. Therefore, this
query:
SELECT p.prod_subcategory, t.calendar_month_desc, c.cust_city,
AVG(s.amount_sold)
FROM sales s, products p, times t, customers c
WHERE s.time_id=t.time_id
AND s.prod_id=p.prod_id
AND s.cust_id=c.cust_id
GROUP BY p.prod_subcategory, t.calendar_month_desc, c.cust_city;
is rewritten as:
SELECT prod_subcategory, calendar_month_desc, cust_city,
sum_amount_sold/count_amount_sold
FROM sum_sales_pscat_month_city_mv;
Note that, under the partial text match rewrite method, the average of sales
aggregate required by the query is computed using the sum of sales and count of
sales aggregates stored in the materialized view.
When neither text match succeeds, the optimizer uses a general query rewrite
method.
Also note that text comparison is case sensitive, so keywords like FROM must be in
the same case.
Table 22–1 Materialized View Types and General Query Rewrite Methods
                          MV with      MV with Joins     MV with Aggregates
                          Joins Only   and Aggregates    on a Single Table
Selection Compatibility   X            X                 X
Join Compatibility        X            X                 -
Data Sufficiency          X            X                 X
Grouping Compatibility    -            X                 X
Aggregate Computability   -            X                 X
To perform these checks, the optimizer uses data relationships on which it can
depend. For example, primary key and foreign key relationships tell the optimizer
that each row in the foreign key table joins with at most one row in the primary key
table. Furthermore, if there is a NOT NULL constraint on the foreign key, it indicates
that each row in the foreign key table must join to exactly one row in the primary
key table.
Data relationships such as these are very important for query rewrite because they
tell what type of result is produced by joins, grouping, or aggregation of data.
Therefore, to maximize the rewritability of a large set of queries when such data
relationships exist in a database, they should be declared by the user.
View Constraints
Data warehouse applications recognize multi-dimensional cubes in the database by
identifying integrity constraints in the relational schema. Integrity constraints
represent primary and foreign key relationships between fact and dimension tables.
By querying the data dictionary, applications can recognize integrity constraints
and hence the cubes in the database. However, this does not work in an
environment where DBAs, for schema complexity or security reasons, define views
on fact and dimension tables. In such environments, applications cannot identify
the cubes properly. By allowing constraint definitions between views, you can
propagate base table constraints to the views, thereby allowing applications to
recognize cubes even in a restricted environment.
View constraint definitions are declarative in nature, but operations on views are
subject to the integrity constraints defined on the underlying base tables, and
constraints on views can be enforced through constraints on base tables. Defining
constraints on base tables is necessary, not only for data correctness and cleanliness,
but also for materialized view query rewrite purposes using the original base
objects.
Materialized view rewrite extensively uses constraints for query rewrite. They are
used for determining lossless joins, which, in turn, determine if joins in the
materialized view are compatible with joins in the query and thus if rewrite is
possible.
DISABLE NOVALIDATE is the only valid state for a view constraint. However, you
can choose RELY or NORELY as the view constraint state to enable more
sophisticated query rewrites. For example, a view constraint in the RELY state
allows query rewrite to occur when the query integrity level is set to TRUSTED.
Table 22–3 illustrates when view constraints are used for determining lossless joins.
Note that view constraints cannot be used for query rewrite integrity level
ENFORCED. This level requires the highest degree of constraint enforcement,
ENABLE VALIDATE.
You can now establish a foreign key-primary key relationship (in RELY mode)
between the view and the fact table by adding the following constraints, and
rewrite will take place as described in Table 22–3. Rewrite will then work, for
example, in TRUSTED mode.
ALTER VIEW time_view ADD (CONSTRAINT time_view_pk
PRIMARY KEY (time_id) DISABLE NOVALIDATE);
ALTER VIEW time_view MODIFY CONSTRAINT time_view_pk RELY;
ALTER TABLE sales ADD (CONSTRAINT time_view_fk FOREIGN KEY (time_id)
REFERENCES time_view(time_id) DISABLE NOVALIDATE);
ALTER TABLE sales MODIFY CONSTRAINT time_view_fk RELY;
The following query, omitting the dimension table products, will also be rewritten
without the primary key/foreign key relationships, because the suppressed join
between sales and products is known to be lossless.
SELECT t.day_in_year,
SUM(s.amount_sold) AS sum_amount_sold
FROM time_view t, sales s
WHERE t.time_id = s.time_id
GROUP BY t.day_in_year;
To revert the changes you have made to the sales history schema, apply the
following SQL commands:
ALTER TABLE sales DROP CONSTRAINT time_view_fk;
DROP VIEW time_view;
Expression Matching
An expression that appears in a query can be replaced with a simple column in a
materialized view provided the materialized view column represents a
precomputed expression that matches with the expression in the query. If a query
can be rewritten to use a materialized view, it will be faster. This is because
materialized views contain precomputed calculations and do not need to perform
expression computation.
The expression matching is done by first converting the expressions into canonical
forms and then comparing them for equality. Therefore, two different expressions
will be matched as long as they are equivalent to each other. Further, if the entire
expression in a query fails to match with an expression in a materialized view, then
subexpressions of it are tried to find a match. The subexpressions are tried in a
top-down order to get maximal expression matching.
Consider a query that asks for the sum of sales by age brackets (1-10, 11-20, 21-30,
and so on). The following materialized view precomputes that aggregation:
CREATE MATERIALIZED VIEW sales_by_age_bracket_mv
ENABLE QUERY REWRITE
AS
SELECT TO_CHAR((2000-c.cust_year_of_birth)/10-0.5,999) AS age_bracket,
SUM(s.amount_sold) AS sum_amount_sold
FROM sales s, customers c
WHERE s.cust_id=c.cust_id
GROUP BY TO_CHAR((2000-c.cust_year_of_birth)/10-0.5,999);
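A query of the following form, which simply mirrors the materialized view's
grouping expression, is a candidate for rewrite against sales_by_age_bracket_mv:
SELECT TO_CHAR((2000-c.cust_year_of_birth)/10-0.5,999) AS age_bracket,
       SUM(s.amount_sold) AS sum_amount_sold
FROM   sales s, customers c
WHERE  s.cust_id = c.cust_id
GROUP  BY TO_CHAR((2000-c.cust_year_of_birth)/10-0.5,999);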
Date Folding
Date folding rewrite is a specific form of expression matching rewrite. In this type
of rewrite, a date range in a query is folded into an equivalent date range
representing higher date granules. The resulting expressions representing higher
date granules in the folded date range are matched with equivalent expressions in a
materialized view. The folding of date range into higher date granules such as
months, quarters, or years is done when the underlying datatype of the column is
an Oracle DATE. The expression matching is done based on the use of canonical
forms for the expressions.
DATE is a built-in datatype which represents ordered time units such as seconds,
days, and months, and incorporates a time hierarchy (second -> minute -> hour ->
day -> month -> quarter -> year). This hard-coded knowledge about DATE is used
in folding date ranges from lower-date granules to higher-date granules.
Specifically, folding a date value to the beginning of a month, quarter, year, or to the
end of a month, quarter, year is supported. For example, the date value
1-jan-1999 can be folded into the beginning of either year 1999 or quarter
1999-1 or month 1999-01. And, the date value 30-sep-1999 can be folded into
the end of either quarter 1999-03 or month 1999-09.
Because date values are ordered, any range predicate specified on date columns can
be folded from lower level granules into higher level granules provided the date
range represents an integral number of higher level granules. For example, the
range predicate date_col BETWEEN '1-jan-1999' AND '30-jun-1999' can
be folded into either a month range or a quarter range using the TO_CHAR function,
which extracts specific date components from a date value.
The advantage of aggregating data by folded date values is the compression of data
achieved. Without date folding, the data is aggregated at the lowest granularity
level, resulting in increased disk space for storage and increased I/O to scan the
materialized view.
Consider a query that asks for the sum of sales by product type for the years 1991,
1992, and 1993, using the same fact and product tables as the materialized view
mv3 defined below:
SELECT prod_type, SUM(sales) AS sum_sales
FROM fact, product
WHERE fact.prod_id = product.prod_id
AND sale_date >= TO_DATE('01-jan-1991', 'dd-mon-yyyy')
AND sale_date <= TO_DATE('31-dec-1993', 'dd-mon-yyyy')
GROUP BY prod_type;
The range specified in the query represents an integral number of years, quarters, or
months. Assume that there is a materialized view mv3 that contains
pre-summarized sales by prod_type and is defined as follows:
CREATE MATERIALIZED VIEW mv3
ENABLE QUERY REWRITE
AS
SELECT prod_type, TO_CHAR(sale_date,'yyyy-mm') AS month, SUM(sales) AS sum_sales
FROM fact, product
WHERE fact.prod_id = product.prod_id
GROUP BY prod_type, TO_CHAR(sale_date, 'yyyy-mm');
The query can be rewritten by first folding the date range into the month range and
then matching the expressions representing the months with the month expression
in mv3. This rewrite is shown below in two steps (first folding the date range
followed by the actual rewrite).
SELECT prod_type, SUM(sales) AS sum_sales
FROM fact, product
WHERE fact.prod_id = product.prod_id AND
TO_CHAR(sale_date, 'yyyy-mm') BETWEEN '1991-01' AND '1993-12'
GROUP BY prod_type;
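The second step, the actual rewrite against mv3, then looks roughly like this (a
sketch):
SELECT prod_type, SUM(sum_sales) AS sum_sales
FROM   mv3
WHERE  month BETWEEN '1991-01' AND '1993-12'
GROUP  BY prod_type;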
Selection Compatibility
Oracle supports rewriting of queries so that they will use materialized views in
which the HAVING or WHERE clause of the materialized view contains a selection of
a subset of the data in a table or tables. A materialized view's WHERE or HAVING
clause can contain a join, a selection, or both, and still be used by a rewritten query.
Predicate clauses containing expressions, or selecting rows based on the values of
particular columns, are examples of non-join predicates.
To perform this type of query rewrite, Oracle must determine if the data requested
in the query is contained in, or is a subset of, the data stored in the materialized
view. This problem is sometimes referred to as the data containment problem or, in
more general terms, the problem of a restricted subset of data in a materialized
view. The following sections detail the conditions where Oracle can solve this
problem and thus rewrite a query to use a materialized view that contains a
restricted portion of the data in the detail table.
Selection compatibility is performed when both the query and the materialized
view contain selections (non-joins). A selection compatibility check is done on the
WHERE as well as the HAVING clause. If the materialized view contains selections
and the query does not, then the selection compatibility check fails because the
materialized view is more restrictive than the query. If the query has selections and
the materialized view does not, then the selection compatibility check is not needed.
Regardless, selections and any columns mentioned in them must pass the data
sufficiency check.
Definitions
The following definitions are introduced to help the discussion:
■ <join relop>
Is one of the following (=, <, <=, >, >=)
■ <selection relop>
Is (=, <, <=, >, >=, !=, [NOT] BETWEEN | IN| LIKE |NULL)
■ <join predicate>
Is of the form (<column1> <join relop> <column2>), where columns
are from different tables within the same FROM clause in the current query
block. So, for example, there cannot be an outer reference.
■ <selection predicate>
Is of the form <LHS-expression><relop><RHS-expression>, where LHS
means left-hand side and RHS means right-hand side. All non-join predicates
are selection predicates. The left-hand side usually contains a column and the
right-hand side contains the values. For example, color='red' means the
left-hand side is color and the right-hand side is 'red' and the relational
operator is (=).
■ <LHS-constrained>
When comparing a selection from the query with a selection from the
materialized view, the left-hand side of the selection is compared with the
left-hand side of the query. If they match, they are said to be LHS-constrained or
just constrained for short.
■ <RHS-constrained>
When comparing a selection from the query with a selection from the
materialized view, the right-hand side of the selection is compared with the
right-hand side of the query. If they match, they are said to be RHS-constrained
or just constrained. Note that before comparing the selections, the
LHS/RHS-expression is converted to a canonical form and then the comparison
is done. This means that expressions such as <column1 + 5> and <5 +
column1> will match and be constrained.
Although selection compatibility does not restrict the general form of the WHERE,
there is an optimal pattern and normally most queries fall into this pattern as
follows:
(<join predicate> AND <join predicate> AND ....) AND
(<selection predicate> AND|OR <selection predicate> .... )
The join compatibility check operates on the joins and the selection compatibility
operates on the selections. If the WHERE clause has an OR at the top, then the
optimizer first checks for common predicates under the OR. If found, the common
predicates are factored out from under the OR then joined with an AND back to the
OR. This helps to put the WHERE into the optimal pattern. This is done only if OR
occurs at the top of the WHERE clause. For example, if the WHERE clause is:
(sales.prod_id = prod.prod_id AND prod.prod_name = 'Kids Polo Shirt')
OR (sales.prod_id = prod.prod_id AND prod.prod_name = 'Kids Shorts')
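Factoring the common join predicate out from under the OR and joining it back
with an AND, the clause becomes (a sketch of the transformation just described):
sales.prod_id = prod.prod_id AND
(prod.prod_name = 'Kids Polo Shirt' OR prod.prod_name = 'Kids Shorts')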
Selection predicates in the query and the materialized view are classified into
categories, including the following:
■ IN lists
Single-column and multi-column IN lists such as WHERE (prod_id) IN (102, 233,
....).
Note that selections of the form (column1='v1' OR column1='v2' OR
column1='v3' OR ....) are treated as a group and classified as an IN list.
■ IS [NOT] NULL
■ [NOT] LIKE
■ Other
Other selections are those for which selection compatibility cannot determine
containment of data, for example, EXISTS.
When comparing a selection from the query with a selection from the materialized
view, the left-hand side of the materialized view selection is compared with the
left-hand side of the query selection. If they match, they are said to be
LHS-constrained, or constrained for short.
If the selections are constrained, then the right-hand side values are checked for
containment. That is, the right-hand side values of the query selection must be
contained by the right-hand side values of the materialized view selection. For example:
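Suppose (an illustrative sketch; the values are hypothetical) that the query and the
materialized view contain these selections:
Query:             prod_id = 102
Materialized view: prod_id BETWEEN 100 AND 200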
In the above example, the selections are constrained on prod_id and the
right-hand side value of the query 102 is within the range of the materialized view.
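As a further sketch, suppose the query contains two range selections on prod_id
and the materialized view contains a single BETWEEN selection:
Query:             prod_id >= 100 AND prod_id < 200
Materialized view: prod_id BETWEEN 10 AND 500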
In the above example, the selections are constrained on prod_id and the query
range is within the materialized view range. In this example, we notice that both
query selections are constrained by the same materialized view selection. The
left-hand side can be an expression.
If the left-hand side and the right-hand side are constrained and the <selection
relop> is the same, then generally the selection can be dropped from the rewritten
query. Otherwise, the selection must be kept to filter out extra data from the
materialized view.
If query rewrite can drop the selection from the rewritten query, then any columns
from the selection may not have to be in the materialized view, so more rewrites can
be done with less data.
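Now consider a sketch of a case that fails (the values are illustrative):
Query:             prod_id BETWEEN 10 AND 20
Materialized view: prod_id BETWEEN 10 AND 200 AND prod_name = 'Shorts'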
In the above example, the materialized view selection with prod_name is not
constrained. The materialized view is more restrictive than the query because it only
contains the product Shorts; therefore, query rewrite will not occur.
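A sketch of a multi-column IN list case (the values are illustrative):
Query:             (prod_id, cust_id) IN ((1022, 1000), (1033, 2000))
Materialized view: (prod_id, cust_id) IN ((1022, 1000), (1033, 2000), (1044, 3000))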
In the above example, the materialized view IN-list columns are constrained by the
columns in the query's multi-column IN list. Furthermore, the right-hand side values
of the query selection are contained by the materialized view, so rewrite will occur.
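A sketch of the reverse case, in which the query uses individual selections:
Query:             prod_id = 1022 AND cust_id = 1000
Materialized view: (prod_id, cust_id) IN ((1022, 1000), (1033, 2000))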
In the above example, the materialized view IN-list columns are fully constrained
by the columns in the query selections. Furthermore, the right-hand side values of
the query selection are contained by the materialized view. However, the following
example fails the selection compatibility check:
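A sketch of a failing case (the values are illustrative):
Query:             prod_id = 1022
Materialized view: (prod_id, cust_city) IN ((1022, 'Boston'), (1033, 'Madison'))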
In the above example, the materialized view IN-list column cust_city is not
constrained, so the materialized view is more restrictive than the query. Selection
compatibility also works with complex ORs. Assume that the shape of the
WHERE clause is as follows:
(selection AND selection AND ...) OR (selection AND selection AND ...)
Each group of selections separated by AND is related and the group is called a
disjunct. The disjuncts are separated by ORs. Selection compatibility requires that
every disjunct in the query be contained by some disjunct in the materialized view.
Otherwise, the materialized view is more restrictive than the query. The
materialized view disjuncts do not have to match any query disjunct. This just
means that the materialized view has more data than the query requires. When
comparing a disjunct from the query with a disjunct of the materialized view, the
normal selection compatibility rules apply as specified in the previous discussion.
For example:
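A sketch of this case (the values are illustrative):
Query:             (prod_id BETWEEN 10 AND 20 AND cust_city = 'Boston')
Materialized view: (prod_id < 5) OR
                   (prod_id BETWEEN 10 AND 30 AND cust_city IN ('Boston', 'Madison'))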
In the above example, the query has a single disjunct (group of selections separated
by AND). The materialized view has two disjuncts separated by OR. The query
disjunct is contained by the second materialized view disjunct so selection
compatibility succeeds. It is clear that the materialized view contains more data
than needed by the query so the query can be rewritten.
For example, here is a simple materialized view definition:
CREATE MATERIALIZED VIEW cal_month_sales_id_mv
BUILD IMMEDIATE
REFRESH FORCE
ENABLE QUERY REWRITE
AS
SELECT t.calendar_month_desc,
SUM(s.amount_sold) AS dollars
FROM sales s,
times t
WHERE s.time_id = t.time_id AND s.cust_id = 10
GROUP BY t.calendar_month_desc;
The following query could be rewritten to use this materialized view because the
query asks for the amount where the customer ID is 10 and this is contained in the
materialized view.
SELECT t.calendar_month_desc, SUM(s.amount_sold) AS dollars
FROM times t, sales s
WHERE s.time_id = t.time_id AND s.cust_id = 10
GROUP BY t.calendar_month_desc;
Because the predicate s.cust_id = 10 selects the same data in the query and in
the materialized view, it is dropped from the rewritten query. This means the
rewritten query looks like:
SELECT mv.calendar_month_desc, mv.dollars FROM cal_month_sales_id_mv mv;
Query rewrite can also occur when the query specifies a range of values, such as
s.prod_id > 10000 and s.prod_id < 20000, as long as the range specified in
the query is within the range specified in the materialized view. For example, if
there is a materialized view defined as:
CREATE MATERIALIZED VIEW product_sales_mv
BUILD IMMEDIATE
REFRESH FORCE
ENABLE QUERY REWRITE
AS
SELECT p.prod_name, SUM(s.amount_sold) AS dollar_sales
FROM products p, sales s
WHERE p.prod_id = s.prod_id
GROUP BY prod_name
HAVING SUM(s.amount_sold) BETWEEN 5000 AND 50000;
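A query such as the following, whose HAVING range lies within the range stored in
the materialized view, could then be rewritten (a sketch; the range values are
illustrative):
SELECT p.prod_name, SUM(s.amount_sold) AS dollar_sales
FROM products p, sales s
WHERE p.prod_id = s.prod_id
GROUP BY prod_name
HAVING SUM(s.amount_sold) BETWEEN 10000 AND 20000;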
Rewrite with select expressions is also supported when the expression evaluates to
a constant, such as TO_DATE('12-SEP-1999','DD-Mon-YYYY'). For example, if
an existing materialized view is defined as:
CREATE MATERIALIZED VIEW sales_on_valentines_day_99_mv
BUILD IMMEDIATE
REFRESH FORCE
ENABLE QUERY REWRITE
AS
SELECT prod_id, cust_id, amount_sold
FROM sales s, times t
WHERE s.time_id = t.time_id
AND t.time_id = TO_DATE('14-FEB-1999', 'DD-MON-YYYY');
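A query that asks for the same day's sales, such as the following sketch, could then
be rewritten to use this materialized view:
SELECT s.prod_id, s.cust_id, s.amount_sold
FROM sales s, times t
WHERE s.time_id = t.time_id
AND t.time_id = TO_DATE('14-FEB-1999', 'DD-MON-YYYY');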
Rewrite can also occur against a materialized view when the selection is contained
in an IN expression. For example, given the following materialized view definition,
CREATE MATERIALIZED VIEW popular_promo_sales_mv
BUILD IMMEDIATE
REFRESH FORCE
ENABLE QUERY REWRITE
AS
SELECT p.promo_name, SUM(s.amount_sold) AS sum_amount_sold
FROM promotions p, sales s
WHERE s.promo_id = p.promo_id
AND promo_name IN ('coupon', 'premium', 'giveaway')
GROUP BY promo_name;
The query,
SELECT p.promo_name, SUM(s.amount_sold)
FROM promotions p, sales s
WHERE s.promo_id = p.promo_id
AND promo_name IN ('coupon', 'premium')
GROUP BY promo_name;
is rewritten to:
SELECT * FROM popular_promo_sales_mv WHERE promo_name IN ('coupon', 'premium');
You can also use expressions in selection predicates, of the general form:
<expression> <relational operator> <constant>
For example, a query against the same tables that restricts promo_name to
'coupon' and, in its HAVING clause, requires SUM(s.amount_sold) to exceed
1000 (a restriction the materialized view does not impose) can be rewritten, as
illustrated below.
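A sketch of such a query (against the promotions and sales tables above; the
threshold value is illustrative) is:
SELECT p.promo_name, SUM(s.amount_sold) AS sum_amount_sold
FROM promotions p, sales s
WHERE s.promo_id = p.promo_id
AND promo_name = 'coupon'
GROUP BY promo_name
HAVING SUM(s.amount_sold) > 1000;
This query would then be rewritten to: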
SELECT * FROM popular_promo_sales_mv
WHERE promo_name = 'coupon' AND sum_amount_sold > 1000;
This is an example where the query is more restrictive than the definition of the
materialized view, so rewrite can occur. However, if the query had selected
promo_category, then it could not have been rewritten against the materialized
view, because the materialized view definition does not contain that column.
For another example, if the definition of a materialized view restricts a city name
column to Boston, then a query that selects Seattle as a value for this column
can never be rewritten with that materialized view, but a query that restricts city
name to Boston and restricts a column value that is not restricted in the
materialized view could be rewritten to use the materialized view.
All the rules noted previously also apply when predicates are combined with an OR
operator. The simple predicates, or simple predicates connected by ANDs, are
considered separately. Each predicate in the query must appear in the materialized
view if rewrite is to occur.
For example, the query could have a restriction like city='Boston' OR city
='Seattle' and to be eligible for rewrite, the materialized view that the query
might be rewritten against must have the same restriction. In fact, the materialized
view could have additional restrictions, such as city='Boston' OR
city='Seattle' OR city='Cleveland' and rewrite might still be possible.
Note, however, that the reverse is not true. If the query had the restriction city =
'Boston' OR city='Seattle' OR city='Cleveland' and the materialized
view only had the restriction city='Boston' OR city='Seattle', then rewrite
would not be possible since the query seeks more data than is contained in the
restricted subset of data stored in the materialized view.
[Figure: materialized view join graph over the sales, customers, products, and times
tables, showing the common subgraph shared by the query and the materialized view
delta]
Common Joins The common join pairs between the query and the materialized view
must be of the same type, or the join in the query must be derivable from the join in
the materialized view. For example, if a materialized view contains an outer join of
table A with table B, and a query contains an inner join of table A with table B, the
result of the inner join can be derived by filtering the anti-join rows from the result
of the outer join.
For example, consider this query:
SELECT p.prod_name, t.week_ending_day,
SUM(amount_sold)
FROM sales s, products p, times t
WHERE s.time_id=t.time_id
AND s.prod_id = p.prod_id
AND t.week_ending_day BETWEEN TO_DATE('01-AUG-1999', 'DD-MON-YYYY')
AND TO_DATE('10-AUG-1999', 'DD-MON-YYYY')
GROUP BY prod_name, week_ending_day;
The common joins between this query and the materialized view
join_sales_time_product_mv are:
s.time_id = t.time_id AND s.prod_id = p.prod_id
In general, if you use an outer join in a materialized view containing only joins, you
should put in the materialized view either the primary key or the rowid on the right
side of the outer join. For example, in the previous example,
join_sales_time_product_oj_mv, there is a primary key on both sales and products.
Another example of when a materialized view containing only joins is used is the
case of semi-join rewrites; that is, when a query contains either an EXISTS or an IN
subquery with a single table.
Consider this query, which reports the products that had sales greater than $1,000.
SELECT DISTINCT prod_name
FROM products p
WHERE EXISTS
(SELECT *
FROM sales s
WHERE p.prod_id=s.prod_id
AND s.amount_sold > 1000);
This query contains a semi-join between the products table and the sales table:
s.prod_id = p.prod_id
Rewrites with semi-joins are currently restricted to materialized views with joins
only and are not available for materialized views with joins and aggregates.
Query Delta Joins A query delta join is a join that appears in the query but not in the
materialized view. Any number and type of delta joins in a query are allowed and
they are simply retained when the query is rewritten with a materialized view.
Upon rewrite, the materialized view is joined to the appropriate tables in the query
delta.
For example, consider this query:
SELECT p.prod_name, t.week_ending_day, c.cust_city,
SUM(s.amount_sold)
FROM sales s, products p, times t, customers c
WHERE s.time_id=t.time_id
AND s.prod_id = p.prod_id
AND s.cust_id = c.cust_id
GROUP BY prod_name, week_ending_day, cust_city;
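Assuming a join materialized view join_sales_time_product_mv that stores
prod_name, week_ending_day, cust_id, and amount_sold (such a view appears in a
later example), the rewritten query might take this form (a sketch):
SELECT mv.prod_name, mv.week_ending_day, c.cust_city,
SUM(mv.amount_sold)
FROM join_sales_time_product_mv mv, customers c
WHERE mv.cust_id = c.cust_id
GROUP BY mv.prod_name, mv.week_ending_day, c.cust_city;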
Materialized View Delta Joins A materialized view delta join is a join that appears in
the materialized view but not the query. All delta joins in a materialized view are
required to be lossless with respect to the result of common joins. A lossless join
guarantees that the result of common joins is not restricted. A lossless join is one
where, if two tables called A and B are joined together, rows in table A will always
match with rows in table B and no data will be lost, hence the term lossless join. For
example, every row with the foreign key matches a row with a primary key
provided no nulls are allowed in the foreign key. Therefore, to guarantee a lossless
join, it is necessary to have FOREIGN KEY, PRIMARY KEY, and NOT NULL constraints
on appropriate join keys. Alternatively, if the join between tables A and B is an outer
join (A being the outer table), it is lossless as it preserves all rows of table A.
All delta joins in a materialized view are required to be non-duplicating with
respect to the result of common joins. A non-duplicating join guarantees that the
result of common joins is not duplicated. For example, a non-duplicating join is one
where, if table A and table B are joined together, rows in table A will match with at
most one row in table B and no duplication occurs. To guarantee a non-duplicating
join, the key in table B must be constrained to unique values by using a primary key
or unique constraint.
Consider this query that joins sales and times:
SELECT t.week_ending_day,
SUM(s.amount_sold)
FROM sales s, times t
WHERE s.time_id = t.time_id
AND t.week_ending_day BETWEEN TO_DATE('01-AUG-1999', 'DD-MON-YYYY')
AND TO_DATE('10-AUG-1999', 'DD-MON-YYYY')
GROUP BY week_ending_day;
The query can also be rewritten with the materialized view
join_sales_time_product_oj_mv, where foreign key constraints are not needed. This
view contains an outer join (s.prod_id=p.prod_id(+)) between sales and
products, which makes the join lossless. If p.prod_id is a primary key, then the
non-duplicating condition is satisfied as well and the optimizer will rewrite the
query as:
SELECT week_ending_day,
SUM(amount_sold)
FROM join_sales_time_product_oj_mv
WHERE week_ending_day BETWEEN TO_DATE('01-AUG-1999', 'DD-MON-YYYY')
AND TO_DATE('10-AUG-1999', 'DD-MON-YYYY')
GROUP BY week_ending_day;
Current limitations restrict most rewrites with outer joins to materialized views
with joins only. There is limited support for rewrites with materialized aggregate
views with outer joins, so those views should rely on foreign key constraints to
assure losslessness of materialized view delta joins.
When the column data required by a query is not available from a materialized
view, that data can still be obtained by joining the materialized view back to the
table that contains the required columns, provided the materialized view contains a
key that functionally determines the required column data.
For example, consider this query:
SELECT p.prod_category, t.week_ending_day,
SUM(s.amount_sold)
FROM sales s, products p, times t
WHERE s.time_id=t.time_id
AND s.prod_id=p.prod_id
AND p.prod_category='CD'
GROUP BY p.prod_category, t.week_ending_day;
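For instance, assuming a hypothetical materialized view sum_sales_prod_week_mv
that stores prod_id, week_ending_day, and SUM(s.amount_sold) AS
sum_amount_sold, but not prod_category, the query might be rewritten by joining
the view back to products (prod_id, the primary key, functionally determines
prod_category):
SELECT p.prod_category, mv.week_ending_day,
SUM(mv.sum_amount_sold)
FROM sum_sales_prod_week_mv mv, products p
WHERE mv.prod_id = p.prod_id
AND p.prod_category = 'CD'
GROUP BY p.prod_category, mv.week_ending_day;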
Here the products table is called a joinback table because it was originally joined
in the materialized view but joined again in the rewritten query.
There are two ways to declare functional dependency:
■ Using the primary key constraint (as shown in the example above)
■ Using the DETERMINES clause of a dimension
The DETERMINES clause of a dimension definition might be the only way you could
declare functional dependency when the column that determines another column
cannot be a primary key. For example, the products table is a denormalized
dimension table that has columns such as prod_id, prod_name,
prod_subcategory, prod_subcat_desc, prod_category, and prod_cat_desc;
prod_id functionally determines prod_name, prod_subcategory functionally
determines prod_subcat_desc, and prod_category determines prod_cat_desc.
The first functional dependency can be established by declaring prod_id as the
primary key, but not the second, because the prod_subcategory column contains
duplicate values. In this situation, you can use the
DETERMINES clause of a dimension to declare the second functional dependency.
The following dimension definition illustrates how the functional dependencies are
declared.
CREATE DIMENSION products_dim
LEVEL product IS (products.prod_id)
LEVEL subcategory IS (products.prod_subcategory)
LEVEL category IS (products.prod_category)
HIERARCHY prod_rollup (
product CHILD OF
subcategory CHILD OF
category
)
ATTRIBUTE product DETERMINES products.prod_name
ATTRIBUTE product DETERMINES products.prod_desc
ATTRIBUTE subcategory DETERMINES products.prod_subcat_desc
ATTRIBUTE category DETERMINES products.prod_cat_desc;
The hierarchy prod_rollup declares hierarchical relationships that are also 1:n
functional dependencies. The 1:1 functional dependencies are declared using the
DETERMINES clause, as seen when prod_subcategory functionally determines
prod_subcat_desc.
The following query:
SELECT p.prod_subcat_desc, t.week_ending_day,
SUM(s.amount_sold)
FROM sales s, products p, times t
WHERE s.time_id=t.time_id
AND s.prod_id=p.prod_id
AND p.prod_subcat_desc LIKE '%Men'
GROUP BY p.prod_subcat_desc, t.week_ending_day;
can be rewritten by joining the materialized view (aliased mv below) to an inline
view over the products table (aliased iv), which supplies prod_subcat_desc. The
rewritten query ends as follows:
...
FROM products) iv
WHERE mv.prod_subcategory=iv.prod_subcategory
AND iv.prod_subcat_desc LIKE '%Men'
GROUP BY iv.prod_subcat_desc, mv.week_ending_day;
Note that, for this rewrite, the data sufficiency check determines that a joinback to
the products table is necessary, and the grouping compatibility check determines
that aggregate rollup is necessary.
The argument of an aggregate such as SUM can be an arithmetic expression like A+B.
The optimizer will try to match an aggregate SUM(A+B) in a query with an
aggregate SUM(A+B) or SUM(B+A) stored in a materialized view. In other words,
expression equivalence is used when matching the argument of an aggregate in a
query with the argument of a similar aggregate in a materialized view. To
accomplish this, Oracle converts the aggregate argument expression into a
canonical form such that two different but equivalent expressions convert into the
same canonical form. For example, A*(B-C), A*B-C*A, (B-C)*A, and -A*C+A*B
all convert into the same canonical form and, therefore, they are successfully
matched.
Query Rewrite with Inline Views Oracle supports general query rewrite when the user
query contains an inline view, or a subquery in the FROM list. Query rewrite
matches inline views in the materialized view with inline views in the request
query when the text of the two inline views exactly match. In this case, rewrite
treats the matching inline view as it would a named view, and general rewrite
processing is possible.
Here is an example where the materialized view contains an inline view, and the
query has the same inline view, but the aliases for these views are different.
Previously, this query could not be rewritten because neither exact text match nor
partial text match is possible.
Here is the materialized view definition:
CREATE MATERIALIZED VIEW inline_example
ENABLE QUERY REWRITE AS
SELECT t.calendar_month_name, t.calendar_year, p.prod_category,
SUM(V1.revenue) AS sum_revenue
FROM times t, products p,
(SELECT time_id, prod_id, amount_sold*0.2 as revenue FROM sales) V1
WHERE t.time_id = V1.time_id
AND p.prod_id = V1.prod_id
GROUP BY calendar_month_name, calendar_year, prod_category ;
And here is the query that will be rewritten to use the materialized view:
SELECT t.calendar_month_name, t.calendar_year, p.prod_category,
SUM(X1.revenue) AS sum_revenue
FROM times t, products p,
(SELECT time_id, prod_id, amount_sold*0.2 AS revenue FROM sales) X1
WHERE t.time_id = X1.time_id
AND p.prod_id = X1.prod_id
GROUP BY calendar_month_name, calendar_year, prod_category ;
Query Rewrite with Self Joins Query rewrite of queries that contain multiple
references to the same table (self joins) is possible, to the extent that general
rewrite can occur when the query and the materialized view definition have the
same aliases for the multiple references to a table. This allows Oracle to establish a
distinct identity for each table reference, which in turn allows query rewrite.
Below is an example of a materialized view and a query. In this example, the query
is missing a reference to a column in a table so an exact text match will not work.
But general query rewrite can occur because the aliases for the table references
match.
To demonstrate the self-join rewriting possibility with the Sales History schema,
we are assuming the following addition to include the actual shipping and payment
date in the fact table, referencing the same dimension table times. This is for
demonstration purposes only and will not return any results:
ALTER TABLE sales ADD (time_id_ship DATE);
ALTER TABLE sales ADD (CONSTRAINT time_id_book_fk FOREIGN key (time_id_ship)
REFERENCES times(time_id) ENABLE NOVALIDATE);
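Because the discussion that follows also references a payment date column
(s.time_id_paid), a parallel addition for the payment date presumably accompanies
this change; a sketch:
ALTER TABLE sales ADD (time_id_paid DATE);
ALTER TABLE sales ADD (CONSTRAINT time_id_paid_fk FOREIGN KEY (time_id_paid)
REFERENCES times(time_id) ENABLE NOVALIDATE);
A definition for the materialized view sales_shipping_lag_mv used in the following
examples, consistent with the surrounding discussion, might look like this sketch
(the extra time_id_ship column in the SELECT list is what would make a query that
omits it fail the exact text match):
CREATE MATERIALIZED VIEW sales_shipping_lag_mv
ENABLE QUERY REWRITE
AS
SELECT s.prod_id, s.time_id_ship,
t2.fiscal_week_number - t1.fiscal_week_number AS lag
FROM times t1, sales s, times t2
WHERE t1.time_id = s.time_id
AND t2.time_id = s.time_id_ship;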
The following query fails the exact text match test but is rewritten because the
aliases for the table references match:
SELECT s.prod_id,
t2.fiscal_week_number - t1.fiscal_week_number AS lag
FROM times t1, sales s, times t2
WHERE t1.time_id = s.time_id
AND t2.time_id = s.time_id_ship;
Note that Oracle performs other checks to ensure the correct match of an instance of
a multiply instanced table in the request query with the corresponding table
instance in the materialized view. For instance, in the following example, Oracle
correctly determines that the matching alias names used for the multiple instances
of the times table do not establish a match between the multiple instances of that
table in the materialized view.
The following query cannot be rewritten using sales_shipping_lag_mv, even
though the alias names of the multiply instanced times table match, because the
joins are not compatible between the instances of times aliased by t2:
SELECT s.prod_id,
t2.fiscal_week_number - t1.fiscal_week_number AS lag
FROM times t1, sales s, times t2
WHERE t1.time_id = s.time_id AND t2.time_id = s.time_id_paid;
The request query above joins the instance of the times table aliased by t2 on the
s.time_id_paid column, while the materialized view joins the instance of the
times table aliased by t2 on the s.time_id_ship column. Because the join
conditions differ, Oracle correctly determines that rewrite cannot occur.
For example, assume that the sales fact table is partitioned by quarter, with
partition bounds such as:
PARTITION SALES_Q3_1998
VALUES LESS THAN (TO_DATE('01-OCT-1998', 'DD-MON-YYYY')), ...
Suppose new data will be inserted for December 2000, which will end up in the
partition SALES_Q4_2000.
Until a refresh is done, the materialized view is stale and cannot be used for rewrite
in enforced mode. The fresh rows in the materialized view, that is, the data of
all partitions in which Oracle knows that no changes have occurred, can be
represented by modifying the materialized view's defining query as follows:
SELECT s.time_id, p.prod_subcategory, c.cust_city,
SUM(s.amount_sold) AS sum_amount_sold
FROM sales s, products p, customers c
WHERE s.cust_id = c.cust_id
AND s.prod_id = p.prod_id
AND s.time_id < TO_DATE('01-OCT-2000','DD-MON-YYYY')
GROUP BY time_id, prod_subcategory, cust_city;
Note that the freshness of partially stale materialized views is tracked on a
per-partition basis, not on a logical basis. Because the partitioning strategy of the
sales fact table is quarterly, changes in December 2000 cause the complete
partition SALES_Q4_2000 to become stale.
Consider the following query, which asks for sales in quarters 1 and 2 of 2000:
SELECT s.time_id, p.prod_subcategory, c.cust_city,
SUM(s.amount_sold) AS sum_amount_sold
FROM sales s, products p, customers c
WHERE s.cust_id = c.cust_id
AND s.prod_id = p.prod_id
AND s.time_id BETWEEN TO_DATE('01-JAN-2000', 'DD-MON-YYYY')
AND TO_DATE('01-JUL-2000', 'DD-MON-YYYY')
GROUP BY time_id, prod_subcategory, cust_city;
Oracle knows that those ranges of rows in the materialized view are fresh and can
therefore rewrite the above query with the materialized view. The rewritten query
looks as follows:
SELECT time_id, prod_subcategory, cust_city, sum_amount_sold
FROM sum_sales_per_city_mv
WHERE time_id BETWEEN TO_DATE('01-JAN-2000', 'DD-MON-YYYY')
AND TO_DATE('01-JUL-2000', 'DD-MON-YYYY');
Instead of the partitioning key, a partition marker (a function that identifies the
partition given a rowid) can be present in the select (and GROUP BY list) of the
materialized view. You can use the materialized view to rewrite queries that require
data from only certain partitions (identifiable by the partition-marker), for instance,
queries that reference a partition-extended table-name or queries that have a
predicate specifying ranges of the partitioning keys containing entire partitions. See
Chapter 8, "Materialized Views", for details regarding the supplied partition marker
function DBMS_MVIEW.PMARKER.
The following example illustrates the use of a partition marker in the materialized
view instead of the direct usage of the partition key column.
CREATE MATERIALIZED VIEW sum_sales_per_city_2_mv
ENABLE QUERY REWRITE
AS
SELECT DBMS_MVIEW.PMARKER(s.rowid) AS pmarker,
t.fiscal_quarter_desc, p.prod_subcategory, c.cust_city,
SUM(s.amount_sold) AS sum_amount_sold
FROM sales s, products p, customers c, times t
WHERE s.cust_id = c.cust_id
AND s.prod_id = p.prod_id
AND s.time_id = t.time_id
GROUP BY DBMS_MVIEW.PMARKER(s.rowid),
prod_subcategory, cust_city, fiscal_quarter_desc;
Suppose you know that the partition SALES_Q1_2000 is fresh. You can rewrite the
following query using the above materialized view. This query restricts the data to
the partition SALES_Q1_2000, that is, only the first quarter of 2000, and selects
only certain values of cust_city.
SELECT s.city, SUM(f.dollar_sales)
FROM store s, fact f
WHERE s.store_id < 25
AND s.store_name = 'Sears'
GROUP BY s.city;
The same query could have been expressed with a partition-extended table name as
in the following statement:
SELECT s.city, SUM(f.dollar_sales)
FROM store s, fact partition(store_id_1_to_24) f
WHERE s.store_name = 'Sears'
GROUP BY s.city;
So the query can be rewritten against the materialized view without accessing stale
data.
For example, assume that you had created materialized views
join_sales_time_product_mv and sum_sales_time_product_mv:
CREATE MATERIALIZED VIEW join_sales_time_product_mv
ENABLE QUERY REWRITE
AS
SELECT p.prod_id, p.prod_name, t.time_id, t.week_ending_day,
s.channel_id, s.promo_id, s.cust_id,
s.amount_sold
FROM sales s, products p, times t
WHERE s.time_id=t.time_id
AND s.prod_id = p.prod_id;
Oracle first tries to rewrite it with a materialized aggregate view and finds there is
none eligible (note that the single-table aggregate materialized view
sum_sales_store_time_mv cannot yet be used), and then tries a rewrite with a
materialized join view and finds that join_sales_time_product_mv is eligible for
rewrite.
The rewritten query has this form:
SELECT mv.prod_name, mv.week_ending_day, SUM(mv.amount_sold)
FROM join_sales_time_product_mv mv
GROUP BY mv.prod_name, mv.week_ending_day;
Because a rewrite occurred, Oracle tries the process again. This time the above
query can be rewritten with the single-table aggregate materialized view
sum_sales_store_time_mv.
Materialized View has Simple GROUP BY and Query has Extended GROUP BY
When the query contains CUBE, ROLLUP, or a concatenation of them, it can be
rewritten in terms of the materialized view if all the GROUP BY expressions in the
query either match or are functionally dependent on the GROUP BY expressions of
the materialized view. For example, the query:
SELECT c.cust_city, p.prod_subcategory, AVG(s.amount_sold) AS avg_sales_sold
FROM sales s, customers c, products p
WHERE s.prod_id = p.prod_id AND s.cust_id = c.cust_id
GROUP BY CUBE(c.cust_city, p.prod_subcategory);
For instance, a rewritten query of this kind, against a materialized view such as
sum_sales_pscat_month_city_mv, might end as follows:
...
SUM(mv.sum_amount_sold) AS sum_amount_sold
FROM sum_sales_pscat_month_city_mv mv
GROUP BY GROUPING SETS ((mv.prod_subcategory, mv.calendar_month_desc),
(mv.cust_city, mv.prod_subcategory));
Materialized View has Extended GROUP BY and Query has Simple GROUP BY
To rewrite queries in this scenario, Oracle requires the materialized view satisfy two
additional conditions:
■ to contain a grouping distinguisher, which is the GROUPING_ID function on all
GROUP BY expressions. For example, if the GROUP BY clause of the materialized
view is GROUP BY CUBE(a, b), then the SELECT list should contain
GROUPING_ID(a, b)
and
■ the GROUP BY clause of the materialized view should not result in any duplicate
groupings. For example, GROUP BY GROUPING SETS ((a, b), (a,b))
would disqualify a materialized view from general rewrite.
Oracle finds the grouping with the lowest cost from which the query can be
computed and uses that for rewrite. For example, consider the materialized view:
CREATE MATERIALIZED VIEW sum_grouping_set_mv
ENABLE QUERY REWRITE
AS
SELECT p.prod_category, p.prod_subcategory, c.cust_state_province, c.cust_city,
GROUPING_ID(p.prod_category,p.prod_subcategory,
c.cust_state_province,c.cust_city) AS gid,
SUM(s.amount_sold) AS sum_amount_sold
FROM sales s, products p, customers c
WHERE s.prod_id = p.prod_id AND s.cust_id = c.cust_id
GROUP BY GROUPING SETS
(
(p.prod_category, p.prod_subcategory, c.cust_city),
(p.prod_category, p.prod_subcategory, c.cust_state_province, c.cust_city),
(p.prod_category, p.prod_subcategory)
);
For this type of rewrite to occur, the predicates in the WHERE clause of the
materialized view and the query must match (answers could otherwise be
incorrect).
This type of rewrite is useful for OLAP applications where queries ask for
aggregations from multiple levels of a cube. For example, you can construct a sales
cube with two dimensions: product and customer. The product dimension has
two levels, prod_category and prod_subcategory, and the customer
dimension has two levels, cust_state_province and cust_city. In a cube, we use
a concatenated rollup of the dimensions. The rollup is arranged with decreasing
hierarchy levels. So the sales cube can be represented as a view:
CREATE VIEW sales_cube_view
AS
SELECT p.prod_category, p.prod_subcategory, c.cust_state_province,
c.cust_city, SUM(s.amount_sold) as sum_amount_sold
FROM sales s, products p, customers c
WHERE s.prod_id = p.prod_id AND s.cust_id = c.cust_id
GROUP BY ROLLUP(p.prod_category, p.prod_subcategory),
ROLLUP(c.cust_state_province, c.cust_city);
To support that cube, you would build a corresponding materialized view:
CREATE MATERIALIZED VIEW sales_cube_mv
ENABLE QUERY REWRITE
AS
SELECT p.prod_category, p.prod_subcategory, c.cust_state_province, c.cust_city,
GROUPING_ID(p.prod_category,p.prod_subcategory,c.cust_state_province,
c.cust_city) AS gid,
SUM(s.amount_sold) as sum_amount_sold,
COUNT(s.amount_sold) AS count_amount_sold,
COUNT(*) AS cnt_star
FROM sales s, products p, customers c
WHERE s.prod_id = p.prod_id AND s.cust_id = c.cust_id
GROUP BY ROLLUP(p.prod_category, p.prod_subcategory),
ROLLUP(c.cust_state_province, c.cust_city);
Using sales_cube_view, OLAP queries can ask for multiple levels of
aggregation in a single query. For example, a query might ask for the sum of sales
of product category Men in San Francisco by prod_category and prod_subcategory.
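A sketch of such a query (the filter values are illustrative) is:
SELECT prod_category, prod_subcategory, cust_state_province, cust_city,
sum_amount_sold
FROM sales_cube_view
WHERE prod_category = 'Men'
AND cust_city = 'San Francisco';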
Note that the rewrite requires simple selection from the materialized view container
table. No rollup is required.
If none of the materialized views contain all groupings of the query, then the
materialized view containing the smallest grouping from which all groupings of the
query can be computed is selected for rewrite. As an example, Oracle rewrites the
query:
SELECT p.prod_category, p.prod_subcategory, c.cust_city,
SUM(s.amount_sold) AS sum_amount_sold
FROM sales s, products p, customers c
WHERE s.prod_id = p.prod_id AND s.cust_id = c.cust_id
GROUP BY GROUPING SETS
(
(p.prod_category, c.cust_city),
(p.prod_subcategory, c.cust_city));
as:
SELECT prod_category, prod_subcategory, cust_city,
SUM(sum_amount_sold) AS sum_amount_sold
FROM sum_grouping_set_mv
WHERE gid = <grouping identifier of (prod_category,
prod_subcategory, cust_city)>
GROUP BY GROUPING SETS ((prod_category, cust_city),
(prod_subcategory, cust_city));
Explain Plan
The EXPLAIN PLAN facility is used as described in Oracle9i SQL Reference. For query
rewrite, all you need to check is that the object_name column in PLAN_TABLE
contains the materialized view name. If it does, then query rewrite will occur when
this query is executed.
In this example, the materialized view cal_month_sales_mv has been created.
CREATE MATERIALIZED VIEW cal_month_sales_mv
ENABLE QUERY REWRITE
AS
SELECT t.calendar_month_desc, SUM(s.amount_sold) AS dollars
FROM sales s, times t
WHERE s.time_id = t.time_id
GROUP BY t.calendar_month_desc;
If EXPLAIN PLAN is used on the following SQL statement, the results are placed in
the default table PLAN_TABLE. However, PLAN_TABLE must first be created using
the utlxplan.sql script.
EXPLAIN PLAN
FOR
SELECT t.calendar_month_desc, SUM(s.amount_sold)
FROM sales s, times t
WHERE s.time_id = t.time_id
GROUP BY t.calendar_month_desc;
For the purposes of query rewrite, the only information of interest from PLAN_
TABLE is the OBJECT_NAME, which identifies the objects that will be used to
execute this query. Therefore, you would expect to see the object name
CAL_MONTH_SALES_MV in the output, as illustrated below.
OBJECT_NAME
-----------------------
CAL_MONTH_SALES_MV
2 rows selected.
DBMS_MVIEW.EXPLAIN_REWRITE Procedure
It can be difficult to understand why a query did not rewrite. The rules governing
query rewrite eligibility are quite complex, involving various factors such as
constraints, dimensions, query rewrite integrity modes, freshness of the
materialized views, and the types of queries themselves. In addition, you may want
to know why query rewrite chose a particular materialized view instead of another.
To help with this matter, Oracle provides a PL/SQL procedure (DBMS_
MVIEW.EXPLAIN_REWRITE) to advise you when a query can be rewritten and, if
not, why not. Using the results from DBMS_MVIEW.EXPLAIN_REWRITE, you can
take the appropriate action needed to make a query rewrite if at all possible.
DBMS_MVIEW.EXPLAIN_REWRITE Syntax
You can obtain the output from DBMS_MVIEW.EXPLAIN_REWRITE in two ways.
The first is to use a table, while the second is to create a varray. The following shows
the basic syntax for using an output table:
DBMS_MVIEW.EXPLAIN_REWRITE (
QUERY VARCHAR2(2000),
MV VARCHAR2(30),
STATEMENT_ID VARCHAR2(30)
);
When the MV parameter is specified, the named materialized view is
considered for rewriting the given query. When SCHEMA is omitted and only MV is
specified, EXPLAIN_REWRITE looks for the materialized view in the current
schema.
Using REWRITE_TABLE
Output of EXPLAIN_REWRITE can be directed to a table named REWRITE_TABLE.
You can create this output table by running the Oracle-supplied script
utlxrw.sql. This script can be found in the admin directory. The format of
REWRITE_TABLE is given below.
CREATE TABLE REWRITE_TABLE(
statement_id VARCHAR2(30), -- ID for the query
mv_owner VARCHAR2(30), -- MV's schema
mv_name VARCHAR2(30), -- Name of the MV
sequence INTEGER, -- Seq # of error msg
query VARCHAR2(2000),-- user query
message VARCHAR2(512), -- EXPLAIN_REWRITE error msg
pass VARCHAR2(3), -- Query Rewrite pass no
flags INTEGER, -- For future use
reserved1 INTEGER, -- For future use
reserved2 VARCHAR2(256) -- For future use
);
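For example, a basic invocation that directs its output to REWRITE_TABLE might
look like the following sketch (the query text and materialized view name are
illustrative):
DECLARE
querytxt VARCHAR2(2000) := 'SELECT t.calendar_month_desc, SUM(s.amount_sold)
FROM sales s, times t WHERE s.time_id = t.time_id
GROUP BY t.calendar_month_desc';
BEGIN
DBMS_MVIEW.EXPLAIN_REWRITE(querytxt, 'CAL_MONTH_SALES_MV', 'ID1');
END;
/
SELECT message FROM rewrite_table ORDER BY sequence;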
Here is another example where you can see a more detailed explanation of why
some materialized views were not considered and eventually the materialized view
sales_mv was chosen as the best one.
DECLARE
querytxt VARCHAR2(500) :='SELECT cust_first_name, cust_last_name,
SUM(amount) AS dollar_sales FROM sales s, customers c WHERE s.cust_id= c.cust_id
GROUP BY cust_first_name, cust_last_name';
idno VARCHAR2(30) :='ID1';
BEGIN
DBMS_MVIEW.EXPLAIN_REWRITE(querytxt, '', idno);
END;
/
SELECT message FROM rewrite_table ORDER BY sequence;
MESSAGE
--------------------------------------------------------------------------------
QSM-01082: Joining materialized view, CAL_MONTH_SALES_MV, with table, SALES, not possible
QSM-01022: a more optimal materialized view than PRODUCT_SALES_MV was used to rewrite
QSM-01022: a more optimal materialized view than FWEEK_PSCAT_SALES_MV was used to rewrite
QSM-01033: query rewritten with materialized view, SALES_MV
Using a VARRAY
You can save the output of EXPLAIN_REWRITE in a PL/SQL varray. The elements
of this array are of the type RewriteMessage, which is defined in the SYS schema
as shown below:
TYPE RewriteMessage IS record(
mv_owner VARCHAR2(30), -- MV's schema
mv_name VARCHAR2(30), -- Name of the MV
sequence INTEGER, -- Seq # of error msg
query VARCHAR2(2000),-- user query
message VARCHAR2(512), -- EXPLAIN_REWRITE error msg
pass VARCHAR2(3), -- Query Rewrite pass no
flags INTEGER, -- For future use
reserved1 INTEGER, -- For future use
reserved2 VARCHAR2(256) -- For future use
);
The array type RewriteArrayType, which is a varray of RewriteMessage
objects, is defined in the SYS schema as follows:
TYPE RewriteArrayType AS VARRAY(256) OF RewriteMessage;
Using this array type, you can declare an array variable and specify it in the
EXPLAIN_REWRITE statement. Each RewriteMessage record provides a message
concerning rewrite processing. The fields are the same as those of REWRITE_TABLE,
except for STATEMENT_ID, which is not used when using a varray as output.
■ The MV_OWNER field defines the owner of materialized view that is relevant to
the message.
■ The MV_NAME field defines the name of a materialized view that is relevant to
the message.
■ The SEQUENCE field defines the sequence in which messages should be
ordered.
■ The QUERY field contains the first 2000 characters of the query text under
analysis.
■ The MESSAGE field contains the text of the message relevant to rewrite processing
of the query.
■ The FLAGS, RESERVED1, and RESERVED2 fields are reserved for future use.
The query will not rewrite with this materialized view. This can be quite confusing
to a novice user as it seems like all information required for rewrite is present in the
materialized view. The user can find out from DBMS_MVIEW.EXPLAIN_REWRITE
that AVG cannot be computed from the given materialized view. The problem is that
a ROLLUP is required here and AVG requires a COUNT or a SUM to do ROLLUP.
An example PL/SQL block for the above query, using a varray as its output
medium, is as follows:
SET SERVEROUTPUT ON
DECLARE
Rewrite_Array SYS.RewriteArrayType := SYS.RewriteArrayType();
querytxt VARCHAR2(1500) := 'SELECT S.CITY, AVG(F.DOLLAR_SALES)
FROM STORE S, FACT F WHERE S.STORE_KEY = F.STORE_KEY
GROUP BY S.CITY';
i NUMBER;
BEGIN
DBMS_MVIEW.Explain_Rewrite(querytxt, 'MV_CITY_STATE', Rewrite_Array);
FOR i IN 1..Rewrite_Array.count
LOOP
DBMS_OUTPUT.PUT_LINE(Rewrite_Array(i).message);
END LOOP;
END;
/
Constraints
Make sure all inner joins referred to in a materialized view have referential integrity
(foreign key/primary key constraints) with additional NOT NULL constraints on the
foreign key columns. Because such constraints tend to impose a large overhead, you
could make them RELY and NOVALIDATE and set the parameter
QUERY_REWRITE_INTEGRITY to STALE_TOLERATED or TRUSTED. However, if you
set QUERY_REWRITE_INTEGRITY to ENFORCED, all constraints must be enforced to
get maximum rewritability.
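For example, the following sketch (the constraint name is illustrative) marks a
foreign key constraint as RELY without validating it and sets the integrity mode for
the session:
ALTER TABLE sales MODIFY CONSTRAINT sales_product_fk RELY ENABLE NOVALIDATE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;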
Dimensions
You can express the hierarchical relationships and functional dependencies in
normalized or denormalized dimension tables using the HIERARCHY and
DETERMINES clauses of a dimension. Dimensions can express intra-table
relationships which cannot be expressed by any constraints. Set the parameter
QUERY_REWRITE_INTEGRITY to TRUSTED or STALE_TOLERATED for query
rewrite to take advantage of the relationships declared in dimensions.
Outer Joins
Another way of avoiding constraints is to use outer joins in the materialized view.
Query rewrite will be able to derive an inner join in the query, such as (A.a=B.b),
from an outer join in the materialized view (A.a = B.b(+)), as long as the rowid
of B or column B.b is available in the materialized view. Most of the support for
rewrites with outer joins is provided for materialized views with joins only. To
exploit it, a materialized view with outer joins should store the rowid or primary
key of the inner table of an outer join. For example, the materialized view
join_sales_time_product_oj_mv stores the primary keys prod_id and time_id of
the inner tables of outer joins.
Text Match
If you need to speed up an extremely complex, long-running query, you could
create a materialized view with the exact text of the query. Then the materialized
view would contain the query results, thus eliminating the time required to perform
any complex joins and search through all the data for that which is required.
Aggregates
To get the maximum benefit from query rewrite, make sure that all aggregates
needed to compute the aggregates in the targeted set of queries are present in the
materialized view. The conditions on aggregates are quite similar to those for
fast (incremental) refresh. For instance, if AVG(x) is in the query, then you should
store COUNT(x) and AVG(x), or SUM(x) and COUNT(x), in the materialized view.
Grouping Conditions
Aggregating data at lower levels in the hierarchy is better than aggregating at
higher levels because lower levels can be used to rewrite more queries. Note,
however, that doing so will also take up more space. For example, instead of
grouping on state, group on city (unless space constraints prohibit it).
Instead of creating multiple materialized views with overlapping or hierarchically
related GROUP BY columns, create a single materialized view with all those GROUP
BY columns. For example, instead of using a materialized view that groups by city
and another materialized view that groups by month, use a materialized view that
groups by city and month.
Use GROUP BY on columns that correspond to levels in a dimension, but not on
columns that are functionally dependent, because query rewrite will be able to use
the functional dependencies automatically based on the DETERMINES clause in a
dimension. For example, instead of grouping on prod_name, group on prod_id
(as long as there is a dimension that indicates that prod_id determines prod_name,
you will enable the rewrite of a query involving prod_name).
Expression Matching
If several queries share the same common subexpression, it is advantageous to
create a materialized view with the common subexpression as one of its SELECT
columns. This way, the performance benefit due to precomputation of the common
subexpression can be obtained across several queries.
Date Folding
When creating a materialized view which aggregates data by folded date granules
such as months or quarters or years, always use the year component as the prefix
but not as the suffix. For example, TO_CHAR(date_col, 'yyyy-q') folds the date
into quarters, which collate in year order, whereas TO_CHAR(date_col,
'q-yyyy') folds the date into quarters, which collate in quarter order. The former
preserves the ordering while the latter does not. For this reason, any materialized
view created without a year prefix will not be eligible for date folding rewrite.
Statistics
Optimization with materialized views is based on cost and the optimizer needs
statistics of both the materialized view and the tables in the query to make a
cost-based choice. Materialized views should thus have statistics collected using the
DBMS_STATS package.
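For example, statistics for a materialized view's container table can be gathered as
follows (the schema and materialized view name are illustrative):
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SH', -
tabname => 'CAL_MONTH_SALES_MV', cascade => TRUE);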
This section deals with other topics of interest in a data warehousing environment.
It contains the following appendices:
■ Glossary
■ Sample Data Warehousing Schema
A
Glossary
additive
Describes a fact (or measure) that can be summarized through addition. An additive
fact is the most common type of fact. Examples include Sales, Cost, and Profit.
Contrast with nonadditive, semi-additive.
advisor
The Summary Advisor recommends which materialized views to retain, create, and
drop. It helps database administrators manage materialized views. It is a GUI in
Oracle Enterprise Manager, and has similar capabilities to the DBMS_OLAP package.
aggregation
The process of consolidating data values into a single value. For example, sales data
could be collected on a daily basis and then be aggregated to the week level, the
week data could be aggregated to the month level, and so on. The data can then be
referred to as aggregate data. Aggregation is synonymous with summarization, and
aggregate data is synonymous with summary data.
aggregate
Summarized data. For example, unit sales of a particular product could be
aggregated by day, month, quarter, and year.
ancestor
A value at any level above a given value in a hierarchy. For example, in a Time
dimension, the value 1999 might be the ancestor of the values Q1-99 and Jan-99.
See also descendant, hierarchy, level.
attribute
A descriptive characteristic of one or more levels. For example, the Product
dimension for a clothing manufacturer might contain a level called Item, one of
whose attributes is Color. Attributes represent logical groupings that enable end
users to select data based on like characteristics.
Note that in relational modeling, an attribute is defined as a characteristic of an
entity. In Oracle9i, an attribute is a column in a dimension that characterizes
elements of a single level.
child
A value at the level below a given value in a hierarchy. For example, in a Time
dimension, the value Jan-99 might be the child of the value Q1-99. A value can be
a child for more than one parent if the child value belongs to multiple hierarchies.
See also hierarchy, level, parent.
cleansing
The process of resolving inconsistencies and fixing the anomalies in source data,
typically as part of the ETL process. See also ETL.
cross product
A procedure for combining the elements in multiple sets. For example, given two
columns, each element of the first column is matched with every element of the
second column. A simple example is shown below:
Col1   Col2   Cross Product
----   ----   -------------
a      c      ac
b      d      ad
              bc
              bd
data source
A database, application, repository, or file that contributes data to a warehouse.
data mart
A data warehouse that is designed for a particular line of business, such as sales,
marketing, or finance. In a dependent data mart, the data can be derived from an
enterprise-wide data warehouse. In an independent data mart, data can be collected
directly from sources. See also data warehouse.
data warehouse
A relational database that is designed for query and analysis rather than transaction
processing. A data warehouse usually contains historical data that is derived from
transaction data, but it can include data from other sources. It separates analysis
workload from transaction workload and enables a business to consolidate data
from several sources.
In addition to a relational database, a data warehouse environment often consists of
an ETL solution, an OLAP engine, client analysis tools, and other applications that
manage the process of gathering data and delivering it to business users. See also
ETL, OLAP.
denormalize
The process of allowing redundancy in a table so that it can remain flat. Contrast
with normalize.
dimension
A structure, often composed of one or more hierarchies, that categorizes data.
Several distinct dimensions, combined with measures, enable end users to answer
business questions. Commonly used dimensions are Customer, Product, and Time.
In Oracle 9i, a dimension is a database object that defines hierarchical (parent/child)
relationships between pairs of column sets. In Oracle Express, a dimension is a
database object that consists of a list of values.
dimension value
One element in the list that makes up a dimension. For example, a computer
company might have dimension values in the Product dimension called LAPPC and
DESKPC. Values in the Geography dimension might include Boston and Paris.
Values in the Time dimension might include MAY96 and JAN97.
drill
To navigate from one item to a set of related items. Drilling typically involves
navigating up and down through the levels in a hierarchy. When selecting data, you
can expand or collapse a hierarchy by drilling down or up in it, respectively. See
also drill down, drill up.
drill down
To expand the view to include child values that are associated with parent values in
the hierarchy. (See also drill, drill up.)
drill up
To collapse the list of descendant values that are associated with a parent value in
the hierarchy.
element
An object or process. For example, a dimension is an object, a mapping is a process,
and both are elements.
ETL
Extraction, transformation, and loading. ETL refers to the methods involved in
accessing and manipulating source data and loading it into a data warehouse. The
order in which these processes are performed varies.
Note that ETT (extraction, transformation, transportation) and ETM (extraction,
transformation, move) are sometimes used instead of ETL. (See also data warehouse,
extraction, transformation, transportation.)
extraction
The process of taking data out of a source as part of an initial phase of ETL. (See
also ETL.)
fact table
A table in a star schema that contains facts. A fact table typically has two types of
columns: those that contain facts and those that are foreign keys to dimension tables.
fact/measure
Data, usually numeric and additive, that can be examined and analyzed. Values for
facts or measures are usually not known in advance; they are observed and stored.
Examples include Sales, Cost, and Profit. Fact and measure are synonymous; fact is
more commonly used with relational environments, measure is more commonly
used with multidimensional environments. See also derived fact.
fast refresh
An operation that applies only the data changes to a materialized view, thus
eliminating the need to rebuild the materialized view from scratch.
file-to-table mapping
Maps data from flat files to tables in the warehouse.
hierarchy
A logical structure that uses ordered levels as a means of organizing data. A
hierarchy can be used to define data aggregation; for example, in a Time dimension,
a hierarchy might be used to aggregate data from the Month level to the Quarter
level to the Year level. A hierarchy can also be used to define a navigational drill
path, regardless of whether the levels in the hierarchy represent aggregated totals.
See also dimension, level.
hub module
The metadata container for process data.
level
A position in a hierarchy. For example, a Time dimension might have a hierarchy
that represents data at the Month, Quarter, and Year levels.
(See also hierarchy.)
mapping
The definition of the relationship and data flow between source and target objects.
materialized view
A pre-computed table comprising aggregated and/or joined data from fact and
possibly dimension tables. Also known as a summary or aggregate table.
metadata
Data that describes data and other structures, such as objects, business rules, and
processes. For example, the schema design of a data warehouse is typically stored in
a repository as metadata, which is used to generate scripts used to build and
populate the data warehouse. A repository contains metadata.
Examples include: for data, the definition of a source to target transformation that is
used to generate and populate the data warehouse; for information, definitions of
tables, columns and associations that are stored inside a relational modeling tool;
for business rules, discount by 10 percent after selling 1,000 items.
model
An object that represents something to be made. A representative style, plan, or
design. Metadata that defines the structure of the data warehouse.
nonadditive
Describes a fact (or measure) that cannot be summarized through addition. An
example includes Average. Contrast with additive, semi-additive.
normalize
In a relational database, the process of removing redundancy in data by separating
the data into multiple tables. Contrast with denormalize.
OLAP
Online analytical processing. OLAP functionality is characterized by dynamic,
multidimensional analysis of historical data, which supports activities such as
calculating across dimensions and hierarchies, analyzing trends, and drilling up and
down through levels of detail.
parent
A value at the level above a given value in a hierarchy. For example, in a Time
dimension, the value Q1-99 might be the parent of the value Jan-99. See also child,
hierarchy, level.
refresh
The mechanism whereby materialized views are populated with data.
schema
A collection of related database objects. Relational schemas are grouped by database
user ID and include tables, views, and other objects. See also snowflake schema, star
schema. Whenever possible, a demo schema called Sales History is used
throughout this Guide.
semi-additive
Describes a fact (or measure) that can be summarized through addition along some,
but not all, dimensions. Examples include Headcount and On Hand Stock. Contrast
with additive, nonadditive.
snowflake schema
A type of star schema in which the dimension tables are partly or fully normalized.
See also schema, star schema.
source
A database, application, file, or other storage facility from which the data in a data
warehouse is derived.
star schema
A relational schema whose design represents a multidimensional data model. The
star schema consists of one or more fact tables and one or more dimension tables
that are related through foreign keys. See also schema, snowflake schema.
subject area
A classification system that represents or distinguishes parts of an organization or
areas of knowledge. A data mart is often developed to support a subject area such
as sales, marketing, or geography. See also data mart.
table
A layout of data in columns.
target
Holds the intermediate or final results of any part of the ETL process. The target of
the entire ETL process is the data warehouse. See also data warehouse, ETL.
transformation
The process of manipulating data. Any manipulation beyond copying is a
transformation. Examples include cleansing, aggregating, and integrating data from
multiple sources.
transportation
The process of moving copied or transformed data from a source to a data
warehouse. See also transformation.
validation
The process of verifying metadata definitions and configuration parameters.
versioning
The ability to create new versions of a data warehouse project for new requirements
and changes.
This appendix introduces a common schema (Sales History) that is used in this
guide. Most of the examples throughout this book use the same, simple star schema.
This schema consists of four dimension tables and a single fact table (called sales)
partitioned by month. The definitions of these tables follow:
CREATE TABLE times
(
time_id DATE,
day_name VARCHAR2(9)
CONSTRAINT tim_day_name_nn NOT NULL,
day_number_in_week NUMBER(1)
CONSTRAINT tim_day_in_week_nn NOT NULL,
day_number_in_month NUMBER(2)
CONSTRAINT tim_day_in_month_nn NOT NULL,
calendar_week_number NUMBER(2)
CONSTRAINT tim_cal_week_nn NOT NULL,
fiscal_week_number NUMBER(2)
CONSTRAINT tim_fis_week_nn NOT NULL,
week_ending_day DATE
CONSTRAINT tim_week_ending_day_nn NOT NULL,
calendar_month_number NUMBER(2)
CONSTRAINT tim_cal_month_number_nn NOT NULL,
fiscal_month_number NUMBER(2)
CONSTRAINT tim_fis_month_number_nn NOT NULL,
calendar_month_desc VARCHAR2(8)
CONSTRAINT tim_cal_month_desc_nn NOT NULL,
fiscal_month_desc VARCHAR2(8)
CONSTRAINT tim_fis_month_desc_nn NOT NULL,
days_in_cal_month NUMBER
CONSTRAINT tim_days_cal_month_nn NOT NULL,
days_in_fis_month NUMBER
CONSTRAINT tim_days_fis_month_nn NOT NULL,
tuning, 21-92 parallel SQL, 21-34
databases DELETE statement
scalability, 21-20 parallel DELETE statement, 21-39
staging, 8-2 DEMO_DIM package, 9-10
date folding DENSE_RANK function, 19-5
with query rewrite, 22-18 design
DB_BLOCK_SIZE parameter, 21-66 logical, 3-2
and parallel query, 21-66 physical, 3-2
DB_FILE_MULTIBLOCK_READ_COUNT detail tables, 8-7
parameter, 21-66 dimension tables, 2-5, 8-7, 17-2
DBA_DATA_FILES view, 21-72 normalized, 9-9
DBA_EXTENTS view, 21-72 Dimension Wizard, 9-10
DBA_PUBLISHED_COLUMNS view, 15-10 dimensional modeling, 2-3
DBA_SOURCE_TABLES view, 15-10 dimensions, 2-6, 9-2, 9-11
DBA_SUBSCRIBED_COLUMNS view, 15-10 altering, 9-13
DBA_SUBSCRIBED_TABLES view, 15-10 creating, 9-4
DBA_SUBSCRIPTIONS view, 15-10 definition, 9-2
DBMS_LOGMNR_CDC_PUBLISH package, 15-3 dimension tables (lookup tables), 8-7
DBMS_LOGMNR_CDC_SUBSCRIBE dropping, 9-14
package, 15-3 hierarchies, 2-6
DBMS_MVIEW package, 14-11 hierarchies overview, 2-6
EXPLAIN_MVIEW procedure, 8-43 multiple, 18-3
EXPLAIN_REWRIITE procedure, 22-56 star joins, 17-3
REFRESH procedure, 14-9, 14-12 star queries, 17-2
REFRESH_ALL_MVIEWS procedure, 14-9 validating, 9-12
REFRESH_DEPENDENT procedure, 14-9 with query rewrite, 22-61
DBMS_OLAP package, 16-3, 16-5 direct-path INSERT
ADD_FILTER_ITEM procedure, 16-18 external fragmentation, 21-84
LOAD_WORKLOAD_TRACE procedure, 16-12 restrictions, 21-24
PURGE_FILTER procedure, 16-23 disk affinity
PURGE_RESULTS procedure, 16-32 disabling with MPP, 4-6
PURGE_WORKLOAD procedure, 16-18 parallel DML, 21-78
Index-3
partitions, 21-77 extraction, transformation, loading (ETL)
with MPP, 21-88 overview, 10-2
disk striping process, 7-2
affinity, 21-77 extractions
DISK_ASYNCH_IO parameter, 21-66 data files, 11-8
distributed transactions distributed operations, 11-11
parallel DDL restrictions, 21-11 full, 11-3
parallel DML restrictions, 21-11, 21-27 incremental, 11-3
DML statements OCI, 11-10
captured by Change Data Capture, 15-4 online, 11-4
DML_LOCKS parameter, 21-63 overview, 11-2
drilling down, 9-2 physical, 11-4
hierarchies, 9-2 Pro*C, 11-10
DROP MATERIALIZED VIEW statement, 8-23 SQL*Plus, 11-8
prebuilt tables, 8-32 with and without Change Data Capture, 15-2
dropping
dimensions, 9-14
materialized views, 8-42
F
fact tables, 2-5
star joins, 17-3
E star queries, 17-2
ENFORCED mode, 22-10 facts, 9-2
ENQUEUE_RESOURCES parameter, 21-63 FAST clause, 8-27
estimating materialized view size, 16-37 fast refresh, 14-11
EVALUATE_MVIEW_STRATEGY package, 16-38 fast refresh restrictions, 8-27
EXCHANGE PARTITION statement, 7-7 FAST_START_PARALLEL_ROLLBACK
execution plans parameter, 21-63
parallel operations, 21-69 FIRST/LAST functions, 19-29
star transformations, 17-7 FIRST_VALUE function, 19-24
EXPLAIN PLAN statement, 21-69, 22-55 FORCE clause, 8-27
query parallelization, 21-89 foreign key constraints, 7-5
star transformations, 17-7 foreign key joins
exporting snowflake schemas, 17-3
EXP utility, 11-10 fragmentation
exporting a source table external, 21-84
change data capture, 15-18 parallel DDL, 21-16
expression matching FREELISTS parameter, 21-91
with query rewrite, 22-17 full partition-wise joins, 5-15
extend window functions
to create a new view, 15-3 COUNT, 6-5
extents CUME_DIST, 19-13
parallel DDL, 21-16 DENSE_RANK, 19-5
size, 13-29 FIRST/LAST, 19-29
temporary, 21-87 FIRST_VALUE, 19-24
external tables, 13-6 GROUP_ID, 18-18
Index-4
GROUPING, 18-13 H
GROUPING_ID, 18-17
LAG/LEAD, 19-28 hash areas, 21-81
LAST_VALUE, 19-24 hash joins, 21-60, 21-81
linear regression, 19-32 hash partitioning, 5-7
NTILE, 19-15 HASH_AREA_SIZE parameter
PERCENT_RANK, 19-14 and parallel execution, 21-59, 21-60
RANK, 19-5 hierarchies, 9-2
ranking, 19-5 how used, 2-6
RATIO_TO_REPORT, 19-27 multiple, 9-7
REGR_AVGX, 19-33 overview, 2-6
REGR_AVGY, 19-33 rolling up and drilling down, 9-2
REGR_COUNT, 19-32 hints
REGR_INTERCEPT, 19-33 PARALLEL, 21-35
REGR_SLOPE, 19-33 PARALLEL_INDEX, 21-35
REGR_SXX, 19-33 query rewrite, 22-8, 22-9
REGR_SXY, 19-33 histograms
REGR_SYY, 19-33 creating with user-defined buckets, 19-45
reporting, 19-24 hypothetical rank, 19-39
ROW_NUMBER, 19-16
WIDTH_BUCKET, 19-43 I
windowing, 19-17
I/O
asynchronous, 21-66
G parallel execution, 5-2, 21-2
striping to avoid bottleneck, 4-2
global
importing a change table
indexes, 21-90
Change Data Capture, 15-18
striping, 4-5
importing a source table
granting access to change data, 15-3
Change Data Capture, 15-18
granules, 5-3
indexes
block range, 5-3
bitmap indexes, 6-6
partition, 5-4
bitmap join, 6-6
GROUP_ID function, 18-18
B-tree, 6-10
grouping
cardinality, 6-3
compatibility check, 22-40
creating in parallel, 21-93
conditions, 22-62
global, 21-90
GROUPING function, 18-13
local, 21-90
when to use, 18-16
nulls and, 6-5
GROUPING_ID function, 18-17
parallel creation, 21-93
GROUPING_SETS expression, 18-19
parallel DDL storage, 21-16
groups, instance, 21-37
parallel local, 21-93
GV$FILESTAT view, 21-71
partitioned tables, 6-6
partitioning, 5-8
STORAGE clause, 21-94
index-organized tables
Index-5
parallel CREATE, 21-14 load
parallel queries, 21-11 parallel, 13-31
INITIAL extent size, 13-29, 21-84 LOB datatypes
INSERT statement restrictions
functionality, 21-95 parallel DDL, 21-14
parallelizing INSERT ... SELECT, 21-41 parallel DML, 21-25
instance groups for parallel operations, 21-37 local indexes, 6-3, 6-6, 21-90
instance recovery local striping, 4-4
SMON process, 21-24 locks
instances parallel DML, 21-24
instance groups, 21-37 LOG_BUFFER parameter
integrity rules and parallel execution, 21-63
parallel DML restrictions, 21-26 LOGGING clause, 21-92
interface logging mode
publish and subscribe, 15-2 parallel DDL, 21-14, 21-15
invalidating logical design, 3-2
materialized views, 8-41 lookup tables, 8-7, 17-2
star queries, 17-2
J
Java M
used by Change Data Capture, 15-8 manual
JOB_QUEUE_PROCESSES parameter, 14-15 refresh, 14-11
join compatibility, 22-31 striping, 4-4
joins massively parallel processing (MPP)
full partition-wise, 5-15 affinity, 21-77, 21-78
partial partition-wise, 5-20 massively parallel systems, 5-2, 21-2
partition-wise, 5-15 materialized views
star joins, 17-3 aggregates, 8-10
star queries, 17-2 altering, 8-42
build methods, 8-24
containing only joins, 8-16
K creating, 8-22
key lookups, 13-34 delta joins, 22-35
keys, 8-7, 17-2 dropping, 8-32, 8-42
estimating size, 16-37
L invalidating, 8-41
logs, 11-7
LAG/LEAD functions, 19-28 naming, 8-23
LARGE_POOL_SIZE parameter, 21-51 nested, 8-18
LAST_VALUE function, 19-24 partitioned tables, 14-22
level relationships, 2-6 partitioning, 8-34
purpose, 2-7 prebuilt, 8-23
levels, 2-6 query rewrite
linear regression functions, 19-32 hints, 22-8, 22-9
Index-6
matching join graphs, 8-25 N
parameters, 22-8
privileges, 22-10 nested loop joins, 21-81
refresh dependent, 14-13 nested materialized views, 8-18
refreshing, 8-27, 14-9 refreshing, 14-20
refreshing all, 14-13 restrictions, 8-21
registration, 8-32 nested tables
restrictions, 8-25 restrictions, 21-13
rewrites NEVER clause, 8-27
enabling, 22-7 NEXT extent, 21-84
schema design guidelines, 8-8 NOAPPEND hint, 21-95
security, 8-41 NOARCHIVELOG mode, 21-93
storage characteristics, 8-23 nodes
types of, 8-10 disk affinity in a Real Application Cluster, 21-77
uses for, 8-2 NOLOGGING clause, 21-86, 21-92, 21-93
MAXEXTENTS keyword, 13-29, 21-84 with APPEND hint, 21-95
MAXEXTENTS UNLIMITED storage NOLOGGING mode
parameter, 21-23 parallel DDL, 21-14, 21-15
measures, 8-7, 17-2 nonvolatile data, 1-3
media recoveries, 21-88 NOPARALLEL attribute, 21-85
memory NOREWRITE hint, 22-8, 22-9
configure at 2 levels, 21-58 NTILE function, 19-15
process classification, 21-81 nulls
virtual, 21-58 indexes and, 6-5
merge, 14-5
MERGE statement, 14-5 O
MINIMUM EXTENT parameter, 21-17
object types
mirroring
parallel query, 21-12
disks, 4-10
restrictions, 21-13
monitoring
restrictions
data capture, 15-10
parallel DDL, 21-14
parallel processing, 21-71
parallel DML, 21-25
refresh, 14-15
OLTP database
MOVE PARTITION statement
batch jobs, 21-21
rules of parallelism, 21-43
parallel DML, 21-20
MPP
ON COMMIT clause, 8-26
disk affinity, 4-6
ON DEMAND clause, 8-26
MULTIBLOCK_READ_COUNT parameter, 13-29
online transaction processing (OLTP)
multiple archiver processes, 21-91
processes, 21-81
multiple hierarchies, 9-7
optimization
MV_CAPABILITIES_TABLE, 8-44
parallel SQL, 21-6
MVIEW_WORKLOAD view, 16-2
optimizations
query rewrite
enabling, 22-7
hints, 22-8, 22-9
Index-7
matching join graphs, 8-25 object types, 21-13, 21-25
query rewrites remote transactions, 21-27
privileges, 22-10 rollback segments, 21-23
OPTIMIZER_MODE parameter, 14-15, 21-100, 22-8 transaction model, 21-22
optimizers parallel execution
with rewrite, 22-2 cost-based optimization, 21-100
Oracle Real Application Clusters I/O parameters, 21-65
disk affinity, 21-77 index creation, 21-93
instance groups, 21-37 interoperator parallelism, 21-9
parallel load, 13-32 intraoperator parallelism, 21-9
system monitor process and, 21-24 introduction, 5-2
ORDER BY clause, 8-30 maximum processes, 21-80
outer joins method of, 21-31
with query rewrite, 22-61 plans, 21-69
oversubscribing resources, 21-82 process classification, 4-2, 4-6, 4-9, 4-12
resource parameters, 21-58
rewriting SQL, 21-85
P solving problems, 21-84
paging, 21-82 space management, 21-83
rate, 21-59 tuning, 5-2, 21-2
subsystem, 21-82 understanding performance issues, 21-80
PARALLEL clause, 21-95 PARALLEL hint, 21-35, 21-85, 21-95
parallelization rules, 21-38 parallelization rules, 21-38
PARALLEL CREATE INDEX statement, 21-62 UPDATE and DELETE, 21-39
PARALLEL CREATE TABLE AS SELECT statement parallel load
external fragmentation, 21-84 example, 13-31
resources required, 21-62 Oracle Real Application Clusters, 13-32
parallel DDL, 21-13 using, 13-26
extent allocation, 21-16 parallel partition-wise joins
parallelization rules, 21-38 performance considerations, 5-24
partitioned tables and indexes, 21-13 parallel query, 21-11
restrictions bitmap indexes, 6-3
LOBs, 21-14 index-organized tables, 21-11
object types, 21-13, 21-14 object types, 21-12
parallel delete, 21-39 restrictions, 21-13
parallel DELETE statement, 21-39 parallelization rules, 21-38
parallel DML, 21-18 parallel scan operations, 4-3
applications, 21-20 parallel SQL
bitmap indexes, 6-3 allocating rows to parallel execution
degree of parallelism, 21-38, 21-40 servers, 21-7
enabling PARALLEL DML, 21-21 degree of parallelism, 21-34
lock and enqueue resources, 21-24 instance groups, 21-37
parallelization rules, 21-38 number of parallel execution servers, 21-3
recovery, 21-23 optimizer, 21-6
restrictions, 21-24 parallelization rules, 21-38
Index-8
shared server, 21-4 PARALLEL_ADAPTIVE_MULTI_USER, 21-47
summary or rollup tables, 21-14 PARALLEL_AUTOMATIC_TUNING, 21-30
parallel update, 21-39 PARALLEL_BROADCAST_ENABLE, 21-62
parallel UPDATE statement, 21-39 PARALLEL_EXECUTION_MESSAGE_
PARALLEL_ADAPTIVE_MULTI_USER SIZE, 21-61
parameter, 21-47 PARALLEL_MAX_SERVERS, 14-15, 21-4, 21-50
PARALLEL_AUTOMATIC_TUNING PARALLEL_MIN_PERCENT, 21-36, 21-49,
parameter, 21-30 21-57
PARALLEL_BROADCAST_ENABLE PARALLEL_MIN_SERVERS, 21-3, 21-4, 21-51
parameter, 21-62 PARALLEL_THREADS_PER_CPU, 21-30
PARALLEL_EXECUTION_MESSAGE_SIZE PGA_AGGREGATE_TARGET, 14-15
parameter, 21-61 QUERY_REWRITE_ENABLED, 22-7, 22-8
PARALLEL_INDEX hint, 21-35 ROLLBACK_SEGMENTS, 21-63
PARALLEL_MAX_SERVERS parameter, 14-15, SHARED_POOL_SIZE, 21-51, 21-56
21-4, 21-50 SORT_AREA_SIZE, 21-60
and parallel execution, 21-49 STAR_TRANSFORMATION_ENABLED, 17-4
PARALLEL_MIN_PERCENT parameter, 21-36, TAPE_ASYNCH_IO, 21-66
21-49, 21-57 TIMED_STATISTICS, 21-72
PARALLEL_MIN_SERVERS parameter, 21-3, 21-4, TRANSACTIONS, 21-62
21-51 partial partition-wise joins, 5-20
PARALLEL_THREADS_PER_CPU Partition Change Tracking (PCT), 8-34, 14-22
parameter, 21-30, 21-48 partition granules, 5-4
parallelism partitioned tables
degree, 21-32 data warehouses, 5-9
degree, overriding, 21-84 example, 13-29
enabing for tables and queries, 21-46 partitioning, 11-7
interoperator, 21-9 composite, 5-8
intraoperator, 21-9 data, 5-4
parameters hash, 5-7
CLUSTER_DATABASE_INSTANCES, 21-57 indexes, 5-8
COMPATIBLE, 13-29, 22-8 materialized views, 8-34
DB_BLOCK_SIZE, 21-66 prebuilt tables, 8-39
DB_FILE_MULTIBLOCK_READ_ range, 5-6
COUNT, 21-66 partitions
DISK_ASYNCH_IO, 21-66 affinity, 21-77
DML_LOCKS, 21-63 bitmap indexes, 6-6
ENQUEUE_RESOURCES, 21-63 parallel DDL, 21-13
FAST_START_PARALLEL_ROLLBACK, 21-63 partition pruning
FREELISTS, 21-91 disk striping and, 21-78
HASH_AREA_SIZE, 21-59 pruning, 5-13
JOB_QUEUE_PROCESSES, 14-15 range partitioning
LARGE_POOL_SIZE, 21-51 disk striping and, 21-78
LOG_BUFFER, 21-63 rules of parallelism, 21-43, 21-45
MULTIBLOCK_READ_COUNT, 13-29 partition-wise joins, 5-15
OPTIMIZER_MODE, 14-15, 21-100, 22-8 benefits of, 5-23
Index-9
PERCENT_RANK function, 19-14 correctness, 22-10
performance enabling, 22-7
DSS database, 21-20 hints, 22-8, 22-9
PGA_AGGREGATE_TARGET parameter, 14-15 matching join graphs, 8-25
physical design, 3-2 methods, 22-11
pivoting, 13-35 parameters, 22-8
PL/SQL packages privileges, 22-10
for publish and subscribe tasks, 15-3 restrictions, 8-25
plans when it occurs, 22-4
star transformations, 17-7 QUERY_REWRITE_ENABLED parameter, 22-7,
prebuilt materialized views, 8-23 22-8
PRIMARY KEY constraints, 21-94
process monitor process (PMON)
parallel DML process recovery, 21-23
R
processes RAID, 21-88
and memory contention in parallel configurations, 4-9
processing, 21-50 range partitioning, 5-6
classes of parallel execution, 4-2, 4-6, 4-9, 4-12 performance considerations, 5-9
DSS, 21-81 RANK function, 19-5
maximum number, 21-80 ranking functions, 19-5
maximum number for parallel query, 21-80 RATIO_TO_REPORT function, 19-27
OLTP, 21-81 REBUILD INDEX PARTITION statement
pruning rules of parallelism, 21-43
partitions, 5-13, 21-78 REBUILD INDEX statement
using DATE columns, 5-14 rules of parallelism, 21-43
publication recovery
definition, 15-7 instance recovery
publisher parallel DML, 21-24
tasks, 15-3 SMON process, 21-24
publishers media, with striping, 4-10
capture data, 15-3 parallel DML, 21-23
determines the source tables, 15-3 redo buffer allocation retries, 21-63
publish change data, 15-3 reference tables, 8-7
purpose, 15-3 refresh
purging data, 14-8 monitoring, 14-15
options, 8-26
refreshing
Q materialized views, 14-9
queries nested materialized views, 14-20
ad hoc, 21-14 partitioning, 14-2
enabling parallelism for, 21-46 REGR_AVGX function, 19-33
star queries, 17-2 REGR_AVGY function, 19-33
query delta joins, 22-34 REGR_COUNT function, 19-32
query rewrite REGR_INTERCEPT function, 19-33
controlling, 22-8 REGR_R2 function, 19-33
Index-10
REGR_SLOPE function, 19-33 ROLLBACK_SEGMENTS parameter, 21-63
REGR_SXX function, 19-33 rolling up hierarchies, 9-2
REGR_SXY function, 19-33 ROLLUP, 18-7
REGR_SYY function, 19-33 partial, 18-8
regression when to use, 18-7
detecting, 21-68 root level, 2-6
RELY constraints, 7-6 ROW_NUMBER function, 19-16
remote transactions RULE hint, 21-100
parallel DML and DDL restrictions, 21-11
replication
restrictions
S
parallel DML, 21-25 sar UNIX command, 21-77
reporting functions, 19-24 scalability
resources batch jobs, 21-21
consumption, parameters affecting, 21-58 parallel DML, 21-20
consumption, parameters affecting parallel scalable operations, 21-88
DML/DDL, 21-62 schemas, 17-2
limiting for users, 21-50 design guidelines for materialized views, 8-8
limits, 21-49 snowflake, 2-3
oversubscribing, 21-82 star, 2-3
parallel query usage, 21-58 star schemas, 17-2
restrictions third-normal form, 17-2
direct-path INSERT, 21-24 security
fast refresh, 8-27 Change Data Capture, 15-8
nested materialized views, 8-21 subscriber access to change data, 15-8
nested tables, 21-13 SELECT privilege
parallel DDL, 21-14 granting and revoking for access to change
remote transactions, 21-11 data, 15-3
parallel DML, 21-24 sessions
remote transactions, 21-11, 21-27 enabling parallel DML, 21-21
query rewrite, 8-25 SGA size, 21-58
result set, 17-5 shared server
revoking access to change data, 15-3 parallel SQL execution, 21-4
REWRITE hint, 22-8, 22-9 SHARED_POOL_SIZE parameter, 21-51, 21-56
rewrites single table aggregate requirements, 8-13
hints, 22-9 skewing parallel DML workload, 21-37
parameters, 22-8 SMP architecture
privileges, 22-10 disk affinity, 21-78
query optimizations snowflake schemas, 17-3
hints, 22-8, 22-9 complex queries, 17-3
matching join graphs, 8-25 SORT_AREA_SIZE parameter, 21-60
rollback segments, 21-63 and parallel execution, 21-60
MAXEXTENTS UNLIMITED, 21-23 source systems, 11-2
OPTIMAL, 21-23 definition, 15-6
parallel DML, 21-23 source tables
Index-11
definition, 15-6 manual, 4-4
exporting for Change Data Capture, 15-18 media recovery, 4-10
importing for Change Data Capture, 15-18 temporary tablespace, 21-88
space management, 21-87 subqueries
MINIMUM EXTENT parameter, 21-17 in DDL statements, 21-14
parallel DDL, 21-16 subscriber
parallel execution, 21-83 definition, 15-5
reducing transactions, 21-84 subscriber views
SPLIT PARTITION statement definition, 15-7
rules of parallelism, 21-43 dropping, 15-3
SQL statements removing, 15-3
parallelizing, 21-3, 21-6 subscribers
SQL*Loader, 13-26 access to change data, 15-8
staging drop the subscriber view, 15-3
areas, 1-6 drop the subscription, 15-3
databases, 8-2 extend the window to create a new view, 15-3
files, 8-2 purge the subscription window, 15-3
with and without Change Data Capture, 15-2 purpose, 15-3
STALE_TOLERATED mode, 22-10 removing subscriber views, 15-3
star joins, 17-3 retrieve change data from the subscriber
star queries, 17-2 views, 15-3
star transformation, 17-5 subscribe to source tables, 15-3
star schemas tasks, 15-3
advantages, 2-4 subscription window
defining fact tables, 2-5 purging, 15-3
dimensional model, 2-4, 17-2 Summary Advisor, 16-2
star transformations, 17-2, 17-5 Wizard, 16-6
restrictions, 17-10 summary management, 8-5
STAR_TRANSFORMATION_ENABLED summary tables, 2-5
parameter, 17-4 symmetric multiprocessors, 5-2, 21-2
statistics, 22-63 SYNC_SET change set
estimating, 21-69 system-generated change set, 15-7
operating system, 21-77 SYNC_SOURCE change source
storage system-generated change source, 15-6
fragmentation in parallel DDL, 21-16 synchronous data capture, 15-11
STORAGE clause system monitor process (SMON)
parallel execution, 21-16 Oracle Real Application Clusters and, 21-24
parallel query, 21-94 parallel DML instance recovery, 21-24
storage parameters parallel DML system recovery, 21-24
MAXEXTENTS UNLIMITED, 21-23
OPTIMAL (in rollback segments), 21-23
striping
T
analyzing, 4-6 table queues, 21-73
example, 13-26 tables
local, 4-4 detail tables, 8-7
Index-12
dimension tables (lookup tables), 8-7 triggers, 11-7
dimensions restrictions, 21-27
star queries, 17-2 parallel DML, 21-25
enabling parallelism for, 21-46 TRUSTED mode, 22-10
external, 13-6 two-phase commit, 21-62
fact tables, 8-7
star queries, 17-2
historical, 21-21
U
lookup tables (dimension tables), 17-2 unique constraints, 7-4, 21-94
parallel creation, 21-14 UNLIMITED extents, 21-23
parallel DDL storage, 21-16 update frequencies, 8-50
refreshing in data warehouse, 21-20 UPDATE statement
STORAGE clause with parallel execution, 21-16 parallel UPDATE statement, 21-39
summary or rollup, 21-14 update windows, 8-50
tablespaces upsert (now merge), 13-11
creating, example, 13-27 user resources
dedicated temporary, 21-87 limiting, 21-50
transportable, 11-5, 12-3, 12-6 USER_PUBLISHED_COLUMNS view, 15-10
TAPE_ASYNCH_IO parameter, 21-66 USER_SOURCE_TABLES view, 15-10
temporary extents, 21-87 USER_SUBSCRIBED_COLUMNS view, 15-10
temporary segments USER_SUBSCRIBED_TABLES view, 15-10
parallel DDL, 21-16 USER_SUBSCRIPTIONS view, 15-10
temporary tablespaces
striping, 21-88 V
text match, 22-12
with query rewrite, 22-62 V$FILESTAT view
third-normal-form schemas, 17-2 and parallel query, 21-72
TIMED_STATISTICS parameter, 21-72 V$PARAMETER view, 21-73
timestamps, 11-6 V$PQ_SESSTAT view, 21-70, 21-72
transactions V$PQ_SYSSTAT view, 21-70
distributed V$PQ_TQSTAT view, 21-70, 21-73
parallel DDL restrictions, 21-11 V$PX_PROCESS view, 21-71, 21-72
parallel DML restrictions, 21-11, 21-27 V$PX_SESSION view, 21-71
rate, 21-83 V$PX_SESSTAT view, 21-71
TRANSACTIONS parameter, 21-62 V$SESSTAT view, 21-74, 21-77
transformations, 13-2 V$SORT_SEGMENT view, 21-84
scenarios, 13-26 V$SYSSTAT view, 21-63, 21-74, 21-92
SQL and PL/SQL, 13-9 validating dimensions, 9-12
SQL*Loader, 13-5 view constraints, 7-7, 22-14
star, 17-2 views
transportable tablespaces, 11-5, 12-3, 12-6 ALL_PUBLISHED_COLUMNS, 15-10
transportation ALL_SOURCE_TABLES, 15-10, 15-13
definition, 12-2 CHANGE_SETS, 15-10
distributed operations, 12-2 CHANGE_SOURCES, 15-10
flat files, 12-2 CHANGE_TABLES, 15-10
Index-13
DBA_DATA_FILES, 21-72
DBA_EXTENTS, 21-72
DBA_PUBLISHED_COLUMNS, 15-10
DBA_SOURCE_TABLES, 15-10
DBA_SUBSCRIBED_COLUMNS, 15-10
DBA_SUBSCRIBED_TABLES, 15-10
DBA_SUBSCRIPTIONS, 15-10
USER_PUBLISHED_COLUMNS, 15-10
USER_SOURCE_TABLES, 15-10
USER_SUBSCRIBED_COLUMNS, 15-10
USER_SUBSCRIBED_TABLES, 15-10
USER_SUBSCRIPTIONS, 15-10
V$FILESTAT, 21-72
V$PARAMETER, 21-73
V$PQ_SESSTAT, 21-72
V$PQ_TQSTAT, 21-73
V$PX_PROCESS, 21-72
V$SESSTAT, 21-74, 21-77
V$SYSSTAT, 21-74
virtual memory, 21-58
vmstat UNIX command, 21-77
W
wait times, 21-83
WIDTH_BUCKET function, 19-43
windowing functions, 19-17
workloads
distribution, 21-70
exceeding, 21-82
skewing, 21-37
Index-14