Best Practices for DBMS_STATS
Table of Contents
1 PURPOSE OF THIS DOCUMENT..............................................................................................................................4
2 SUMMARY......................................................................................................................................................................4
3 TABLE AND INDEX STATISTICS ............................................................................................................................6
4 COLUMN STATISTICS................................................................................................................................................6
4.1 WHEN SHOULD HISTOGRAMS BE CREATED....................................................................................6
4.1.1 Example: Small Table ..............................................................................................................................7
4.1.2 Example: Unique/Primary Keys...............................................................................................................7
4.1.3 Example: MyWebSite Field......................................................................................................................7
4.1.4 Example: Age Field...................................................................................................................................7
4.1.5 Example: Name field.................................................................................................................................7
4.2 EVERY APPLICATION IS DIFFERENT..................................................................................................7
5 COLUMN WHERE CLAUSE USAGE........................................................................................................................8
5.1 USING SYS.COL_USAGE$.......................................................................................................................8
5.2 MAINTAINING SYS.COL_USAGE$........................................................................................................9
6 CPU COST MODELING...............................................................................................................................................9
7 DBMS_STATS...............................................................................................................................................................10
8 SETTING DBMS_STATS PARAMETERS...............................................................................................................11
8.1 USING DEFAULTS..................................................................................................................................12
8.2 GETTING PARAMS.................................................................................................................................12
8.3 CONSTANTS............................................................................................................................................12
8.4 ESTIMATE_PERCENT............................................................................................................................13
8.4.1 auto_sample_size....................................................................................................................................13
8.5 CASCADE.................................................................................................................................................13
8.6 METHOD_OPT.........................................................................................................................................14
8.7 DEGREE....................................................................................................................................................14
8.8 GRANULARITY.......................................................................................................................................14
9 COLLECTING TABLE STATISTICS.......................................................................................................................14
10 COLLECTING INDEX STATS................................................................................................................................14
11 COLLECTING COLUMN STATS AND HISTOGRAMS.....................................................................................15
11.1 METHOD_OPT.......................................................................................................................................15
11.2 METHOD_OPT SIZE..............................................................................................................................15
11.2.1 size N.....................................................................................................................................................15
11.2.2 repeat.....................................................................................................................................................15
11.2.3 auto........................................................................................................................................................15
11.2.4 skewonly...............................................................................................................................................15
11.3 METHOD_OPT EXAMPLES.................................................................................................................15
11.3.1 FOR ALL COLUMNS..........................................................................................................................15
11.3.2 FOR ALL COLUMNS SIZE 1 **** Note this is the default value.....................................................16
11.3.3 FOR ALL COLUMNS SIZE 254.........................................................................................................16
11.3.4 FOR ALL INDEXED COLUMNS.......................................................................................................16
11.3.5 FOR ALL INDEXED COLUMNS SIZE 1..........................................................................................16
11.3.6 FOR ALL HIDDEN COLUMNS.........................................................................................................16
11.3.7 FOR ALL HIDDEN COLUMNS SIZE 1.............................................................................................16
11.3.8 FOR COLUMNS COL_A, COL_B......................................................................................................16
11.3.9 FOR COLUMNS COL_A SIZE 1, COL_B SIZE 1.............................................................................16
11.3.10 FOR COLUMN COL_A SIZE 5, COL_B SIZE AUTO, COL_C SIZE 200.....................................16
11.3.11 FOR COLUMNS COL_A SIZE AUTO, COL_B SIZE AUTO.........................................................16
11.3.12 FOR ALL COLUMNS SIZE AUTO..................................................................................................16
11.3.13 FOR COLUMNS SIZE AUTO COL_A, COL_B, COL_C................................................................16
Copyright 2007 TUSC Page 3 of 26
Best Practices for Analyzing Objects
Document
11.3.14 FOR ALL COLUMNS SIZE SKEWONLY.......................................................................................16
11.3.15 FOR ALL COLUMNS SIZE REPEAT..............................................................................................17
12 COLLECTING DICTIONARY AND FIXED OBJECT STATS...........................................................................17
12.1 FIXED OBJECTS STATS.......................................................................................................................17
12.2 DICTIONARY STATS - STATISTICS ON SYS, SYSTEM and OTHER ORACLE COMPONENTS.....................17
13 COLLECTING CPU COST MODELING STATS.................................................................................................18
13.1 HOW DO I REVIEW CPU COST MODELING STATISTICS?...........................................................18
13.2 SYS.AUX_STATS$.................................................................................................................................18
13.3 HOW DO I COLLECT STATS FOR CPU COST MODELING?..........................................................19
13.4 WHEN DO I COLLECT NEW SYSTEM STATS?................................................................................19
13.5 SAVING AND RESTORE SYSTEM STATS........................................................................................19
13.6 VIEWING SAVED SYSTEM STATS....................................................................................................19
14 RETENTION OF PREVIOUSLY COLLECTED STATISTICS...........................................................................20
14.1 BACKING UP AND RESTORING STATISTICS USING STATTAB ................................................20
14.1.1 CREATING A STATTAB TABLE......................................................................................................20
14.1.2 SAVING OFF STATISTICS – DBMS_STATS.EXPORT_/IMPORT................................................20
14.1.2.1 Transfering Stats to Another Schema or Database............................................................................20
14.1.3 BACKING UP USAGE INFORMATION...........................................................................................20
14.1.4 VIEWING SAVED STATISTICS USING STATTAB.......................................................................21
14.1.4.1 Viewing Saved Table Statistics..........................................................................................................21
14.1.4.2 Viewing Saved Column Statistics......................................................................................................21
14.1.4.3 Viewing Saved Index Statistics..........................................................................................................21
14.1.4.4 Viewing Saved CPU Statistics...........................................................................................................22
14.1.4.5 sys.aux_stats$.....................................................................................................................................22
14.2 USING 10G RETENTION OF STATISTICS.........................................................................................23
14.2.1.1 Determining How far back we can restore from................................................................................23
14.2.1.2 Getting and Setting the Retention Time.............................................................................................23
14.2.2 RESTORING STATISTICS WITH 10G AUTO RETENTION..........................................................23
14.2.2.1 Restoring Table Stats.........................................................................................................................23
14.2.2.2 Restoring Dictionary Stats.................................................................................................................24
14.2.2.3 Restoring Database Stats....................................................................................24
14.2.2.4 Restoring Schema Stats......................................................................................24
15 AUTOMATED STATS JOB......................................................................................................................................24
16 LOCKING AND UNLOCKING STATISTIC COLLECTIONS...........................................................................25
17 LIMITATIONS OF DBMS_STATS..........................................................................................................................25
17.1 CHAINED ROWS ..................................................................................................................................25
17.2 VALIDATE STRUCTURE.....................................................................................................................25
18 APPENDIX..................................................................................................................................................................25
18.1 A Note from Metalink on Automatic Undo Retention.............................................................................25
18.2 BIBLIOGRAPHY....................................................................................................................................26
2 SUMMARY
The Oracle Cost Based Optimizer (CBO) is a core part of the Oracle technology stack and makes a significant
contribution to the overall performance of the database. The technology was originally obtained from Digital
Equipment Corp following Oracle’s purchase of the Rdb database system in 1992. Since then it has been refined
and extended. With Oracle 10g, the original Rule Based Optimizer (RBO) is desupported. It is expected that in
future releases the RBO will disappear altogether and the CBO will be the only query optimization technology
available.
The Oracle Cost Based Optimizer relies on table and object statistics to determine the optimal path to use to fulfill a
user’s query. In Oracle releases prior to 10g, there are two types of optimizers that Oracle utilizes to create execution
plans for queries. The RULE based optimizer, available in release 9i and lower, applies a set of fairly
understandable rules, in a fixed order, to arrive at a proper execution plan.
The Cost Based Optimizer (CBO), which has been available since Oracle version 7.3, is the only optimizer available
in future releases. The CBO uses statistics captured about an object to estimate and choose the execution
plan with the cheapest cost.
In 9i and 10g, additional statistics about the machine, CPUs, and I/O patterns can also be collected using the CPU
Cost Modeler.
To operate properly, the CBO must have accurate statistics in order to create the best, cheapest execution path for a
query. This white paper helps to clarify how to collect accurate statistics and which options to use.
4 COLUMN STATISTICS
When Oracle calculates the estimated cardinality of an execution path, it assumes that each distinct column
value points to the same number of rows as any other distinct column value.
If the data is highly skewed in favor of one column value over another, Oracle can use this information to obtain a closer
estimate of the number of rows that will be returned.
To map the skewness of a column, Oracle utilizes two types of distributions: Frequency and Height-Balanced
(or equi-depth). Oracle limits the number of histogram buckets for either distribution type to 254.
A frequency distribution models a precise histogram, based on exactly how many rows contain each single column
value. A frequency distribution can be created only when there are fewer than 255 distinct values for a
column. Oracle stores a frequency histogram in one of two forms, and each form shows up slightly differently when
querying histogram$. In form #1, Oracle creates one bucket number for each distinct value and places the exact
count of rows for that value in the bucket. In form #2, Oracle uses “Bucket Subtraction”: it labels each bucket
with the cumulative number of rows up to and including the current value and stores the actual column value. In this
method, Oracle obtains the number of rows for each distinct value by subtracting the previous bucket number from the
current bucket number.
You can easily identify a “Bucket Subtraction” histogram because the bucket numbers usually go beyond 255.
A Height-Balanced distribution model (statistically known as an equi-depth histogram) obtains its model by
distributing the distinct column values as evenly as possible across a known number of buckets (hence equi-depth/height
balanced).
As the number of distinct values approaches the number of rows, and when the number of rows is large, this model
becomes very inaccurate.
4.1.1 Example: Small Table
Since all the rows probably fit in one block, we would only ever do one I/O operation to retrieve this data. A
histogram would never be necessary and, in point of fact, indexes probably wouldn’t be either.
4.1.2 Example: Unique/Primary Keys
Never create a histogram on any UNIQUE or PRIMARY KEY column. The data is 100% evenly
distributed with 1 single possible value per row.
4.1.3 Example: MyWebSite Field
When Oracle creates a histogram on VARCHAR2 fields, only the first 32 characters of the string are used
to create the histogram.
Since all mywebsite URLs start with exactly the same 40 characters, a histogram could not be used
effectively.
Note also that if UTF8 or other multibyte character sets are used, the substring is 16 characters.
4.1.4 Example: Age Field
Since we are most likely talking about the human species in this table, we are all < 254 years old.
A histogram on age might very well be valuable if the age varies widely overall but there is a huge
number of 18-year-olds in this table.
Even with an index on AGE, the optimizer might decide to use a FULL table scan when we are looking for
18-year-olds only, since they account for such a large share of the rows.
4.1.5 Example: Name field
If the name field were queried as the only column in the where clause and no index existed on it, a FULL
TABLE SCAN or range scan would be used regardless of any histogram in place.
Assuming an index is on the name field, and assuming the name field is fairly distinct across the data,
the number of distinct values will closely track the number of rows in the table, so a histogram probably
wouldn’t change the execution plan or the cardinality estimate.
The only guaranteed approach to histogram creation is to do thorough analysis on the application and the execution
plans for each query.
5 COLUMN WHERE CLAUSE USAGE
Oracle tracks column usage in where clauses in the sys.col_usage$ table. The table also tracks “how” each column
was used in the clause: whether it was used as an equality (a = b), an equijoin (tablea.a = tableb.b), a nonequijoin
(tablea.a <> tableb.b), a range, a LIKE, or an IS NULL usage.
Sys.col_usage$ can be joined to dba_objects and dba_tab_columns to see the table_name and column_names. Such a
query shows how each column is used in where clauses. It also shows the last date the column was used in a where
clause.
5.1 USING SYS.COL_USAGE$
There are many ways this table can be used. I use a query against it to help manually determine which fields
should have histograms collected, making sure not to collect histograms on any column that is UNIQUE or a
PRIMARY KEY.
Although Oracle does not support direct manipulation of the SYS tables, I have found that purging old information
(records whose timestamp is more than six months old) helps both the results of the SIZE AUTO option and the
performance.
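As a sketch of such a query (the join through sys.col$ to map intcol# to a column name is one workable approach; adapt to your environment):

```sql
-- Sketch: report WHERE-clause usage per column from sys.col_usage$.
SELECT o.owner,
       o.object_name                     table_name,
       c.name                            column_name,
       u.equality_preds, u.equijoin_preds,
       u.nonequijoin_preds, u.range_preds,
       u.like_preds, u.null_preds,
       u.timestamp                       last_used
FROM   sys.col_usage$ u,
       dba_objects    o,
       sys.col$       c
WHERE  u.obj#    = o.object_id
AND    u.obj#    = c.obj#
AND    u.intcol# = c.intcol#
ORDER  BY o.owner, o.object_name, c.name;
```

The timestamp column makes it easy to spot columns that have not appeared in a where clause for months.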
6 CPU COST MODELING
Starting in 9i, the optimizer includes CPU Cost Modeling, which adds a CPU cost to the CBO costing and refines
the I/O costs based upon actual hardware responses to single block and multiblock read times.
In 10g, CPU Cost Modeling is turned on by default, although it uses defaults set by Oracle until statistics are
gathered by the DBA.
Once CPU Cost Modeling is in place, the optimizer uses the following formula for costing execution plans:
“The costing model is a formula that calculates the cost of any statement:

Cost = (#SRDs * sreadtim + #MRDs * mreadtim + #CPUCycles / cpuspeed) / sreadtim

where:
• #SRDs is the number of single block reads
• #MRDs is the number of multi block reads
• #CPUCycles is the number of CPU cycles
• sreadtim is the single block read time
• mreadtim is the multi block read time
• cpuspeed is the CPU cycles per second
“
7 DBMS_STATS
DBMS_STATS is the package used to generate cost based optimizer statistics for databases.
This package can be broken down into 6 categories and their associated functions and/or procedures below.
EXPORT_COLUMN_STATS Procedure
EXPORT_DATABASE_STATS Procedure
EXPORT_DICTIONARY_STATS Procedure
EXPORT_FIXED_OBJECTS_STATS Procedure
EXPORT_INDEX_STATS Procedure
EXPORT_SCHEMA_STATS Procedure
EXPORT_SYSTEM_STATS Procedure
EXPORT_TABLE_STATS Procedure
IMPORT_COLUMN_STATS Procedure
IMPORT_DATABASE_STATS Procedure
IMPORT_DICTIONARY_STATS Procedure
IMPORT_FIXED_OBJECTS_STATS Procedure
IMPORT_INDEX_STATS Procedure
IMPORT_SCHEMA_STATS Procedure
IMPORT_SYSTEM_STATS Procedure
IMPORT_TABLE_STATS Procedure
UNLOCK_SCHEMA_STATS Procedure
UNLOCK_TABLE_STATS Procedure
SET_COLUMN_STATS Procedures
SET_INDEX_STATS Procedures
SET_SYSTEM_STATS Procedure
SET_TABLE_STATS Procedure
GET_COLUMN_STATS Procedures
GET_INDEX_STATS Procedures
GET_SYSTEM_STATS Procedure
GET_TABLE_STATS Procedure
GENERATE_STATS Procedure
Recommended Defaults:
DBMS_STATS.SET_PARAM('CASCADE','TRUE');
DBMS_STATS.SET_PARAM('ESTIMATE_PERCENT','100');
DBMS_STATS.SET_PARAM('DEGREE','NULL');
DBMS_STATS.SET_PARAM('METHOD_OPT','FOR ALL COLUMNS SIZE AUTO');
DBMS_STATS.SET_PARAM('NO_INVALIDATE','FALSE');
DBMS_STATS.SET_PARAM('GRANULARITY','ALL');
DBMS_STATS.SET_PARAM('AUTOSTATS_TARGET','AUTO');
DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME=>'SOME_SCHEMA');
select
'AUTOSTATS_TARGET:', dbms_stats.get_param('AUTOSTATS_TARGET'),
'GRANULARITY:', dbms_stats.get_param('GRANULARITY'),
'CASCADE:', dbms_stats.get_param('CASCADE'),
'ESTIMATE_PERCENT:', dbms_stats.get_param('ESTIMATE_PERCENT'),
'DEGREE:', dbms_stats.get_param('DEGREE'),
'METHOD_OPT:', dbms_stats.get_param('METHOD_OPT'),
'NO_INVALIDATE:',dbms_stats.get_param('NO_INVALIDATE')
from dual
/
8.3 CONSTANTS
Use the constant DBMS_STATS.AUTO_SAMPLE_SIZE to indicate that auto-sample size algorithms should be used.
The constant used to request the system default degree of parallelism, based on the initialization
parameters, is DBMS_STATS.DEFAULT_DEGREE.
Use the constant DBMS_STATS.AUTO_CASCADE to let Oracle decide whether to collect statistics for indexes or not.
Use the constant DBMS_STATS.AUTO_INVALIDATE to let Oracle decide when to invalidate dependent cursors.
8.4 ESTIMATE_PERCENT
ESTIMATE_PERCENT specifies the percentage of the table that should be sampled to obtain the statistics.
Higher sampling percentages, up to 100%, are best, but many documents claim that sampling only 10% to 30% of
the table is sufficient.
For most tables, and especially for tables/columns with fewer than 255 distinct values, it is quite important to collect
statistics based upon 100% of the data in the table.
This is very, very important for the creation of frequency-based histograms on columns with < 255 distinct
values. If a value is missing from the rows sampled, it will not be mapped into a Frequency distribution.
For extremely large tables, and for tables where a sampling of the data will give a very good statistical
representation, estimating the statistics at lower values can provide a good result in a more efficient
manner.
I recommend a 100% estimate always on small tables, and on databases where collecting at 100% can be done
during the appropriate window. For very large objects, test at different levels. Keep in mind that a table with 100
million rows of data probably won’t shift its statistical representation readily.
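A sketch of both settings (the schema and table names are hypothetical placeholders):

```sql
-- Full compute, appropriate where the collection window allows it:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'MYTABLE', estimate_percent => 100);

-- Let Oracle pick the sample size on a very large table:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'BIG_TABLE', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
```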
8.4.1 auto_sample_size
Oracle will determine the best sample size for an object's statistics while performing the collection if this
value is used. There are different opinions as to what is best for a database, and every application is
different.
I recommend setting ESTIMATE_PERCENT to 100% on all databases where collection at 100% is possible
within the scheduled job's window. When the collections cannot be finished within the window, I suggest
testing AUTO_SAMPLE_SIZE to see if good statistical measures can be obtained for your application.
An alternative to using AUTO_SAMPLE_SIZE is to hard-set ESTIMATE_PERCENT or to break up the
collection into separate jobs.
8.5 CASCADE
I prefer setting CASCADE=FALSE when doing very large tables, and calling the
DBMS_STATS.GATHER_INDEX_STATS specifically for those objects.
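A sketch of that pattern (object names are hypothetical):

```sql
-- Gather the table without cascading to its indexes:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'BIG_TABLE', cascade => FALSE);

-- Then gather each large index in its own job/window:
EXEC DBMS_STATS.GATHER_INDEX_STATS(ownname => 'MYSCHEMA', indname => 'BIG_TABLE_IX1');
```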
8.6 METHOD_OPT
The METHOD_OPT parameter tells the DBMS_STATS routine whether or not to create histograms for table
columns and how to go about doing it.
The default value for METHOD_OPT calculates column statistics with no histograms.
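A sketch of two common settings (object names are hypothetical; section 11 covers the full syntax):

```sql
-- Column statistics only, no histograms (the stated default):
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'MYTABLE', method_opt => 'FOR ALL COLUMNS SIZE 1');

-- Let Oracle decide, using both skew and sys.col_usage$ usage data:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'MYTABLE', method_opt => 'FOR ALL COLUMNS SIZE AUTO');
```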
8.7 DEGREE
The DEGREE option sets the degree of parallelism for the collection. Collections run serially by default; set
this to the number of CPUs in the system, provided the system is normally quiet during statistics collection.
Otherwise, set it to a reasonable number so as not to over-parallelize the collection.
8.8 GRANULARITY
The GRANULARITY option, which applies only to partitioned and sub-partitioned tables, defines which level of
statistics is going to be collected.
There are 6 levels: AUTO, ALL, GLOBAL, PARTITION, SUBPARTITION, and GLOBAL AND PARTITION.
By breaking the job down, I find the collections finish faster and with fewer problems, especially with sort space.
As with table statistics, options exist for setting DEGREE, GRANULARITY and ESTIMATE_PERCENT, as does
the ability to gather at the partition and sub-partition layers.
On very large databases, I recommend breaking the jobs down, manually calling GATHER_INDEX_STATS as
appropriate.
11.2.4 skewonly
Oracle will choose the number of buckets, including NOT creating a histogram, based only upon whether this
column's data is highly skewed. It will not look at the column's where clause usage.
The difference between AUTO and SKEWONLY is simple: SKEWONLY does not review sys.col_usage$ and
looks only at skewness, while AUTO investigates both skewness and usage.
12 COLLECTING DICTIONARY AND FIXED OBJECT STATS
In Oracle 9i, there was much debate on whether to gather stats on the SYS objects. Some said yes, some said no. My
personal experience was that collecting schema-level stats on SYS in 9i didn't work well and was a bad idea.
I suggest doing this once a database has been fully populated and any time a significant number of schema
objects have been created.
To see your fixed objects stats, join the v$fixed_table view to tab_stats$.
In addition, you can delete, export and import fixed objects stats from one system to another using
dbms_stats.delete_fixed_objects_stats, dbms_stats.export_fixed_objects_stats and
dbms_stats.import_fixed_objects_stats.
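A sketch of the collection and of the review join described above (a privileged account, e.g. SYSDBA or one with ANALYZE ANY DICTIONARY, is assumed):

```sql
-- Collect statistics on the fixed (X$) objects:
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

-- Review them by joining v$fixed_table to sys.tab_stats$:
SELECT t.name, s.rowcnt, s.analyzetime
FROM   v$fixed_table t, sys.tab_stats$ s
WHERE  t.object_id = s.obj#;
```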
12.2 DICTIONARY STATS - STATISTICS ON SYS, SYSTEM and OTHER ORACLE COMPONENTS
There are many schemas that ship with the database today, drsys, cmsys, mdsys, wmsys, etc.
It appears that even though the stated default for the “OPTIONS” parameter is “GATHER”, specifically
setting this parameter obtains the correct collection; leaving “OPTIONS” at its default does not.
The documentation also states that you can individually collect statistics on the other components by
specifically giving the comp_id (component id) from dba_registry in the call to gather stats. Without
specifying the “OPTIONS” parameter, this also does not work as expected.
begin
  for c1rec in (select comp_id from dba_registry) loop
    DBMS_STATS.GATHER_DICTIONARY_STATS(
      COMP_ID => c1rec.COMP_ID, OPTIONS => 'GATHER'
    );
  end loop;
end;
/
If selecting “LAST_ANALYZED” column from dba_tables shows that the date is still old, verify the
schema is listed in dba_registry. If not, try performing a dbms_stats.gather_schema_stats.
To determine if CPU Cost Modeling is active, verify that data is populated in this table.
If cpuspeed is NOT populated, but cpuspeednw IS populated, then CPU Cost modeling is turned on, but
using Oracle Defaults.
For CPU Cost Modeling to function properly, workload statistics must be captured using
dbms_stats.gather_system_stats.
13.2 SYS.AUX_STATS$
Each record in the sys.aux_stats$ table holds a value for the CPU statistics.
The values are defined below:
“
• iotfrspeed - I/O transfer speed in bytes for each millisecond
• ioseektim - seek time + latency time + operating system overhead time, in milliseconds
• sreadtim - average time to read single block (random read), in milliseconds
• mreadtim - average time to read an mbrc block at once (sequential read), in milliseconds
• cpuspeed - average number of CPU cycles for each second, in millions, captured for the
workload (statistics collected using 'INTERVAL' or 'START' and 'STOP' options)
• cpuspeednw - average number of CPU cycles for each second, in millions, captured for the
noworkload (statistics collected using 'NOWORKLOAD' option)
• mbrc - average multiblock read count for sequential read, in blocks
• maxthr - maximum I/O system throughput, in bytes/second
• slavethr - average slave I/O throughput, in bytes/second” *From 10g Manual
A single call to dbms_stats.gather_system_stats with an appropriate interval of a few hours during an average
workload is all that is required to collect statistics.
Statistics are gathered using the DBMS_STATS.GATHER_SYSTEM_STATS call using an interval period, or
manually started and stopped.
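A sketch of both approaches (the 120-minute interval is an arbitrary example):

```sql
-- Gather workload system statistics over a fixed interval (in minutes):
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 120);

-- Or bracket a representative workload manually:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'START');
-- ... run the representative workload ...
EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'STOP');
```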
CPU “System” stats should be re-collected whenever the average workload for the database shifts and whenever
there is a change to CPU’s and/or I/O hardware and patterns.
This includes most hardware upgrades, including adding HBA cards, NIC Cards, CPU’s, Disk drives, and external
RAID or SAN hardware and/or configuration that could have an impact on performance.
In 9i, CPU cost modeling statistics can be exported using dbms_stats.export_system_stats and then re-imported using
dbms_stats.import_system_stats.
In 10g, system stats can also be restored from recent past collections (as available), using
dbms_stats.restore_system_stats;
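A sketch of a 10g restore (rolling back 24 hours is an arbitrary example):

```sql
-- Restore system statistics as they were one day ago:
EXEC DBMS_STATS.RESTORE_SYSTEM_STATS(as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY);
```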
Provided you have exported system stats using dbms_stats.export_system_stats, and provided a “STATID” for that
collection, the following view can be used to view the contents of those stats.
Time and time again, I encounter customers with problems where the database has become very, very slow.
Analysis reveals that someone recollected statistics the night before and the explain plans are not what they were
yesterday.
There is no backup of yesterday's statistics. This was a major problem in 8i and 9i, but it has mostly been erased by
the default stats retention history in 10g.
Oracle provides a table to store copies of statistics; it can be created and populated easily using DBMS_STATS.
Once it is in place, a simple call to DBMS_STATS.EXPORT_xxx_STATS should be made before each collection begins.
In 10g, 31 days of statistics history is kept by default. I still use the EXPORT functions, even in 10g.
DBMS_STATS.CREATE_STAT_TABLE(ownname=>'MYSCHEMA', stattab=>'STATTAB', tblspace=>'TABLESPACE_NAME');
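A sketch of the export before a collection, and the matching import if a rollback is needed (the STATID label is an arbitrary example):

```sql
-- Save the current schema statistics into STATTAB before regathering:
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'MYSCHEMA', stattab => 'STATTAB', statid => 'PRE_GATHER');

-- If the new statistics cause plan regressions, put the saved set back:
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'MYSCHEMA', stattab => 'STATTAB', statid => 'PRE_GATHER');
```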
If the schema is different or some tables in the new database don’t exist, YOU MUST manually
manipulate the STATTAB table. To modify the schema these stats are appropriate for, update
STATTAB, setting “C5” column as appropriate.
Delete rows for columns that do not belong, or for tables that do not belong. Use the views below to view
the statistics in the STATTAB table.
To use the following views, add a where clause to select from the appropriate “STATID” that you wish to
view.
14.1.4.1 Viewing Saved Table Statistics
CREATE OR REPLACE VIEW STATTAB_TABLE_STATS
AS
SELECT
STATID,
C5 OWNER,
C1 TABLE_NAME,
C2 PARTITION_NAME,
C3 SUBPART_NAME,
N1 NUM_ROWS,
N2 NUM_BLOCKS,
N3 AVG_ROW_LEN,
N4 SAMPLE_SIZE
FROM
STATTAB
WHERE TYPE= 'T';
14.1.4.5 sys.aux_stats$
Each record in the sys.aux_stats$ table holds a value for the CPU statistics.
The values are defined below:
“
• iotfrspeed - I/O transfer speed in bytes for each millisecond
• ioseektim - seek time + latency time + operating system overhead time, in milliseconds
• sreadtim - average time to read single block (random read), in milliseconds
• mreadtim - average time to read an mbrc block at once (sequential read), in milliseconds
• cpuspeed - average number of CPU cycles for each second, in millions, captured for the
workload (statistics collected using 'INTERVAL' or 'START' and 'STOP' options)
• cpuspeednw - average number of CPU cycles for each second, in millions, captured for
the noworkload (statistics collected using 'NOWORKLOAD' option)
• mbrc - average multiblock read count for sequential read, in blocks
• maxthr - maximum I/O system throughput, in bytes/second
• slavethr - average slave I/O throughput, in bytes/second” *From 10g Manual
14.1.4.5.1 Viewing Saved CPU Statistics
CREATE OR REPLACE VIEW STATTAB_cpu_stats
AS
SELECT
CPU.STATID,
CPU.C1 STATUS,
CPU.C2 START_TIME,
CPU.C3 STOP_TIME,
CPU.N3 CPUSPEED,
CPU.N11 MBRC,
CASE CPU.C4 WHEN 'CPU_SERIO' THEN CPU.N1 END SREADTIME,
CASE CPU.C4 WHEN 'CPU_SERIO' THEN CPU.N2 END MREADTIME,
CASE PARIO.C4 WHEN 'PARIO' THEN PARIO.N1 END MAXTHR,
CASE PARIO.C4 WHEN 'PARIO' THEN PARIO.N2 END SLAVTHR
FROM
STATTAB CPU, STATTAB PARIO
WHERE CPU.TYPE= 'S'
AND CPU.C4 = 'CPU_SERIO'
AND CPU.STATID = PARIO.STATID
AND PARIO.C4 = 'PARIO';
To identify the retention period and availability of statistics, queries can be run using
Dbms_stats.GET_STATS_HISTORY_AVAILABILITY and
Dbms_stats.GET_STATS_HISTORY_RETENTION against dual.
GET_STATS_HISTORY_AVAILABILITY
---------------------------------------------------------------------------
16-DEC-07 03.34.26.921000000 PM -06:00
GET_STATS_HISTORY_RETENTION
---------------------------
31
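A sketch of the calls that produce output of this shape, plus changing the retention (the 60-day value is an arbitrary example):

```sql
-- How far back statistics can be restored, and the current retention in days:
SELECT DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY FROM dual;
SELECT DBMS_STATS.GET_STATS_HISTORY_RETENTION FROM dual;

-- Change the retention period:
EXEC DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(60);
```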
To change any of the constants that the job uses, you can set the following globals:
AUTO_SAMPLE_SIZE CONSTANT NUMBER;
DEFAULT_DEGREE CONSTANT NUMBER;
AUTO_DEGREE CONSTANT NUMBER;
AUTO_CASCADE CONSTANT BOOLEAN;
AUTO_INVALIDATE CONSTANT BOOLEAN;
This is very useful if you have performed specific collections and do not want the automatic scheduled job to modify
those collections.
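A sketch of the relevant calls (object names are hypothetical):

```sql
-- Lock statistics so the automated job cannot overwrite them:
EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'MYTABLE');

-- Unlock when you want collections to resume:
EXEC DBMS_STATS.UNLOCK_TABLE_STATS(ownname => 'MYSCHEMA', tabname => 'MYTABLE');

-- Schema-wide equivalents also exist:
EXEC DBMS_STATS.LOCK_SCHEMA_STATS(ownname => 'MYSCHEMA');
EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS(ownname => 'MYSCHEMA');
```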
17 LIMITATIONS OF DBMS_STATS
17.1 CHAINED ROWS
Periodically, an analysis for chained rows should be run on all tables. This is especially true when Statspack shows
large numbers of “table fetch continued row”. To analyze for chained rows, the older ANALYZE TABLE xxxx LIST
CHAINED ROWS INTO CHAINED_ROWS command should be run.
This report will give you a list of all rows in the table that are chained, and the ROWID’s of those rows.
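As a sketch (the schema and table names are hypothetical; the CHAINED_ROWS table is created by the utlchain.sql script shipped with the database):

```sql
-- Create the CHAINED_ROWS table first by running $ORACLE_HOME/rdbms/admin/utlchain.sql.
ANALYZE TABLE myschema.mytable LIST CHAINED ROWS INTO CHAINED_ROWS;

-- List the ROWIDs of the chained rows found:
SELECT head_rowid
FROM   chained_rows
WHERE  table_name = 'MYTABLE';
```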
It is important to fix tables with many chained rows by rebuilding those rows.
Chained row statistics DO NOT AFFECT the CBO and therefore have nothing to do with DBMS_STATS.
18 APPENDIX
18.1 A Note from Metalink on Automatic Undo Retention
Workaround:
alter system set "_smu_debug_mode" = 33554432;
This causes v$undostat.tuned_undoretention to be calculated as the maximum of:
• maxquerylen secs + 300
• undo_retention specified in init.ora
18.2 BIBLIOGRAPHY
The following documents were consulted in the preparation of this paper: