Spdoc 12 C
Release 12.1
Production
------------------------------------------------------------------------
Copyright (c) 1993, 2014, Oracle and/or its affiliates. All rights reserved.
Author: Connie Dialeris Green
Contributors: Cecilia Gervasio, Graham Wood, Russell Green, Patrick Tearle,
Harald Eri, Stefan Pommerenk, Vladimir Barriere, Kathryn Chou
Please refer to the Oracle11g server README file in the rdbms doc directory
for copyright, disclosure, restrictions, warranty, trademark, disclaimer,
and licensing information. The README file is README_RDBMS.HTM.
Oracle Corporation, 500 Oracle Parkway, Redwood City, CA 94065.
------------------------------------------------------------------------
Statistics Package (STATSPACK) README (spdoc.txt)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
TABLE OF CONTENTS
-----------------
0. Introduction and Terminology
1. Enterprise Manager (EM), Automatic Workload Repository (AWR) and Statspack
2. Statspack Configuration
2.1. Database Space Requirements
2.2. Installing the Tool
2.3. Errors during Installation
3. Gathering data - taking a snapshot
3.1. Automating Statspack Statistics Gathering
3.2. Using dbms_job
4. Running the Performance reports
4.1. Running the instance report
4.2. Running the instance report when there are multiple instances
4.3. Configuring the Instance Report
4.4. Running the SQL report
4.5. Running the SQL report when there are multiple instances
4.6. Configuring the SQL report
4.7. Gathering optimizer statistics on the PERFSTAT schema
5. Configuring the amount of data captured
5.1. Snapshot Level
5.2. Snapshot SQL thresholds
5.3. Changing the default values for Snapshot Level and SQL Thresholds
5.4. Snapshot Levels - details
5.5. Specifying a Session Id
5.6. Input Parameters for the SNAP and MODIFY_STATSPACK_PARAMETERS procedures
6. DB time and Time Units used for Performance Statistics
6.1. DB time compared to Total Call Time
6.2. Time Units used for Performance Statistics
7. Event Timings
15.2. Modifications
0. Introduction and Terminology
-------------------------------
To effectively perform reactive tuning, it is vital to have an established
baseline for later comparison when the system is running poorly. Without
a baseline data point, it becomes very difficult to identify what a new
problem is attributable to: Has the volume of transactions on the system
increased? Has the transaction profile or application changed? Has the
number of users increased?
Statspack fundamentally differs from the well known UTLBSTAT/UTLESTAT
tuning scripts by collecting more information, and also by storing the
performance statistics permanently in Oracle tables, which can later
be used for reporting and analysis. The data collected can be analyzed
using the report provided, which includes an 'instance health and load'
summary page, high resource SQL statements, as well as the traditional
wait events and initialization parameters.
Statspack improves on the existing UTLBSTAT/UTLESTAT performance scripts
in the following ways:
- Statspack collects more data, including high resource SQL
(and the optimizer execution plans for those statements)
- Statspack pre-calculates many ratios useful when performance
tuning, such as cache hit ratios, per transaction and per
second statistics (many of these ratios must be calculated
manually when using BSTAT/ESTAT)
- Permanent tables owned by PERFSTAT store performance statistics;
instead of creating/dropping tables each time, data is inserted
into the pre-existing tables. This makes historical data
comparisons easier
- Statspack separates the data collection from the report generation.
Data is collected when a 'snapshot' is taken; viewing the collected
data is in the hands of the performance engineer, who runs the
performance report on demand
- Data collection is easy to automate using either dbms_job or an
OS utility
NOTE: The term 'snapshot' is used to denote a set of statistics gathered
at a single time, identified by a unique Id which includes the
snapshot number (or snap_id). This term should not be confused
with Oracle's Snapshot Replication technology.
How does Statspack work?
Statspack is a set of SQL, PL/SQL and SQL*Plus scripts which allow the
collection, automation, storage and viewing of performance data. A user
is automatically created by the installation script - this user, PERFSTAT,
owns all objects needed by this package. This user is granted limited
query-only privileges on the V$views required for performance tuning.
For more information on using AWR, please see the Oracle 10g Server
Performance Tuning Guide. For license information regarding AWR, please
see the Oracle database Licensing Information Manual.
If you are going to use AWR instead of Statspack, and you have been using
Statspack at your site, it is recommended that you continue to capture
Statspack data for a short time (e.g. one month) after the upgrade to 10g.
This is because comparing post-upgrade Statspack data to pre-upgrade
Statspack data can make initial upgrade problems easier to diagnose.
WARNING: If you choose to continue Statspack data collection after
upgrading to 10g, and statistics_level is set to typical or
all (which enables AWR collection), it is advised to stagger
Statspack data collection so it does not coincide with AWR
data collection (AWR data collection by default runs every
hour, on the hour). Stagger data collection to avoid the
potential for any interference (e.g. offset data collection
by 30 minutes).
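For instance, the 30-minute stagger can be set up with dbms_job directly; the following is a minimal sketch (the offset and the job text are illustrative assumptions, not the shipped spauto.sql defaults):

```sql
-- Sketch: schedule statspack.snap at 30 minutes past each hour, offset
-- from AWR's default on-the-hour collection (the offset is illustrative)
variable jobno number;
begin
  dbms_job.submit(:jobno, 'statspack.snap;',
                  trunc(sysdate, 'HH24') + 1/24 + 30/1440,       -- first run at the next hh:30
                  'trunc(SYSDATE, ''HH24'') + 1/24 + 30/1440');  -- repeat hourly at hh:30
  commit;
end;
/
```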
Long term, there is typically little reason to collect data through both
AWR and Statspack. If you choose to use AWR instead of Statspack, you
should keep a representative set of baselined Statspack data for
future reference.
2. Statspack Configuration
--------------------------
2.1. Database Space Requirements
The amount of database space required by the package will vary considerably
based on the frequency of snapshots, the size of the database and instance,
and the amount of data collected (which is configurable).
It is therefore difficult to provide general storage clauses and space
utilization predictions that will be accurate at each site.
Space Requirements
------------------
The default initial and next extent sizes are 100k, 1MB, 3MB or 5MB for all
Statspack tables and indexes. To install Statspack, the minimum
space requirement is approximately 100MB. However, the amount of space
actually allocated will depend on the storage characteristics of the
tablespace Statspack is installed in (for example, if the tablespace's
minimum extent size is 10MB, the space allocated will be considerably
more than 100MB).
Using Locally Managed Tablespaces
---------------------------------
If you install the package in a locally-managed tablespace, such as
SYSAUX, modifying storage clauses is not required, as the storage
characteristics are automatically managed.
Using Dictionary Managed Tablespaces
------------------------------------
If you install the package in a dictionary-managed tablespace, Oracle
suggests you monitor the space used by the objects created, and adjust
the storage clauses of the segments, if required.
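One way to do that monitoring is a simple data dictionary query; this is a sketch, assuming the standard PERFSTAT owner:

```sql
-- Sketch: show space currently allocated to Statspack segments, by type
select segment_type,
       round(sum(bytes)/1024/1024, 1) as mb
from   dba_segments
where  owner = 'PERFSTAT'
group  by segment_type
order  by mb desc;
```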
SQL> @%ORACLE_HOME%\rdbms\admin\spcreate
The spcreate install script runs three other scripts automatically - you
do not need to run them yourself:
1. spcusr -> creates the user and grants privileges
2. spctab -> creates the tables
3. spcpkg -> creates the package
Check each of the three output files produced (spcusr.lis,
spctab.lis, spcpkg.lis) by the installation to ensure no
errors were encountered, before continuing on to the next step.
Note that there are two ways to install Statspack - interactively (as
shown above), or in 'batch' mode. Batch mode is useful when you do
not wish to be prompted for the PERFSTAT user's password or for the
default and temporary tablespaces.
Batch mode installation
~~~~~~~~~~~~~~~~~~~~~~~
To install in batch mode, you must assign values to the SQL*Plus
variables which specify the password and the default and temporary
tablespaces before running spcreate.
The variables are:
perfstat_password
-> for the password
default_tablespace -> for the default tablespace
temporary_tablespace -> for the temporary tablespace
e.g.
on Unix:
SQL> connect / as sysdba
SQL> define default_tablespace='tools'
SQL> define temporary_tablespace='temp'
SQL> define perfstat_password='erg8oiw'
SQL> @?/rdbms/admin/spcreate
SQL> undefine perfstat_password
spcreate will no longer prompt for the above information.
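On Windows, the same batch-mode installation differs only in the script path (using the %ORACLE_HOME% style shown elsewhere in this document); the tablespace names and password below are the same illustrative values as in the Unix example:

```sql
SQL> connect / as sysdba
SQL> define default_tablespace='tools'
SQL> define temporary_tablespace='temp'
SQL> define perfstat_password='erg8oiw'
SQL> @%ORACLE_HOME%\rdbms\admin\spcreate
SQL> undefine perfstat_password
```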
2.3. Errors during installation
Specifying SYSTEM tablespace
A possible error during installation is to specify the SYSTEM
tablespace for the PERFSTAT user's DEFAULT or TEMPORARY tablespace.
In such a situation, the installation will fail, stating the problem.
To install Statspack after receiving errors during the installation
To correctly install Statspack after such errors, first run the
de-install script, then the install script. Both scripts must be
run from SQL*Plus.
e.g. Start SQL*Plus, connect as a user with SYSDBA privilege, then:
SQL> @spdrop
SQL> @spcreate
3.2. Using dbms_job
To use an Oracle-automated method for collecting statistics, you can use
dbms_job. A sample script on how to do this is supplied in spauto.sql,
which schedules a snapshot every hour, on the hour.
You may wish to schedule snapshots at regular times each day to reflect your
system's OLTP and/or batch peak loads. For example, take snapshots at 9am,
10am, 11am, 12 midday and 6pm for the OLTP load, then a snapshot at
12 midnight and another at 6am for the batch window.
In order to use dbms_job to schedule snapshots, the job_queue_processes
initialization parameter must be set to a value greater than 0 for the job
to run automatically.
Example of setting the job_queue_processes parameter in an init.ora file:
# Set to enable the job queue process to start. This allows dbms_job
# to schedule automatic statistics collection using STATSPACK
job_queue_processes=1
If using spauto.sql in a Clustered database environment, the spauto.sql
script must be run once on each instance in the cluster. Similarly, the
job_queue_processes parameter must also be set for each instance.
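If the instance uses an spfile, the parameter can also be set without a restart, and the scheduled job verified afterwards; a sketch (the sid='*' clause for setting all cluster instances at once, and the LIKE filter, are illustrative):

```sql
-- Sketch: enable the job queue dynamically on all instances (spfile assumed)
alter system set job_queue_processes = 1 sid='*' scope=both;

-- Sketch: as PERFSTAT, confirm the snapshot job is scheduled on this instance
select job, what, interval, next_date
from   user_jobs
where  lower(what) like '%statspack.snap%';
```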
Changing the interval of statistics collection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To change the interval of statistics collection, use the dbms_job.interval
procedure:
e.g.
execute dbms_job.interval(1,'SYSDATE+(1/48)');
Where 'SYSDATE+(1/48)' will result in the statistics being gathered every
1/48th of a day (i.e. every 30 minutes).
To force the job to run immediately,
execute dbms_job.run(<job number>);
To remove the auto collect job,
execute dbms_job.remove(<job number>);
For more information on dbms_job, see the Supplied Packages Reference
Manual.
Note: A blank line between lines of snapshot Ids means the instance
has been restarted (shutdown/startup) between those times. This
helps identify which begin and end snapshots can be used
together when running a Statspack report (snapshots separated
by a blank line cannot be used together).
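The restart boundaries can also be seen directly in the capture table; a sketch, assuming the standard stats$snapshot table with snap_time and startup_time columns:

```sql
-- Sketch: snapshots sharing a startup_time belong to one instance lifetime
-- and can be paired as begin/end snapshots in a report
select snap_id, snap_time, startup_time
from   stats$snapshot
order  by snap_id;
```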
By default, the report shows all completed snapshots for this instance
when choosing the begin and end snapshot Id's. However, the number
of days worth of snapshots to list is now configurable: to change
this, please see 'Snapshot related report settings - num_days' in the
'Configuring the Instance Report' section of this document.
e.g. on Unix
SQL> connect perfstat/perfstat_password
SQL> @?/rdbms/admin/spreport
e.g. on Windows
SQL> connect perfstat/perfstat_password
SQL> @%ORACLE_HOME%\rdbms\admin\spreport
Example output:
SQL> connect perfstat/perfstat_password
Connected.
SQL> @spreport
Current Instance
~~~~~~~~~~~~~~~~
   DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
 2618106428 PRD1                1 prd1

Instances in this Statspack schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id    Inst Num DB Name      Instance     Host
----------- -------- ------------ ------------ ------------
 2618106428        1 PRD1         prd1         dlsun525

Using 2618106428 for database Id
Using          1 for instance number
You will be prompted for:
  The DBId
  The Instance Number
  The beginning snapshot Id
  The ending snapshot Id
  The name of the report text file to be created
Example output:
SQL> connect perfstat/perfstat_password
Connected.
SQL> @sprepins
DB Name      Instance     Host
------------ ------------ ------------
CON90        con90        dlsun525
MAIL         MAIL         mailhost

The variables are:
  dbid         -> specifies the dbid
  inst_num     -> specifies the instance number
  begin_snap   -> specifies the begin Snapshot Id
  end_snap     -> specifies the end Snapshot Id
  report_name  -> specifies the Report output name
e.g.
SQL> connect perfstat/perfstat_password
SQL> define dbid=4290976145
SQL> define inst_num=1
SQL> define begin_snap=1
SQL> define end_snap=2
SQL> define report_name=batch_run
SQL> @?/rdbms/admin/sprepins
file name before making changes to the file. Once the changes
have been made, back up the newly modified report. Because this file
will be replaced when the server is upgraded to a new release, you
will need to make the same changes to it each time the
server is upgraded.
The configuration is performed by modifying the 'Customer Configurable
Report Settings' section of the file sprepins.sql for the instance report
(and for num_days, sprsqins.sql for the SQL report).
Snapshot related report settings - num_days
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This setting controls the number of days of snapshots to list when
displaying the snapshots to choose the begin and end snapshot Ids from.
The default is to list all snapshots; however, it is now possible to
configure the number of days worth of snapshots to list.
This facility has been added for sites that have a large number of snapshots
stored in the Statspack schema, and who typically only look at the last
<n> days worth of data.
For example, setting the number of days of snapshots to list (num_days) to
31 would result in the most recent 31 days worth of snapshots being listed
when choosing the begin and end snapshot Ids.
Note: This variable is the only variable modifiable in both the instance
report (sprepins.sql) and the SQL report (sprsqins.sql).
The value of this variable is configured by changing the value assigned to
the variable num_days.
e.g.
define num_days = 60
The variable has the following valid values:
  <n>      a number of days, e.g.
           define num_days = 31
  <null>   undefined; you will be prompted for a value each time the
           report is run. Choosing this setting as your site's default
           means the instance report cannot be run in batch mode.
If num_days is set to any value other than <undefined>, you will not be
prompted to enter a value. However, if the variable is set to <undefined>,
running the instance report (or the SQL report) will result in you
being prompted for the value, as follows:
Current Instance
~~~~~~~~~~~~~~~~
   DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
 1296193444 MAINDB              1 maindb

Instances in this Statspack schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id    Inst Num DB Name      Instance     Host
----------- -------- ------------ ------------ ------------
 1296193444        1 MAINDB       maindb       main1

Using 1296193444 for database Id
Using          1 for instance number
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days: 5
Listing the last 5 days of Completed Snapshots
                                                     Snap
Instance     DB Name        Snap Id Snap Started      Level Comment
------------ ------------ -------- ----------------- ----- ----------------------
maindb       MAINDB             13 26 Sep 2002 17:01     5
                                14 27 Sep 2002 13:28     5
                                15 27 Sep 2002 13:29     5
                                16 30 Sep 2002 14:40     5
variable top_n_sql.
e.g.
define top_n_sql = 65;
SQL section report settings - num_rows_per_hash
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is the upper limit of the number of rows of SQL Text to print for
each SQL statement appearing in the SQL sections of the report. This
variable applies to each SQL statement (i.e. hash_value). The default value
is 4, which means at most 4 lines of the SQL text will be printed for
each SQL statement. To change this value, change the value of the variable
num_rows_per_hash.
e.g.
define num_rows_per_hash = 10;
SQL section report settings - top_pct_sql
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is a number which restricts the rows of SQL shown in the SQL sections
of the report. Only SQL statements which exceeded the top_pct_sql percentage
of resources used are candidates for listing in the report.
The default value is 1.0%. To change the default, change the value of the
variable top_pct_sql.
e.g.
define top_pct_sql = 0.5;
In the SQL ordered by gets section of the report, a top_pct_sql of 0.5% would
only include SQL statements which had exceeded 0.5% of the total buffer gets
in the interval.
Segment related report settings - top_n_segstat
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The number of top segments to display in each of the Segment sections of
the instance report.
The default value is 5, which means only the top 5 segments in each category
(e.g. top 5 logical reads) will be displayed. To change the default,
change the value of the variable top_n_segstat.
e.g.
define top_n_segstat = 5;
4.4. Running the SQL report
Once the instance report has been analyzed, often there are high-load SQL
statements which should be examined to determine if they are causing
unnecessary resource usage, and hence avoidable load.
The SQL report, sprepsql.sql, displays SQL-specific statistics, the
complete SQL text, and (if a level 6 snapshot has been taken) information
on any SQL Plan(s) associated with that statement.
The SQL statement to be reported on is identified by the statement's Hash
Value (which is a numerical representation of the statement's SQL text).
The Hash Value for each statement is displayed in the high-load SQL
sections of the instance report.
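If a statement of interest is still in the shared pool, its hash value can also be looked up directly; a sketch (the ORDERS text filter is purely illustrative):

```sql
-- Sketch: find the hash value of a statement currently in the shared pool
select hash_value, substr(sql_text, 1, 60) as sql_text
from   v$sql
where  sql_text like '%ORDERS%';
```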
The sprepsql.sql file is executed while being connected to the PERFSTAT
user, and is located in the rdbms/admin directory of the Oracle Home.
Note: To run sprepsql.sql in a Cluster environment, you must connect
to the instance on which the snapshots were taken.

You will be prompted for:
  The beginning snapshot Id
  The ending snapshot Id
  The Hash Value for the SQL statement
  The name of the report text file to be created
Example output:
SQL> connect perfstat/perfstat_password
Connected.
SQL> @sprepsql
   DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
 2618106428 PRD1                1 prd1

Completed Snapshots

                                                    Snap
Instance     DB Name       Snap Id Snap Started      Level Comment
------------ ------------ -------- ----------------- ----- ----------------------
prd1         PRD1               37 02 Mar 2001 11:01     6
                                38 02 Mar 2001 12:01     6
                                39 08 Mar 2001 09:01     5
                                40 08 Mar 2001 10:02     5
The variables are:
  begin_snap   -> specifies the begin Snapshot Id
  end_snap     -> specifies the end Snapshot Id
  hash_value   -> specifies the Hash Value
  report_name  -> specifies the Report output name

e.g.
SQL> connect perfstat/perfstat_password
SQL> define begin_snap=39
SQL> define end_snap=40
SQL> define hash_value=1988538571
SQL> define report_name=batch_sql_run
SQL> @sprepsql
You will be prompted for:
  The DBId
  The Instance Number
  The beginning snapshot Id
  The ending snapshot Id
  The Hash Value for the SQL statement
  The name of the report text file to be created
DB Name      Instance     Host
------------ ------------ ------------
MAINDB       maindb       main1
MAIL         MAIL         mailhost

The variables are:
  dbid         -> specifies the dbid
  inst_num     -> specifies the instance number
  begin_snap   -> specifies the begin Snapshot Id
  end_snap     -> specifies the end Snapshot Id
  hash_value   -> specifies the Hash Value
  report_name  -> specifies the Report output name

e.g.
SQL> connect perfstat/perfstat_password
SQL> define dbid=4290976145
SQL> define inst_num=1
SQL> define begin_snap=1
SQL> define end_snap=2
SQL> define hash_value=1988538571
SQL> define report_name=batch_run
SQL> @?/rdbms/admin/sprsqins
that point will use the specified values; any subsequent snapshots will
use the preexisting values in the stats$statspack_parameter table.
o Changing the defaults immediately without taking a snapshot, using the
statspack.modify_statspack_parameter procedure. For example, to change
the snapshot level to 10, and the SQL thresholds for buffer_gets and
disk_reads, the following statement can be issued:
SQL> execute statspack.modify_statspack_parameter -
       (i_snap_level=>10, i_buffer_gets_th=>10000, i_disk_reads_th=>1000);
This procedure changes the values permanently, but does not
take a snapshot.
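The stored values can then be checked; a sketch, assuming the column names in stats$statspack_parameter mirror the i_* parameter names (verify against your release):

```sql
-- Sketch: inspect the stored snapshot level and SQL thresholds
-- (column names assumed to mirror the i_* parameters)
select snap_level, executions_th, disk_reads_th, buffer_gets_th
from   stats$statspack_parameter;
```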
5.4 Snapshot Levels - details
Levels >= 0 General performance statistics
Statistics gathered:
This level and any level greater than 0 collects general
performance statistics, such as: wait statistics, system events,
system statistics, rollback segment data, row cache, SGA, background
events, session events, lock statistics, buffer pool statistics,
latch statistics, resource limit, enqueue statistics, and statistics
for each of the following, if enabled: automatic undo management,
buffer cache advisory data, auto PGA memory management, Cluster DB
statistics.
Levels >= 5 Additional data: SQL Statements
This level includes all statistics gathered in the lower level(s),
and additionally gathers the performance data on high resource
usage SQL statements.
In a level 5 snapshot (or above), note that the time required for the
snapshot to complete is dependent on the shared_pool_size and on the
number of SQL statements in the shared pool at the time the snapshot
is taken: the larger the shared pool, the longer the time taken to
complete the snapshot.
SQL 'Thresholds'
The SQL statements gathered by Statspack are those which exceed one of
six predefined threshold parameters:
- number of executions of the SQL statement            (default 100)
- number of disk reads performed by the SQL statement  (default 1,000)
- number of parse calls performed by the SQL statement (default 1,000)
- number of buffer gets performed by the SQL statement (default 10,000)
- size of sharable memory used by the SQL statement    (default 1m)
- version count for the SQL statement                  (default 20)
The values of each of these threshold parameters are used when
deciding which SQL statements to collect - if a SQL statement's
resource usage exceeds any one of the above threshold values, it
is captured during the snapshot.
The SQL threshold levels used are either those stored in the table
stats$statspack_parameter, or by the thresholds specified when
the snapshot is taken.
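A per-snapshot threshold override, which leaves the stored values untouched, might look like this (the threshold numbers are illustrative):

```sql
-- Sketch: take one snapshot with stricter thresholds than those stored
-- in stats$statspack_parameter; the stored values are unchanged
SQL> execute statspack.snap(i_snap_level=>5, -
       i_buffer_gets_th=>50000, i_disk_reads_th=>5000);
```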
Levels >= 6 Additional data: SQL Plans and SQL Plan usage
This level includes all statistics gathered in the lower level(s),
                    Range of
Parameter Name      Valid Values  Default  Meaning
------------------  ------------  -------  -----------------------------------
i_snap_level        0,5,6,7,10    5        Snapshot Level
i_ucomment          Text          <blank>  Comment to be stored with Snapshot
i_executions_th     Integer >=0   100      SQL Threshold: number of times
                                           the statement was executed
i_disk_reads_th     Integer >=0   1,000    SQL Threshold: number of disk reads
                                           the statement made
i_parse_calls_th    Integer >=0   1,000    SQL Threshold: number of parse
                                           calls the statement made
i_buffer_gets_th    Integer >=0   10,000   SQL Threshold: number of buffer
                                           gets the statement made
i_sharable_mem_th   Integer >=0   1048576  SQL Threshold: amount of sharable
                                           memory
i_version_count_th  Integer >=0   20       SQL Threshold: number of versions
                                           of a SQL statement
i_seg_phy_reads_th  Integer >=0   1,000    Segment statistic Threshold: number
                                           of physical reads on a segment
i_seg_log_reads_th  Integer >=0   10,000   Segment statistic Threshold: number
                                           of logical reads on a segment
i_seg_buff_busy_th  Integer >=0   100      Segment statistic Threshold: number
                                           of buffer busy waits for a segment
i_seg_rowlock_w_th  Integer >=0   100      Segment statistic Threshold: number
                                           of row lock waits for a segment
i_seg_itl_waits_th  Integer >=0   100      Segment statistic Threshold: number
                                           of ITL waits for a segment
i_seg_cr_bks_sd_th  Integer >=0   1,000    Segment statistic Threshold: number
                                           of Consistent Read blocks served by
                                           the instance for the segment*
i_seg_cu_bks_sd_th  Integer >=0   1,000    Segment statistic Threshold: number
                                           of CUrrent blocks served by the
                                           instance for the segment*
i_session_id        Valid sid     0 (no    Session Id of the Oracle Session
                    from          session) to capture session granular
                    v$session              statistics for
i_modify_parameter  True,False    False    Save the parameters specified for
                                           future snapshots?
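For example, a session-granular snapshot that also saves the specified settings for future snapshots might look like this (SID 33 is illustrative; i_modify_parameter is passed as text):

```sql
-- Sketch: capture session-granular statistics for session 33 and persist
-- the specified parameters for future snapshots
SQL> execute statspack.snap(i_session_id=>33, i_modify_parameter=>'true');
```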
7. Event Timings
----------------
If timings are available, the Statspack report will order wait events by time
(in the Top-5 and background and foreground Wait Events sections).
If timed_statistics is false for the instance, but a subset of users or
programs have dynamically set timed_statistics to true, the Statspack report
output may look inconsistent: some events have timings (those which the
individual programs/users waited for), and the remaining events do not.
The Top-5 section will also look unusual in this situation.
Optimally, timed_statistics should be set to true at the instance level for
ease of diagnosing performance problems.
This section describes the input parameters for the MAKE_BASELINE and
CLEAR_BASELINE procedures and functions which accept Snap Ids. The input
parameters for both MAKE and CLEAR baseline are identical. The
procedures/functions will either baseline (or clear the baseline for) the
range of snapshots between the begin and end snap Ids identified (the
default), or, if the i_snap_range parameter is FALSE, will only operate on
the two snapshots specified.
If the function is called, it will return the number of snapshots
operated on.
                   Range of
Parameter Name     Valid Values       Default  Meaning
-----------------  -----------------  -------  -------------------------------
i_begin_snap       Any valid Snap Id           Snap Id to start the baseline at
i_end_snap         Any valid Snap Id           Snap Id to end the baseline at
i_snap_range       TRUE/FALSE         TRUE     Should the range of snapshots
                                               between the begin and end snap
                                               be included?
i_dbid             Any valid DBId/    Current  Caters for RAC databases where
i_instance_number  inst number        DBId/    you may wish to baseline
                   combination in     Inst #   snapshots on one instance
                   this Statspack              which were physically taken
                   schema                      on another instance
Example 1:
To make a baseline of snaps 45 and 50, including the range of snapshots
in between, without needing to know the number of snapshots baselined,
call the MAKE_BASELINE procedure. Log into the PERFSTAT user in SQL*Plus,
and:
SQL> exec statspack.make_baseline (i_begin_snap => 45, i_end_snap => 50);
Or without specifying the parameter names:
SQL> exec statspack.make_baseline(45, 50);
Example 2:
To make a baseline of snaps 1237 and 1241 (including the range of
snapshots in between), and be informed of the number of snapshots
baselined (by calling the function), log into the PERFSTAT
user in SQL*Plus, and:
SQL> variable num_snaps number;
SQL> begin
SQL>   :num_snaps := statspack.make_baseline(1237, 1241);
SQL> end;
SQL> /
SQL> print num_snaps
Example 3:
To make a baseline of only snapshots 1237 and 1241 (excluding the
snapshots in between), log into the PERFSTAT user in SQL*Plus,
and:
SQL> exec statspack.make_baseline(1237, 1241, false);
                   Range of
Parameter Name     Valid Values      Default  Meaning
-----------------  ----------------  -------  -------------------------------
i_begin_date       Any valid date             Date to start the baseline at
i_end_date         Any valid date >           Date to end the baseline at
                   begin date
i_dbid             Any valid DBId/   Current  Caters for RAC databases where
i_instance_number  inst number       DBId/    you may wish to baseline
                   combination in    Inst #   snapshots on one instance
                   this Statspack             which were physically taken
                   schema                     on another instance
Example 1:
To make a baseline of snapshots taken between 12-Feb-2003 at 9am, and
12-Feb-2003 at 12 midday (and be informed of the number of snapshots
affected), call the MAKE_BASELINE function. Log into the PERFSTAT
user in SQL*Plus, and:
SQL> variable num_snaps number;
SQL> begin
SQL> :num_snaps := statspack.make_baseline
(to_date('12-FEB-2003 09:00','DD-MON-YYYY HH24:MI'),
to_date('12-FEB-2003 12:00','DD-MON-YYYY HH24:MI'));
SQL> end;
SQL> /
SQL> print num_snaps
Example 2:
To clear an existing baseline which covers the times 13-Dec-2002 at
11pm and 14-Dec-2002 at 2am (without wanting to know how many
snapshots were affected), log into the PERFSTAT user in SQL*Plus, and:
SQL> exec statspack.clear_baseline -
       (to_date('13-DEC-2002 23:00','DD-MON-YYYY HH24:MI'), -
        to_date('14-DEC-2002 02:00','DD-MON-YYYY HH24:MI'));
8.2. Purging/removing unnecessary data
It is possible to purge unnecessary data from the PERFSTAT schema using the
PURGE procedures/functions. Any Baselined snapshots will not be purged.
NOTE:
o It is good practice to ensure you have sufficient baselined snapshots
before purging data.
o It is recommended you export the schema as a backup before running this
script. Refer to the Oracle Database Utilities manual on using Data Pump
Export.
8.2.1. Input Parameters for the PURGE procedures and functions
which accept Snap Ids
This section describes the input parameters for the PURGE procedure and
function which accept Snap Ids. The input parameters for both procedure
and function are identical. The procedure/function will purge all
snapshots between the begin and end snap Ids identified (inclusive, which
is the default), or, if the i_snap_range parameter is FALSE, will only purge
the two snapshots specified. If i_extended_purge is TRUE, an extended purge
is also performed.
If the function is called, it will return the number of snapshots purged.
                   Range of
Parameter Name     Valid Values       Default  Meaning
-----------------  -----------------  -------  -------------------------------
i_begin_snap       Any valid Snap Id           Snap Id to start purging from
i_end_snap         Any valid Snap Id           Snap Id to end purging at
i_snap_range       TRUE/FALSE         TRUE     Should the range of snapshots
                                               between the begin and end snap
                                               be included?
i_extended_purge   TRUE/FALSE         FALSE    Determines whether unused
                                               SQL Text, SQL Plans and
                                               Segment Identifiers will be
                                               purged in addition to the
                                               normal data purged
i_dbid             Any valid DBId/    Current  Caters for RAC databases where
i_instance_number  inst number        DBId/    you may wish to purge
                   combination in     Inst #   snapshots on one instance
                   this Statspack              which were physically taken
                   schema                      on another instance
Example 1:
Purge all snapshots between the specified begin and end snap ids. Also
purge unused SQL Text, SQL Plans and Segment Identifiers, and
return the number of snapshots purged. Log into the PERFSTAT user
in SQL*Plus, and:
SQL> variable num_snaps number;
SQL> begin
SQL> :num_snaps := statspack.purge
( i_begin_snap=>1237, i_end_snap=>1241
, i_extended_purge=>TRUE);
SQL> end;
SQL> /
SQL> print num_snaps
8.2.2. Input Parameters for the PURGE procedures and functions
which accept Begin Date and End Date
This section describes the input parameters for the PURGE procedure and
function which accept a begin date and an end date. The procedure/
function will purge all snapshots taken between the specified begin and
end dates. The input parameters for both procedure and function are
identical. If i_extended_purge is TRUE, an extended purge is also performed.
If the function is called, it will return the number of snapshots purged.
                   Range of
Parameter Name     Valid Values      Default  Meaning
-----------------  ----------------  -------  -------------------------------
i_begin_date       Any valid date             Date to start purging from
i_end_date         Any valid date >           Date to end purging at
                   begin date
i_extended_purge   TRUE/FALSE        FALSE    Determines whether unused
                                              SQL Text, SQL Plans and
                                              Segment Identifiers will be
                                              purged in addition to the
                                              normal data purged
i_dbid             Any valid DBId/   Current  Caters for RAC databases where
i_instance_number  inst number       DBId/    you may wish to purge
                   combination in    Inst #   snapshots on one instance
                   this Statspack             which were physically taken
                   schema                     on another instance
Example 1:
Purge all snapshots which fall between 01-Jan-2003 and 02-Jan-2003.
Also perform an extended purge. Log into the PERFSTAT user in
SQL*Plus, and:
SQL> exec statspack.purge -
       (i_begin_date     => to_date('01-JAN-2003', 'DD-MON-YYYY'), -
        i_end_date       => to_date('02-JAN-2003', 'DD-MON-YYYY'), -
        i_extended_purge => TRUE);
8.2.3. Input Parameters for the PURGE procedure and function
which accept a single Purge Before Date
This section describes the input parameters for the PURGE procedure and
function which accept a single date. The procedure/function will purge
all snapshots older than the date specified. If i_extended_purge is TRUE,
also perform an extended purge. The input parameters for both
procedure and function are identical.
If the function is called, it will return the number of snapshots purged.
                     Range of
Parameter Name       Valid Values      Default  Meaning
-------------------  ----------------  -------  -------------------------------
i_purge_before_date  Date                       Snapshots older than this date
                                                will be purged
i_extended_purge     TRUE/FALSE        FALSE    Determines whether unused
                                                SQL Text, SQL Plans and
                                                Segment Identifiers will be
                                                purged in addition to the
                                                normal data purged
i_dbid               Any valid DBId/   Current  Caters for RAC databases where
i_instance_number    inst number       DBId/    you may wish to purge
                     combination in    Inst #   snapshots on one instance
                     this Statspack             which were physically taken
                     schema                     on another instance
Example 1:
To purge data older than a specified date, without wanting to know the
number of snapshots purged, log into the PERFSTAT user in SQL*Plus,
and:
SQL> exec statspack.purge(to_date('31-OCT-2002','DD-MON-YYYY'));
8.2.4. Input Parameters for the PURGE procedure and function
which accept the Number of Days of data to keep
This section describes the input parameters for the PURGE procedure and
function which accept the number of days of snapshots to keep. All data
older than the specified number of days will be purged. The input
parameters for both procedure and function are identical. If
i_extended_purge is TRUE, also perform an extended purge.
If the function is called, it will return the number of snapshots purged.
Parameter Name      Range of          Default  Meaning
                    Valid Values      Value
------------------- ----------------- -------  ------------------------------
i_num_days          Number > 0                 Snapshots older than this
                                               number of days will be purged
i_extended_purge    TRUE/FALSE        FALSE    Determines whether unused
                                               SQL Text, SQL Plans and
                                               Segment Identifiers will be
                                               purged in addition to the
                                               normal data purged
i_dbid              Any valid DBId/   Current  Caters for RAC databases
i_instance_number   inst number       DBId/    where you may wish to purge
                    combination       Inst #   snapshots on one instance
                    in this                    which were physically taken
                    Statspack                  on another instance
                    schema
Example 1:
To purge data older than 31 days, without wanting to know the number
of snapshots operated on, log into the PERFSTAT user in SQL*Plus, and:
SQL> exec statspack.purge(31);
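To also perform an extended purge while keeping 31 days of data, a sketch
using the named-parameter form documented above:

SQL> exec statspack.purge(i_num_days=>31, i_extended_purge=>TRUE);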
8.2.5. Using sppurge.sql
When sppurge is run, the instance currently connected to, and the
available snapshots are displayed. The DBA is then prompted for the
low Snap Id and high Snap Id. All snapshots which fall within this
range will be purged.
WARNING: sppurge.sql has been modified to use the new Purge functionality
in the STATSPACK package, therefore it is no longer possible to
rollback a requested purge operation - the purge is automatically
committed.
e.g. Purging data - connect to PERFSTAT using SQL*Plus, then run the
sppurge.sql script - sample example output appears below.
SQL> connect perfstat/perfstat_password
SQL> set transaction use rollback segment rbig;
SQL> @sppurge
Database Instance currently connected to
========================================

                                 Instance
   DB Id    DB Name  Inst Num Name
----------- -------- -------- ----------
  720559826 PERF            1 perf

                      Base- Snap
Snapshot Started      line? Level Host            Comment
--------------------- ----- ----- --------------- --------------------
30 Feb 2000 10:00:01          6   perfhost
30 Feb 2000 12:00:06    Y     6   perfhost
01 Mar 2000 02:00:01    Y     6   perfhost
01 Mar 2000 06:00:01          6   perfhost
WARNING
~~~~~~~
sppurge.sql deletes all snapshots ranging between the lower and
upper bound Snapshot Id's specified, for the database instance
you are connected to. Snapshots identified as Baseline snapshots
which lie within the snapshot range will not be purged.
It is NOT possible to rollback changes once the purge begins.
You may wish to export this data before continuing.
Specify the Lo Snap Id and Hi Snap Id range to purge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for losnapid: 1
Using 1 for lower bound.
Enter value for hisnapid: 2
Using 2 for upper bound.
Deleting snapshots 1 - 2
Purge of specified Snapshot range complete.
SQL> -- end of example output
Batch mode purging
------------------
To purge in batch mode, you must assign values to the SQL*Plus
variables which specify the low and high snapshot Ids to purge.
The variables are:
  losnapid -> Begin Snapshot Id
  hisnapid -> End Snapshot Id
e.g.
SQL> connect perfstat/perfstat_password
SQL> define losnapid=1
SQL> define hisnapid=2
SQL> @sppurge
NOTE:
It is recommended you export the schema as a backup before running this
script. Refer to the Oracle Database Utilities manual to use Data Pump Export
to export the schema.
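For example, a Data Pump export of the PERFSTAT schema could be taken as
follows (the dump file and log file names are illustrative):

% expdp perfstat/perfstat_password schemas=PERFSTAT dumpfile=perfstat_bkup.dmp logfile=perfstat_bkup.log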
8.3. Truncating all data
If you want to remove all of the performance data (without removing the
Statspack schema itself), use sptrunc.sql. If you run sptrunc.sql in error,
the script allows you to exit before beginning the truncate operation (you
do this at the 'begin_or_exit' prompt by typing in 'exit').
To truncate all data, connect to the PERFSTAT user using SQL*Plus,
and run the script - sample output which truncates data is below:
SQL> connect perfstat/perfstat_password
SQL> @sptrunc
Warning
~~~~~~~
Running sptrunc.sql removes ALL data from Statspack tables. You may
wish to export the data before continuing.
About to Truncate Statspack Tables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you would like to exit WITHOUT truncating the tables, enter any text at the
begin_or_exit prompt (e.g. 'exit'), otherwise if you would like to begin
the truncate operation, press <return>
Enter value for begin_or_exit:
Entered at the 'begin_or_exit' prompt
... Starting truncate operation
Table truncated.
Table truncated.
<etc...>
Commit complete.
Package altered.
... Truncate operation complete
8.4. Sharing data via export
If you wish to share data with other sites (for example if Oracle
Support requires the raw statistics), it is possible to export
the PERFSTAT user.
You can use Data Pump Export to export the perfstat schema.
For example, to export using Data Pump:
% expdp perfstat/perfstat_password schemas=PERFSTAT dumpfile=STATSPACK.dmp logfile=expSTATSPACK.log
This will create a dump file called STATSPACK.dmp and a log file called
expSTATSPACK.log.
If you wish to load the data into another database, use Data Pump
Import. For information on using Data Pump Export and Import, please
see the Oracle Database Utilities manual.
9. New and Changed Features
---------------------------
9.1. Changes between 11.1 and 12.1
o Idle Events
- Added Idle Events that span LogMiner, PQ, SQL*Net, Capture Reply
o Multitenant Database Support
- Added support for Statspack installation and reporting at the Pluggable
Database (PDB) level. However, some data sources in the report are for
the entire instance and may not be restricted to the PDB.
- Statspack installation and reporting is not supported at the root level
(CDB$ROOT)
9.2. Changes between 10.2 and 11.1
Changes on the Summary Page of the Instance Report
o Host
- Platform name has been added to the Host information.
- The number of CPU cores and sockets is displayed, where available.
- Physical Memory is now shown in GB rather than MB.
o Snapshot information
- The DB time and DB CPU in seconds, is now printed close to the
snapshot Elapsed time.
o Load Profile
- DB time and DB CPU have been added to Load Profile. Units are
  Per Second, Per Transaction, Per Execute and Per Call.
  The addition of this normalized data assists when examining
  two reports to see whether the load is comparable.
- The number of 'Sorts' has been replaced with 'W/A MB processed'.
Displaying workarea statistics more accurately reflects not
only sorts, but also other workarea operations such as hash
joins. Additionally, using MB processed rather than the number
of workarea operations indicates the quantity of workarea work
performed.
- The following statistics have been removed from the front page,
as they are no longer considered as important as they once were:
% Blocks changed per Read
Recursive Call %
Rollback per transaction %
Rows per Sort
o Instance Efficiency
- This section has been renamed from 'Instance Efficiency Percentages'
to 'Instance Efficiency Indicators', as this more accurately
represents that these values are simply indicators of possible
areas to consider, rather than conclusive evidence.
- 'In-memory Sort %' has been replaced with 'Optimal W/A Exec %'
as the old statistic only showed sorts, and not all workarea
operations.
Modified sections of the Instance Report
o Wait Events and Background Wait Events
  The % of Total Call Time has been added to these sections. Rows which
  have null for the % of Call Time are Idle Events.
o SQL sections
  Each SQL section now has a new line in its title which identifies how
  much of the total load can be accounted for in the high-load SQL
  captured.
  e.g. In the title for the SQL ordered by Gets section of the report,
  a line similar to the following will appear
  -> Captured SQL accounts for 74.8% of Total Buffer Gets
  This identifies that 74.8% of the total Buffer gets incurred during
  the interval is attributable to the high-load SQL captured by Statspack
  (Note that not all captured statements are displayed in the report, only
  those which are the highest load).
o New SQL report 'SQL ordered by Cluster Wait Time'
There is a new SQL report added to the SQL reports section. This report
lists the top-SQL ordered by Cluster Wait Time. This report may be useful
in Real Application Cluster databases.
Derived Statistics
There is one new statistic in the Instance Activity Sections which
does not come from V$SYSSTAT: 'log switches (derived)'.
This statistic is derived from the v$thread view which Statspack now
captures. This statistic is shown in a new Instance Activity Stats section
of the instance report, as described below.
Two new Instance Activity Stats sections
There are two new Instance Activity Stats sections in the instance report.
The first shows the begin and end absolute values of statistics which
should not be diffed (typically performing a diff is incorrect, because
the statistics show current values, rather than cumulative values).
These statistics come from v$sysstat (as do the other Instance Activity
statistics).
Instance Activity Stats  DB/Inst: MAINDB/maindb  Snaps: 22-23
-> Statistics with absolute values (should not be diffed)
-> Statistics identified by '(derived)' come from sources other than SYSSTAT

Statistic                             Begin Value       End Value
--------------------------------- --------------- ---------------
logons current                                 10              10
opened cursors current                         41              49
session cursor cache count                     24              36
The second shows the number of log switches, which is derived from the
v$thread view.
Instance Activity Stats  DB/Inst: MAINDB/maindb  Snaps: 22-23

Statistic                                      Total     per Hour
--------------------------------- ------------------ -----------
log switches (derived)                             0         .00
New Scripts
o sprsqins.sql - Reports on a single SQL statement (i.e. hash_value),
including the SQL statistics for the snapshot, the
complete SQL text and optimizer execution plan information.
This report differs from sprepsql.sql, in that it does not
assume the instance to report on is the instance you are
currently connected to, which is useful when there are
multiple instances.
Cluster Features
o Real Application Clusters Statistics page (page 2 of a clustered
database report) has been modified to add new ratios and remove ratios
considered less useful.
o The Global Enqueue Statistics section, previously on page 3 of a RAC
instance report, has been moved to behind the Library Cache Activity
statistics.
o Statistics for CR and CURRENT blocks served, and for INSTANCE CACHE
TRANSFER, have been added after Global Enqueue Statistics page.
o New SQL report 'SQL ordered by Cluster Wait Time' has been added.
9.4. Changes between 9.0 and 9.2
Changes on the Summary Page of the Instance Report (spreport.sql)
o The Top 5 Wait Events has been changed to be the Top 5 Timed Events.
What was previously the Top 5 Wait Events has been expanded to give the
Top 5 timed events within the instance: i.e. in addition to including
Wait events, this section can now include the CPU time as reported in the
'CPU used by this session' statistic. This statistic will appear in the
Top 5 only if its value is one of the Top 5 users of time for the
snapshot interval.
Note that the name of the statistic 'CPU used by this session' will
actually appear in the Top 5 section as 'CPU Time'. The statistic
name is masked in the Top 5 to avoid the confusion of the suffix
'by this session'.
The statistic will continue to appear in the System Statistics
(SYSSTAT) section of the report as 'CPU used by this session'.
Additionally, instead of the percentage calculation being the % Total
Wait Time (which is time for each wait event divided by the total wait
time), the percentage calculation is now the % Total Call Time.
Call Time is the total time spent in database calls (i.e. the total
non-idle time spent within the database either on the CPU, or actively
waiting).
We compute 'Call Time' by adding the time spent on the CPU ('CPU used by
this session' statistic) to the time used by all non-idle wait events.
i.e.
total call time = total CPU time + total wait time for non-idle events
The % Total Call Time shown in the 'Top 5' heading on the summary page
of the report, is the time for each timed event divided by the total call
time (i.e. non-idle time).
i.e.
previously the calculation was:
time for each wait event / total wait time for all events
now the calculation is:
time for each timed event / total call time
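As a worked example (the numbers are illustrative): if, during the snapshot
interval, 'CPU used by this session' accounts for 600 seconds and non-idle
wait events account for 400 seconds, then:

   total call time = 600 + 400 = 1000 seconds

A wait event which consumed 200 seconds would previously have been reported
as 50% of Total Wait Time (200/400), but is now reported as 20% of Total
Call Time (200/1000).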
Purpose
~~~~~~~
The purpose for including CPU time with wait events:
When tuning a system, the first step is to identify where the most of the
time is spent, in order to identify where the most productive tuning
effort should be concentrated.
The majority of time could be spent in waiting for events to complete
(and so be identifiable in the wait event data), or the system could be
consuming much CPU (for which Operating System statistics, and the Oracle
CPU statistic 'CPU used by this session' in SYSSTAT are examined).
Having the CPU Time co-located with the wait events in the Top 5 section
of the instance report makes it easier to compare the relative values
and to identify whether the most productive investigation would occur
by drilling down the wait events, or in reducing Oracle CPU usage
(e.g. by tuning SQL).
Changes on the Top SQL sections of the Report (spreport.sql)
o When specified by the application, the MODULE information is reported
just before the SQL statement itself.
This information is preceded by the mention "Module: "
New columns added to
- stats$db_cache_advice
size_factor: compares the estimated cache size with the current cache size
- stats$sql_plan
search_columns: the number of index columns with matching predicates.
access_predicates: predicates used to locate rows in an access structure.
For example, start and/or stop predicates for an index range scan.
filter_predicates: predicates used to filter rows before producing them.
- stats$sql_summary
child_latch: the library cache child latch number which protects this
SQL statement (join to v$latch_children.child#). A parent SQL
statement, and all its children are protected by the same library
cache child latch.
fetches: the number of fetches performed for this SQL statement
New Scripts
o spup90.sql - Upgrades a 9.0 Statspack schema to the 9.2 format
New Data captured/reported on - Level 1
- Shared Pool Advisory
- PGA statistics including PGA Advisory, PGA Histogram usage
New Data captured/reported on - Level 7
- Segment level Statistics
Cluster Features
o Real Application Clusters Statistics page (page 2 of a clustered database
report) has been significantly modified to add new ratios and remove
ratios deemed less useful.
o RAC specific segment level statistics are captured with level 7
SQL Plan Usage capture changed
o The logic for capturing SQL Plan Usage data (level 6) has been modified
significantly. Instead of capturing a Plan's Usage once the first time
the plan is used and never again thereafter, the algorithm now captures
the plans used each snapshot. This allows tracking whether multiple
plans are in use concurrently, or whether a plan has reverted back to
an older plan.
Note that plan usage data is only captured for high-load SQL (this is
unchanged between 9.0 and 9.2).
Due to the significant change in data capture, it is not possible to
convert existing data. Instead, any pre-existing data will be
archived into the table STATS$SQL_PLAN_USAGE_90 (this allows querying
the archived data, should this be necessary).
sprepsql.sql
o 'All Optimizer Plan(s) for this Hash Value' change:
Instead of showing the first time a plan was seen for a specific hash
value, this section now shows each time the Optimizer Plan
changed since the SQL statement was first seen e.g. if the SQL statement
had the following plan changes:
snap ids      plan hash value
----------    ---------------
  1 -> 12         AAAAAAA
 13 -> 134        BBBBBBB
145 -> 299        CCCCCCC
300 -> 410        AAAAAAA
Then this section of the report will now show:
snap id       plan hash value
----------    ---------------
    1             AAAAAAA
   13             BBBBBBB
  145             CCCCCCC
  300             AAAAAAA
Previously, only the rows with snap_id's 1, 13 and 145 would have been
displayed, as these were the first snap Ids these plans were found in.
However this data could not show that plan AAAAAAA was found again in
snap_id 300.
The new output format makes it easier to see when an older plan is again
in use. This is possible due to the change in the SQL Plan Usage
capture (described above).
9.5. Changes between 8.1.7 and 9.0
Timing data
o columns with cumulative times are now displayed in seconds.
Changes on the Summary Page
o All cache sizes are now reported in M or K
New Statistics on the Summary page
o open cursors per session values for the begin and end snapshot
o comments specified when taking a snapshot are displayed for the
begin and end snapshots
Latches
o The Latch Activity, Child and Parent Latch sections have the following
additional column:
- wait_time: cumulative time spent waiting for the latch
New Scripts
o sppurge.sql - Purges a range of Snapshot Ids
o sptrunc.sql - Deletes all data
o spup816.sql - Upgrades an 8.1.6 Statspack to the 8.1.7 schema
File Rename
o The Statspack files have been renamed, with all files now beginning
with the prefix sp.
The new and old file names are given below. For more information on
the purpose of each file, please see the Supplied Scripts Overview
section.
New Name        Old Name
------------    -------------
spdoc.txt       statspack.doc
spcreate.sql    statscre.sql
spreport.sql    statsrep.sql
spauto.sql      statsauto.sql
spuexp.par      statsuexp.par
sppurge.sql     - new file -
sptrunc.sql     - new file -
spup816.sql     - new file -
spdrop.sql      statsdrp.sql
spcpkg.sql      statspack.sql
spctab.sql      statsctab.sql
spcusr.sql      statscusr.sql
spdtab.sql      statsdtab.sql
spdusr.sql      statsdusr.sql
o The default Statspack report output file name prefix has been modified
to sp_ (was st_) to be consistent with the new script names.
Backups
~~~~~~~
Note: There is no downgrade script. Backup the PERFSTAT schema using
export BEFORE attempting the upgrade, in case the upgrade fails.
The only method of downgrading, or re-running the upgrade is to
de-install Statspack, and import a previously made export.
Before running the upgrade script, export the Statspack schema (for a
backup), then disable any scripts which use Statspack, as these will
interfere with the upgrade. For example, if you use a dbms_job to
gather statistics, disable this job for the duration of the upgrade.
Data Volumes
~~~~~~~~~~~~
If there is a large volume of data in the Statspack schema (i.e. a large
number of snapshots), to avoid a long upgrade time or avoid an unsuccessful
upgrade:
- ensure there is enough free space in PERFSTAT's default tablespace
before starting the upgrade (each individual upgrade section will
describe how to estimate the required disk space)
- if you do not use Automatic Undo Management, ensure you specify a large
rollback segment, if prompted
- if you do not use Automatic Memory Management, ensure you specify a large
sort_area_size (e.g. 1048576), if prompted
script (spup90.sql) will fail with errors; this is expected, as the schema
is in a partially upgraded state, and will not be fully upgraded to 10.1 until
spup92.sql is also run.
The final package compilation which is run as a part of the last upgrade
script (in this case spup92.sql), must complete successfully.
Note: The above example is not specific for the 9.0 to 10.1 upgrade,
it applies equally when upgrading Statspack through multiple
releases, no matter which releases.
10.2.1. Upgrading the Statspack schema from 11.2 to 12.1
Data Compatibility - Changed statistic and event names
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A number of statistic and event names were changed in the 10g release. The
old data in the Statspack schema has not been modified to reflect the new
names, so when comparing a Statspack report on a pre-10g system, be aware
the statistic names and event names may have changed.
Data Compatibility - Changing SQL Hash Value, and new SQL Id
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The computed value of the Hash Value column in the V$SQL family of tables
(v$sql, v$sqlarea, v$sqltext etc) has changed in release 10g. This means the
same SQL statement will have a different hash_value in 10g than in prior
releases. This change has been made as a consequence of introducing the
new SQL Id column. SQL Id can be considered a 'more unique' hash_value.
The new SQL Id has been introduced to further reduce the probability of a
'hash collision' where two distinct SQL statements hash to the same
hash_number.
Statspack captures SQL Id, but does not use it as the unique identifier.
Instead, Statspack continues to use the hash_value and first 31 bytes of the
SQL text to uniquely identify a SQL statement (AWR uses SQL Id).
Then, to estimate whether you have sufficient free space to run this
upgrade, execute the following SQL statement while connected as PERFSTAT in
SQL*Plus:
select 10 + (2*sum(bytes)/1024/1024) est_space_mb
from dba_segments
where segment_name in ('STATS$ENQUEUESTAT');
The est_space_mb column will give you a guesstimate as to the required
free space, in megabytes.
To upgrade:
- ensure you have sufficient free space in the tablespace
- disable any programs which use Statspack
- backup the Statspack schema (e.g. using export)
- run the upgrade by connecting as a user with SYSDBA privilege:
SQL> connect / as sysdba
SQL> @spup817
Once the upgrade script completes, check the log files (spup817a.lis and
spup817b.lis) for errors. If errors are evident, determine and rectify
the cause before proceeding. If no errors are evident, and you are upgrading
to 9.2, you may proceed with the upgrade.
Data Compatibility
~~~~~~~~~~~~~~~~~~
Prior to release 9.0, the STATS$ENQUEUESTAT table gathered data based on
an X$ table, rather than a V$ view. In 9.0, the column data within the
underlying X$ table has been considerably improved, and the data
externalised via the V$ENQUEUE_STAT view.
The Statspack upgrade script spup817.sql migrates the data captured from
prior releases into the new format, in order to avoid losing historical data.
Note however, that the column names and data contained within the columns
has changed considerably between the two releases: the STATS$ENQUEUE_STAT
columns in 9.0 capture different data to the columns which existed in the
STATS$ENQUEUESTAT table in the 8.1 Statspack releases.
The column data migration performed by spup817.sql is as follows:
8.1 STATS$ENQUEUESTAT    9.0 STATS$ENQUEUE_STAT
---------------------    ----------------------
GETS                     TOTAL_REQ#
WAITS                    TOTAL_WAIT#
To deinstall the package, connect as a user with SYSDBA privilege and run
the following script from SQL*Plus: spdrop
e.g.
SQL> connect / as sysdba
SQL> @spdrop
This script actually calls two other scripts:
1. spdtab -> Drops tables and public synonyms
2. spdusr -> Drops the user
Check each of the two output files produced (spdtab.lis, spdusr.lis)
to ensure the package was completely deinstalled.
spup90.sql
spup817.sql
spup816.sql