19c Real-Time and High Frequency Statistics Collection
REAL-TIME STATISTICS & HIGH-FREQUENCY STATISTICS COLLECTION
DAVID KURTZ
ACCENTURE ENKITEC GROUP
UKOUG TECHFEST2019
Oracle Documentation:
• 10.3.3.3 19c SQL Tuning Guide, Part V Optimizer Statistics, Optimizer Statistics Concepts,
• How the Database Gathers Optimizer Statistics, On-line Statistics Gathering, Real-Time
Statistics
"Online statistics … aim to reduce the possibility of the optimizer being misled by stale
statistics.
• 12c introduced online statistics gathering for CREATE TABLE AS SELECT statements
and direct-path inserts.
• 19c introduces real-time statistics, which extend online support to conventional DML
statements.
• Because statistics can go stale between DBMS_STATS jobs, real-time statistics help the
optimizer generate more optimal plans.
• Real-time statistics augment rather than replace traditional statistics…must continue to
gather statistics regularly using DBMS_STATS"
Tracks inserts, updates and deletes, as well as whether the table has been truncated, in
memory.
• Introduced in 10g
• Enabled by default since 11g
Used to determine whether statistics are stale
• Drives statistics collection during the maintenance window
Periodically persisted to disk
• by SMON, approximately every 15 minutes
• can be flushed out manually with dbms_stats.flush_database_monitoring_info
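The flushed monitoring information can be inspected directly. A minimal sketch (the view and columns are standard; the output depends on your recent DML):

```sql
exec dbms_stats.flush_database_monitoring_info

select table_name, inserts, updates, deletes, truncated, timestamp
from   dba_tab_modifications
where  table_owner = 'OE';
```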
This will
• reveal the behaviour of real-time statistics
• and suggest some limitations and risks
connect / as sysdba
alter system flush shared_pool;
connect oe/oe
-- A2 starts in the range 1-32, and later 1-105, but is highly skewed
drop table t purge;
create table t
(a number NOT NULL
,a2 number NOT NULL
,b varchar2(2000)
,CONSTRAINT t_pk PRIMARY KEY (a));
insert into t
with n as (select rownum n from dual connect by level <= 1000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
commit;
insert into t
with n as (select 1000+rownum n from dual connect by level <= 10000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
commit;
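The skew in A2 follows directly from the data: value k is assigned to every n in ((k-1)², k²], so it occurs roughly 2k-1 times. A quick check in the same style as the inserts:

```sql
with n as (select rownum n from dual connect by level <= 1000)
select ceil(sqrt(n)) a2, count(*) occurrences
from   n
group  by ceil(sqrt(n))
order  by 1;
```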
connect / as sysdba
exec dbms_stats.flush_database_monitoring_info;
connect oe/oe
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | E-Rows |E-Bytes| Cost (%CPU)| E-Time | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 2 (100)| | | | |
| 1 | LOAD TABLE CONVENTIONAL | T | | | | | | | |
| 2 | OPTIMIZER STATISTICS GATHERING | | 1 | 13 | 2 (0)| 00:00:01 | | | |
| 3 | VIEW | | 1 | 13 | 2 (0)| 00:00:01 | | | |
| 4 | COUNT | | | | | | | | |
| 5 | CONNECT BY WITHOUT FILTERING| | | | | | 2048 | 2048 | 2048 (0)|
| 6 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 | | | |
---------------------------------------------------------------------------------------------------------------------
no rows selected
• If you don't have normal statistics, you won't get real-time statistics.
create table t
(a number NOT NULL
,a2 number NOT NULL
,b varchar2(2000)
,CONSTRAINT t_pk PRIMARY KEY (a));
insert into t
with n as (select 1000+rownum n from dual connect by level <= 10000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
Table Num
Name Rows BLOCKS NOTES
----- -------- ---------- -------------------------
T 1000 5
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER 1000 C102 1 C20B 1000
A2 NUMBER 32 C102 1 C121 32
B VARCHAR2 1000 6569676874 eight 74776F2068756E647265642074776F two hundred two
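The LOW_VALUE and HIGH_VALUE columns hold Oracle's internal binary representation (C102 is 1, C20B is 1000). They can be decoded with dbms_stats.convert_raw_value; a sketch for a NUMBER column:

```sql
set serveroutput on
declare
  l_value number;
begin
  -- convert_raw_value is overloaded per datatype; this is the NUMBER variant
  dbms_stats.convert_raw_value(hextoraw('C20B'), l_value);
  dbms_output.put_line('C20B = '||l_value);  -- 1000
end;
/
```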
------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers | Reads |
------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 3 (100)| | 1 |00:00:00.01 | 79 | 5 |
| 1 | SORT AGGREGATE | | 1 | 1 | 13 | | | 1 |00:00:00.01 | 79 | 5 |
|* 2 | TABLE ACCESS FULL| T | 1 | 425 | 5525 | 3 (0)| 00:00:01 | 1764 |00:00:00.01 | 79 | 5 |
------------------------------------------------------------------------------------------------------------------------------
2 - filter("A2"<=42)
Note
-----
- dynamic statistics used: statistics for conventional DML
exec dbms_stats.flush_database_monitoring_info;
Table Num
Name Rows BLOCKS NOTES
----- -------- ---------- -------------------------
T 1000 5
T 11000 76 STATS_ON_CONVENTIONAL_DML
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER 1000 C102 1 C20B 1000
A2 NUMBER 32 C102 1 C121 32
B VARCHAR2 1000 6569676874 eight 74776F2068756E647265642074776F two hundred two
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 22 (100)| | 1 |00:00:00.01 | 78 |
| 1 | SORT AGGREGATE | | 1 | 1 | 13 | | | 1 |00:00:00.01 | 78 |
|* 2 | TABLE ACCESS FULL| T | 1 | 4680 | 60840 | 22 (0)| 00:00:01 | 1764 |00:00:00.01 | 78 |
---------------------------------------------------------------------------------------------------------------------
…
Note
-----
- dynamic statistics used: statistics for conventional DML
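The STATS_ON_CONVENTIONAL_DML rows in these listings come from the NOTES column of the statistics views. A sketch of the kind of query behind the listings (column formatting simplified):

```sql
select table_name, num_rows, blocks, notes
from   user_tab_statistics
where  table_name = 'T';

select column_name, num_distinct, low_value, high_value, notes
from   user_tab_col_statistics
where  table_name = 'T';
```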
• Real-time column statistics really are real-time because they are immediately visible to
the optimizer
• Real-time table statistics do not become visible until the table monitoring information
has been flushed.
• Creation of real-time statistics does not invalidate cursors.
• Assuming the application has to parse new SQL statements, there are several opportunities for
execution plans to change as the state of the statistics changes:
• Stale statistics – underestimate rows
• Real-time column statistics – worse underestimate
• Real-time table statistics – overestimate
create table t
(a number NOT NULL
,a2 number NOT NULL
,b varchar2(2000)
,CONSTRAINT t_pk PRIMARY KEY (a));
insert into t
with n as (select rownum n from dual connect by level <= 1000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
commit;
exec dbms_stats.gather_table_stats('OE','T' -
,method_opt=>'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS A2 SIZE 254');
insert into t
with n as (select 1000+rownum n from dual connect by level <= 10000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
commit;
Table Num
Name Rows BLOCKS NOTES
----- -------- ---------- -------------------------
T 1000 5
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER 1000 C102 1 C20B 1000
A2 NUMBER 32 C102 1 C121 32
B VARCHAR2 1000 6569676874 eight 74776F2068756E647265642074776F two hundred two
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 3 (100)| | 1 |00:00:00.01 | 73 |
| 1 | SORT AGGREGATE | | 1 | 1 | 13 | | | 1 |00:00:00.01 | 73 |
|* 2 | TABLE ACCESS FULL| T | 1 | 1000 | 13000 | 3 (0)| 00:00:01 | 1764 |00:00:00.01 | 73 |
---------------------------------------------------------------------------------------------------------------------
…
Note
-----
- dynamic statistics used: statistics for conventional DML
Table Num
Name Rows BLOCKS NOTES
----- -------- ---------- -------------------------
T 1000 5
T 11000 71 STATS_ON_CONVENTIONAL_DML
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER 1000 C102 1 C20B 1000
A2 NUMBER 32 C102 1 C121 32
B VARCHAR2 1000 6569676874 eight 74776F2068756E647265642074776F two hundred two
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 22 (100)| | 1 |00:00:00.01 | 79 |
| 1 | SORT AGGREGATE | | 1 | 1 | 13 | | | 1 |00:00:00.01 | 79 |
|* 2 | TABLE ACCESS FULL| T | 1 | 11000 | 139K| 22 (0)| 00:00:01 | 1764 |00:00:00.01 | 79 |
---------------------------------------------------------------------------------------------------------------------…
Note
-----
- dynamic statistics used: statistics for conventional DML
create table t
(a number NOT NULL
,a2 number NOT NULL
,b varchar2(2000)
,CONSTRAINT t_pk PRIMARY KEY (a));
insert into t
with n as (select 1000+rownum n from dual connect by level <= 10000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
commit;
exec dbms_stats.flush_database_monitoring_info;
exec dbms_stats.gather_table_stats('OE','T' -
,method_opt=>'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS A2 SIZE 254');
T B 11000 6569676874 eight 74776F2074686F7573616E642074776F2068756E647265642074776F two thousand two hundred two NONE
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 22 (100)| | 1 |00:00:00.01 | 78 |
| 1 | SORT AGGREGATE | | 1 | 1 | 4 | | | 1 |00:00:00.01 | 78 |
|* 2 | TABLE ACCESS FULL| T | 1 | 1765 | 7060 | 22 (0)| 00:00:01 | 1764 |00:00:00.01 | 78 |
---------------------------------------------------------------------------------------------------------------------
create table t
(a number NOT NULL
,a2 number NOT NULL
,b varchar2(2000)
,CONSTRAINT t_pk PRIMARY KEY (a));
exec dbms_stats.flush_database_monitoring_info;
Table Num
Name Rows BLOCKS NOTES
----- -------- ---------- -------------------------
T 1000 5
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER 1000 C102 1 C20B 1000
A2 NUMBER 32 C102 1 C121 32
B VARCHAR2 1000 6569676874 eight 74776F2068756E647265642074776F two hundred two
create table t
(a number NOT NULL
,a2 number NOT NULL
,b varchar2(2000)
,CONSTRAINT t_pk PRIMARY KEY (a));
-- After the update: A2 goes from 1-32 to 23-39; B is prefixed with z
update t
set a2 = ceil(sqrt(a+1000))
, b = 'z'||TO_CHAR(TO_DATE(a+1000,'j'),'jsp')
where a <= 500;
commit;
exec dbms_stats.flush_database_monitoring_info;
Copyright © 2019 Accenture All rights reserved. | 37
REAL-TIME STATISTICS AFTER UPDATE
Table Num
Name MON Rows
----- --- --------
T YES 1000

Table Num
Name Rows BLOCKS NOTES
----- -------- ---------- -------------------------
T 1000 5
T 1000 7 STATS_ON_CONVENTIONAL_DML

• Increase in column maximum values has been detected
• Increase in column minimum value not detected
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER 1000 C102 1 C20B 1000
A2 NUMBER 32 C102 1 C121 32
B VARCHAR2 1000 6569676874 eight 74776F2068756E647265642074776F two hundred two
T A2 C102 1 C126 37 STATS_ON_CONVENTIONAL_DML
create table t
(a number NOT NULL
,a2 number NOT NULL
,b varchar2(2000)
,CONSTRAINT t_pk PRIMARY KEY (a));
insert /*+APPEND*/ into t
with n as (select rownum n from dual connect by level <= 1000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
commit;
update t
set a2 = ceil(sqrt(a+1000))
, b = 'z'||TO_CHAR(TO_DATE(a+1000,'j'),'jsp')
where a <= 500;
commit;
delete from t where a > 800;
commit;
exec dbms_stats.flush_database_monitoring_info;
REAL-TIME STATISTICS UPDATE THEN DELETE
Table Num
Name MON Rows
----- --- --------
T YES 1000

Table Num
Name Rows BLOCKS NOTES
----- -------- ---------- -------------------------
T 1000 5
T 800 7 STATS_ON_CONVENTIONAL_DML

• Reduction in number of rows detected
• No real-time column statistics on A because it was not updated.
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER 1000 C102 1 C20B 1000
A2 NUMBER 32 C102 1 C121 32
B VARCHAR2 1000 6569676874 eight 74776F2068756E647265642074776F two hundred two
T A2 C102 1 C126 37 STATS_ON_CONVENTIONAL_DML
--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 2 (100)| | 1 |00:00:00.01 | 2 |
| 1 | SORT AGGREGATE | | 1 | 1 | 4 | | | 1 |00:00:00.01 | 2 |
|* 2 | INDEX RANGE SCAN| T_PK | 1 | 160 | 640 | 2 (0)| 00:00:01 | 0 |00:00:00.01 | 2 |
--------------------------------------------------------------------------------------------------------------------
…
Note
-----
- dynamic statistics used: statistics for conventional DML
truncate table t;
insert into t
with n as (select 1000+rownum n from dual connect by level <= 10000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
commit;
exec dbms_stats.flush_database_monitoring_info;
Table Num
Name Rows BLOCKS NOTES
----- -------- ---------- -------------------------
T 1000 5
T 10000 68 STATS_ON_CONVENTIONAL_DML

• 10000 rows ✓
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER 1000 C102 1 C20B 1000
A2 NUMBER 32 C102 1 C121 32
B VARCHAR2 1000 6569676874 eight 74776F2068756E647265642074776F two hundred two
--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 2 (100)| | 1 |00:00:00.01 | 3 |
| 1 | SORT AGGREGATE | | 1 | 1 | 13 | | | 1 |00:00:00.01 | 3 |
|* 2 | INDEX RANGE SCAN| T_PK | 1 | 459 | 5967 | 2 (0)| 00:00:01 | 0 |00:00:00.01 | 3 |
--------------------------------------------------------------------------------------------------------------------
…
Note
-----
- dynamic statistics used: statistics for conventional DML
It appears that deletes are not tracked by real-time statistics – surely a bug
• Updates are tracked
• Deletes after Updates are tracked
insert into t
with n as (select 1000+rownum n from dual connect by level <= 1000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
commit;
exec dbms_stats.flush_database_monitoring_info;
select /*C*/ count(b) from t where a >= 1900;
exec dbms_stats.gather_table_stats('OE','T',method_opt=>'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS A2 SIZE 1');
select /*D*/ count(b) from t where a >= 1900;
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 3 (100)| | 1 |00:00:00.01 | 6 |
| 1 | SORT AGGREGATE | | 1 | 1 | 26 | | | 1 |00:00:00.01 | 6 |
|* 2 | TABLE ACCESS FULL| T | 1 | 101 | 2626 | 3 (0)| 00:00:01 | 101 |00:00:00.01 | 6 |
---------------------------------------------------------------------------------------------------------------------
Table Num
Name Rows BLOCKS NOTES
----- -------- ---------- -------------------------
T 1000 5
T 2000 12 STATS_ON_CONVENTIONAL_DML

• Max column value of A is 1984, actually 2000
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER 1000 C102 1 C20B 1000
A2 NUMBER 32 C102 1 C121 32
B VARCHAR2 1000 6569676874 eight 74776F2068756E647265642074776F two hundred two
T A2 C102 1 C12E 45 STATS_ON_CONVENTIONAL_DML
SQL_ID 32cu1x428w7sn, child number 0
-------------------------------------
select /*C*/ count(b) from t where a >= 1900

Plan hash value: 2966233522

• Estimate slightly off because the maximum column value is recorded as 1984 when actually 2000 (85/1984*2000=86)
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 5 (100)| | 1 |00:00:00.01 | 14 |
| 1 | SORT AGGREGATE | | 1 | 1 | 29 | | | 1 |00:00:00.01 | 14 |
|* 2 | TABLE ACCESS FULL| T | 1 | 87 | 2523 | 5 (0)| 00:00:01 | 101 |00:00:00.01 | 14 |
---------------------------------------------------------------------------------------------------------------------
Note
-----
- dynamic statistics used: statistics for conventional DML
Good estimate on a column without skew. This is the problem that real-time statistics solve very well.
SQL_ID 2jv5mfvk6qswb, child number 0
-------------------------------------
select /*D*/ count(b) from t where a >= 1900

Plan hash value: 2053823973

• The conventional statistics produce a more accurate estimated number of rows, and consequently we use an index scan rather than a full scan
---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 3 (100)| | 1 |00:00:00.01 | 4 |
| 1 | SORT AGGREGATE | | 1 | 1 | 33 | | | 1 |00:00:00.01 | 4 |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| T | 1 | 101 | 3333 | 3 (0)| 00:00:01 | 101 |00:00:00.01 | 4 |
|* 3 | INDEX RANGE SCAN | T_PK | 1 | 101 | | 2 (0)| 00:00:01 | 101 |00:00:00.01 | 2 |
---------------------------------------------------------------------------------------------------------------------------------------
insert into t
with n as (select 1000+rownum n from dual connect by level <= 1000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
commit;
exec dbms_stats.flush_database_monitoring_info;
Table Num
Name Rows BLOCKS NOTES SCOPE
----- -------- ---------- ------------------------- -------
T SHARED
T 1000 5 SESSION
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER
A2 NUMBER
B VARCHAR2
exec dbms_stats.lock_table_stats('OE','T');
update "OE".t set b = UPPER(TO_CHAR(TO_DATE(a,'j'),'jsp')) where rownum<=101;
commit;
exec dbms_stats.flush_database_monitoring_info;
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER 1000 C102 1 C20B 1000
A2 NUMBER 32 C102 1 C121 32
B VARCHAR2 1000 6569676874 eight 74776F2068756E647265642074776F two hundred two
Suppress collection of statistics on the initial bulk load of a table:
• CREATE TABLE … AS SELECT …
• INSERT /*+APPEND*/ … SELECT … (into an empty table)
Suppress collection of real-time statistics during conventional DML:
create table t
(a number NOT NULL
,a2 number NOT NULL
,b varchar2(2000)
,CONSTRAINT t_pk PRIMARY KEY (a));
insert /*+APPEND NO_GATHER_OPTIMIZER_STATISTICS*/ into t
with n as (select rownum n from dual connect by level <= 1000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
commit;
insert into t
with n as (select 1000+rownum n from dual connect by level <= 10000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
exec dbms_stats.flush_database_monitoring_info;
If you don't have normal statistics then you don't get real-time statistics
Table Num
Name MON Rows
----- --- --------
T YES
Table Num
Name Rows BLOCKS NOTES SCOPE
----- -------- ---------- ------------------------- -------
T SHARED
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER
A2 NUMBER
B VARCHAR2
no rows selected
create table t
(a number NOT NULL
,a2 number NOT NULL
,b varchar2(2000)
,CONSTRAINT t_pk PRIMARY KEY (a));
insert /*+APPEND*/ into t
with n as (select rownum n from dual connect by level <= 1000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
commit;
insert /*+NO_GATHER_OPTIMIZER_STATISTICS*/ into t
with n as (select 1000+rownum n from dual connect by level <= 10000)
select n.n, ceil(sqrt(n.n)), TO_CHAR(TO_DATE(n.n,'j'),'jsp')
from n;
exec dbms_stats.flush_database_monitoring_info;
Table Num
Name MON Rows
----- --- --------
T YES 1000
Table Num
Name Rows BLOCKS NOTES SCOPE
----- -------- ---------- ------------------------- -------
T 1000 5 SHARED
Column Num
Name DATA_TYPE Vals LOW_VALUE LOW_V HIGH_VALUE HI_V
------ ---------- -------- ---------------- ---------------- ------------------------------------- -------------------------------------
A NUMBER 1000 C102 1 C20B 1000
A2 NUMBER 32 C102 1 C121 32
B VARCHAR2 1000 6569676874 eight 74776F2068756E647265642074776F two hundred two
• In general, more accurate statistics result in fewer problems, if not better
performance.
• Skewed data has always been a challenge
• It is still going to be a challenge with real-time statistics
• Very similar to automatic statistics collection job that runs in the maintenance window
(auto optimizer stats collection).
• Not enabled by default. You have to turn it on.
• Applies to whole database.
• The key difference is a limited run time, so it can be run more frequently.
• Only one instance of the job runs at a time, so it uses only one CPU.
• It will run during the maintenance window
• Uses SYS.STATS_TARGET$ to drive processing
• Stalest tables within each object type first
• This also appears to be used to prevent the two jobs from running concurrently.
• Can interact with real-time statistics.
You can
• enable/disable the job
• set the execution frequency
• set the maximum run time
• omit tables by locking their statistics
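The current settings can be checked with dbms_stats.get_prefs before changing them:

```sql
select dbms_stats.get_prefs('AUTO_TASK_STATUS')       status
,      dbms_stats.get_prefs('AUTO_TASK_INTERVAL')     interval_secs
,      dbms_stats.get_prefs('AUTO_TASK_MAX_RUN_TIME') max_run_secs
from   dual;
```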
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','900');
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_MAX_RUN_TIME','3600');
with x as (
SELECT x.*
, end_time-start_time diff
, start_time-(LAG(end_time,1) over (order by start_time)) start_lag
FROM DBA_AUTO_STAT_EXECUTIONS x
ORDER BY x.start_time
)
select opid, origin, status
, ((extract( day from start_lag )*24+
extract( hour from start_lag ))*60+
extract( minute from start_lag ))*60+
extract( second from start_lag ) start_lag
, start_time, end_time
, ((extract( day from diff )*24+
extract( hour from diff ))*60+
extract( minute from diff ))*60+
extract( second from diff ) secs
, completed, failed, timed_out, in_progress
from x
where start_time >= sysdate-1/24
/
select
s.START_TIME, s.END_TIME
,s.STALENESS
,s.OSIZE
,s.OBJ#, name, NVL(subname,'<NULL>') subname
,s.TYPE#
,decode(s.type#, 0, 'NEXT OBJECT', 1, 'INDEX', 2, 'TABLE', 3, 'CLUSTER',
19, 'TABLE PARTITION', 20, 'INDEX PARTITION',
34, 'TABLE SUBPARTITION', 35, 'INDEX SUBPARTITION',
'UNDEFINED') object_type
,s.FLAGS, s.STATUS
,s.SID, s.SERIAL#
,s.PART#, s.BO#
from sys.stats_target$ s
left outer join sys.obj$ o on o.obj# = s.obj#
where start_time >= sysdate-10/1440
order by start_time
/
SYS.STATS_TARGET$
START_TIME END_TIME STALENESS OSIZE OBJ#
----------------------------------- ----------------------------------- ---------- ---------- -------
NAME SUBNAME TYPE# OBJECT_TYPE FLAGS STATUS SID
------------------------------ ---------------------------------------- ----- ------------------ ----- ------ -----
SERIAL# PART# BO#
------- ---------- ----------
04-SEP-19 05.45.20.430796 PM +00:00 04-SEP-19 05.45.20.529349 PM +00:00 .4 131072 11576
WRM$_DATABASE_INSTANCE <NULL> 2 TABLE 288 2 17
28450
I think both features are primarily aimed at Autonomous Transaction Processing Databases
• Not ADW because you mostly direct path load, and don't update HCC objects
• It will also apply to Exadata.
• For example, they will apply to a classical ERP on Exadata
• But why not other platforms? It would help them also.
E-mail:
• david.kurtz@accenture.com