Using Framework To Analyze IO Usage
Use Realtime Active Session tracking to get a direct sense of what the system is waiting on. Here is the information from PRDYCRM at an off-peak time. There are still plenty of long-running queries with user I/O waits.
Top Waits summarizes one hour of ASH data. User I/O is the top wait class, especially waits on single-block reads.
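Outside the tool, a comparable one-hour ASH summary can be sketched directly against v$active_session_history (a hypothetical ad-hoc query; querying ASH requires the Diagnostics Pack license):

```sql
-- Top wait events over the last hour of in-memory ASH samples.
-- Each sampled waiting session contributes one row per second,
-- so the sample count approximates time spent waiting.
SELECT event, wait_class, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 1/24
AND    session_state = 'WAITING'
GROUP BY event, wait_class
ORDER BY samples DESC;
```

On a system like the one above, "db file sequential read" in the User I/O wait class would be expected at the top of this output.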
Show Event Histogram displays the wait event histograms (wait time and count).
The majority of waits complete within 8 ms, but more than 17% take longer than 16 ms, which is not good for 8K blocks.
More than 21% of waits had an average wait time longer than 16 ms.
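The same histogram buckets the tool displays come from v$event_histogram; a minimal query to inspect them for single-block reads might look like this (the event name filter is the one discussed above):

```sql
-- Wait-time distribution for single-block reads.
-- wait_time_milli is the upper bound of each bucket (1, 2, 4, 8, 16, ... ms);
-- wait_count is the number of waits that fell into that bucket.
SELECT event, wait_time_milli, wait_count
FROM   v$event_histogram
WHERE  event = 'db file sequential read'
ORDER BY wait_time_milli;
```

Summing wait_count for buckets above 16 ms and dividing by the total gives the "more than 16 ms" percentage quoted above.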
We can also use the Session Events context menu to check what a long-running SQL statement waited on.
The top wait is "db file sequential read". A 12.8 ms average wait for an 8K block is not good, though it would be acceptable for a 32K block.
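Per-session wait breakdowns like this come from v$session_event. A sketch of the equivalent query, with :sid as a hypothetical bind for the session of interest:

```sql
-- Cumulative wait events for one session, worst first.
-- time_waited_micro is in microseconds; divide to get milliseconds.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1000, 1) AS time_waited_ms,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 1) AS avg_ms
FROM   v$session_event
WHERE  sid = :sid
ORDER BY time_waited_micro DESC;
```

The avg_ms column is where a figure like the 12.8 ms average for "db file sequential read" would show up.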
Here is the physical I/O read information from sysmetric on node 1: 28.7 MB/s and 424 IOPS. Node 1's I/O activity is currently light.
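These figures map to two standard metrics in v$sysmetric. A minimal query, using the 60-second metric group:

```sql
-- Current physical read throughput and IOPS from the 60-second
-- metric interval (group_id = 2).
SELECT metric_name, ROUND(value, 1) AS value
FROM   v$sysmetric
WHERE  metric_name IN ('Physical Read Total Bytes Per Sec',
                       'Physical Read Total IO Requests Per Sec')
AND    group_id = 2;
```

Bytes per second divided by 1024*1024 gives the MB/s figure (28.7 MB/s above); the IO requests metric is the IOPS figure (424 above).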
AWR is a very good place to analyze I/O usage. Select 24 hours of AWR data to review I/O information.
Here is the I/O summary snap by snap. The most interesting data points are RD_BYTES_TOTAL_AVG and RD_BYTES_AVG; the difference between the two averages is I/O used by the system itself, such as redo or RMAN. The peak I/O read rate is less than 80 MB/s. You can further use the DG Stat tab to check which disk group/tablespace has the longest average single-block wait time. For YCRM, tablespace INDEX_TBS has an average single-block wait time of 11 ms, and this is an 8K block tablespace.
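Per-tablespace single-block read times can also be pulled from AWR file statistics. A sketch against dba_hist_filestatxs, with :begin_snap and :end_snap as hypothetical binds for the chosen 24-hour window (note these columns are cumulative since instance startup, so differencing consecutive snapshots is more precise than summing raw values):

```sql
-- Rough per-tablespace single-block read latency over an AWR window.
-- singleblkrdtim is in centiseconds; *10 converts to milliseconds.
SELECT tsname,
       SUM(singleblkrds) AS single_blk_reads,
       ROUND(SUM(singleblkrdtim) * 10
             / NULLIF(SUM(singleblkrds), 0), 1) AS avg_single_blk_ms
FROM   dba_hist_filestatxs
WHERE  snap_id BETWEEN :begin_snap AND :end_snap
GROUP BY tsname
ORDER BY avg_single_blk_ms DESC;
```

This is the kind of calculation behind the 11 ms figure for INDEX_TBS above.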
Here are resource usage and wait times by schema. Note that DISK_READS from SIEBEL and SYS have surpassed SADMIN, and are much higher than those of SIEBEL7DBACCOUNT, the application user.
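A rough per-schema breakdown like this can be approximated from v$sql (an approximation only, since v$sql covers just the cursors currently cached in the shared pool):

```sql
-- Disk reads and buffer gets attributed to the schema that parsed
-- each cached cursor. Cursors aged out of the shared pool are missed.
SELECT parsing_schema_name,
       SUM(disk_reads)  AS disk_reads,
       SUM(buffer_gets) AS buffer_gets
FROM   v$sql
GROUP BY parsing_schema_name
ORDER BY disk_reads DESC;
```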
Wait Events, Top Waits shows the top waits for the selected time range. "db file sequential read" is the top wait, though 4 ms is not a bad average wait time.
Segment stats (Segment PIO) is the best place to find out where the I/O is spent. AWR is very convenient for segment stats; it would be harder to query the v$ views and take your own snapshots to achieve the same effect. Surprisingly, mlog$ accounted for more than 68% of database reads in blocks.
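For a point-in-time view without AWR, the cumulative counters in v$segment_statistics can at least identify the top physical-read segments (cumulative since startup, so two samples a few minutes apart are needed to see the current rate):

```sql
-- Top 10 segments by cumulative physical reads since instance startup.
SELECT *
FROM  (SELECT owner, object_name, object_type, value AS physical_reads
       FROM   v$segment_statistics
       WHERE  statistic_name = 'physical reads'
       ORDER BY value DESC)
WHERE ROWNUM <= 10;
```

On this system, the MLOG$ materialized-view log segments would dominate such a listing.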
Here is the plan for the top mlog$-related SQL. Note the full table scan (FTS) here.
mlog$ tables do not have optimizer statistics, so use dba_segments to check their size. The mlog$ that is the largest PIO read source is 8 GB, yet it does not hold much data: one check found 90K rows, another 8K. A simple query like the following will read the full 8 GB:
SQL> select to_char(snaptime$$,'yyyy-mm-dd hh24:mi:ss') t
       from "SIEBEL"."MLOG$_S_AUDIT_ITEM" where rownum=1;
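The dba_segments size check mentioned above can be sketched as follows (restricted here to the SIEBEL schema's materialized-view logs, matching the example):

```sql
-- Allocated size of each MLOG$ segment, largest first.
-- Allocated space stays high after rows are purged, which is why a
-- nearly empty log can still force an 8 GB full scan.
SELECT segment_name,
       ROUND(bytes / 1024 / 1024 / 1024, 1) AS size_gb
FROM   dba_segments
WHERE  owner = 'SIEBEL'
AND    segment_name LIKE 'MLOG$%'
ORDER BY bytes DESC;
```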