Online Redo Log Switch Frequency Map

A query I often find myself running is the online redo log switch frequency map query, which queries the v$log_history view (gv$log_history for cluster databases) and shows the historical log switch frequency.

Why, you might ask? Well, it's important to see how frequently log switches are occurring, as Oracle's rule of thumb is to switch no more than 3 logs per hour (20 minutes of redo) at peak DML activity, to prevent excessive checkpoints.  The query can also highlight abnormal churn (DML activity).

Online Redo Log Switch Frequency Map Query

set pages 999 lines 400
col h0 format 999
col h1 format 999
col h2 format 999
col h3 format 999
col h4 format 999
col h5 format 999
col h6 format 999
col h7 format 999
col h8 format 999
col h9 format 999
col h10 format 999
col h11 format 999
col h12 format 999
col h13 format 999
col h14 format 999
col h15 format 999
col h16 format 999
col h17 format 999
col h18 format 999
col h19 format 999
col h20 format 999
col h21 format 999
col h22 format 999
col h23 format 999
SELECT TRUNC (first_time) "Date", inst_id, TO_CHAR (first_time, 'Dy') "Day",
 COUNT (1) "Total",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '00', 1, 0)) "h0",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '01', 1, 0)) "h1",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '02', 1, 0)) "h2",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '03', 1, 0)) "h3",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '04', 1, 0)) "h4",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '05', 1, 0)) "h5",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '06', 1, 0)) "h6",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '07', 1, 0)) "h7",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '08', 1, 0)) "h8",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '09', 1, 0)) "h9",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '10', 1, 0)) "h10",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '11', 1, 0)) "h11",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '12', 1, 0)) "h12",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '13', 1, 0)) "h13",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '14', 1, 0)) "h14",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '15', 1, 0)) "h15",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '16', 1, 0)) "h16",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '17', 1, 0)) "h17",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '18', 1, 0)) "h18",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '19', 1, 0)) "h19",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '20', 1, 0)) "h20",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '21', 1, 0)) "h21",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '22', 1, 0)) "h22",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '23', 1, 0)) "h23",
 ROUND (COUNT (1) / 24, 2) "Avg"
FROM gv$log_history
WHERE thread# = inst_id
AND first_time > sysdate -7
GROUP BY TRUNC (first_time), inst_id, TO_CHAR (first_time, 'Dy')
ORDER BY 1,2;
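
For a single-instance database, the same information can be pulled from v$log_history without the inst_id/thread# handling. A minimal long-format sketch (one row per hour rather than a map):

set pages 999 lines 400
-- Sketch: hourly switch counts for a single-instance database (long format)
select trunc(first_time, 'HH24') hour_start, count(*) log_switches
from v$log_history
where first_time > sysdate - 7
group by trunc(first_time, 'HH24')
order by 1;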

Online Redo Log Switch Frequency Map Output

SQL> set pages 999 lines 400
SQL> col h0 format 999
SQL> col h1 format 999
SQL> col h2 format 999
SQL> col h3 format 999
SQL> col h4 format 999
SQL> col h5 format 999
SQL> col h6 format 999
SQL> col h7 format 999
SQL> col h8 format 999
SQL> col h9 format 999
SQL> col h10 format 999
SQL> col h11 format 999
SQL> col h12 format 999
SQL> col h13 format 999
SQL> col h14 format 999
SQL> col h15 format 999
SQL> col h16 format 999
SQL> col h17 format 999
SQL> col h18 format 999
SQL> col h19 format 999
SQL> col h20 format 999
SQL> col h21 format 999
SQL> col h22 format 999
SQL> col h23 format 999
SQL> SELECT TRUNC (first_time) "Date", inst_id, TO_CHAR (first_time, 'Dy') "Day",
SQL>  COUNT (1) "Total",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '00', 1, 0)) "h0",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '01', 1, 0)) "h1",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '02', 1, 0)) "h2",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '03', 1, 0)) "h3",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '04', 1, 0)) "h4",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '05', 1, 0)) "h5",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '06', 1, 0)) "h6",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '07', 1, 0)) "h7",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '08', 1, 0)) "h8",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '09', 1, 0)) "h9",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '10', 1, 0)) "h10",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '11', 1, 0)) "h11",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '12', 1, 0)) "h12",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '13', 1, 0)) "h13",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '14', 1, 0)) "h14",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '15', 1, 0)) "h15",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '16', 1, 0)) "h16",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '17', 1, 0)) "h17",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '18', 1, 0)) "h18",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '19', 1, 0)) "h19",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '20', 1, 0)) "h20",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '21', 1, 0)) "h21",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '22', 1, 0)) "h22",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '23', 1, 0)) "h23",
SQL>  ROUND (COUNT (1) / 24, 2) "Avg"
SQL> FROM gv$log_history
SQL> WHERE thread# = inst_id
SQL> AND first_time > sysdate -7
SQL> GROUP BY TRUNC (first_time), inst_id, TO_CHAR (first_time, 'Dy')
SQL> ORDER BY 1,2;

Date         INST_ID Day      Total   h0   h1   h2   h3   h4   h5   h6   h7   h8   h9  h10  h11  h12  h13  h14  h15  h16  h17  h18  h19  h20  h21  h22  h23        Avg
--------- ---------- --- ---------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----------
24-MAY-19          1 Fri          1    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    1        .04
24-MAY-19          2 Fri          1    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    1        .04
25-MAY-19          1 Sat         56    1    1    3    0    2    2    0    7    0    0   10    4    4    6    3    1    2    4    1    1    1    2    0    1       2.33
25-MAY-19          2 Sat         62    0    3    3    0    1    2    0    7    0    0    8    5    2    2    3    5    4    2    3    3    5    4    0    0       2.58
26-MAY-19          1 Sun         56    1    1    3    0    2    5    0   11    0    4    1    1    7    7    3    5    1    0    0    0    1    1    1    1       2.33
26-MAY-19          2 Sun         28    1    3    3    0    1    4    0    5    0    1    1    0    2    3    1    1    1    0    0    0    0    0    1    0       1.17
27-MAY-19          1 Mon         33    1    1    3    0    1    4    0    6    5    1    0    1    1    2    1    1    1    0    0    0    1    1    0    2       1.38
27-MAY-19          2 Mon         21    0    3    3    0    0    4    0    3    2    0    0    1    0    1    0    1    0    1    0    0    0    0    1    1        .88
28-MAY-19          1 Tue         43    2    3    3    0    3    5    1    4    3    1    3    0    1    1    1    1    2    3    2    1    1    0    1    1       1.79
28-MAY-19          2 Tue         38    2    4    3    1    1    4    1    7    1    1    1    1    0    1    0    0    2    4    0    1    0    1    1    1       1.58
29-MAY-19          1 Wed         58    2    4    4    1    3    5    2   12    3    1    3    1    2    5    4    1    1    0    1    0    1    0    2    0       2.42
29-MAY-19          2 Wed         28    0    2    3    0    1    4    1    5    1    0    1    1    0    2    1    1    1    1    0    0    1    1    0    1       1.17
30-MAY-19          1 Thu         39    2    3    3    0    1    5    1    7    1    1    2    3    1    2    2    1    0    1    1    0    0    1    0    1       1.63
30-MAY-19          2 Thu         29    0    1    3    0    2    4    1    4    2    1    1    1    1    1    2    0    1    2    0    0    1    1    0    0       1.21
31-MAY-19          1 Fri        153    2    1    3    0    2    4    1    8    3    0   23   29   10    2    1   18   20    6   12    2    3    3    0    0       6.38
31-MAY-19          2 Fri        223    1    2    3    0    1    4    0    3    1    0   71   86   29    1    0    6    7    2    4    0    1    1    0    0       9.29

16 rows selected.

SQL>

Where the log switch frequency exceeds 3 per hour, I have highlighted the value in red.

From the above output, we can see that most of the time the 3 log switches per hour threshold is met, with the 5th and 7th hours exceeding it.  We can also see that today there has been abnormal and excessive churn in the 10th and 11th hours.

Investigating this further, we look at v$log to see the size of the online redo logs:

SQL> set pages 999 lines 400
SQL> col FIRST_CHANGE# format 999999999999999
SQL> select GROUP#, THREAD#, SEQUENCE#, BYTES/1024/1024 SIZE_MB, BLOCKSIZE, MEMBERS, ARCHIVED, STATUS, FIRST_CHANGE#, FIRST_TIME, NEXT_CHANGE#, NEXT_TIME, CON_ID from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB  BLOCKSIZE    MEMBERS ARC STATUS              FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME     CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ---------------- --------- ------------ --------- ----------
         1          1      40649       4096        512          2 YES INACTIVE             137921001754 31-MAY-19   1.3793E+11 31-MAY-19          0
         2          1      40650       4096        512          2 NO  CURRENT              137930464620 31-MAY-19   2.8147E+14                    0
         3          1      40648       4096        512          2 YES INACTIVE             137920121808 31-MAY-19   1.3792E+11 31-MAY-19          0
         4          2      39319       4096        512          2 NO  CURRENT              137921005288 31-MAY-19   2.8147E+14                    0
         5          2      39317       4096        512          2 YES INACTIVE             137917291807 31-MAY-19   1.3792E+11 31-MAY-19          0
         6          2      39318       4096        512          2 YES INACTIVE             137919186475 31-MAY-19   1.3792E+11 31-MAY-19          0

6 rows selected.

SQL>

As we can see, the online redo logs are 4GB, which is the default size on Oracle Exadata (which this is).  We could increase the size of the redo logs, but it's a trade-off between performance and space usage.  The peak we can see was 29 log switches for instance 1 and 86 log switches for instance 2; assuming these were full redo logs before switching, that is 29 + 86 = 115 log switches for the database in that hour, equating to a significant 460GB of churn!  But this is an Exadata Machine and is sized and designed for this sort of load 🙂  To switch only 3 logs per hour at this rate, we would need to resize the redo logs to roughly 154GB, which wouldn't be feasible, so we could either increase them to something more reasonable like 10GB or leave them as they are, depending on what's more important: the performance or the space usage.
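
For reference, online redo logs are resized by adding new, larger groups and dropping the old ones once they are inactive. The following is a minimal sketch only, assuming the '+DATAC1' disk group from the v$log output above; the group numbers are hypothetical and you would repeat the ADD per thread/group as required:

-- Add new, larger groups (group numbers 7 and 8 are hypothetical)
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 7 ('+DATAC1') SIZE 10G;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 8 ('+DATAC1') SIZE 10G;

-- Switch all threads so the old groups become INACTIVE (check v$log)
ALTER SYSTEM ARCHIVE LOG CURRENT;
ALTER SYSTEM CHECKPOINT;

-- Drop an old group once it shows INACTIVE and archived
ALTER DATABASE DROP LOGFILE GROUP 1;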


Querying the amount of redo in Oracle Database

When a database starts to churn more redo than normal, it is handy to be able to extract the amount of redo generated over time, so it can be plotted on a graph.  The below query allows you to extract this info 🙂

Query for redo generation

Query to obtain the amount of redo generated over time, by hour, in MB:

set pages 999 lines 400
select to_char(trunc(first_time, 'HH24'), 'DD/MM/YYYY HH24:MI:SS') date_by_hour, sum(round(blocks*block_size/1024/1024)) CHURN_IN_MB
from v$archived_log
group by trunc(first_time, 'HH24')
order by 1
/
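
Please note that v$archived_log holds one row per archive destination, so on a database that also ships redo to a standby the churn can be counted more than once; on RAC you may also want a per-instance breakdown. A variation of the query covering both, assuming the local destination is DEST_ID 1:

set pages 999 lines 400
select to_char(trunc(first_time, 'HH24'), 'DD/MM/YYYY HH24:MI:SS') date_by_hour,
       thread#,
       sum(round(blocks*block_size/1024/1024)) CHURN_IN_MB
from v$archived_log
where dest_id = 1   -- assumption: DEST_ID 1 is the local archive destination
group by trunc(first_time, 'HH24'), thread#
order by 1, 2
/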

Output

This is the output you will get from the query:

SQL> set pages 999 lines 400
SQL> select to_char(trunc(first_time, 'HH24'), 'DD/MM/YYYY HH24:MI:SS') date_by_hour, sum(round(blocks*block_size/1024/1024)) CHURN_IN_MB
2 from v$archived_log
3 group by trunc(first_time, 'HH24')
4 order by 1
5 /

DATE_BY_HOUR CHURN_IN_MB
------------------- -----------
07/12/2018 10:00:00 2
07/12/2018 11:00:00 51
07/12/2018 12:00:00 3731
07/12/2018 13:00:00 10857
07/12/2018 14:00:00 12505
07/12/2018 15:00:00 17493
07/12/2018 16:00:00 187
07/12/2018 17:00:00 173
07/12/2018 18:00:00 185
07/12/2018 19:00:00 137
07/12/2018 20:00:00 159
07/12/2018 21:00:00 155
07/12/2018 22:00:00 157
07/12/2018 23:00:00 183
08/12/2018 00:00:00 154
08/12/2018 01:00:00 184
08/12/2018 02:00:00 179
08/12/2018 03:00:00 179
08/12/2018 04:00:00 172
08/12/2018 05:00:00 177
08/12/2018 06:00:00 174
08/12/2018 07:00:00 172
08/12/2018 08:00:00 177
08/12/2018 09:00:00 175
08/12/2018 10:00:00 175
08/12/2018 11:00:00 220
08/12/2018 12:00:00 221
08/12/2018 13:00:00 218
08/12/2018 14:00:00 216
08/12/2018 15:00:00 214
08/12/2018 16:00:00 212
08/12/2018 17:00:00 208
08/12/2018 18:00:00 213
08/12/2018 19:00:00 207
08/12/2018 20:00:00 205
08/12/2018 21:00:00 205
08/12/2018 22:00:00 202
08/12/2018 23:00:00 228
09/12/2018 00:00:00 202
09/12/2018 01:00:00 238
09/12/2018 02:00:00 212
09/12/2018 03:00:00 227
09/12/2018 04:00:00 213
09/12/2018 05:00:00 206
09/12/2018 06:00:00 221
09/12/2018 07:00:00 222
09/12/2018 08:00:00 216
09/12/2018 09:00:00 220
09/12/2018 10:00:00 216
09/12/2018 11:00:00 217
09/12/2018 12:00:00 162
09/12/2018 13:00:00 163
09/12/2018 14:00:00 163
09/12/2018 15:00:00 160
09/12/2018 16:00:00 158
09/12/2018 17:00:00 159
09/12/2018 18:00:00 161
09/12/2018 19:00:00 157
09/12/2018 20:00:00 157
09/12/2018 21:00:00 153
09/12/2018 22:00:00 153
09/12/2018 23:00:00 176
10/12/2018 00:00:00 150
10/12/2018 01:00:00 174
10/12/2018 02:00:00 168
10/12/2018 03:00:00 167
10/12/2018 04:00:00 169
10/12/2018 05:00:00 162
10/12/2018 06:00:00 168
10/12/2018 07:00:00 166
10/12/2018 08:00:00 160
10/12/2018 09:00:00 162
10/12/2018 10:00:00 141
10/12/2018 11:00:00 144
10/12/2018 12:00:00 142
10/12/2018 13:00:00 141
10/12/2018 14:00:00 142
10/12/2018 15:00:00 169
10/12/2018 16:00:00 146
10/12/2018 17:00:00 173
10/12/2018 18:00:00 177
10/12/2018 19:00:00 175
10/12/2018 20:00:00 7278
10/12/2018 21:00:00 12604
10/12/2018 22:00:00 18154
10/12/2018 23:00:00 6844
11/12/2018 00:00:00 1350
11/12/2018 01:00:00 505
11/12/2018 02:00:00 1183
11/12/2018 03:00:00 508
11/12/2018 04:00:00 1488
11/12/2018 05:00:00 7071
11/12/2018 06:00:00 16453
11/12/2018 07:00:00 7076
11/12/2018 08:00:00 17310
11/12/2018 09:00:00 8063
11/12/2018 10:00:00 12681
11/12/2018 11:00:00 3678
11/12/2018 14:00:00 6026
11/12/2018 15:00:00 15569
11/12/2018 16:00:00 7069
11/12/2018 17:00:00 11772
11/12/2018 18:00:00 10167
11/12/2018 19:00:00 6159
11/12/2018 20:00:00 16450
11/12/2018 21:00:00 4106
11/12/2018 22:00:00 10115
11/12/2018 23:00:00 10355
12/12/2018 00:00:00 3203
12/12/2018 01:00:00 5160
12/12/2018 02:00:00 14468
12/12/2018 03:00:00 6591
12/12/2018 04:00:00 1376
12/12/2018 05:00:00 4053
12/12/2018 06:00:00 7947
12/12/2018 07:00:00 12433
12/12/2018 08:00:00 1434
12/12/2018 09:00:00 663
12/12/2018 10:00:00 1511
12/12/2018 11:00:00 654
12/12/2018 12:00:00 5661
12/12/2018 13:00:00 9817
12/12/2018 14:00:00 10148
12/12/2018 15:00:00 372
12/12/2018 16:00:00 1074
12/12/2018 17:00:00 672
12/12/2018 18:00:00 1094
12/12/2018 19:00:00 391
12/12/2018 20:00:00 2403
12/12/2018 21:00:00 827
12/12/2018 22:00:00 1108
12/12/2018 23:00:00 15575
13/12/2018 00:00:00 17219
13/12/2018 01:00:00 8255
13/12/2018 02:00:00 877
13/12/2018 03:00:00 180
13/12/2018 04:00:00 1782
13/12/2018 05:00:00 5284
13/12/2018 06:00:00 16191
13/12/2018 07:00:00 6251
13/12/2018 08:00:00 14533
13/12/2018 09:00:00 8138
13/12/2018 10:00:00 12629
13/12/2018 11:00:00 9701
13/12/2018 12:00:00 9869
13/12/2018 13:00:00 9554
13/12/2018 14:00:00 7106
13/12/2018 15:00:00 15094
13/12/2018 16:00:00 8622
13/12/2018 17:00:00 671
13/12/2018 18:00:00 1094
13/12/2018 19:00:00 370
13/12/2018 20:00:00 2332
13/12/2018 21:00:00 421

154 rows selected.

SQL>

The above output can then be used to create a pivot chart in Excel 🙂
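
Alternatively, if you prefer to pivot in the database rather than Excel, here is a sketch using the PIVOT clause to lay the churn out day by hour, in the style of the frequency map above:

-- Sketch: requires the PIVOT clause (Oracle 11g onwards)
select *
from (select trunc(first_time) log_date,
             to_number(to_char(first_time, 'HH24')) hr,
             round(blocks*block_size/1024/1024) mb
      from v$archived_log)
pivot (sum(mb) for hr in (0 as h0, 1 as h1, 2 as h2, 3 as h3, 4 as h4, 5 as h5,
                          6 as h6, 7 as h7, 8 as h8, 9 as h9, 10 as h10, 11 as h11,
                          12 as h12, 13 as h13, 14 as h14, 15 as h15, 16 as h16,
                          17 as h17, 18 as h18, 19 as h19, 20 as h20, 21 as h21,
                          22 as h22, 23 as h23))
order by log_date;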


Index Monitoring in Oracle Database

Applies ONLY to Oracle Database 10gR1 through to 12cR1.
In Oracle Database 12cR2, this feature is replaced.  A new blog post on the replacement feature will be posted in due course.

Since Oracle 10g, you can monitor indexes to see whether they are being used.  This is very useful because unused indexes consume unnecessary CPU and I/O on DML activity.  Therefore, it's recommended to monitor indexes; any unused indexes can then be dropped, freeing not only vital space but also CPU and I/O, resulting in better throughput.

It is recommended to enable index monitoring for a period that is representative of your full workload cycle, for example long enough to catch reports that run every week, month or even year.

Please Note: It’s important that foreign key constraints have associated indexes to avoid any table level lock on DML.  These indirect uses of indexes will not be detected by index monitoring.  So be mindful when dropping indexes that are associated with foreign key constraint.

Enable Index Monitoring

To Enable Index Monitoring, you alter the index as shown below:

ALTER INDEX <schema>.<index_name> MONITORING USAGE;

Checking Index Usage

How you check whether an index has been used depends on the version of Oracle:

Oracle 10g/11g

SELECT * FROM V$OBJECT_USAGE;

Database Reference 10g – V$OBJECT_USAGE
Database Reference 11g – V$OBJECT_USAGE

Please Note: This only shows indexes belonging to the logged-on user.

To see all indexes in the database regardless of the logged on user:

select 
do.owner,
t.name table_name, io.name index_name,
decode(bitand(i.flags, 65536), 0, 'NO', 'YES') monitoring,
decode(bitand(ou.flags, 1), 0, 'NO', 'YES') used,
ou.start_monitoring,
ou.end_monitoring 
from 
sys.obj$ io, 
sys.obj$ t, 
sys.ind$ i, 
sys.object_usage ou,
dba_objects do 
where i.obj# = ou.obj# 
and io.obj# = ou.obj# 
and t.obj# = i.bo#
and i.obj# = do.object_id
order by 1, 2, 3;

DISCLAIMER: As this query accesses data dictionary objects directly, it is not guaranteed to work.

Oracle 12cR1

“The V$OBJECT_USAGE view is deprecated in Oracle Database 12c Release 1 (12.1) and maintained for backward compatibility. Support for this view may be removed in a future release. Oracle recommends that you use the USER_OBJECT_USAGE view instead of the V$OBJECT_USAGE view.”

Database Reference 12cR1 – V$OBJECT_USAGE

For the current logon user:

SELECT * FROM USER_OBJECT_USAGE;

Database Reference 12cR1 – USER_OBJECT_USAGE

For all users:

SELECT * FROM DBA_OBJECT_USAGE;

Database Reference 12cR1 – DBA_OBJECT_USAGE

Disable Index Monitoring

To disable Index Monitoring, you alter the index as shown below:

ALTER INDEX <schema>.<index_name> NOMONITORING USAGE;

Demo of Index Monitoring

The content of the index_monitoring_demo.sql is below:

-------------------------------------------------------------
--
-- Index Monitoring Demo
--
-- File Name: index_monitoring_demo.sql (blog.zeddba.com)
-- Created: Zahid Anwar (ZedDBA)
-- 
-- Date: 16/07/2018
-- Version: 1.0
--
-- History
--
--1.0 16/07/2018 ZA Initial Script
--
-------------------------------------------------------------
set echo on
set pages 999 lines 400

CREATE TABLE "EMP"
( "ID" NUMBER,
"FIRST_NAME" VARCHAR2(50),
"LAST_NAME" VARCHAR2(50)
);

pause Press Enter

INSERT INTO "EMP" VALUES (1, 'Zahid', 'Anwar');
INSERT INTO "EMP" VALUES (2, 'Scott', 'Tiger');
INSERT INTO "EMP" VALUES (3, 'Joe', 'Bloggs');

commit;

pause Press Enter

set autotrace on
col id format 99
col first_name format a50
col last_Name format a50

select * from emp;

pause Press Enter

create index emp_id on emp (id);

pause Press Enter

select * from emp where id = 1;

pause Press Enter

set autotrace off

col INDEX_NAME format a20
col TABLE_NAME format a20
col MONITORING format a15
col USED format a10
select * from USER_OBJECT_USAGE;

pause Press Enter

alter index emp_id monitoring usage;

pause Press Enter

select * from USER_OBJECT_USAGE;

pause Press Enter

set autotrace on

select * from emp;

pause Press Enter

set autotrace off

select * from USER_OBJECT_USAGE;

pause Press Enter

set autotrace on

select * from emp where id = 1;

pause Press Enter

set autotrace off

select * from USER_OBJECT_USAGE;

pause Press Enter

alter index emp_id nomonitoring usage;

pause Press Enter

select * from USER_OBJECT_USAGE;

pause Press Enter

DROP TABLE "EMP" PURGE;

Below is the output of the script index_monitoring_demo.sql:

SQL> @index_monitoring_demo.sql
SQL> set pages 999 lines 400
SQL>
SQL> CREATE TABLE "EMP"
2 ( "ID" NUMBER,
3 "FIRST_NAME" VARCHAR2(50),
4 "LAST_NAME" VARCHAR2(50)
5 );

Table created.

SQL>
SQL> pause Press Enter
Press Enter

Table EMP is created in my schema ZEDDBA; please ensure you use an appropriate schema with the correct privileges.  Next, we insert 3 rows into the table EMP:

SQL> INSERT INTO "EMP" VALUES (1, 'Zahid', 'Anwar');

1 row created.

SQL> INSERT INTO "EMP" VALUES (2, 'Scott', 'Tiger');

1 row created.

SQL> INSERT INTO "EMP" VALUES (3, 'Joe', 'Bloggs');

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> pause Press Enter
Press Enter

With the 3 rows inserted and committed, we select from the EMP table with autotrace on, so we can see the execution plan:

SQL> set autotrace on
SQL> col id format 99
SQL> col first_name format a50
SQL> col last_Name format a50
SQL>
SQL> select * from emp;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar
2 Scott Tiger
3 Joe Bloggs


Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 201 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS STORAGE FULL| EMP | 3 | 201 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
12 recursive calls
0 db block gets
28 consistent gets
0 physical reads
0 redo size
784 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Please Note: This database is an Exadata Machine, hence the “TABLE ACCESS STORAGE FULL“; otherwise it would be “TABLE ACCESS FULL“.

As we can see the Execution Plan was a Full Table Scan (FTS) as this was the only access method available.  Next, we create an index on the emp_id:

SQL> create index emp_id on emp (id);

Index created.

SQL>
SQL> pause Press Enter
Press Enter

Next, we query the table with a predicate on emp_id:

SQL> select * from emp where id = 1;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar


Execution Plan
----------------------------------------------------------
Plan hash value: 458854468

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 67 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| EMP | 1 | 67 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | EMP_ID | 1 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=1)

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
11 recursive calls
0 db block gets
19 consistent gets
0 physical reads
0 redo size
695 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Now, we can see the Execution Plan used the newly created index.  Next, we check if the index was used:

SQL> set autotrace off
SQL>
SQL> col INDEX_NAME format a20
SQL> col TABLE_NAME format a20
SQL> col MONITORING format a15
SQL> col USED format a10
SQL> select * from USER_OBJECT_USAGE;

no rows selected

SQL>
SQL> pause Press Enter
Press Enter

We get no rows back as we didn’t turn on index monitoring, which isn’t on by default.  So next, we enable index monitoring:

SQL> alter index emp_id monitoring usage;

Index altered.

SQL>
SQL> pause Press Enter
Press Enter

Next, we check the index usage and we can see it is being monitored but not yet used:

SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        YES        NO   07/16/2018 17:53:59

SQL>
SQL> pause Press Enter
Press Enter

Next, we query the table with no predicate to do a FTS:

SQL> set autotrace on
SQL>
SQL> select * from emp;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar
2 Scott Tiger
3 Joe Bloggs


Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 201 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS STORAGE FULL| EMP | 3 | 201 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
51 recursive calls
0 db block gets
54 consistent gets
0 physical reads
132 redo size
784 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
3 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Next, we check if the index monitoring reflects the usage:

SQL> set autotrace off
SQL>
SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        YES        NO   07/16/2018 17:53:59

SQL>
SQL> pause Press Enter
Press Enter

Next, we query the table with a predicate to do Index Range Scan:

SQL> set autotrace on
SQL>
SQL> select * from emp where id = 1;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar


Execution Plan
----------------------------------------------------------
Plan hash value: 458854468

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 67 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| EMP | 1 | 67 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | EMP_ID | 1 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=1)

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
13 recursive calls
3 db block gets
24 consistent gets
0 physical reads
876 redo size
695 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Next, we check if the index monitoring reflects the usage:

SQL> set autotrace off
SQL>
SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        YES        YES  07/16/2018 17:53:59

SQL>
SQL> pause Press Enter
Press Enter

Next, we now turn off index monitoring, as we know it’s been used:

SQL> alter index emp_id nomonitoring usage;

Index altered.

SQL>
SQL> pause Press Enter
Press Enter

Next, we check to see if monitoring is now disabled:

SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        NO         YES  07/16/2018 17:53:59 07/16/2018 17:54:09

SQL>
SQL> pause Press Enter
Press Enter

Finally, we drop the demo EMP table:

SQL> DROP TABLE "EMP" PURGE;

Table dropped.

SQL>


Adding a Datafile to Temp Tablespace

When monitoring Tablespace Usage (see my Tablespace Usage blog post for more info), there comes a point when you need to add a datafile to the temp tablespace to allow for growth and, more importantly, to get below a monitoring threshold, for example in OEM or OpsView.

Query to see Current Temp Datafiles State

To see the current state of the temp datafiles:

set pages 999
set lines 400
col FILE_NAME format a75
select d.TABLESPACE_NAME, d.FILE_NAME, d.BYTES/1024/1024 SIZE_MB, d.AUTOEXTENSIBLE, d.MAXBYTES/1024/1024 MAXSIZE_MB, d.INCREMENT_BY*(v.BLOCK_SIZE/1024)/1024 INCREMENT_BY_MB
from dba_temp_files d,
 v$tempfile v
where d.FILE_ID = v.FILE#
order by d.TABLESPACE_NAME, d.FILE_NAME;

Please Note: For pluggable databases, please ensure you are in the right PDB, as temp datafiles can exist at container level and/or pluggable database level.
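
For example, a quick sketch to confirm which container you are in before querying, and to switch if needed (PDB1 is a hypothetical PDB name):

show con_name
alter session set container = PDB1;  -- PDB1 is hypothetical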

Output from Current Temp Datafiles State

This is the output you will get from the query:

TABLESPACE_NAME FILE_NAME SIZE_MB AUT MAXSIZE_MB INCREMENT_BY_MB
------------------------------ --------------------------------------------------------------------------- ---------- --- ---------- ---------------
TEMP +DATAC1/VERS/TEMPFILE/temp.451.891367325 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.452.891367321 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.454.891367305 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.455.891367301 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.457.891367299 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.458.891367295 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.459.891367159 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.461.891367165 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.463.891367201 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.465.891367203 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.466.891367207 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.468.891367223 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.469.891367227 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.471.891367243 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.472.891367251 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.473.891367255 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.475.891367261 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.477.891367289 32767 YES 32767 1024
...

38 rows selected.

SQL>

The ‘…’ represents several lines removed to keep the output readable 🙂

Add Temp Datafile to Temp Tablespace

To add a temp datafile to the ‘TEMP‘ tablespace, initially sized at 10G, auto-extendable by 1G to a maximum size of 32G (32767M):

ALTER TABLESPACE TEMP ADD TEMPFILE '+DATAC1' SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;

See Oracle Documentation for more info and Syntax:
https://docs.oracle.com/database/121/SQLRF/statements_3002.htm
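
As an aside, if an existing temp file still has headroom, an alternative to adding another file is to resize it.  A sketch, using the name of the newly added temp file from the output below:

-- Sketch: grow an existing temp file rather than adding another one
-- (file name taken from the output below; adjust to your own temp file)
ALTER DATABASE TEMPFILE '+DATAC1/VERS/TEMPFILE/temp.843.973079825' RESIZE 20G;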

Once added, you’ll see the new temp datafile in the output of the above query:

TABLESPACE_NAME FILE_NAME SIZE_MB AUT MAXSIZE_MB INCREMENT_BY_MB
------------------------------ --------------------------------------------------------------------------- ---------- --- ---------- ---------------
TEMP +DATAC1/VERS/TEMPFILE/temp.451.891367325 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.452.891367321 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.454.891367305 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.455.891367301 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.457.891367299 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.458.891367295 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.459.891367159 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.461.891367165 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.463.891367201 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.465.891367203 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.466.891367207 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.468.891367223 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.469.891367227 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.471.891367243 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.472.891367251 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.473.891367255 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.475.891367261 32767 YES 32767 1024
TEMP +DATAC1/VERS/TEMPFILE/temp.477.891367289 32767 YES 32767 1024
...
TEMP +DATAC1/VERS/TEMPFILE/temp.843.973079825 10240 YES 32767 1024

39 rows selected.

SQL>

Related Blog Posts:
Querying Tablespace Usage
Adding a Datafile to Tablespace

Updates
27th April 2018: Changed the query to use v$tempfile instead of v$datafile, as v$datafile doesn’t work for PDBs.


How to fix queries on DBA_FREE_SPACE that are slow

I found myself in a situation where OpsView, a monitoring tool, was having difficulty monitoring the tablespaces for a particular pluggable database.

Upon investigation, it was found that queries against the dictionary view DBA_FREE_SPACE were taking a very long time:

SQL> set timing on
SQL> select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = 'USERS';

NVL(SUM(DFS.BYTES)/1024/1024,0)
-------------------------------
 70.75

Elapsed: 00:00:10.98

There are 60 tablespaces in this pluggable database, and while the time varied from tablespace to tablespace, these queries were by far where most of the time was spent.

I wrote a PL/SQL block to mimic OpsView, as I didn’t want to create an object (a stored procedure) in this customer’s database:

SET SERVEROUTPUT ON
SET TIMING ON
DECLARE
 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 num_out NUMBER;
BEGIN
 FOR ts_name in ts_names
 LOOP
 --sql for used space
 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 --sql for free space
 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 --sql for max
 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 END LOOP;
END;
/

I ran this and the total time was shocking 😐 :

SQL> --SET SERVEROUTPUT ON
SQL> SET TIMING ON
SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 --sql for max
 18 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
END;
 23 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:21:30.94
SQL>

So I searched My Oracle Support (MOS) and found the following MOS note:
Queries on DBA_FREE_SPACE are Slow (Doc ID 271169.1)

Which states:
“1) In release 10g, the view dba_free_space was modified to also include objects in the recycle bin.

2) Large number of objects in the recyclebin can slow down queries on  dba_free_space.

3) This is a normal behaviour.

4) For release 11g, the view dba_free_space doesn’t contain a hint which in case when there is only few objects in recyclebin, you may want to gather underlying stats of tables/dictionary to get better performance.”

The database indeed did have a lot of objects in the recycle bin (in the pluggable database):

SQL> SELECT count(*) from dba_recyclebin;

 COUNT(*)
----------
 27615

SQL>

With most of them dropped recently:

SQL> select trunc(to_date(DROPTIME,'YYYY-MM-DD:HH24:MI:SS')), count(*) from dba_recyclebin group by trunc(to_date(DROPTIME,'YYYY-MM-DD:HH24:MI:SS'))
  2  order by 1
  3 /

TRUNC(TO_ COUNT(*)
--------- ----------
24-SEP-16 2
...
19-JAN-18 2506
20-JAN-18 4322
21-JAN-18 4321
22-JAN-18 4320
23-JAN-18 4321
24-JAN-18 4321
25-JAN-18 2446

421 rows selected.

SQL>

So I purged the recycle bin (with the customer’s permission) and re-ran the check:

SQL> purge dba_recyclebin;

DBA Recyclebin purged.

Elapsed: 00:06:30.39
SQL> --SET SERVEROUTPUT ON
SET TIMING ON
SQL> SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 18 --sql for max
 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
 23 END;
 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:02:46.25
SQL>

As a result, the duration of the PL/SQL block went from 21 minutes to just under 3 minutes.  However, I needed it to go under 2 minutes, as this was the timeout for OpsView.

So I proceeded with the next recommendation in the MOS note, gathering dictionary and fixed object stats (with the customer’s permission), using MOS note:
How to Gather Statistics on Objects Owned by the ‘SYS’ User and ‘Fixed’ Objects (Doc ID 457926.1)

SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

PL/SQL procedure successfully completed.

Elapsed: 00:00:20.49

SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

PL/SQL procedure successfully completed.

Elapsed: 00:04:28.07

SQL> --SET SERVEROUTPUT ON
SET TIMING ON
SQL> SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 18 --sql for max
 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
 23 END;
 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.53
SQL>

Bingo! The duration of the PL/SQL block went down to just 4 seconds 🙂

PLEASE NOTE: This also affects non-pluggable databases; however, for pluggable databases you need to purge the recycle bin wherever the dropped objects are: the container database and each pluggable database require independent purges.
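
A minimal sketch of purging at both levels, assuming a PDB named PDB1 and the appropriate privileges:

-- Purge the root container (PDB1 is a hypothetical PDB name)
alter session set container = CDB$ROOT;
purge dba_recyclebin;

-- Then purge each pluggable database separately
alter session set container = PDB1;
purge dba_recyclebin;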

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)