Online Redo Log Switch Frequency Map

A query I find myself running often is the online redo log switch frequency map query, which queries the v$log_history view (or gv$log_history for cluster databases) and shows the historical log switch frequency.

Why, you might ask? Well, it’s important to see how frequently log switches are occurring, as Oracle’s rule of thumb is to switch no more than 3 logs per hour (roughly 20 minutes of redo per log) at peak DML activity, to prevent excessive checkpoints.  The query can also highlight abnormal churn (DML activity).

Online Redo Log Switch Frequency Map Query

set pages 999 lines 400
col h0 format 999
col h1 format 999
col h2 format 999
col h3 format 999
col h4 format 999
col h5 format 999
col h6 format 999
col h7 format 999
col h8 format 999
col h9 format 999
col h10 format 999
col h11 format 999
col h12 format 999
col h13 format 999
col h14 format 999
col h15 format 999
col h16 format 999
col h17 format 999
col h18 format 999
col h19 format 999
col h20 format 999
col h21 format 999
col h22 format 999
col h23 format 999
SELECT TRUNC (first_time) "Date", inst_id, TO_CHAR (first_time, 'Dy') "Day",
 COUNT (1) "Total",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '00', 1, 0)) "h0",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '01', 1, 0)) "h1",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '02', 1, 0)) "h2",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '03', 1, 0)) "h3",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '04', 1, 0)) "h4",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '05', 1, 0)) "h5",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '06', 1, 0)) "h6",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '07', 1, 0)) "h7",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '08', 1, 0)) "h8",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '09', 1, 0)) "h9",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '10', 1, 0)) "h10",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '11', 1, 0)) "h11",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '12', 1, 0)) "h12",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '13', 1, 0)) "h13",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '14', 1, 0)) "h14",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '15', 1, 0)) "h15",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '16', 1, 0)) "h16",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '17', 1, 0)) "h17",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '18', 1, 0)) "h18",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '19', 1, 0)) "h19",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '20', 1, 0)) "h20",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '21', 1, 0)) "h21",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '22', 1, 0)) "h22",
 SUM (DECODE (TO_CHAR (first_time, 'hh24'), '23', 1, 0)) "h23",
 ROUND (COUNT (1) / 24, 2) "Avg"
FROM gv$log_history
WHERE thread# = inst_id
AND first_time > sysdate -7
GROUP BY TRUNC (first_time), inst_id, TO_CHAR (first_time, 'Dy')
ORDER BY 1,2;

Online Redo Log Switch Frequency Map Output

SQL> set pages 999 lines 400
SQL> col h0 format 999
SQL> col h1 format 999
SQL> col h2 format 999
SQL> col h3 format 999
SQL> col h4 format 999
SQL> col h5 format 999
SQL> col h6 format 999
SQL> col h7 format 999
SQL> col h8 format 999
SQL> col h9 format 999
SQL> col h10 format 999
SQL> col h11 format 999
SQL> col h12 format 999
SQL> col h13 format 999
SQL> col h14 format 999
SQL> col h15 format 999
SQL> col h16 format 999
SQL> col h17 format 999
SQL> col h18 format 999
SQL> col h19 format 999
SQL> col h20 format 999
SQL> col h21 format 999
SQL> col h22 format 999
SQL> col h23 format 999
SQL> SELECT TRUNC (first_time) "Date", inst_id, TO_CHAR (first_time, 'Dy') "Day",
SQL>  COUNT (1) "Total",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '00', 1, 0)) "h0",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '01', 1, 0)) "h1",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '02', 1, 0)) "h2",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '03', 1, 0)) "h3",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '04', 1, 0)) "h4",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '05', 1, 0)) "h5",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '06', 1, 0)) "h6",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '07', 1, 0)) "h7",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '08', 1, 0)) "h8",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '09', 1, 0)) "h9",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '10', 1, 0)) "h10",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '11', 1, 0)) "h11",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '12', 1, 0)) "h12",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '13', 1, 0)) "h13",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '14', 1, 0)) "h14",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '15', 1, 0)) "h15",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '16', 1, 0)) "h16",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '17', 1, 0)) "h17",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '18', 1, 0)) "h18",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '19', 1, 0)) "h19",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '20', 1, 0)) "h20",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '21', 1, 0)) "h21",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '22', 1, 0)) "h22",
SQL>  SUM (DECODE (TO_CHAR (first_time, 'hh24'), '23', 1, 0)) "h23",
SQL>  ROUND (COUNT (1) / 24, 2) "Avg"
SQL> FROM gv$log_history
SQL> WHERE thread# = inst_id
SQL> AND first_time > sysdate -7
SQL> GROUP BY TRUNC (first_time), inst_id, TO_CHAR (first_time, 'Dy')
SQL> ORDER BY 1,2;

Date         INST_ID Day      Total   h0   h1   h2   h3   h4   h5   h6   h7   h8   h9  h10  h11  h12  h13  h14  h15  h16  h17  h18  h19  h20  h21  h22  h23        Avg
--------- ---------- --- ---------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----------
24-MAY-19          1 Fri          1    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    1        .04
24-MAY-19          2 Fri          1    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    0    1        .04
25-MAY-19          1 Sat         56    1    1    3    0    2    2    0    7    0    0   10    4    4    6    3    1    2    4    1    1    1    2    0    1       2.33
25-MAY-19          2 Sat         62    0    3    3    0    1    2    0    7    0    0    8    5    2    2    3    5    4    2    3    3    5    4    0    0       2.58
26-MAY-19          1 Sun         56    1    1    3    0    2    5    0   11    0    4    1    1    7    7    3    5    1    0    0    0    1    1    1    1       2.33
26-MAY-19          2 Sun         28    1    3    3    0    1    4    0    5    0    1    1    0    2    3    1    1    1    0    0    0    0    0    1    0       1.17
27-MAY-19          1 Mon         33    1    1    3    0    1    4    0    6    5    1    0    1    1    2    1    1    1    0    0    0    1    1    0    2       1.38
27-MAY-19          2 Mon         21    0    3    3    0    0    4    0    3    2    0    0    1    0    1    0    1    0    1    0    0    0    0    1    1        .88
28-MAY-19          1 Tue         43    2    3    3    0    3    5    1    4    3    1    3    0    1    1    1    1    2    3    2    1    1    0    1    1       1.79
28-MAY-19          2 Tue         38    2    4    3    1    1    4    1    7    1    1    1    1    0    1    0    0    2    4    0    1    0    1    1    1       1.58
29-MAY-19          1 Wed         58    2    4    4    1    3    5    2   12    3    1    3    1    2    5    4    1    1    0    1    0    1    0    2    0       2.42
29-MAY-19          2 Wed         28    0    2    3    0    1    4    1    5    1    0    1    1    0    2    1    1    1    1    0    0    1    1    0    1       1.17
30-MAY-19          1 Thu         39    2    3    3    0    1    5    1    7    1    1    2    3    1    2    2    1    0    1    1    0    0    1    0    1       1.63
30-MAY-19          2 Thu         29    0    1    3    0    2    4    1    4    2    1    1    1    1    1    2    0    1    2    0    0    1    1    0    0       1.21
31-MAY-19          1 Fri        153    2    1    3    0    2    4    1    8    3    0   23   29   10    2    1   18   20    6   12    2    3    3    0    0       6.38
31-MAY-19          2 Fri        223    1    2    3    0    1    4    0    3    1    0   71   86   29    1    0    6    7    2    4    0    1    1    0    0       9.29

16 rows selected.

SQL>

Where the log switch frequency exceeds 3 per hour, I have highlighted it in red.

From the above output, we can see that most of the time the 3 log switches per hour threshold is met, with the 5th and 7th hours exceeding it.  We can also see that today there has been abnormal and excessive churn in the 10th and 11th hours.

Investigating this further, we look at v$log to see the size of the online redo logs:

SQL> set pages 999 lines 400
SQL> col FIRST_CHANGE# format 999999999999999
SQL> select GROUP#, THREAD#, SEQUENCE#, BYTES/1024/1024 SIZE_MB, BLOCKSIZE, MEMBERS, ARCHIVED, STATUS, FIRST_CHANGE#, FIRST_TIME, NEXT_CHANGE#, NEXT_TIME, CON_ID from v$log;

    GROUP#    THREAD#  SEQUENCE#    SIZE_MB  BLOCKSIZE    MEMBERS ARC STATUS              FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME     CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ---------------- --------- ------------ --------- ----------
         1          1      40649       4096        512          2 YES INACTIVE             137921001754 31-MAY-19   1.3793E+11 31-MAY-19          0
         2          1      40650       4096        512          2 NO  CURRENT              137930464620 31-MAY-19   2.8147E+14                    0
         3          1      40648       4096        512          2 YES INACTIVE             137920121808 31-MAY-19   1.3792E+11 31-MAY-19          0
         4          2      39319       4096        512          2 NO  CURRENT              137921005288 31-MAY-19   2.8147E+14                    0
         5          2      39317       4096        512          2 YES INACTIVE             137917291807 31-MAY-19   1.3792E+11 31-MAY-19          0
         6          2      39318       4096        512          2 YES INACTIVE             137919186475 31-MAY-19   1.3792E+11 31-MAY-19          0

6 rows selected.

SQL>

As we can see, the online redo logs are 4GB, which is the default size on Oracle Exadata (which this is).  We could increase the size of the redo logs, but it’s a trade-off between performance and space usage.  The peak we can see was 29 log switches for instance 1 and 86 log switches for instance 2; assuming these were full redo logs before switching, that is 29 + 86 = 115 log switches for the database in that hour, equating to a significant 460GB of churn!  But this is an Exadata Machine and is sized and designed for this sort of load 🙂  To resize the redo logs so they switch only 3 times per hour, we would need to resize them to around 154GB, which wouldn’t be feasible, so we could either increase them to something more reasonable like 10GB or leave them as they are, depending on what’s more important: performance or space usage.
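
As a rough way of putting numbers on that sizing discussion, the peak hourly redo volume can be pulled from v$archived_log and divided by the target of 3 switches per hour.  The following is a minimal sketch of that calculation (my own illustration, not part of the original post), assuming archive log mode is enabled and v$archived_log still covers the period of interest:

set pages 999 lines 400
col date_by_hour format a20
-- Top 5 hours by archived redo volume, with the redo log size that would
-- have kept each of those hours at roughly 3 log switches
select * from (
  select to_char(trunc(first_time, 'HH24'), 'DD/MM/YYYY HH24:MI:SS') date_by_hour,
         round(sum(blocks*block_size)/1024/1024)   churn_in_mb,
         round(sum(blocks*block_size)/1024/1024/3) suggested_log_size_mb
  from   v$archived_log
  group  by trunc(first_time, 'HH24')
  order  by 2 desc
)
where rownum <= 5;

On the output above, the 31-MAY-19 10:00 and 11:00 hours would top that list with a suggested size far larger than is practical, which is the 154GB point made above.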

 

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

RMAN Incremental Differential vs Cumulative & Demo

This blog post is part of the “RMAN Back to Basics” series, which can be found here.

Differential Incremental Backups (default)

A differential incremental backup backs up all the blocks that have changed since the most recent incremental backup, which can be either level 1 or level 0.

RMAN determines the most recent level 1 backup and backs up all the blocks changed after that backup; if there is no level 1 backup, all the blocks changed after the last level 0 backup are backed up.  If no level 0 backup exists, then a level 0 backup is taken instead.

The diagram below shows how the level 1 differential incremental backups are just backing up from the last level 1:

Incremental_Differential

Cumulative Incremental Backups

A cumulative incremental backup backs up all the blocks that have changed since the most recent level 0 incremental backup.

This reduces the work needed when recovering after a restore, as only the most recent level 1 backup needs to be applied instead of several level 1 backups in sequence.  The trade-off is that, because every cumulative level 1 backup backs up all the blocks changed since the last level 0, it repeats the work done by the previous level 1 backups, thus requiring more space and more time.

The diagram below shows how the level 1 cumulative incremental backups are backing up from the last level 0:

Incremental_Cumulative

Cumulative Incremental Backup Demo

We take a cumulative incremental level 1 backup using my script 8_incremental_level_1_cumulative.sh:

[oracle@dc1sbxdb001 demo]$ ./8_incremental_level_1_cumulative.sh 
-----------------------
Step 1: Set environment
-----------------------

Setting the Database Environment using oraenv...
The Oracle base remains unchanged with value /u01/app/oracle

ORACLE_SID: ZEDDBA
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1

Press Enter to continue

The environment is set to my ZEDDBA database; next we will take a differential incremental backup twice, followed by a cumulative incremental backup twice:

-------------------------------------------------
Step 2: Take Incremental Level 1 Cumlative Backup
-------------------------------------------------

Content of 8_incremental_level_1_cumulative.cmd file:

BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE TAG 'INCR LEVEL 1 CUM';
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE TAG 'INCR LEVEL 1 CUM';
HOST 'read Press Enter to LIST BACKUP';
LIST BACKUP;

Press Enter to continue

Next a differential incremental backup is taken:

Calling 'rman target / cmdfile=/media/sf_Software/scripts/demo/8_incremental_level_1_cumulative.cmd'

Recovery Manager: Release 12.2.0.1.0 - Production on Fri May 24 18:27:49 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ZEDDBA (DBID=3520942925)

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
Starting backup at 24-MAY-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=157 device type=DISK
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 24-MAY-19
channel ORA_DISK_1: finished piece 1 at 24-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_24/o1_mf_nnnd1_INCR_LEVEL_1_ggjbdsg0_.bkp tag=INCR LEVEL 1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 24-MAY-19

Starting Control File and SPFILE Autobackup at 24-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_24/o1_mf_s_1009132076_ggjbdwoh_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 24-MAY-19

Next another differential incremental backup is taken:

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
Starting backup at 24-MAY-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 24-MAY-19
channel ORA_DISK_1: finished piece 1 at 24-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_24/o1_mf_nnnd1_INCR_LEVEL_1_ggjbdys0_.bkp tag=INCR LEVEL 1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 24-MAY-19

Starting Control File and SPFILE Autobackup at 24-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_24/o1_mf_s_1009132081_ggjbf23y_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 24-MAY-19

Next a cumulative incremental backup is taken:

RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE TAG 'INCR LEVEL 1 CUM';
Starting backup at 24-MAY-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 24-MAY-19
channel ORA_DISK_1: finished piece 1 at 24-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_24/o1_mf_nnnd1_INCR_LEVEL_1_CUM_ggjbf3rn_.bkp tag=INCR LEVEL 1 CUM comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 24-MAY-19

Starting Control File and SPFILE Autobackup at 24-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_24/o1_mf_s_1009132084_ggjbf516_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 24-MAY-19

Next another cumulative incremental backup is taken:

RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE TAG 'INCR LEVEL 1 CUM';
Starting backup at 24-MAY-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 24-MAY-19
channel ORA_DISK_1: finished piece 1 at 24-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_24/o1_mf_nnnd1_INCR_LEVEL_1_CUM_ggjbf6rv_.bkp tag=INCR LEVEL 1 CUM comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 24-MAY-19

Starting Control File and SPFILE Autobackup at 24-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_24/o1_mf_s_1009132087_ggjbf81t_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 24-MAY-19

Next we list the backup within RMAN using ‘LIST BACKUP‘:

RMAN> HOST 'read Press Enter to LIST BACKUP';
host command complete

RMAN> LIST BACKUP;

List of Backup Sets
===================

...
...
...
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
25 Incr 1 38.23M DISK 00:00:02 24-MAY-19 
BP Key: 25 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 1
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_24/o1_mf_nnnd1_INCR_LEVEL_1_ggjbdsg0_.bkp
List of Datafiles in backup set 25
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 1 Incr 603534 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 1 Incr 603534 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 1 Incr 603534 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 1 Incr 603534 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
26 Full 8.31M DISK 00:00:00 24-MAY-19 
BP Key: 26 Status: AVAILABLE Compressed: NO Tag: TAG20190524T182756
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_24/o1_mf_s_1009132076_ggjbdwoh_.bkp
SPFILE Included: Modification time: 24-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 603541 Ckp time: 24-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
27 Incr 1 64.00K DISK 00:00:03 24-MAY-19 
BP Key: 27 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 1
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_24/o1_mf_nnnd1_INCR_LEVEL_1_ggjbdys0_.bkp
List of Datafiles in backup set 27
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 1 Incr 603560 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 1 Incr 603560 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 1 Incr 603560 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 1 Incr 603560 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
28 Full 8.31M DISK 00:00:01 24-MAY-19 
BP Key: 28 Status: AVAILABLE Compressed: NO Tag: TAG20190524T182801
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_24/o1_mf_s_1009132081_ggjbf23y_.bkp
SPFILE Included: Modification time: 24-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 603567 Ckp time: 24-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
29 Incr 1 38.23M DISK 00:00:01 24-MAY-19 
BP Key: 29 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 1 CUM
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_24/o1_mf_nnnd1_INCR_LEVEL_1_CUM_ggjbf3rn_.bkp
List of Datafiles in backup set 29
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 1 Incr 603585 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 1 Incr 603585 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 1 Incr 603585 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 1 Incr 603585 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
30 Full 8.31M DISK 00:00:01 24-MAY-19 
BP Key: 30 Status: AVAILABLE Compressed: NO Tag: TAG20190524T182804
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_24/o1_mf_s_1009132084_ggjbf516_.bkp
SPFILE Included: Modification time: 24-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 603592 Ckp time: 24-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
31 Incr 1 38.23M DISK 00:00:01 24-MAY-19 
BP Key: 31 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 1 CUM
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_24/o1_mf_nnnd1_INCR_LEVEL_1_CUM_ggjbf6rv_.bkp
List of Datafiles in backup set 31
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 1 Incr 603610 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 1 Incr 603610 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 1 Incr 603610 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 1 Incr 603610 24-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
32 Full 8.31M DISK 00:00:01 24-MAY-19 
BP Key: 32 Status: AVAILABLE Compressed: NO Tag: TAG20190524T182807
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_24/o1_mf_s_1009132087_ggjbf81t_.bkp
SPFILE Included: Modification time: 24-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 603617 Ckp time: 24-MAY-19

Recovery Manager complete.

Press Enter to continue

Next we can see the first differential incremental backup backed up 39M of changed blocks, however the second differential incremental backup only backed up 72K, as it only backed up the blocks changed since the last level 1 backup.  Whereas the first and second cumulative incremental backups both backed up 39M of changed blocks:

Files size on disk:
...
...
...

/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_24:
total 115M
-rw-r-----. 1 oracle oinstall 39M May 24 18:27 o1_mf_nnnd1_INCR_LEVEL_1_ggjbdsg0_.bkp
-rw-r-----. 1 oracle oinstall 72K May 24 18:28 o1_mf_nnnd1_INCR_LEVEL_1_ggjbdys0_.bkp
-rw-r-----. 1 oracle oinstall 39M May 24 18:28 o1_mf_nnnd1_INCR_LEVEL_1_CUM_ggjbf3rn_.bkp
-rw-r-----. 1 oracle oinstall 39M May 24 18:28 o1_mf_nnnd1_INCR_LEVEL_1_CUM_ggjbf6rv_.bkp

Press Enter to continue
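
A similar comparison can be made from the data dictionary rather than the file system, by grouping the backup pieces by the tags used in the demo.  This is a minimal sketch of that check (my own addition, not part of the demo script), assuming the 'INCR LEVEL 1' and 'INCR LEVEL 1 CUM' tags shown above:

set pages 999 lines 200
col tag format a25
-- Space used per backup tag, available pieces only
select tag, count(*) pieces, round(sum(bytes)/1024/1024, 2) total_mb
from   v$backup_piece
where  status = 'A'
group  by tag
order  by total_mb desc;

The two 'INCR LEVEL 1 CUM' pieces should come out at roughly the same size as each other, matching the repeated 39M seen on disk above.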

Finally, we update the demo log table:

-------------------------------------
Step 3: Updating and viewing demo log
-------------------------------------
Calling 'sqlplus / as sysdba'
To updated and view demo log

1 row created.

Commit complete.

WHEN COMMENTS
------------------------------ --------------------------------------------------
16-MAY-19 05.31.24.000000 PM Enable Archive Log Mode
16-MAY-19 05.40.23.000000 PM Full Backup
16-MAY-19 05.46.17.000000 PM Full Backup plus Archive Logs
17-MAY-19 04.55.12.000000 PM Image Copy
20-MAY-19 05.13.44.000000 PM Incremental Level 0
22-MAY-19 12.09.17.000000 PM Incremental Level 1
22-MAY-19 12.54.43.000000 PM Incremental Level 1 with BCT
24-MAY-19 06.28.33.000000 PM Incremental Level 1 Cumulative

8 rows selected.

Press Enter to exit shell script

[oracle@dc1sbxdb001 demo]$

Reference

Scripts

  1. 8_incremental_level_1_cumulative.sh
  2. 8_incremental_level_1_cumulative.cmd

To download both in one zip: 8_incremental_level_1_cumulative.zip

The rest of the series

  1. Oracle Database File Placement Best Practice & Create Database Demo
  2. RMAN Full Backup & Demo
  3. RMAN Image Copy & Demo
  4. RMAN Incremental & Demo Part 1 (Level 0)
  5. RMAN Incremental & Demo Part 2 (Level 1)
  6. RMAN Incremental with Block Change Tracking & Demo
  7. RMAN Incremental Differential vs Cumulative & Demo
  8. RMAN Incremental Updating Backup & Demo Part 1
  9. RMAN Incremental Updating Backup & Demo Part 2
  10. Flashback
  11. RMAN Block Media Recovery
  12. RMAN Recover database with only FRA
  13. RMAN Obsolete

Please Note: Links to the blog posts will be released daily and updated here.

 

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

RMAN Incremental with Block Change Tracking & Demo

This blog post is part of the “RMAN Back to Basics” series, which can be found here.

Block Change Tracking

Block change tracking (BCT) improves incremental backup performance by recording the changed blocks of each datafile in the block change tracking file, thus avoiding the need to scan every block in the datafile for changes.  This only applies to level 1 backups.  The block change tracking file is roughly 1/30,000 of the size of the data being tracked and is allocated in 10MB chunks, so each 10MB chunk tracks about 300GB; tracking a 1TB database would therefore only take about 40MB.
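
To see whether BCT is enabled, where the tracking file lives and how its size compares to the rule of thumb above, something along these lines can be run (a minimal sketch of my own, not part of the demo below):

set pages 999 lines 200
col filename format a70
-- Current BCT status and tracking file size
select status, filename, round(bytes/1024/1024) size_mb
from   v$block_change_tracking;

-- Database size versus the ~1/30,000 rule of thumb, rounded up to 10MB chunks
select round(sum(bytes)/1024/1024/1024, 1)    db_size_gb,
       ceil(sum(bytes)/30000/1024/1024/10)*10 approx_bct_mb
from   v$datafile;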

Incremental with Block Change Tracking Demo

We take an incremental level 1 backup with block change tracking using my script 7_incremental_level_1_with_BCT.sh:

[oracle@dc1sbxdb001 demo]$ ./7_incremental_level_1_with_BCT.sh 
-----------------------
Step 1: Set environment
-----------------------

Setting the Database Environment using oraenv...
The Oracle base remains unchanged with value /u01/app/oracle

ORACLE_SID: ZEDDBA
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1

Press Enter to continue

The environment is set to my ZEDDBA database; next we create some “junk” data in the database to bloat the backups and show the difference in performance with BCT:

-----------------------------------------------------------
Step 2: Take Incremental Level 1 with Block Change Tracking
-----------------------------------------------------------

Clearing all backups...

Creating junk!

Tablespace created.

Table created.

Table altered.

151956 rows created.

Commit complete.

303912 rows created.

Commit complete.

607824 rows created.

Commit complete.

1215648 rows created.

Commit complete.

2431296 rows created.

Commit complete.

4862592 rows created.

Commit complete.

insert /*+ append */ into junk value (select * from junk)
*
ERROR at line 1:
ORA-01653: unable to extend table SYS.JUNK by 8192 in tablespace JUNK

Commit complete.

insert /*+ append */ into junk value (select * from junk)
*
ERROR at line 1:
ORA-01653: unable to extend table SYS.JUNK by 8192 in tablespace JUNK

Commit complete.

BYTES/1024/1024
---------------
1984

Press Enter to continue

You can see 1,984MB of data has been created that will be backed up.
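
The figure above is simply the size of the junk tablespace’s datafile; the script’s SQL isn’t shown, but it is presumably along these lines (an assumption on my part):

-- Size of the JUNK tablespace's datafile in MB (hypothetical reconstruction)
select bytes/1024/1024
from   dba_data_files
where  tablespace_name = 'JUNK';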

Next we will take an incremental level 0 backup followed by a level 1, then repeat with BCT enabled, and finally repeat again with BCT disabled to show that caching or the like wasn’t the reason for the performance increase:

Content of 7_incremental_level_1_with_BCT.cmd file:
BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'INCR LEVEL 0';
BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
HOST 'read Press Enter to enable Block Change Tracking';
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;
HOST 'read Press Enter to take backup with BCT';
BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'INCR LEVEL 0';
BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
HOST 'read Press Enter to disable Block Change Tracking';
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'INCR LEVEL 0';
BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
HOST 'read Press Enter to LIST BACKUP';
LIST BACKUP;

Press Enter to continue

Next an incremental level 0 backup followed by level 1 is taken:

Calling 'rman target / cmdfile=/media/sf_Software/scripts/demo/7_incremental_level_1_with_BCT.cmd'

Recovery Manager: Release 12.2.0.1.0 - Production on Wed May 22 12:51:57 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ZEDDBA (DBID=3520942925)

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'INCR LEVEL 0';
Starting backup at 22-MAY-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=153 device type=DISK
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-19

channel ORA_DISK_1: finished piece 1 at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd0_INCR_LEVEL_0_ggbfz0nn_.bkp tag=INCR LEVEL 0 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
Finished backup at 22-MAY-19

Starting Control File and SPFILE Autobackup at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939154_ggbg03bn_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAY-19

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
Starting backup at 22-MAY-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-19
channel ORA_DISK_1: finished piece 1 at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd1_INCR_LEVEL_1_ggbg052p_.bkp tag=INCR LEVEL 1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:08
Finished backup at 22-MAY-19

Starting Control File and SPFILE Autobackup at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939164_ggbg0df4_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAY-19

host command complete

Please Note: It took 35 seconds to take the level 0 backup and 8 seconds to take the level 1 backup.

Now we enable block change tracking:

RMAN> HOST 'read Press Enter to enable Block Change Tracking';
1> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
2> SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;
Statement processed

STATUS 
----------
FILENAME 
--------------------------------------------------------------------------------
ENABLED 
/u01/app/oracle/oradata/ZEDDBA/changetracking/o1_mf_ggbg0g1y_.chg

Next an incremental level 0 backup followed by level 1 is taken again to show the performance improvement:

RMAN> HOST 'read Press Enter to take backup with BCT';

host command complete

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'INCR LEVEL 0';
Starting backup at 22-MAY-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-19
channel ORA_DISK_1: finished piece 1 at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd0_INCR_LEVEL_0_ggbg0nl4_.bkp tag=INCR LEVEL 0 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
Finished backup at 22-MAY-19

Starting Control File and SPFILE Autobackup at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939207_ggbg1qsp_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAY-19

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
Starting backup at 22-MAY-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-19
channel ORA_DISK_1: finished piece 1 at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd1_INCR_LEVEL_1_ggbg1snf_.bkp tag=INCR LEVEL 1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 22-MAY-19

Starting Control File and SPFILE Autobackup at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939210_ggbg1txm_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAY-19

Now we can see the level 1 backup with block change tracking took only 1 second compared to 8 seconds without BCT 🙂

Now we disable block change tracking to prove the performance increase was due to BCT:

RMAN> HOST 'read Press Enter to disable Block Change Tracking';

host command complete

RMAN> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
Statement processed

Next an incremental level 0 backup followed by level 1 is taken again to show the performance improvement has reverted:

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'INCR LEVEL 0';
Starting backup at 22-MAY-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-19
channel ORA_DISK_1: finished piece 1 at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd0_INCR_LEVEL_0_ggbg243o_.bkp tag=INCR LEVEL 0 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
Finished backup at 22-MAY-19

Starting Control File and SPFILE Autobackup at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939255_ggbg37gs_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAY-19

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
Starting backup at 22-MAY-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-19

channel ORA_DISK_1: finished piece 1 at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd1_INCR_LEVEL_1_ggbg3997_.bkp tag=INCR LEVEL 1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 22-MAY-19

Starting Control File and SPFILE Autobackup at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939264_ggbg3js2_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAY-19

Now we can see the level 1 backup without block change tracking took 7 seconds, similar to the original 8 seconds without BCT 🙂
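
Another way to confirm the effect, beyond the elapsed times, is to check whether each level 1 backup actually used the change tracking file and how many blocks it had to read.  A minimal sketch of that check (my own addition, not part of the demo script):

set pages 999 lines 200
-- For each level 1 backup of each datafile, was BCT used and how much was read?
select file#, incremental_level, used_change_tracking,
       blocks_read, datafile_blocks,
       round(blocks_read/datafile_blocks*100, 1) pct_of_file_read
from   v$backup_datafile
where  incremental_level = 1
order  by completion_time, file#;

With BCT in use, used_change_tracking should show YES and the percentage of each datafile read should drop dramatically, matching the much shorter elapsed time seen above.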

Next we list the backup within RMAN using ‘LIST BACKUP‘:

RMAN> HOST 'read Press Enter to LIST BACKUP';
host command complete

RMAN> LIST BACKUP;

List of Backup Sets
===================

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
12 Incr 0 2.44G DISK 00:00:32 22-MAY-19 
BP Key: 12 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 0
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd0_INCR_LEVEL_0_ggbfz0nn_.bkp
List of Datafiles in backup set 12
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 0 Incr 578031 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 0 Incr 578031 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 0 Incr 578031 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 0 Incr 578031 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
5 0 Incr 578031 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
13 Full 8.31M DISK 00:00:01 22-MAY-19 
BP Key: 13 Status: AVAILABLE Compressed: NO Tag: TAG20190522T125234
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939154_ggbg03bn_.bkp
SPFILE Included: Modification time: 22-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 578048 Ckp time: 22-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
14 Incr 1 72.00K DISK 00:00:06 22-MAY-19 
BP Key: 14 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 1
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd1_INCR_LEVEL_1_ggbg052p_.bkp
List of Datafiles in backup set 14
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 1 Incr 578067 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 1 Incr 578067 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 1 Incr 578067 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 1 Incr 578067 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
5 1 Incr 578067 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
15 Full 8.31M DISK 00:00:00 22-MAY-19 
BP Key: 15 Status: AVAILABLE Compressed: NO Tag: TAG20190522T125244
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939164_ggbg0df4_.bkp
SPFILE Included: Modification time: 22-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 578078 Ckp time: 22-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
16 Incr 0 2.44G DISK 00:00:32 22-MAY-19 
BP Key: 16 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 0
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd0_INCR_LEVEL_0_ggbg0nl4_.bkp
List of Datafiles in backup set 16
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 0 Incr 578123 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 0 Incr 578123 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 0 Incr 578123 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 0 Incr 578123 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
5 0 Incr 578123 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
17 Full 8.31M DISK 00:00:00 22-MAY-19 
BP Key: 17 Status: AVAILABLE Compressed: NO Tag: TAG20190522T125327
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939207_ggbg1qsp_.bkp
SPFILE Included: Modification time: 22-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 578142 Ckp time: 22-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
18 Incr 1 72.00K DISK 00:00:00 22-MAY-19 
BP Key: 18 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 1
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd1_INCR_LEVEL_1_ggbg1snf_.bkp
List of Datafiles in backup set 18
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 1 Incr 578162 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 1 Incr 578162 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 1 Incr 578162 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 1 Incr 578162 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
5 1 Incr 578162 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
19 Full 8.31M DISK 00:00:00 22-MAY-19 
BP Key: 19 Status: AVAILABLE Compressed: NO Tag: TAG20190522T125330
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939210_ggbg1txm_.bkp
SPFILE Included: Modification time: 22-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 578169 Ckp time: 22-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
20 Incr 0 2.44G DISK 00:00:30 22-MAY-19 
BP Key: 20 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 0
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd0_INCR_LEVEL_0_ggbg243o_.bkp
List of Datafiles in backup set 20
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 0 Incr 578206 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 0 Incr 578206 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 0 Incr 578206 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 0 Incr 578206 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
5 0 Incr 578206 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
21 Full 8.31M DISK 00:00:00 22-MAY-19 
BP Key: 21 Status: AVAILABLE Compressed: NO Tag: TAG20190522T125415
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939255_ggbg37gs_.bkp
SPFILE Included: Modification time: 22-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 578223 Ckp time: 22-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
22 Incr 1 72.00K DISK 00:00:06 22-MAY-19 
BP Key: 22 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 1
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd1_INCR_LEVEL_1_ggbg3997_.bkp
List of Datafiles in backup set 22
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 1 Incr 578242 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 1 Incr 578242 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 1 Incr 578242 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 1 Incr 578242 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
5 1 Incr 578242 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_junk_ggbfxdb4_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
23 Full 8.31M DISK 00:00:00 22-MAY-19 
BP Key: 23 Status: AVAILABLE Compressed: NO Tag: TAG20190522T125424
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008939264_ggbg3js2_.bkp
SPFILE Included: Modification time: 22-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 578250 Ckp time: 22-MAY-19

RMAN>

Recovery Manager complete.

Press Enter to continue

Next we show the size of the backups to show they were consistently the same size throughout:

Files size on disk:
/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16:
total 0

/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_20:
total 0

/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22:
total 7.4G
-rw-r-----. 1 oracle oinstall 2.5G May 22 12:52 o1_mf_nnnd0_INCR_LEVEL_0_ggbfz0nn_.bkp
-rw-r-----. 1 oracle oinstall 80K May 22 12:52 o1_mf_nnnd1_INCR_LEVEL_1_ggbg052p_.bkp
-rw-r-----. 1 oracle oinstall 2.5G May 22 12:53 o1_mf_nnnd0_INCR_LEVEL_0_ggbg0nl4_.bkp
-rw-r-----. 1 oracle oinstall 80K May 22 12:53 o1_mf_nnnd1_INCR_LEVEL_1_ggbg1snf_.bkp
-rw-r-----. 1 oracle oinstall 2.5G May 22 12:54 o1_mf_nnnd0_INCR_LEVEL_0_ggbg243o_.bkp
-rw-r-----. 1 oracle oinstall 80K May 22 12:54 o1_mf_nnnd1_INCR_LEVEL_1_ggbg3997_.bkp

Press Enter to continue

Next we drop the junk as we no longer require this:

Dropping junk!

Tablespace dropped.

Press Enter to continue

Finally, we update the demo log table:

-------------------------------------
Step 3: Updating and viewing demo log
-------------------------------------
Calling 'sqlplus / as sysdba'
To updated and view demo log

1 row created.


Commit complete.


WHEN COMMENTS
------------------------------ --------------------------------------------------
16-MAY-19 05.31.24.000000 PM Enable Archive Log Mode
16-MAY-19 05.40.23.000000 PM Full Backup
16-MAY-19 05.46.17.000000 PM Full Backup plus Archive Logs
17-MAY-19 04.55.12.000000 PM Image Copy
20-MAY-19 05.13.44.000000 PM Incremental Level 0
22-MAY-19 12.09.17.000000 PM Incremental Level 1
22-MAY-19 12.54.43.000000 PM Incremental Level 1 with BCT

7 rows selected.

Press Enter to exit shell script

[oracle@dc1sbxdb001 demo]$

License Implications

BCT is part of Enterprise Edition and is not available on Standard Edition.  In a non-Data Guard environment, there is no issue with enabling BCT on Enterprise Edition.  However, in a Data Guard environment, BCT cannot be used on a physical standby unless you have Active Data Guard, as BCT on a physical standby is part of the Active Data Guard option, which is separately licensable.

This is explained well in this blog post by Bastiaan Bak:
Using Block Change Tracking with Oracle 12c Data Guard

Reference

Scripts

  1. 7_incremental_level_1_with_BCT.sh
  2. 7_incremental_level_1_with_BCT.cmd
  3. 7_create_junk.sql
  4. 7_drop_junk.sql
  5. delete_all_backups.cmd

To download all 5 in one zip: 7_incremental_level_1_with_BCT.zip

The rest of the series

  1. Oracle Database File Placement Best Practice & Create Database Demo
  2. RMAN Full Backup & Demo
  3. RMAN Image Copy & Demo
  4. RMAN Incremental & Demo Part 1 (Level 0)
  5. RMAN Incremental & Demo Part 2 (Level 1)
  6. RMAN Incremental with Block Change Tracking & Demo
  7. RMAN Incremental Differential vs Cumulative & Demo
  8. RMAN Incremental Updating Backup & Demo Part 1
  9. RMAN Incremental Updating Backup & Demo Part 2
  10. Flashback
  11. RMAN Block Media Recovery
  12. RMAN Recover database with only FRA
  13. RMAN Obsolete

Please Note: Links to the blog posts will be released daily and updated here.

Updates

30th May 2019: Added “License Implications” as requested in the comments.

 

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Querying the amount of redo in Oracle Database

When a database starts to churn more redo than normal, it is handy to be able to extract the amount of redo generated over time, so it can be plotted on a graph.  The below query allows you to extract this info 🙂

Query for redo generation

Query to obtain the amount of redo generation over time by hour and MB:

set pages 999 lines 400
select to_char(trunc(first_time, 'HH24'), 'DD/MM/YYYY HH24:MI:SS') date_by_hour, sum(round(blocks*block_size/1024/1024)) CHURN_IN_MB
from v$archived_log
group by trunc(first_time, 'HH24')
order by 1
/

Output

This is the output you will get from the query:

SQL> set pages 999 lines 400
SQL> select to_char(trunc(first_time, 'HH24'), 'DD/MM/YYYY HH24:MI:SS') date_by_hour, sum(round(blocks*block_size/1024/1024)) CHURN_IN_MB
2 from v$archived_log
3 group by trunc(first_time, 'HH24')
4 order by 1
5 /

DATE_BY_HOUR CHURN_IN_MB
------------------- -----------
07/12/2018 10:00:00 2
07/12/2018 11:00:00 51
07/12/2018 12:00:00 3731
07/12/2018 13:00:00 10857
07/12/2018 14:00:00 12505
07/12/2018 15:00:00 17493
07/12/2018 16:00:00 187
07/12/2018 17:00:00 173
07/12/2018 18:00:00 185
07/12/2018 19:00:00 137
07/12/2018 20:00:00 159
07/12/2018 21:00:00 155
07/12/2018 22:00:00 157
07/12/2018 23:00:00 183
08/12/2018 00:00:00 154
08/12/2018 01:00:00 184
08/12/2018 02:00:00 179
08/12/2018 03:00:00 179
08/12/2018 04:00:00 172
08/12/2018 05:00:00 177
08/12/2018 06:00:00 174
08/12/2018 07:00:00 172
08/12/2018 08:00:00 177
08/12/2018 09:00:00 175
08/12/2018 10:00:00 175
08/12/2018 11:00:00 220
08/12/2018 12:00:00 221
08/12/2018 13:00:00 218
08/12/2018 14:00:00 216
08/12/2018 15:00:00 214
08/12/2018 16:00:00 212
08/12/2018 17:00:00 208
08/12/2018 18:00:00 213
08/12/2018 19:00:00 207
08/12/2018 20:00:00 205
08/12/2018 21:00:00 205
08/12/2018 22:00:00 202
08/12/2018 23:00:00 228
09/12/2018 00:00:00 202
09/12/2018 01:00:00 238
09/12/2018 02:00:00 212
09/12/2018 03:00:00 227
09/12/2018 04:00:00 213
09/12/2018 05:00:00 206
09/12/2018 06:00:00 221
09/12/2018 07:00:00 222
09/12/2018 08:00:00 216
09/12/2018 09:00:00 220
09/12/2018 10:00:00 216
09/12/2018 11:00:00 217
09/12/2018 12:00:00 162
09/12/2018 13:00:00 163
09/12/2018 14:00:00 163
09/12/2018 15:00:00 160
09/12/2018 16:00:00 158
09/12/2018 17:00:00 159
09/12/2018 18:00:00 161
09/12/2018 19:00:00 157
09/12/2018 20:00:00 157
09/12/2018 21:00:00 153
09/12/2018 22:00:00 153
09/12/2018 23:00:00 176
10/12/2018 00:00:00 150
10/12/2018 01:00:00 174
10/12/2018 02:00:00 168
10/12/2018 03:00:00 167
10/12/2018 04:00:00 169
10/12/2018 05:00:00 162
10/12/2018 06:00:00 168
10/12/2018 07:00:00 166
10/12/2018 08:00:00 160
10/12/2018 09:00:00 162
10/12/2018 10:00:00 141
10/12/2018 11:00:00 144
10/12/2018 12:00:00 142
10/12/2018 13:00:00 141
10/12/2018 14:00:00 142
10/12/2018 15:00:00 169
10/12/2018 16:00:00 146
10/12/2018 17:00:00 173
10/12/2018 18:00:00 177
10/12/2018 19:00:00 175
10/12/2018 20:00:00 7278
10/12/2018 21:00:00 12604
10/12/2018 22:00:00 18154
10/12/2018 23:00:00 6844
11/12/2018 00:00:00 1350
11/12/2018 01:00:00 505
11/12/2018 02:00:00 1183
11/12/2018 03:00:00 508
11/12/2018 04:00:00 1488
11/12/2018 05:00:00 7071
11/12/2018 06:00:00 16453
11/12/2018 07:00:00 7076
11/12/2018 08:00:00 17310
11/12/2018 09:00:00 8063
11/12/2018 10:00:00 12681
11/12/2018 11:00:00 3678
11/12/2018 14:00:00 6026
11/12/2018 15:00:00 15569
11/12/2018 16:00:00 7069
11/12/2018 17:00:00 11772
11/12/2018 18:00:00 10167
11/12/2018 19:00:00 6159
11/12/2018 20:00:00 16450
11/12/2018 21:00:00 4106
11/12/2018 22:00:00 10115
11/12/2018 23:00:00 10355
12/12/2018 00:00:00 3203
12/12/2018 01:00:00 5160
12/12/2018 02:00:00 14468
12/12/2018 03:00:00 6591
12/12/2018 04:00:00 1376
12/12/2018 05:00:00 4053
12/12/2018 06:00:00 7947
12/12/2018 07:00:00 12433
12/12/2018 08:00:00 1434
12/12/2018 09:00:00 663
12/12/2018 10:00:00 1511
12/12/2018 11:00:00 654
12/12/2018 12:00:00 5661
12/12/2018 13:00:00 9817
12/12/2018 14:00:00 10148
12/12/2018 15:00:00 372
12/12/2018 16:00:00 1074
12/12/2018 17:00:00 672
12/12/2018 18:00:00 1094
12/12/2018 19:00:00 391
12/12/2018 20:00:00 2403
12/12/2018 21:00:00 827
12/12/2018 22:00:00 1108
12/12/2018 23:00:00 15575
13/12/2018 00:00:00 17219
13/12/2018 01:00:00 8255
13/12/2018 02:00:00 877
13/12/2018 03:00:00 180
13/12/2018 04:00:00 1782
13/12/2018 05:00:00 5284
13/12/2018 06:00:00 16191
13/12/2018 07:00:00 6251
13/12/2018 08:00:00 14533
13/12/2018 09:00:00 8138
13/12/2018 10:00:00 12629
13/12/2018 11:00:00 9701
13/12/2018 12:00:00 9869
13/12/2018 13:00:00 9554
13/12/2018 14:00:00 7106
13/12/2018 15:00:00 15094
13/12/2018 16:00:00 8622
13/12/2018 17:00:00 671
13/12/2018 18:00:00 1094
13/12/2018 19:00:00 370
13/12/2018 20:00:00 2332
13/12/2018 21:00:00 421

154 rows selected.

SQL>

The above output can then be used to create a pivot chart in Excel 🙂
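
If you only want recent history, or you archive to more than one destination, here is a minimal variant of the query (my own sketch, not part of the original output above) that restricts to the last 7 days and a single archive destination so multiplexed archive logs aren't double counted:

set pages 999 lines 400
select to_char(trunc(first_time, 'HH24'), 'DD/MM/YYYY HH24:MI:SS') date_by_hour, sum(round(blocks*block_size/1024/1024)) CHURN_IN_MB
from v$archived_log
where dest_id = 1          -- assumption: destination 1 is your local archive destination
and first_time > sysdate - 7
group by trunc(first_time, 'HH24')
order by 1
/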

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Oracle Exadata Smart Flash Logging

What is Exadata Smart Flash Logging?

In an OLTP environment, it is crucial to have fast response times to redo log writes i.e. low latency.  When multiplexing redo logs for high availability i.e. to protect against hardware failure, redo log writes are only acknowledged when the redo is written to all redo log members i.e. when the slowest disk completes the write.  By this nature, whenever a disk slows down, even if only for a moment, it can have an impact on redo log performance and throughput.

Flash alone can’t resolve this issue, as flash can also momentarily slow down due to erase cycles or wear levelling, and remember the acknowledgement is only given when the redo is written to all redo log members.

Exadata Smart Flash Logging is the feature that writes redo to both hard disk and flash, with the acknowledgement given as soon as either completes the write, thus improving response time and throughput.  So if a write is slow to hard disk, the flash will give a quicker acknowledgement, but when flash is experiencing a slowdown due to erase cycles or wear levelling, the hard disk will acknowledge, smoothing out response times.

The flash used by Smart Flash Logging isn’t a permanent store but a temporary one, providing fast response times by holding the redo until it’s safely written to disk.

No changes are required to the redo log configuration, and the feature is transparent to the database and to recovery.

How to enable Smart Flash Logging?

It’s enabled out of the box, or for older systems it’s enabled when applying cell patch version 11.2.2.4; it also requires Database 11.2.0.2 Bundle Patch 11 or higher.

How to disable Smart Flash Logging?

This shouldn’t be done unless instructed to do so by Oracle Support or Development.

How much flash is used by Smart Flash Logging?

By default just 512MB is used per cell, which should be sufficient for most situations; it’s a small investment for a huge performance benefit.  Statistics record the number of successful writes and the number of unsuccessful writes due to the temporary space being full, in which case the size may need to be increased.  Also, I/O Resource Manager (IORM) can be used to disable Smart Flash Logging for non-critical databases.
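
As a sketch of how you could review those statistics (assuming flash log metrics such as FL_DISK_FIRST and FL_FLASH_FIRST are present on your storage software version), you can list the current FLASHLOG metrics on each cell with dcli and CellCLI:

[root@v1ex1dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list metriccurrent where objectType = 'FLASHLOG'"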

Do standby redo logs use Smart Flash Logging?

Yes, standby redo logs benefit from Smart Flash Logging just as online redo logs do, as long as cell patch 11.2.2.4 or higher and Database 11.2.0.2 Bundle Patch 11 or higher are applied.

How to check that Smart Flash Logging is configured?

Using CellCLI, run "LIST FLASHLOG DETAIL"; if output is returned with the details as shown below, then Smart Flash Logging is configured:

[root@v1ex1dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list flashlog detail"
 v1ex1celadm01: name: v1ex1celadm01_FLASHLOG
 v1ex1celadm01: cellDisk: FD_00_v1ex1celadm01,FD_01_v1ex1celadm01
 v1ex1celadm01: creationTime: 2015-06-28T17:52:43+01:00
 v1ex1celadm01: degradedCelldisks:
 v1ex1celadm01: effectiveSize: 512M
 v1ex1celadm01: efficiency: 100.0
 v1ex1celadm01: id: 366421ec-bf77-499e-870f-f0cf5390343e
 v1ex1celadm01: size: 512M
 v1ex1celadm01: status: normal
 v1ex1celadm02: name: v1ex1celadm02_FLASHLOG
 v1ex1celadm02: cellDisk: FD_01_v1ex1celadm02,FD_00_v1ex1celadm02
 v1ex1celadm02: creationTime: 2015-06-28T17:52:44+01:00
 v1ex1celadm02: degradedCelldisks:
 v1ex1celadm02: effectiveSize: 512M
 v1ex1celadm02: efficiency: 100.0
 v1ex1celadm02: id: 9f670843-c9cc-4156-a32e-8d23fa79cdb8
 v1ex1celadm02: size: 512M
 v1ex1celadm02: status: normal
 v1ex1celadm03: name: v1ex1celadm03_FLASHLOG
 v1ex1celadm03: cellDisk: FD_01_v1ex1celadm03,FD_00_v1ex1celadm03
 v1ex1celadm03: creationTime: 2015-06-28T17:52:33+01:00
 v1ex1celadm03: degradedCelldisks:
 v1ex1celadm03: effectiveSize: 512M
 v1ex1celadm03: efficiency: 100.0
 v1ex1celadm03: id: 749bada6-8ae2-4c51-8410-97622f9a9532
 v1ex1celadm03: size: 512M
 v1ex1celadm03: status: normal
[root@v1ex1dbadm01 ~]#

For more info:
Exadata Smart Flash Logging FAQ (Doc ID 1372894.1)
Oracle Exadata Whitepaper:  Exadata Smart Flash Cache Features and the Oracle Exadata Database Machine

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to fix queries on DBA_FREE_SPACE that are slow

I found myself in a situation where OpsView, a monitoring tool, was having difficulty monitoring the tablespaces for a particular pluggable database.

Upon investigation it was found that the queries against the dictionary view DBA_FREE_SPACE were taking a very long time:

SQL> set timing on
SQL> select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = 'USERS';

NVL(SUM(DFS.BYTES)/1024/1024,0)
-------------------------------
 70.75

Elapsed: 00:00:10.98

There are 60 tablespaces in this pluggable database; the time varied for each tablespace, but the DBA_FREE_SPACE queries were by far where most of the time was spent.

I wrote a PL/SQL block to mimic Opsview, as I didn’t want to create an object (a procedure) in this customer’s database:

SET SERVEROUTPUT ON
SET TIMING ON
DECLARE
 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 num_out NUMBER;
BEGIN
 FOR ts_name in ts_names
 LOOP
 --sql for used space
 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 --sql for free space
 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 --sql for max
 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 END LOOP;
END;
/

I ran this and the total time was shocking 😐 :

SQL> --SET SERVEROUTPUT ON
SQL> SET TIMING ON
SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 18 --sql for max
 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
 23 END;
 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:21:30.94
SQL>

So I searched My Oracle Support (MOS) and found the following MOS note:
Queries on DBA_FREE_SPACE are Slow (Doc ID 271169.1)

Which states:
“1) In release 10g, the view dba_free_space was modified to also include objects in the recycle bin.

2) Large number of objects in the recyclebin can slow down queries on  dba_free_space.

3) This is a normal behaviour.

4) For release 11g, the view dba_free_space doesn’t contain a hint which in case when there is only few objects in recyclebin, you may want to gather underlying stats of tables/dictionary to get better performance.”

The database indeed did have a lot of objects in the recycle bin (in the pluggable database):

SQL> SELECT count(*) from dba_recyclebin;

 COUNT(*)
----------
 27615

SQL>

With most of them dropped recently:

SQL> select trunc(to_date(DROPTIME,'YYYY-MM-DD:HH24:MI:SS')), count(*) from dba_recyclebin group by trunc(to_date(DROPTIME,'YYYY-MM-DD:HH24:MI:SS'))
  2  order by 1
  3 /

TRUNC(TO_ COUNT(*)
--------- ----------
24-SEP-16 2
...
19-JAN-18 2506
20-JAN-18 4322
21-JAN-18 4321
22-JAN-18 4320
23-JAN-18 4321
24-JAN-18 4321
25-JAN-18 2446

421 rows selected.

SQL>
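
For completeness, a quick sketch (my own addition) to see which tablespaces those dropped objects belong to, which ties the recycle bin back to the slow DBA_FREE_SPACE queries:

select ts_name, count(*)
from dba_recyclebin
group by ts_name
order by 2 desc
/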

So I purged the recycle bin (with the customer’s permission) and re-ran the check:

SQL> purge dba_recyclebin;

DBA Recyclebin purged.

Elapsed: 00:06:30.39
SQL> --SET SERVEROUTPUT ON
SQL> SET TIMING ON
SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 18 --sql for max
 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
 23 END;
 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:02:46.25
SQL>

As a result, the duration of the PL/SQL block went from 21 minutes to just under 3 minutes.  However, I needed it to go under 2 minutes, as this was the timeout for OpsView.

So I proceeded with the next recommendation in the MOS note, gathering dictionary and fixed object stats (with the customer’s permission) using MOS note:
How to Gather Statistics on Objects Owned by the ‘SYS’ User and ‘Fixed’ Objects (Doc ID 457926.1)

SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

PL/SQL procedure successfully completed.

Elapsed: 00:00:20.49

SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

PL/SQL procedure successfully completed.

Elapsed: 00:04:28.07

SQL> --SET SERVEROUTPUT ON
SQL> SET TIMING ON
SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 18 --sql for max
 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
 23 END;
 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.53
SQL>

Bingo! The duration of the PL/SQL block went down to just 4 seconds 🙂

PLEASE NOTE: This still affects non-pluggable databases.  However, in a multitenant environment you need to purge the recycle bin where the dropped objects actually are; the container database and the pluggable databases each have their own recycle bin and require an independent purge.
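
A minimal sketch of what that looks like in a multitenant database (the container name MYPDB below is just a placeholder for your pluggable database):

-- purge the root container's recycle bin
ALTER SESSION SET CONTAINER = CDB$ROOT;
PURGE DBA_RECYCLEBIN;

-- then purge each pluggable database's recycle bin separately
ALTER SESSION SET CONTAINER = MYPDB;
PURGE DBA_RECYCLEBIN;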

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to Enable Exadata Write-Back Flash Cache

Please check the following blog post “How to check if Exadata Write-Back Flash Cache is Enabled” for:

  • What is Exadata Write-Back Flash Cache?
  • What are the Performance Benefits of Exadata Write-Back Flash Cache?
  • How to check if Exadata Write-Back Flash Cache is Enabled?
  • Pre-requisites and minimum versions.

You can also get more info from My Oracle Support (MOS) note:
Exadata Write-Back Flash Cache – FAQ (Doc ID 1500257.1)
OTN Article: Oracle Exadata Database Machine – Write-Back Flash Cache

How to Enable Exadata Write-Back Flash Cache

PLEASE NOTE: Although I have illustrated the steps below, please cross-check with the MOS note to ensure the method matches your setup and that the steps haven’t changed in releases after the time of writing.

With Exadata software 11.2.3.3.1 or higher, it is not required to stop the cellsrv process on the storage cells or to inactivate the grid disks.  If you are on 11.2.3.2.1 to 11.2.3.3.0, then refer to the MOS notes for the additional steps.
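
A quick way you could confirm which storage software version you are on before choosing the procedure (a sketch assuming the releaseVersion cell attribute is available in your CellCLI version):

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list cell attributes name,releaseVersion"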

It is recommended to enable Write-Back Flash Cache during a period of reduced workload to reduce the performance impact on the database.

Before proceeding with enabling Write-Back Flash Cache, it is recommended to check the caching policy of the grid disks, as we don’t want to enable Write-Back Flash Cache for grid disks that don’t need it, i.e. the RECO and DBFS disk groups:

[root@v1oex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e list griddisk attributes name,cachingpolicy,cachedby
 v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 default
 v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 default
 v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 default
 [root@v1oex2dbadm01 ~]#

As you can see, all the grid disks have the default caching policy.  As per the following MOS note, we disable caching for the RECO and DBFS disk groups:
Oracle Exadata Database Machine Setup/Configuration Best Practices (Doc ID 1274318.1)

[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm01 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm01,DBFS_DG_CD_03_v1oex2celadm01,DBFS_DG_CD_04_v1oex2celadm01,DBFS_DG_CD_05_v1oex2celadm01 cachingPolicy="none"
 v1oex2celadm01: GridDisk DBFS_DG_CD_02_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_03_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_04_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_05_v1oex2celadm01 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm02 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm02,DBFS_DG_CD_03_v1oex2celadm02,DBFS_DG_CD_04_v1oex2celadm02,DBFS_DG_CD_05_v1oex2celadm02 cachingPolicy="none"
 v1oex2celadm02: GridDisk DBFS_DG_CD_02_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_03_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_04_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_05_v1oex2celadm02 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm03 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm03,DBFS_DG_CD_03_v1oex2celadm03,DBFS_DG_CD_04_v1oex2celadm03,DBFS_DG_CD_05_v1oex2celadm03 cachingPolicy="none"
 v1oex2celadm03: GridDisk DBFS_DG_CD_02_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_03_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_04_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_05_v1oex2celadm03 successfully altered 
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm01 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm01,RECOC1_CD_01_v1oex2celadm01,RECOC1_CD_02_v1oex2celadm01,RECOC1_CD_03_v1oex2celadm01,RECOC1_CD_04_v1oex2celadm01,RECOC1_CD_05_v1oex2celadm01 cachingPolicy="none"
 v1oex2celadm01: GridDisk RECOC1_CD_00_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_01_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_02_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_03_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_04_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_05_v1oex2celadm01 successfully altered 
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm02 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm02,RECOC1_CD_01_v1oex2celadm02,RECOC1_CD_02_v1oex2celadm02,RECOC1_CD_03_v1oex2celadm02,RECOC1_CD_04_v1oex2celadm02,RECOC1_CD_05_v1oex2celadm02 cachingPolicy="none"
 v1oex2celadm02: GridDisk RECOC1_CD_00_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_01_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_02_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_03_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_04_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_05_v1oex2celadm02 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm03 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm03,RECOC1_CD_01_v1oex2celadm03,RECOC1_CD_02_v1oex2celadm03,RECOC1_CD_03_v1oex2celadm03,RECOC1_CD_04_v1oex2celadm03,RECOC1_CD_05_v1oex2celadm03 cachingPolicy="none"
 v1oex2celadm03: GridDisk RECOC1_CD_00_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_01_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_02_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_03_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_04_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_05_v1oex2celadm03 successfully altered
[root@v1oex2dbadm01 ~]#

Now when we enable Write-Back Flash Cache, it will not cache the grid disks for the RECO and DBFS disk groups, avoiding the need to flush them to disk and change the caching policy as a post step:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes name,cachingpolicy,cachedby
 v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 none
 v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 none
 v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 none
 [root@v1oex2dbadm01 ~]#

Next we check that all the grid disks on all storage cells have the asmdeactivationoutcome and asmmodestatus as “Yes” and “ONLINE” respectively.

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes asmdeactivationoutcome, asmmodestatus
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
[root@v1ex2dbadm01 ~]#

Next we check that the Flash Cache on each cell is in the "normal" state and that no flash disks are in a degraded or critical state:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list flashcache detail
v1ex2celadm01: name: v1ex2celadm01_FLASHCACHE
v1ex2celadm01: cellDisk: FD_01_v1ex2celadm01,FD_00_v1ex2celadm01
v1ex2celadm01: creationTime: 2015-07-01T13:39:22+01:00
v1ex2celadm01: degradedCelldisks:
v1ex2celadm01: effectiveCacheSize: 2.910369873046875T
v1ex2celadm01: id: 655bdb7a-8d3b-40e5-88af-cd42843dd3f7
v1ex2celadm01: size: 2.910369873046875T
v1ex2celadm01: status: normal
v1ex2celadm02: name: v1ex2celadm02_FLASHCACHE
v1ex2celadm02: cellDisk: FD_01_v1ex2celadm02,FD_00_v1ex2celadm02
v1ex2celadm02: creationTime: 2015-07-01T06:38:05+01:00
v1ex2celadm02: degradedCelldisks:
v1ex2celadm02: effectiveCacheSize: 2.910369873046875T
v1ex2celadm02: id: 1cc0f7a4-885a-4e23-aec5-b47bc488e8e3
v1ex2celadm02: size: 2.910369873046875T
v1ex2celadm02: status: normal
v1ex2celadm03: name: v1ex2celadm03_FLASHCACHE
v1ex2celadm03: cellDisk: FD_01_v1ex2celadm03,FD_00_v1ex2celadm03
v1ex2celadm03: creationTime: 2015-07-01T20:39:30+01:00
v1ex2celadm03: degradedCelldisks:
v1ex2celadm03: effectiveCacheSize: 2.910369873046875T
v1ex2celadm03: id: b07f6011-1d66-4c3f-a25f-26d1e6b55633
v1ex2celadm03: size: 2.910369873046875T
v1ex2celadm03: status: normal
[root@v1ex2dbadm01 ~]#

Next we validate that all the physical disks are in the "normal" state before we modify the Flash Cache:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "list physicaldisk attributes name,status"
v1ex2celadm01: 8:0 normal
v1ex2celadm01: 8:1 normal
v1ex2celadm01: 8:2 normal
v1ex2celadm01: 8:3 normal
v1ex2celadm01: 8:4 normal
v1ex2celadm01: 8:5 normal
v1ex2celadm01: 8:6 normal
v1ex2celadm01: 8:7 normal
v1ex2celadm01: 8:8 normal
v1ex2celadm01: 8:9 normal
v1ex2celadm01: 8:10 normal
v1ex2celadm01: 8:11 normal
v1ex2celadm01: FLASH_1_1 normal
v1ex2celadm01: FLASH_2_1 normal
v1ex2celadm01: FLASH_4_1 normal
v1ex2celadm01: FLASH_5_1 normal
v1ex2celadm02: 8:0 normal
v1ex2celadm02: 8:1 normal
v1ex2celadm02: 8:2 normal
v1ex2celadm02: 8:3 normal
v1ex2celadm02: 8:4 normal
v1ex2celadm02: 8:5 normal
v1ex2celadm02: 8:6 normal
v1ex2celadm02: 8:7 normal
v1ex2celadm02: 8:8 normal
v1ex2celadm02: 8:9 normal
v1ex2celadm02: 8:10 normal
v1ex2celadm02: 8:11 normal
v1ex2celadm02: FLASH_1_1 normal
v1ex2celadm02: FLASH_2_1 normal
v1ex2celadm02: FLASH_4_1 normal
v1ex2celadm02: FLASH_5_1 normal
v1ex2celadm03: 8:0 normal
v1ex2celadm03: 8:1 normal
v1ex2celadm03: 8:2 normal
v1ex2celadm03: 8:3 normal
v1ex2celadm03: 8:4 normal
v1ex2celadm03: 8:5 normal
v1ex2celadm03: 8:6 normal
v1ex2celadm03: 8:7 normal
v1ex2celadm03: 8:8 normal
v1ex2celadm03: 8:9 normal
v1ex2celadm03: 8:10 normal
v1ex2celadm03: 8:11 normal
v1ex2celadm03: FLASH_1_1 normal
v1ex2celadm03: FLASH_2_1 normal
v1ex2celadm03: FLASH_4_1 normal
v1ex2celadm03: FLASH_5_1 normal
[root@v1ex2dbadm01 ~]#

You can run the same command with an inverse grep on "normal" to ensure you haven’t missed any disks that are not in the normal state:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "list physicaldisk attributes name,status"|grep -v normal
[root@v1ex2dbadm01 ~]#

Next we drop the Flash Cache to be able to change the attribute:

PLEASE NOTE: Any data that is currently cached in the Flash Cache and being served from it will then need to be served by the hard disks, and a noticeable performance degradation will be observed.  Hence it is recommended to enable Write-Back Flash Cache during a period of reduced workload to reduce the performance impact on the database.

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e drop flashcache 
v1ex2celadm01: Flash cache v1ex2celadm01_FLASHCACHE successfully dropped 
v1ex2celadm02: Flash cache v1ex2celadm02_FLASHCACHE successfully dropped 
v1ex2celadm03: Flash cache v1ex2celadm03_FLASHCACHE successfully dropped 
[root@v1ex2dbadm01 ~]#

Next we set the “flashCacheMode” attribute to “writeback“:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "alter cell flashCacheMode=writeback"
v1ex2celadm01: Cell v1ex2celadm01 successfully altered
v1ex2celadm02: Cell v1ex2celadm02 successfully altered
v1ex2celadm03: Cell v1ex2celadm03 successfully altered
[root@v1ex2dbadm01 ~]#

Next we re-create the Flash Cache, which will now be in Write-Back instead of Write-Through mode:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e create flashcache all
v1ex2celadm01: Flash cache v1ex2celadm01_FLASHCACHE successfully created
v1ex2celadm02: Flash cache v1ex2celadm02_FLASHCACHE successfully created
v1ex2celadm03: Flash cache v1ex2celadm03_FLASHCACHE successfully created
[root@v1ex2dbadm01 ~]#

Next we check the attribute “flashCacheMode” is actually now “writeback“:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list cell attributes flashcachemode"
v1ex2celadm01: writeback
v1ex2celadm02: writeback
v1ex2celadm03: writeback
[root@v1ex2dbadm01 ~]#

At this point, write I/O will go straight to flash and can then be moved to hard disk if it ages out or is not required for read caching.  The Flash Cache will be repopulated over time and read performance will return to normal, with additional performance for writes 🙂

You can check the usage increase as Flash Cache repopulates as follows:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e LIST METRICCURRENT FC_BY_USED
 v1oex2celadm01: FC_BY_USED FLASHCACHE 104,838 MB
 v1oex2celadm02: FC_BY_USED FLASHCACHE 104,479 MB
 v1oex2celadm03: FC_BY_USED FLASHCACHE 105,192 MB
[root@v1oex2dbadm01 ~]#
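
Similarly, once writes are being cached you may want to keep an eye on how much data in the Flash Cache has not yet been written back to the hard disks; a sketch using the FC_BY_DIRTY metric (assuming it is available on your storage software version):

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e LIST METRICCURRENT FC_BY_DIRTY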

Finally, we validate the grid disk attributes cachingPolicy and cachedby, where we can see that only the DATA disk group is being cached by Flash Cache, and by which Flash Disk:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes name,cachingpolicy,cachedby
v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 none
v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 none
v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 none
[root@v1oex2dbadm01 ~]#

Final note: there is a script provided by Oracle called setWBFC that can do all of this for you.  However, version 1.0.0.2.1.20160602 didn’t work for me, as it detected 4 Flash Disks in an eighth rack when it expected 2.  Although there are only 2 in use in an eighth rack, there are 4 physically present, so I believe this is a bug.  I did raise an SR with Oracle Support, which is yet to be concluded.  Below is the output for those who are interested:

[root@v1oex2dbadm01 WBFC]# ./setWBFC.sh
 setWBFC Version: 1.0.0.2.1.20160602
 Usage:
 ./setWBFC.sh -g cell_group_file [-d dbs_group_file ]
 [ -h ] [ -i ] [ -l log_directory ]
 [ -m WriteBack | WriteThrough ] [ -o rolling | non-rolling ]
 [ -p ] [ -s step_number ] [ -t time_out_seconds ]
 [ -x trace_level ] [ -v ]

-g file file that lists cell host names, one per line
 -d file file that lists the database host names, one
 per line. Required for non-rolling.
 -h help, print this information
 -i run in interactive mode
 -l log directory directory path for log files
 -m FC_mode flashcache mode: WriteBack | WriteThrough
 -o exec_mode execution mode: rolling | non-rolling (default)
 -p perform a precheck only
 -s step # (*) specify step number to restart at
 -t timeout sec specify in seconds the amount of time to wait
 for griddisks to come ONLINE - range: [600 - 43200]
 Default: 21600 (6 hours)
 -x trace level # specify trace level for further diagnostics
 -v show version

(*) -- Option not yet implemented.

 [root@v1oex2dbadm01 WBFC]# ./setWBFC.sh -g /opt/oracle.SupportTools/onecommand/cell_group -l /root/v1/WBFC/logs -m WriteBack -o rolling -p
 ./setWBFC.sh: Using log directory '/root/v1/WBFC/logs'
 ./setWBFC.sh: Log File '/root/v1/WBFC/logs/setWBFC_18335_2018-01-17-10:46:26.log' created successfully
 2018-01-17 10:46:26
 Starting ./setWBFC.sh on v1oex2dbadm01
 Version: 1.0.0.2.1.20160602
 Command line options used:
 -g /opt/oracle.SupportTools/onecommand/cell_group
 -o rolling
 -m WriteBack
 -p (Perform pre-req checks only)
 -t 21600
 -x 0

2018-01-17 10:46:26
 Performing pre-req checks.....
 2018-01-17 10:46:26
 Creating baseline inventory for griddisks
 2018-01-17 10:46:27
 Creating baseline inventory for flashdisks
 2018-01-17 10:46:28
 Creating baseline inventory for flashsize
 2018-01-17 10:46:28
 dcli present and in PATH. [PASSED]
 2018-01-17 10:46:28
 Checking cell nodes are valid storage servers...
 2018-01-17 10:46:29
 All cells are valid Exadata storage cells.
 2018-01-17 10:46:29
 Checking Exadata Storage Software Versions...
 2018-01-17 10:46:33
 Software versions of the following cells:
 v1oex2celadm01: 12.1.2.3.5.170418 [PASSED]
 v1oex2celadm02: 12.1.2.3.5.170418 [PASSED]
 v1oex2celadm03: 12.1.2.3.5.170418 [PASSED]

2018-01-17 10:46:33
 Checking Grid Infrastructure Software Version...
 2018-01-17 10:46:38
 Grid Infrastructure version: 12.1.0.2.00 [PASSED]

2018-01-17 10:46:38
 Checking for active ASM operations....
 2018-01-17 10:46:38
 Check for no active ASM operations: [PASSED]
 2018-01-17 10:46:38
 Checking griddisk status across all cells....
 2018-01-17 10:46:39
 All griddisks across all cells have asmdeactivationoutcome = Yes
 All griddisks across all cells are ONLINE
 Griddisk checks: [PASSED]
 2018-01-17 10:46:39
 Checking flash cache status.....
 2018-01-17 10:46:40
 Flashcache status normal: [PASSED]
 2018-01-17 10:46:40
 Checking that all FlashDisks are present...
 2018-01-17 10:46:42
 Cell v1oex2celadm01 has one or more FlashDisk missing. Expecting 2 but found 4

2018-01-17 10:46:42
 FlashDisk validation: [FAILED]
 2018-01-17 10:46:42
 Checking current flash cache mode.....
 2018-01-17 10:46:43
 Flashcache not already in target mode: [PASSED]
 2018-01-17 10:46:43
 Pre-req checks failed with status 7. Exiting....

[root@v1oex2dbadm01 WBFC]#

If this works for you, great, then I would recommend using this method; otherwise it can at least be used to double-check the prerequisites, and then you can do it manually as I have shown above 🙂

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)