How to Fix Slow Queries on DBA_FREE_SPACE

I found myself in a situation where Opsview, a monitoring tool, was having difficulty monitoring the tablespaces of a particular pluggable database.

Upon investigation it was found that queries against the dictionary view DBA_FREE_SPACE were taking a very long time:

SQL> set timing on
SQL> select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = 'USERS';

NVL(SUM(DFS.BYTES)/1024/1024,0)
-------------------------------
 70.75

Elapsed: 00:00:10.98

There are 60 tablespaces in this pluggable database and the time taken varied per tablespace, but the DBA_FREE_SPACE query was by far where most of the time was spent.

I wrote a PL/SQL block to mimic Opsview as I didn’t want to create an object (procedure) in this customer’s database:

SET SERVEROUTPUT ON
SET TIMING ON
DECLARE
 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 num_out NUMBER;
BEGIN
 FOR ts_name in ts_names
 LOOP
 --sql for used space
 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 --sql for free space
 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 --sql for max
 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 END LOOP;
END;
/

I ran this and the total time was shocking 😐 :

SQL> --SET SERVEROUTPUT ON
SQL> SET TIMING ON
SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 18 --sql for max
 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
 23 END;
 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:21:30.94
SQL>

So I searched My Oracle Support (MOS) and found the following MOS note:
Queries on DBA_FREE_SPACE are Slow (Doc ID 271169.1)

The note states:
“1) In release 10g, the view dba_free_space was modified to also include objects in the recycle bin.

2) Large number of objects in the recyclebin can slow down queries on  dba_free_space.

3) This is a normal behaviour.

4) For release 11g, the view dba_free_space doesn’t contain a hint which in case when there is only few objects in recyclebin, you may want to gather underlying stats of tables/dictionary to get better performance.”

The database indeed did have a lot of objects in the recycle bin (in the pluggable database):

SQL> SELECT count(*) from dba_recyclebin;

 COUNT(*)
----------
 27615

SQL>

With most of them dropped recently:

SQL> select trunc(to_date(DROPTIME,'YYYY-MM-DD:HH24:MI:SS')), count(*) from dba_recyclebin group by trunc(to_date(DROPTIME,'YYYY-MM-DD:HH24:MI:SS'))
  2  order by 1
  3 /

TRUNC(TO_ COUNT(*)
--------- ----------
24-SEP-16 2
...
19-JAN-18 2506
20-JAN-18 4322
21-JAN-18 4321
22-JAN-18 4320
23-JAN-18 4321
24-JAN-18 4321
25-JAN-18 2446

421 rows selected.

SQL>

So I purged the recycle bin (with the customer's permission) and re-ran the check:

SQL> purge dba_recyclebin;

DBA Recyclebin purged.

Elapsed: 00:06:30.39
SQL> --SET SERVEROUTPUT ON
SQL> SET TIMING ON
SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 18 --sql for max
 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
 23 END;
 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:02:46.25
SQL>

As a result, the duration of the PL/SQL block went from 21 minutes to just under 3 minutes.  However, I needed it to go under 2 minutes, as this was the timeout for Opsview.

So I proceeded with the next recommendation in the MOS note and gathered dictionary and fixed object stats (with the customer's permission), using the following MOS note:
How to Gather Statistics on Objects Owned by the ‘SYS’ User and ‘Fixed’ Objects (Doc ID 457926.1)

SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

PL/SQL procedure successfully completed.

Elapsed: 00:00:20.49

SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

PL/SQL procedure successfully completed.

Elapsed: 00:04:28.07

SQL> --SET SERVEROUTPUT ON
SQL> SET TIMING ON
SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 18 --sql for max
 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
 23 END;
 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.53
SQL>

Bingo! The duration of the PL/SQL block went down to just 4 seconds 🙂

PLEASE NOTE: This also affects non-pluggable databases. However, in a multitenant environment you need to purge the recycle bin in the container where the dropped objects are; the container database and each pluggable database require an independent purge.
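
For example, a minimal sketch of purging both containers (the PDB name MYPDB1 is illustrative; each container's recycle bin must be purged from within that container, connected as SYSDBA):

SQL> ALTER SESSION SET CONTAINER = CDB$ROOT;
SQL> PURGE DBA_RECYCLEBIN;
SQL> ALTER SESSION SET CONTAINER = MYPDB1;
SQL> PURGE DBA_RECYCLEBIN;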

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to Enable Exadata Write-Back Flash Cache

Please check the following blog post “How to check if Exadata Write-Back Flash Cache is Enabled” for:

  • What is Exadata Write-Back Flash Cache?
  • What are the Performance Benefits of Exadata Write-Back Flash Cache?
  • How to check if Exadata Write-Back Flash Cache is Enabled?
  • Pre-requisites and minimum versions.

You can also get more info from the following My Oracle Support (MOS) note and OTN article:
Exadata Write-Back Flash Cache – FAQ (Doc ID 1500257.1)
OTN Article: Oracle Exadata Database Machine – Write-Back Flash Cache

How to Enable Exadata Write-Back Flash Cache

PLEASE NOTE: Although I have illustrated the steps below, please cross check with the MOS note to ensure the method matches your setup and that the steps haven't changed in later releases (after the time of writing).

With Exadata software 11.2.3.3.1 or higher, it is not required to stop the cellsrv process on the storage cells or to inactivate the grid disks.  If you are on 11.2.3.2.1 to 11.2.3.3.0, then refer to the MOS notes for additional steps.
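
To confirm which procedure applies, you can first check the storage software version on all cells; a minimal sketch using the releaseVersion cell attribute (the cell group file is the one used throughout this post; the same information is also reported by imageinfo on each cell):

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list cell attributes name,releaseVersion"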

It is recommended to enable Write-Back Flash Cache during a period of reduced workload to reduce the performance impact on the database.

Before proceeding with enabling Write-Back Flash Cache, it is recommended to check the caching policy of the grid disks, as we don't want to enable Write-Back Flash Cache for grid disks that don't need it, i.e. the RECO and DBFS disk groups:

[root@v1oex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e list griddisk attributes name,cachingpolicy,cachedby
 v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 default
 v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 default
 v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 default
 [root@v1oex2dbadm01 ~]#

As you can see, all the grid disks have the default caching policy.  As per the following MOS note, we disable caching for the RECO and DBFS disk groups:
Oracle Exadata Database Machine Setup/Configuration Best Practices (Doc ID 1274318.1)

[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm01 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm01,DBFS_DG_CD_03_v1oex2celadm01,DBFS_DG_CD_04_v1oex2celadm01,DBFS_DG_CD_05_v1oex2celadm01 cachingPolicy="none"
 v1oex2celadm01: GridDisk DBFS_DG_CD_02_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_03_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_04_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_05_v1oex2celadm01 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm02 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm02,DBFS_DG_CD_03_v1oex2celadm02,DBFS_DG_CD_04_v1oex2celadm02,DBFS_DG_CD_05_v1oex2celadm02 cachingPolicy="none"
 v1oex2celadm02: GridDisk DBFS_DG_CD_02_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_03_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_04_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_05_v1oex2celadm02 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm03 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm03,DBFS_DG_CD_03_v1oex2celadm03,DBFS_DG_CD_04_v1oex2celadm03,DBFS_DG_CD_05_v1oex2celadm03 cachingPolicy="none"
 v1oex2celadm03: GridDisk DBFS_DG_CD_02_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_03_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_04_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_05_v1oex2celadm03 successfully altered 
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm01 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm01,RECOC1_CD_01_v1oex2celadm01,RECOC1_CD_02_v1oex2celadm01,RECOC1_CD_03_v1oex2celadm01,RECOC1_CD_04_v1oex2celadm01,RECOC1_CD_05_v1oex2celadm01 cachingPolicy="none"
 v1oex2celadm01: GridDisk RECOC1_CD_00_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_01_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_02_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_03_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_04_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_05_v1oex2celadm01 successfully altered 
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm02 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm02,RECOC1_CD_01_v1oex2celadm02,RECOC1_CD_02_v1oex2celadm02,RECOC1_CD_03_v1oex2celadm02,RECOC1_CD_04_v1oex2celadm02,RECOC1_CD_05_v1oex2celadm02 cachingPolicy="none"
 v1oex2celadm02: GridDisk RECOC1_CD_00_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_01_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_02_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_03_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_04_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_05_v1oex2celadm02 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm03 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm03,RECOC1_CD_01_v1oex2celadm03,RECOC1_CD_02_v1oex2celadm03,RECOC1_CD_03_v1oex2celadm03,RECOC1_CD_04_v1oex2celadm03,RECOC1_CD_05_v1oex2celadm03 cachingPolicy="none"
 v1oex2celadm03: GridDisk RECOC1_CD_00_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_01_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_02_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_03_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_04_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_05_v1oex2celadm03 successfully altered
[root@v1oex2dbadm01 ~]#

Now when we enable Write-Back Flash Cache, it will not cache the grid disks in the RECO and DBFS disk groups, avoiding the need to flush them to disk and change the policy as a post step:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes name,cachingpolicy,cachedby
 v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 none
 v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 none
 v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 none
 [root@v1oex2dbadm01 ~]#

Next we check that all the grid disks on all storage cells have the asmdeactivationoutcome and asmmodestatus as “Yes” and “ONLINE” respectively.

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes asmdeactivationoutcome, asmmodestatus
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
[root@v1ex2dbadm01 ~]#

Next we check that the Flash Cache on each storage cell is in the "normal" state and that no flash disks are in a degraded or critical state:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list flashcache detail
v1ex2celadm01: name: v1ex2celadm01_FLASHCACHE
v1ex2celadm01: cellDisk: FD_01_v1ex2celadm01,FD_00_v1ex2celadm01
v1ex2celadm01: creationTime: 2015-07-01T13:39:22+01:00
v1ex2celadm01: degradedCelldisks:
v1ex2celadm01: effectiveCacheSize: 2.910369873046875T
v1ex2celadm01: id: 655bdb7a-8d3b-40e5-88af-cd42843dd3f7
v1ex2celadm01: size: 2.910369873046875T
v1ex2celadm01: status: normal
v1ex2celadm02: name: v1ex2celadm02_FLASHCACHE
v1ex2celadm02: cellDisk: FD_01_v1ex2celadm02,FD_00_v1ex2celadm02
v1ex2celadm02: creationTime: 2015-07-01T06:38:05+01:00
v1ex2celadm02: degradedCelldisks:
v1ex2celadm02: effectiveCacheSize: 2.910369873046875T
v1ex2celadm02: id: 1cc0f7a4-885a-4e23-aec5-b47bc488e8e3
v1ex2celadm02: size: 2.910369873046875T
v1ex2celadm02: status: normal
v1ex2celadm03: name: v1ex2celadm03_FLASHCACHE
v1ex2celadm03: cellDisk: FD_01_v1ex2celadm03,FD_00_v1ex2celadm03
v1ex2celadm03: creationTime: 2015-07-01T20:39:30+01:00
v1ex2celadm03: degradedCelldisks:
v1ex2celadm03: effectiveCacheSize: 2.910369873046875T
v1ex2celadm03: id: b07f6011-1d66-4c3f-a25f-26d1e6b55633
v1ex2celadm03: size: 2.910369873046875T
v1ex2celadm03: status: normal
[root@v1ex2dbadm01 ~]#

Next we validate that all the physical disks are in the "normal" state before we modify the Flash Cache:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "list physicaldisk attributes name,status"
v1ex2celadm01: 8:0 normal
v1ex2celadm01: 8:1 normal
v1ex2celadm01: 8:2 normal
v1ex2celadm01: 8:3 normal
v1ex2celadm01: 8:4 normal
v1ex2celadm01: 8:5 normal
v1ex2celadm01: 8:6 normal
v1ex2celadm01: 8:7 normal
v1ex2celadm01: 8:8 normal
v1ex2celadm01: 8:9 normal
v1ex2celadm01: 8:10 normal
v1ex2celadm01: 8:11 normal
v1ex2celadm01: FLASH_1_1 normal
v1ex2celadm01: FLASH_2_1 normal
v1ex2celadm01: FLASH_4_1 normal
v1ex2celadm01: FLASH_5_1 normal
v1ex2celadm02: 8:0 normal
v1ex2celadm02: 8:1 normal
v1ex2celadm02: 8:2 normal
v1ex2celadm02: 8:3 normal
v1ex2celadm02: 8:4 normal
v1ex2celadm02: 8:5 normal
v1ex2celadm02: 8:6 normal
v1ex2celadm02: 8:7 normal
v1ex2celadm02: 8:8 normal
v1ex2celadm02: 8:9 normal
v1ex2celadm02: 8:10 normal
v1ex2celadm02: 8:11 normal
v1ex2celadm02: FLASH_1_1 normal
v1ex2celadm02: FLASH_2_1 normal
v1ex2celadm02: FLASH_4_1 normal
v1ex2celadm02: FLASH_5_1 normal
v1ex2celadm03: 8:0 normal
v1ex2celadm03: 8:1 normal
v1ex2celadm03: 8:2 normal
v1ex2celadm03: 8:3 normal
v1ex2celadm03: 8:4 normal
v1ex2celadm03: 8:5 normal
v1ex2celadm03: 8:6 normal
v1ex2celadm03: 8:7 normal
v1ex2celadm03: 8:8 normal
v1ex2celadm03: 8:9 normal
v1ex2celadm03: 8:10 normal
v1ex2celadm03: 8:11 normal
v1ex2celadm03: FLASH_1_1 normal
v1ex2celadm03: FLASH_2_1 normal
v1ex2celadm03: FLASH_4_1 normal
v1ex2celadm03: FLASH_5_1 normal
[root@v1ex2dbadm01 ~]#

You can run the same command with an inverse grep on "normal" to ensure you haven't missed any disks that are not in the normal state:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "list physicaldisk attributes name,status"|grep -v normal
[root@v1ex2dbadm01 ~]#

Next we drop the Flash Cache to be able to change the attribute:

PLEASE NOTE: Any data that is currently cached in Flash Cache and being served will then need to be served by the hard disks, so a noticeable performance degradation will be observed.  Hence it is recommended to enable Write-Back Flash Cache during a period of reduced workload to reduce the performance impact on the database.

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e drop flashcache 
v1ex2celadm01: Flash cache v1ex2celadm01_FLASHCACHE successfully dropped 
v1ex2celadm02: Flash cache v1ex2celadm02_FLASHCACHE successfully dropped 
v1ex2celadm03: Flash cache v1ex2celadm03_FLASHCACHE successfully dropped 
[root@v1ex2dbadm01 ~]#

Next we set the "flashCacheMode" attribute to "writeback":

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "alter cell flashCacheMode=writeback"
v1ex2celadm01: Cell v1ex2celadm01 successfully altered
v1ex2celadm02: Cell v1ex2celadm02 successfully altered
v1ex2celadm03: Cell v1ex2celadm03 successfully altered
[root@v1ex2dbadm01 ~]#

Next we re-create the Flash Cache, which will now be in Write-Back mode instead of Write-Through:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e create flashcache all
v1ex2celadm01: Flash cache v1ex2celadm01_FLASHCACHE successfully created
v1ex2celadm02: Flash cache v1ex2celadm02_FLASHCACHE successfully created
v1ex2celadm03: Flash cache v1ex2celadm03_FLASHCACHE successfully created
[root@v1ex2dbadm01 ~]#

Next we check that the attribute "flashCacheMode" is actually now "writeback":

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list cell attributes flashcachemode"
v1ex2celadm01: writeback
v1ex2celadm02: writeback
v1ex2celadm03: writeback
[root@v1ex2dbadm01 ~]#

At this point, write I/O will go straight to flash and can then be moved to hard disk when aged out or not required for read caching.  The Flash Cache will be repopulated over time and performance will return to normal for reads, with additional performance for writes 🙂

You can check the usage increase as Flash Cache repopulates as follows:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e LIST METRICCURRENT FC_BY_USED
 v1oex2celadm01: FC_BY_USED FLASHCACHE 104,838 MB
 v1oex2celadm02: FC_BY_USED FLASHCACHE 104,479 MB
 v1oex2celadm03: FC_BY_USED FLASHCACHE 105,192 MB
[root@v1oex2dbadm01 ~]#

Finally, we validate the grid disk attributes cachingPolicy and cachedby, where we can see that only the DATA disk group is being cached by Flash Cache, and by which flash disk:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes name,cachingpolicy,cachedby
v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 none
v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 none
v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 none
[root@v1oex2dbadm01 ~]#

Final note: there is a script provided by Oracle called setWBFC that can do all of this for you; however, version 1.0.0.2.1.20160602 didn't work for me, as it detected 4 flash disks in an eighth rack when it expected 2.  Although only 2 are in use in an eighth rack, there are 4 physically present, so I believe this is a bug.  I raised an SR with Oracle Support, which is yet to be concluded.  Below is the output for those who are interested:

[root@v1oex2dbadm01 WBFC]# ./setWBFC.sh
 setWBFC Version: 1.0.0.2.1.20160602
 Usage:
 ./setWBFC.sh -g cell_group_file [-d dbs_group_file ]
 [ -h ] [ -i ] [ -l log_directory ]
 [ -m WriteBack | WriteThrough ] [ -o rolling | non-rolling ]
 [ -p ] [ -s step_number ] [ -t time_out_seconds ]
 [ -x trace_level ] [ -v ]

-g file file that lists cell host names, one per line
 -d file file that lists the database host names, one
 per line. Required for non-rolling.
 -h help, print this information
 -i run in interactive mode
 -l log directory directory path for log files
 -m FC_mode flashcache mode: WriteBack | WriteThrough
 -o exec_mode execution mode: rolling | non-rolling (default)
 -p perform a precheck only
 -s step # (*) specify step number to restart at
 -t timeout sec specify in seconds the amount of time to wait
 for griddisks to come ONLINE - range: [600 - 43200]
 Default: 21600 (6 hours)
 -x trace level # specify trace level for further diagnostics
 -v show version

(*) -- Option not yet implemented.

 [root@v1oex2dbadm01 WBFC]# ./setWBFC.sh -g /opt/oracle.SupportTools/onecommand/cell_group -l /root/v1/WBFC/logs -m WriteBack -o rolling -p
 ./setWBFC.sh: Using log directory '/root/v1/WBFC/logs'
 ./setWBFC.sh: Log File '/root/v1/WBFC/logs/setWBFC_18335_2018-01-17-10:46:26.log' created successfully
 2018-01-17 10:46:26
 Starting ./setWBFC.sh on v1oex2dbadm01
 Version: 1.0.0.2.1.20160602
 Command line options used:
 -g /opt/oracle.SupportTools/onecommand/cell_group
 -o rolling
 -m WriteBack
 -p (Perform pre-req checks only)
 -t 21600
 -x 0

2018-01-17 10:46:26
 Performing pre-req checks.....
 2018-01-17 10:46:26
 Creating baseline inventory for griddisks
 2018-01-17 10:46:27
 Creating baseline inventory for flashdisks
 2018-01-17 10:46:28
 Creating baseline inventory for flashsize
 2018-01-17 10:46:28
 dcli present and in PATH. [PASSED]
 2018-01-17 10:46:28
 Checking cell nodes are valid storage servers...
 2018-01-17 10:46:29
 All cells are valid Exadata storage cells.
 2018-01-17 10:46:29
 Checking Exadata Storage Software Versions...
 2018-01-17 10:46:33
 Software versions of the following cells:
 v1oex2celadm01: 12.1.2.3.5.170418 [PASSED]
 v1oex2celadm02: 12.1.2.3.5.170418 [PASSED]
 v1oex2celadm03: 12.1.2.3.5.170418 [PASSED]

2018-01-17 10:46:33
 Checking Grid Infrastructure Software Version...
 2018-01-17 10:46:38
 Grid Infrastructure version: 12.1.0.2.00 [PASSED]

2018-01-17 10:46:38
 Checking for active ASM operations....
 2018-01-17 10:46:38
 Check for no active ASM operations: [PASSED]
 2018-01-17 10:46:38
 Checking griddisk status across all cells....
 2018-01-17 10:46:39
 All griddisks across all cells have asmdeactivationoutcome = Yes
 All griddisks across all cells are ONLINE
 Griddisk checks: [PASSED]
 2018-01-17 10:46:39
 Checking flash cache status.....
 2018-01-17 10:46:40
 Flashcache status normal: [PASSED]
 2018-01-17 10:46:40
 Checking that all FlashDisks are present...
 2018-01-17 10:46:42
 Cell v1oex2celadm01 has one or more FlashDisk missing. Expecting 2 but found 4

2018-01-17 10:46:42
 FlashDisk validation: [FAILED]
 2018-01-17 10:46:42
 Checking current flash cache mode.....
 2018-01-17 10:46:43
 Flashcache not already in target mode: [PASSED]
 2018-01-17 10:46:43
 Pre-req checks failed with status 7. Exiting....

[root@v1oex2dbadm01 WBFC]#

If this works for you, great, and I would recommend using this method.  Otherwise it can at least be used to double-check the pre-requisites, and you can then do it manually as shown above 🙂

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to check if Exadata Write-Back Flash Cache is Enabled

What is Exadata Write-Back Flash Cache?

Exadata Write-Back Flash Cache provides the ability to cache not only read I/Os but also write I/Os to the PCI flash on the storage cells.  Exadata storage software 11.2.3.2.1 or higher and Grid Infrastructure and Database software 11.2.0.3.9 or higher are required to use Exadata Write-Back Flash Cache, which is persistent across storage cell restarts.

The default since April 2017 for the Oracle Exadata Deployment Assistant (OEDA) is Write-Back Flash Cache when the DATA disk group is HIGH redundancy (a quick way to check the redundancy is sketched after the list below) and the Grid Infrastructure and Database software are:

  • 11.2.0.4.1 or higher
  • 12.1.0.2 or higher
  • 12.2.0.2 or higher
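
One way to confirm the DATA disk group redundancy is to query V$ASM_DISKGROUP from the ASM (or a database) instance; the TYPE column shows HIGH for a high-redundancy disk group. A minimal sketch:

SQL> select name, type from v$asm_diskgroup;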

PLEASE NOTE: This option is only applicable to High Capacity storage cells, as Extreme Flash doesn't have hard disks; on Extreme Flash, Write-Back Flash Cache is always enabled and can't be disabled.
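
If you are unsure whether your cells are High Capacity or Extreme Flash, the makeModel cell attribute (also visible in the "list cell detail" output later in this post) identifies the server type; a minimal check across all cells:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list cell attributes name,makeModel"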

What are the Performance Benefits of Exadata Write-Back Flash Cache?

Write-Back Flash Cache can significantly improve write-intensive operations because writing to Flash Cache is significantly faster than writing to hard disks.  Depending on the workload, write performance (IOPS) can be improved by 10x on the older generations of Exadata machines (V2 and X2) and 20x on the newer generations (X3 onwards), correct at the time of writing.

If you are experiencing high write I/O times on the storage cells, visible in AWR reports or the storage cell metrics, then you should consider enabling Write-Back Flash Cache to move write operations off the hard disks and into Flash Cache.
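
For example, you can sample the current cell disk write latency with CellCLI metrics; a minimal sketch (the metric name CD_IO_TM_W_SM_RQ, average small-write latency per request, is an assumption on my part, so confirm the exact metric names for your storage software version with "list metricdefinition"):

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list metriccurrent CD_IO_TM_W_SM_RQ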

See the following My Oracle Support (MOS) Note for more info:
Exadata Write-Back Flash Cache – FAQ (Doc ID 1500257.1)

How to check if Exadata Write-Back Flash Cache is Enabled?

To check if Exadata Write-Back Flash Cache is enabled, run “list cell attributes flashcachemode” on the storage cell using CellCLI as shown below:

[root@v1ex2celadm01 ~]# cellcli
CellCLI: Release 12.1.2.3.5 - Production on Wed Jan 17 10:09:51 GMT 2018

Copyright (c) 2007, 2016, Oracle. All rights reserved.

CellCLI> list cell attributes flashcachemode
 WriteThrough

CellCLI> exit
quitting

[root@v1ex2celadm01 ~]#

If the mode is "WriteThrough", then Write-Back Flash Cache is disabled: writes go straight to hard disk and can then be placed in flash for read caching if required.  If the mode is "WriteBack", then Write-Back Flash Cache is enabled, as the name suggests: writes go straight to flash and can then be moved to hard disk when aged out or not required for read caching.

You can also run “list cell detail” using CellCLI as shown below:

[root@v1ex2celadm01 ~]# cellcli
CellCLI: Release 12.1.2.3.5 - Production on Wed Jan 17 10:10:22 GMT 2018

Copyright (c) 2007, 2016, Oracle. All rights reserved.

CellCLI> list cell detail
 name: v1ex2celadm01
 accessLevelPerm: remoteLoginEnabled
 bbuStatus: normal
 cellVersion: OSS_12.1.2.3.5_LINUX.X64_170418
 cpuCount: 16/32
 diagHistoryDays: 7
 eighthRack: TRUE
 fanCount: 8/8
 fanStatus: normal
 flashCacheMode: WriteThrough
 id: xxxxxxxxxx
 interconnectCount: 2
 interconnect1: ib0
 interconnect2: ib1
 iormBoost: 6.4
 ipaddress1: 10.1.11.14/22
 ipaddress2: 10.1.11.15/22
 kernelVersion: 2.6.39-400.294.4.el6uek.x86_64
 locatorLEDStatus: off
 makeModel: Oracle Corporation ORACLE SERVER X5-2L High Capacity
 memoryGB: 95
 metricHistoryDays: 7
 notificationMethod: snmp
 notificationPolicy: critical,warning,clear
 offloadGroupEvents:
 powerCount: 2/2
 powerStatus: normal
 releaseImageStatus: success
 releaseVersion: 12.1.2.3.5.170418
 rpmVersion: cell-12.1.2.3.5_LINUX.X64_170418-1.x86_64
 releaseTrackingBug: 25509078
 rollbackVersion: 12.1.2.3.4.170111
 securityCert: PrivateKey OK
 Certificate: Subject CN=v1ex2celadm01.v1.com,OU=Oracle Exadata,O=Oracle Corporation,L=Redwood City,ST=California,C=US
 Issuer CN=v1ex2celadm01.v1.com,OU=Oracle Exadata,O=Oracle Corporation,L=Redwood City,ST=California,C=US
 snmpSubscriber: host=v1ex2dbadm02.v1.com,port=1830,community=public
 host=v1ex2dbadm01.v1.com,port=1830,community=public
 host=v1ex2dbadm01.v1.com,port=3872,community=public
 host=v1ex2dbadm02.v1.com,port=3872,community=public
 status: online
 temperatureReading: 24.0
 temperatureStatus: normal
 upTime: 105 days, 7:35
 usbStatus: normal
 cellsrvStatus: running
 msStatus: running
 rsStatus: running

CellCLI> exit
quitting

[root@v1ex2celadm01 ~]#

However, a simpler way to check is via dcli, especially when you have many storage cells, as shown below:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list cell attributes flashcachemode"
v1ex2celadm01: WriteThrough
v1ex2celadm02: WriteThrough
v1ex2celadm03: WriteThrough

Related Posts:
How to Enable Exadata Write-Back Flash Cache

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to use the Oracle Exadata Diagnostics Collection Tool (sundiag.sh)

What is the Oracle Exadata Diagnostics Collection Tool (sundiag.sh)?

Very often when creating a Support Request (SR) for an issue on an Oracle Exadata Database Machine, you'll need to run the script "sundiag.sh", which is the "Oracle Exadata Database Machine - Diagnostics Collection Tool".

The tool collects a lot of diagnostic information that assists the support analyst in diagnosing your problem, such as failed hardware, e.g. a failed disk.

More information can be found in the following My Oracle Support (MOS) notes:
SRDC – EEST Sundiag (Doc ID 1683842.1)
Oracle Exadata Diagnostic Information required for Disk Failures and some other Hardware issues (Doc ID 761868.1)

How to run the Diagnostics Collection Tool

Running "sundiag.sh" is very simple, as shown below:

[root@v1ex1dbadm01 ~]# /opt/oracle.SupportTools/sundiag.sh
Oracle Exadata Database Machine - Diagnostics Collection Tool
Gathering Linux information
Skipping collection of OSWatcher/ExaWatcher logs, Cell Metrics and Traces
Skipping ILOM collection. Use the ilom or snapshot options, or login to ILOM
over the network and run Snapshot separately if necessary.
/var/log/exadatatmp/sundiag_v1ex1dbadm01_xxxxxxxxxx_2018_01_17_13_49
Gathering dbms information
Generating diagnostics tarball and removing temp directory
==============================================================================
Done. The report files are bzip2 compressed in /var/log/exadatatmp/sundiag_v1ex1dbadm01_xxxxxxxxxx_2018_01_17_13_49.tar.bz2
==============================================================================
[root@v1ex1dbadm01 ~]#

For more advanced collections, use the option switches to override the default behaviour, as shown in the help:

[root@v1ex1dbadm01 ~]# /opt/oracle.SupportTools/sundiag.sh -h

Oracle Exadata Database Machine - Diagnostics Collection Tool

Version: 12.1.2.3.3.161109

By default sundiag will collect OSWatcher/ExaWatcher, Cell Metrics and traces,
if there was an alert in the last 7 days. If there is more than one alert, latest
alert is chosen to set the time range for data collection.
Time range is 8hrs prior to and 1hr after the latest alert, for the total of 9 hrs
e.g: latest alert timestamp = 2014-03-29T01:20:04-05:00
 echo Time range = 2014-03-28_16:00:00 and 2014-03-29_01:00:00
User can also specify time ranges (as explained in usage below), which takes
precedence over default behavior of checking for alerts

Usage: /opt/oracle.SupportTools/sundiag.sh [ilom | snapshot] [osw <time ranges>]
 osw - This argument when used expects value of one or more comma separated
 time ranges. OSWatcher/ExaWatcher, cell metrics and traces will be gathered
 in those time ranges.
 The format for time range(s) is <from>-<to>,<from>-<to> and so on without spaces
 where <from> and <to> format is <date>_<time>
 <date> and <time> format should be any valid format that can be recognized by
 'date' command. The command 'date -d <date>' or 'date -d <time>' should be valid
 e.g: /opt/oracle.SupportTools/sundiag.sh osw 2014/03/31_15:00:00-2014/03/31_18:00:00
 Note: Total time range should not exceed 9 hrs. Only the time ranges that
 fall within this limit are considered for the collection of above data
 ilom - User level ILOM data gathering option via ipmitool, in place of
 separately using root login to get ILOM snapshot over the network.
 snapshot - Collects node ILOM snapshot- requires host root password for ILOM
 to send snapshot data over the network.
[root@v1ex1dbadm01 ~]#

Then just upload the bzip2 file to your SR on MOS.

I tend to run this as part of my SR creation and upload the output straight away to save time.
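
If the issue relates to the storage cells, you can also run sundiag.sh across all cells in one go with dcli and then pull the resulting tarballs back to the compute node. A minimal sketch, assuming the standard cell group file and the tarball location shown in the output above (verify the exact paths on your system):

[root@v1ex1dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root /opt/oracle.SupportTools/sundiag.sh
[root@v1ex1dbadm01 ~]# for cell in $(cat /opt/oracle.SupportTools/onecommand/cell_group); do scp ${cell}:/var/log/exadatatmp/sundiag_*.tar.bz2 /tmp/; done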

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)