ODC Appreciation Day : Oracle dcli Utility

What is ODC Appreciation Day?

The Oracle Developer Community (ODC) Appreciation Day, formerly known as OTN Appreciation Day, is a great initiative by Tim Hall aka Oracle-Base.com, where we take the opportunity to say thanks to the Oracle Developer Community. #ThanksODC

More info on Oracle Developer Community can be found here:
About Oracle Developer Community

When is it?

This year, it is on Thursday 11th October 2018.

You can see my previous post here:
2017 – ODC Appreciation Day : Oracle Exadata Database Machine

You can see a summary of previous years’ blog posts here:
2016 – OTN Appreciation Day : Summary
2017 – ODC Appreciation Day 2017 : It’s a Wrap (#ThanksODC)

My Contribution : Oracle dcli Utility

When thinking of a subject, Oracle’s dcli utility on the Oracle Exadata Database Machine came to mind, due to my frequent use of it 🙂

What is the dcli Utility?

The Distributed Command Line Interface (dcli) is actually just a Python script, whose main purpose is to execute commands on Exadata storage cells in parallel.  For those who don’t know Exadata, it’s an Engineered System which includes storage in the form of storage cells, i.e. servers with multiple disks utilised by Automatic Storage Management (ASM).  Even the smallest offering has 3 storage cells, and a rack can scale up to 18 storage cells (elastic configuration).  More info on Exadata can be found here in the latest datasheet (at time of writing):
Exadata X7-2 Datasheet

So as you can imagine, executing commands on 3 servers is tedious enough, let alone 18, hence the power and usefulness of dcli!  I don’t just use it for the storage cells but also for the compute nodes (database servers), as well as the InfiniBand switches 🙂

More info on dcli can be found in the Exadata Documentation:
Exadata System Software User’s Guide -> 9 Using the dcli Utility
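
Before you can use dcli, you need SSH user-equivalence from the compute node you run it from to the target servers.  A minimal first-time setup might look like the sketch below (the group file location and cell names are illustrative; adjust for your environment):

# Create a group file listing the target storage cells, one per line
cat > ~/cell_group <<EOF
v1ex1celadm01
v1ex1celadm02
v1ex1celadm03
EOF

# Push the current user's SSH key to all cells for password-less access (-k)
dcli -g ~/cell_group -l root -k

# Test by running a simple command on all cells in parallel
dcli -g ~/cell_group -l root hostname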

Example usage

Quickly see the version of Exadata Software on your Exadata Machine:

Storage Cells:

[root@v1ex1dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root imageinfo | grep "Kernel version\|Active image version"
v1ex1celadm01: Kernel version: 4.1.12-94.8.4.el6uek.x86_64 #2 SMP Sat May 5 16:14:51 PDT 2018 x86_64
v1ex1celadm01: Active image version: 18.1.7.0.0.180821
v1ex1celadm02: Kernel version: 4.1.12-94.8.4.el6uek.x86_64 #2 SMP Sat May 5 16:14:51 PDT 2018 x86_64
v1ex1celadm02: Active image version: 18.1.7.0.0.180821
v1ex1celadm03: Kernel version: 4.1.12-94.8.4.el6uek.x86_64 #2 SMP Sat May 5 16:14:51 PDT 2018 x86_64
v1ex1celadm03: Active image version: 18.1.7.0.0.180821
[root@v1ex1dbadm01 ~]#

Compute Nodes:

[root@v1ex1dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root imageinfo | grep "Kernel version\|Image version"
v1ex1dbadm01: Kernel version: 4.1.12-94.8.4.el6uek.x86_64 #2 SMP Sat May 5 16:14:51 PDT 2018 x86_64
v1ex1dbadm01: Image version: 18.1.7.0.0.180821
v1ex1dbadm02: Kernel version: 4.1.12-94.8.4.el6uek.x86_64 #2 SMP Sat May 5 16:14:51 PDT 2018 x86_64
v1ex1dbadm02: Image version: 18.1.7.0.0.180821
[root@v1ex1dbadm01 ~]#

InfiniBand Switches:

[root@v1ex1dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/ib_group -l root version | grep "version"
v1ex1sw-iba01: SUN DCS 36p version: 2.2.9-3
v1ex1sw-iba01: BIOS version: SUN0R100
v1ex1sw-ibb01: SUN DCS 36p version: 2.2.9-3
v1ex1sw-ibb01: BIOS version: SUN0R100
[root@v1ex1dbadm01 ~]#

The uses are endless 🙂

When I get a chance, I will create a more in-depth blog post about dcli, including how to set it up, etc.  I will add the link here for ease of reference.

Finally Happy ODC Appreciation Day! #ThanksODC #ThanksOTN 🙂

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)


Resolving Slow Performance, Skipped Checks and Timeouts on Exa Check (exachk)

Background

For more information with regards to Exa Check, please read the following post:
How to use Oracle Exadata Database Machine Exa Check (exachk)

Slow Performance, Skipped Checks and Timeouts

When running the latest exachk (at time of writing, version 18.3.0_20180808), you may notice it takes much longer to run than in the past.  This is due to the vast number of additional checks carried out by the tool.  Because of this, you may also notice timeout issues reported in the report:

Killed Processes

exachk found that below commands were killed during the run, so some checks might have failed to execute properly. Refer to the “Slow Performance, Skipped Checks, and Timeouts” section of the user guide for corrective actions.

Killed check Manage ASM Audit File Directory Growth with cron (CHECK-ID: 9DEBED7B8DAB583DE040E50A1EC01BA0) at v1ex2dbadm01 because it timed out
Killed check Manage ASM Audit File Directory Growth with cron (CHECK-ID: 9DEBED7B8DAB583DE040E50A1EC01BA0) at v1ex2dbadm02 because it timed out


If you refer to the documentation “Slow Performance, Skipped Checks, and Timeouts”, you’ll see there are various parameters you can set in your environment to increase the default timeouts, which I have done below:

[root@v1ex2dbadm01 exachk]# export RAT_TIMEOUT=300
[root@v1ex2dbadm01 exachk]# export RAT_ROOT_TIMEOUT=900
[root@v1ex2dbadm01 exachk]# export RAT_PASSWORDCHECK_TIMEOUT=10
[root@v1ex2dbadm01 exachk]# export RAT_PROMPT_TIMEOUT=30
[root@v1ex2dbadm01 exachk]# export RAT_PROMPT_WAIT_TIMEOUT=60
[root@v1ex2dbadm01 exachk]# export RAT_REMOTE_RUN_TIMEOUT=10800
[root@v1ex2dbadm01 exachk]#
[root@v1ex2dbadm01 exachk]# env | grep RAT
RAT_ROOT_TIMEOUT=900
RAT_PROMPT_TIMEOUT=30
RAT_TIMEOUT=300
RAT_REMOTE_RUN_TIMEOUT=10800
RAT_PASSWORDCHECK_TIMEOUT=10
RAT_PROMPT_WAIT_TIMEOUT=60
[root@v1ex2dbadm01 exachk]#

Now when you run exachk, it will wait longer before killing processes.

In addition, if you run with the “-dbparallelmax” option, you will increase the number of slave processes for database checks:

[root@v1ex2dbadm01 exachk]# ./exachk -dbparallelmax

PLEASE NOTE: This will consume more resources but will run quicker, so use with caution.  Alternatively, you can run with “-dbparallel” and an acceptable number of processes, increasing as per your requirements (see the sketch below).
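
For example, a simple wrapper that combines the timeout settings with a modest parallelism might look like the following sketch (the values are purely illustrative; tune them for your environment):

#!/bin/bash
# Increase the exachk timeouts before the run (example values)
export RAT_TIMEOUT=300
export RAT_ROOT_TIMEOUT=900
export RAT_PASSWORDCHECK_TIMEOUT=10
export RAT_PROMPT_TIMEOUT=30
export RAT_PROMPT_WAIT_TIMEOUT=60
export RAT_REMOTE_RUN_TIMEOUT=10800

# Run exachk with a modest, controlled number of database slave processes
./exachk -dbparallel 4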

Now you should not have any timeouts; if you still do, then you will need to review the parameters above and increase them again.  Alternatively, raise a Service Request (SR) with Oracle Support.


If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Extending the root LVM Partition on Exadata

On an Oracle Exadata Database Machine, the ‘/’ (root) file system defaults to a size of 30GB, which can easily fill up.  Luckily this is just a Logical Volume, and there’s normally lots of untapped space available in the Volume Group.

Extending ‘/’

Identify how much space is used and free on ‘/’ using df:

[root@v1ex1dbadm01 ~]# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       30G   22G  6.2G  79% /
[root@v1ex1dbadm01 ~]#

Display the current logical volume configuration using the lvs command:

[root@v1ex1dbadm01 ~]# lvs -o lv_name,lv_path,vg_name,lv_size
  LV                 Path                            VG      LSize
  LVDbOra1           /dev/VGExaDb/LVDbOra1           VGExaDb 200.00g
  LVDbSwap1          /dev/VGExaDb/LVDbSwap1          VGExaDb  24.00g
  LVDbSys1           /dev/VGExaDb/LVDbSys1           VGExaDb  30.00g
  LVDbSys2           /dev/VGExaDb/LVDbSys2           VGExaDb  30.00g
  LVDoNotRemoveOrUse /dev/VGExaDb/LVDoNotRemoveOrUse VGExaDb   1.00g
[root@v1ex1dbadm01 ~]#

PLEASE NOTE: On Exadata there are 2 SYS volumes, of which one is active and the other inactive.  These are used when patching the compute node, as one is a backup of the current image and is used for rollback purposes.
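
If you want to confirm which SYS volume the compute node is currently running on before extending, imageinfo reports the active system partition; a quick sketch (the exact output wording may vary between Exadata software versions):

# Show the active (and inactive) system partition device
imageinfo | grep -i "system partition"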

Check the online resize option is available using the tune2fs command:

[root@v1ex1dbadm01 ~]# tune2fs -l /dev/VGExaDb/LVDbSys1 | grep resize_inode
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
[root@v1ex1dbadm01 ~]# tune2fs -l /dev/VGExaDb/LVDbSys2 | grep resize_inode
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
[root@v1ex1dbadm01 ~]#

If not available, then the file system needs to be unmounted before resizing.  Refer to the documentation:

Extending the root LVM Partition on Systems Running Oracle Exadata System Software Earlier than Release 11.2.3.2.1

Verify there’s space available in the Volume Group using the vgdisplay command:

[root@v1ex1dbadm01 ~]# vgdisplay -s
  "VGExaDb" 1.63 TiB  [285.00 GiB used / 1.36 TiB free]
[root@v1ex1dbadm01 ~]#

Finally, if there’s enough space, then extend the Logical Volumes using the lvextend command:

[root@v1ex1dbadm01 ~]# lvextend -L +70G /dev/VGExaDb/LVDbSys1
  Size of logical volume VGExaDb/LVDbSys1 changed from 30.00 GiB (7680 extents) to 100.00 GiB (25600 extents).
  Logical volume LVDbSys1 successfully resized.
[root@v1ex1dbadm01 ~]#
[root@v1ex1dbadm01 ~]# lvextend -L +70G /dev/VGExaDb/LVDbSys2
  Size of logical volume VGExaDb/LVDbSys2 changed from 30.00 GiB (7680 extents) to 100.00 GiB (25600 extents).
  Logical volume LVDbSys2 successfully resized.
[root@v1ex1dbadm01 ~]#

Followed by a resize of the file system using resize2fs command:

[root@v1ex1dbadm01 ~]# resize2fs /dev/VGExaDb/LVDbSys1
resize2fs 1.43-WIP (20-Jun-2013)
Filesystem at /dev/VGExaDb/LVDbSys1 is mounted on /; on-line resizing required
old_desc_blocks = 2, new_desc_blocks = 7
Performing an on-line resize of /dev/VGExaDb/LVDbSys1 to 26214400 (4k) blocks.
The filesystem on /dev/VGExaDb/LVDbSys1 is now 26214400 blocks long.
[root@v1ex1dbadm01 ~]#

The inactive SYS volume may give you errors as shown below:

[root@v1ex1dbadm01 ~]# resize2fs /dev/VGExaDb/LVDbSys2
resize2fs 1.43-WIP (20-Jun-2013)
Please run 'e2fsck -f /dev/VGExaDb/LVDbSys2' first.
[root@v1ex1dbadm01 ~]#

In which case, you just need to run the command to check the file system for errors that may have occurred with journalling after an unclean shutdown:

[root@v1ex1dbadm01 ~]# e2fsck -f /dev/VGExaDb/LVDbSys2
e2fsck 1.43-WIP (20-Jun-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/VGExaDb/LVDbSys2: 111629/1966080 files (0.1% non-contiguous), 5031185/7864320 blocks
[root@v1ex1dbadm01 ~]#

Now re-run the resize of the file system using resize2fs command:

[root@v1ex1dbadm01 ~]# resize2fs /dev/VGExaDb/LVDbSys2
resize2fs 1.43-WIP (20-Jun-2013)
Resizing the filesystem on /dev/VGExaDb/LVDbSys2 to 26214400 (4k) blocks.
The filesystem on /dev/VGExaDb/LVDbSys2 is now 26214400 blocks long.
[root@v1ex1dbadm01 ~]#

You should now see ‘/’ with an additional 70GB, less formatting overhead:

[root@v1ex1dbadm01 ~]# df -h /
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       99G   22G   72G  24% /
[root@v1ex1dbadm01 ~]#

Also see the Logical Volumes are now 100GB, up from 30GB:

[root@v1ex1dbadm01 ~]# lvs -o lv_name,lv_path,vg_name,lv_size
  LV                 Path                            VG      LSize
  LVDbOra1           /dev/VGExaDb/LVDbOra1           VGExaDb 200.00g
  LVDbSwap1          /dev/VGExaDb/LVDbSwap1          VGExaDb  24.00g
  LVDbSys1           /dev/VGExaDb/LVDbSys1           VGExaDb 100.00g
  LVDbSys2           /dev/VGExaDb/LVDbSys2           VGExaDb 100.00g
  LVDoNotRemoveOrUse /dev/VGExaDb/LVDoNotRemoveOrUse VGExaDb   1.00g
[root@v1ex1dbadm01 ~]#

Documentation for reference:
Extending the root LVM Partition on Systems Running Oracle Exadata System Software Release 11.2.3.2.1 or Later

Related Post:
Extending a Non-root LVM Partition on Exadata

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Stop password for user accounts expiring on Exadata

Depending on how the Oracle Exadata Machine was set up, the password for user accounts can expire, thus requiring the password to be changed.

This has the knock-on effect of the crontab not being accessible and, more importantly, jobs not running:

[oracle@v1ex1dbadm01 ~]$ crontab -l

Authentication token is no longer valid; new one required
You (oracle) are not allowed to access to (crontab) because of pam configuration.
[oracle@v1ex1dbadm01 ~]$

You can check the password expiry settings behind this pam error as shown below as the root user, using the chage command:

[root@v1ex1dbadm01 ~]# chage -l oracle
Last password change : Dec 11, 2017
Password expires : Mar 11, 2018
Password inactive : never
Account expires : never
Minimum number of days between password change : 1
Maximum number of days between password change : 90
Number of days of warning before password expires : 7
[root@v1ex1dbadm01 ~]#

We can see the password expired on the 11th March 2018, hence the crontab jobs have not run since then.

To change this so the password doesn’t expire, use chage as shown below:

[root@v1ex1dbadm01 ~]# chage -d 9999 -E -1 -m 0 -M -1 oracle

The manual page for chage explains the switches:

-d, --lastday LAST_DAY
 Set the number of days since January 1st, 1970 when the password was last changed. The date may also be expressed in the format YYYY-MM-DD (or the format more commonly used in your area). If the LAST_DAY is set to 0 the user is forced to change his password on the next log on.

-E, --expiredate EXPIRE_DATE
 Set the date or number of days since January 1, 1970 on which the user's account will no longer be accessible. The date may also be expressed in the format YYYY-MM-DD (or the format more commonly used in your area). A user whose account is locked must contact the system administrator before being able to use the system again.

Passing the number -1 as the EXPIRE_DATE will remove an account expiration date.

-m, --mindays MIN_DAYS
 Set the minimum number of days between password changes to MIN_DAYS. A value of zero for this field indicates that the user may change his/her password at any time.

-M, --maxdays MAX_DAYS
 Set the maximum number of days during which a password is valid. When MAX_DAYS plus LAST_DAY is less than the current day, the user will be required to change his/her password before being able to use his/her account. This occurrence can be planned for in advance by use of the -W option, which provides the user with advance warning.

Passing the number -1 as MAX_DAYS will remove checking a password's validity.

Now, when re-checking the password expiry, you can see it’s changed to ‘never’:

[root@v1ex1dbadm01 ~]# chage -l oracle
Last password change : May 01, 2008
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : -1
Number of days of warning before password expires : 7
[root@v1ex1dbadm01 ~]#

And we didn’t need to change the password for the user, and the crontab jobs work again 🙂
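
On an Exadata machine, you would typically want to apply this consistently on all compute nodes, which dcli makes easy; a sketch (the dbs_group file path is the standard OneCommand location used elsewhere on this blog, so verify it exists on your system):

# Stop password expiry for the oracle user on all compute nodes
dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "chage -d 9999 -E -1 -m 0 -M -1 oracle"

# Verify the change on all nodes
dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "chage -l oracle | grep 'Password expires'"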

This doesn’t just apply to Exadata but to Linux in general.

See Related MOS Note:
Expiry of user accounts on Oracle Linux 5 (Doc ID 2327855.1)

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Oracle Exadata Smart Flash Logging

What is Exadata Smart Flash Logging?

In an OLTP environment, it is crucial to have fast response times for redo log writes, i.e. low latency.  When multiplexing redo logs for high availability, i.e. to protect against hardware failure, redo log writes are only acknowledged when the redo is written to all redo log members, i.e. when the slowest disk completes the write.  By this nature, whenever a disk slows down, even if only for a moment, it can impact redo log performance and throughput.

Flash alone can’t resolve this issue, as flash can also momentarily slow down due to issues in erase cycles or wear levelling, and remember, the acknowledgement is only given when the redo is written to all redo log members.

Exadata Smart Flash Logging is the feature that writes redo to both hard disk and flash, with the acknowledgement given as soon as either completes the write, thus improving response time and throughput.  So if a write is slow to hard disk, the flash will give a quicker acknowledgement; but when flash is experiencing a slowdown due to erase cycles or wear levelling, the hard disk will acknowledge, smoothing out response times.

The flash used by Smart Flash Logging isn’t a permanent store but a temporary one, providing fast response times by holding redo only until it’s safely written to disk.

No changes are required to the redo log configuration, and the feature is transparent to the database and recovery.

How to enable Smart Flash Logging?

It’s enabled out of the box, or for older systems it’s enabled when applying cell patch version 11.2.2.4; it also requires Database 11.2.0.2 Bundle Patch 11 or higher.

How to disable Smart Flash Logging?

This shouldn’t be done unless instructed to do so by Oracle Support or Development.

How much flash is used by Smart Flash Logging?

By default just 512MB is used per cell, which should be sufficient for most situations.  It’s a small investment for a huge performance benefit.  Statistics record the number of successful writes, as well as unsuccessful writes due to the temporary space being full, in which case the size may need to be increased.  Also, I/O Resource Manager (IORM) can be used to disable Smart Flash Logging for non-critical databases, as sketched below.
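
These statistics are exposed as cell metrics with the FL_ prefix, which you can query with CellCLI; a sketch below (treat the metric name pattern and the IORM flashlog attribute as assumptions from memory, and verify them against the documentation for your Exadata software version):

# List the current Smart Flash Logging metrics on all cells
dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list metriccurrent where name like 'FL_.*'"

# Example IORM directive to disable flash logging for a non-critical database
# (NONCRITDB is a hypothetical database name)
cellcli -e "alter iormplan dbplan=((name=NONCRITDB, flashlog=off))"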

Do standby redo logs use Smart Flash Logging?

Yes, standby redo logs benefit from Smart Flash Logging just like redo logs, as long as cell patch 11.2.2.4 or higher and Database 11.2.0.2 Bundle Patch 11 or higher are applied.

How to check that Smart Flash Logging is configured?

Using CellCLI, run “LIST FLASHLOG DETAIL”; if output is returned with the details as shown below, then Smart Flash Logging is configured:

[root@v1ex1dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list flashlog detail"
 v1ex1celadm01: name: v1ex1celadm01_FLASHLOG
 v1ex1celadm01: cellDisk: FD_00_v1ex1celadm01,FD_01_v1ex1celadm01
 v1ex1celadm01: creationTime: 2015-06-28T17:52:43+01:00
 v1ex1celadm01: degradedCelldisks:
 v1ex1celadm01: effectiveSize: 512M
 v1ex1celadm01: efficiency: 100.0
 v1ex1celadm01: id: 366421ec-bf77-499e-870f-f0cf5390343e
 v1ex1celadm01: size: 512M
 v1ex1celadm01: status: normal
 v1ex1celadm02: name: v1ex1celadm02_FLASHLOG
 v1ex1celadm02: cellDisk: FD_01_v1ex1celadm02,FD_00_v1ex1celadm02
 v1ex1celadm02: creationTime: 2015-06-28T17:52:44+01:00
 v1ex1celadm02: degradedCelldisks:
 v1ex1celadm02: effectiveSize: 512M
 v1ex1celadm02: efficiency: 100.0
 v1ex1celadm02: id: 9f670843-c9cc-4156-a32e-8d23fa79cdb8
 v1ex1celadm02: size: 512M
 v1ex1celadm02: status: normal
 v1ex1celadm03: name: v1ex1celadm03_FLASHLOG
 v1ex1celadm03: cellDisk: FD_01_v1ex1celadm03,FD_00_v1ex1celadm03
 v1ex1celadm03: creationTime: 2015-06-28T17:52:33+01:00
 v1ex1celadm03: degradedCelldisks:
 v1ex1celadm03: effectiveSize: 512M
 v1ex1celadm03: efficiency: 100.0
 v1ex1celadm03: id: 749bada6-8ae2-4c51-8410-97622f9a9532
 v1ex1celadm03: size: 512M
 v1ex1celadm03: status: normal
[root@v1ex1dbadm01 ~]#

For more info:
Exadata Smart Flash Logging FAQ (Doc ID 1372894.1)
Oracle Exadata Whitepaper:  Exadata Smart Flash Cache Features and the Oracle Exadata Database Machine

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to Enable Exadata Write-Back Flash Cache

Please check the following blog post “How to check if Exadata Write-Back Flash Cache is Enabled” for:

  • What is Exadata Write-Back Flash Cache?
  • What are the Performance Benefits of Exadata Write-Back Flash Cache?
  • How to check if Exadata Write-Back Flash Cache is Enabled?
  • Pre-requisites and minimum versions.

You can also get more info from My Oracle Support (MOS) note:
Exadata Write-Back Flash Cache – FAQ (Doc ID 1500257.1)
OTN Article: Oracle Exadata Database Machine – Write-Back Flash Cache

How to Enable Exadata Write-Back Flash Cache

PLEASE NOTE: Although I have illustrated the steps below, please cross-check with the MOS note to ensure the method below matches your setup and the steps haven’t changed with future releases (after the time of writing).

With Exadata software 11.2.3.3.1 or higher, it is not required to stop the cellsrv process on the storage cells or to inactivate the grid disks.  If you are on 11.2.3.2.1 to 11.2.3.3.0, then refer to the MOS notes for additional steps.

It is recommended to enable Write-Back Flash Cache during a period of reduced workload to reduce the performance impact on the database.

Before proceeding with enabling Write-Back Flash Cache, it is recommended to check the caching policy of the grid disks, as we don’t want to enable Write-Back Flash Cache for grid disks that don’t need it, i.e. the RECO and DBFS disk groups:

[root@v1oex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e list griddisk attributes name,cachingpolicy,cachedby
 v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 default
 v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 default
 v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 default
 [root@v1oex2dbadm01 ~]#

As you can see, all the grid disks have the default caching policy.  As per the following MOS note, we disable caching for the RECO and DBFS disk groups:
Oracle Exadata Database Machine Setup/Configuration Best Practices (Doc ID 1274318.1)

[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm01 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm01,DBFS_DG_CD_03_v1oex2celadm01,DBFS_DG_CD_04_v1oex2celadm01,DBFS_DG_CD_05_v1oex2celadm01 cachingPolicy="none"
 v1oex2celadm01: GridDisk DBFS_DG_CD_02_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_03_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_04_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_05_v1oex2celadm01 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm02 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm02,DBFS_DG_CD_03_v1oex2celadm02,DBFS_DG_CD_04_v1oex2celadm02,DBFS_DG_CD_05_v1oex2celadm02 cachingPolicy="none"
 v1oex2celadm02: GridDisk DBFS_DG_CD_02_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_03_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_04_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_05_v1oex2celadm02 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm03 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm03,DBFS_DG_CD_03_v1oex2celadm03,DBFS_DG_CD_04_v1oex2celadm03,DBFS_DG_CD_05_v1oex2celadm03 cachingPolicy="none"
 v1oex2celadm03: GridDisk DBFS_DG_CD_02_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_03_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_04_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_05_v1oex2celadm03 successfully altered 
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm01 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm01,RECOC1_CD_01_v1oex2celadm01,RECOC1_CD_02_v1oex2celadm01,RECOC1_CD_03_v1oex2celadm01,RECOC1_CD_04_v1oex2celadm01,RECOC1_CD_05_v1oex2celadm01 cachingPolicy="none"
 v1oex2celadm01: GridDisk RECOC1_CD_00_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_01_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_02_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_03_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_04_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_05_v1oex2celadm01 successfully altered 
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm02 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm02,RECOC1_CD_01_v1oex2celadm02,RECOC1_CD_02_v1oex2celadm02,RECOC1_CD_03_v1oex2celadm02,RECOC1_CD_04_v1oex2celadm02,RECOC1_CD_05_v1oex2celadm02 cachingPolicy="none"
 v1oex2celadm02: GridDisk RECOC1_CD_00_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_01_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_02_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_03_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_04_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_05_v1oex2celadm02 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm03 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm03,RECOC1_CD_01_v1oex2celadm03,RECOC1_CD_02_v1oex2celadm03,RECOC1_CD_03_v1oex2celadm03,RECOC1_CD_04_v1oex2celadm03,RECOC1_CD_05_v1oex2celadm03 cachingPolicy="none"
 v1oex2celadm03: GridDisk RECOC1_CD_00_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_01_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_02_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_03_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_04_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_05_v1oex2celadm03 successfully altered
[root@v1oex2dbadm01 ~]#

Now when we enable Write-Back Flash Cache, it will not cache the grid disks for the RECO and DBFS disk groups, avoiding the need to flush to disk and change the policy as a post step:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes name,cachingpolicy,cachedby
 v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 none
 v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 none
 v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 none
 [root@v1oex2dbadm01 ~]#

Next we check that all the grid disks on all storage cells have asmdeactivationoutcome and asmmodestatus of "Yes" and "ONLINE" respectively:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes asmdeactivationoutcome, asmmodestatus
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
[root@v1ex2dbadm01 ~]#

Next we check that all of the Flash Caches are in the "normal" state and that no flash disks are in a degraded or critical state:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list flashcache detail
v1ex2celadm01: name: v1ex2celadm01_FLASHCACHE
v1ex2celadm01: cellDisk: FD_01_v1ex2celadm01,FD_00_v1ex2celadm01
v1ex2celadm01: creationTime: 2015-07-01T13:39:22+01:00
v1ex2celadm01: degradedCelldisks:
v1ex2celadm01: effectiveCacheSize: 2.910369873046875T
v1ex2celadm01: id: 655bdb7a-8d3b-40e5-88af-cd42843dd3f7
v1ex2celadm01: size: 2.910369873046875T
v1ex2celadm01: status: normal
v1ex2celadm02: name: v1ex2celadm02_FLASHCACHE
v1ex2celadm02: cellDisk: FD_01_v1ex2celadm02,FD_00_v1ex2celadm02
v1ex2celadm02: creationTime: 2015-07-01T06:38:05+01:00
v1ex2celadm02: degradedCelldisks:
v1ex2celadm02: effectiveCacheSize: 2.910369873046875T
v1ex2celadm02: id: 1cc0f7a4-885a-4e23-aec5-b47bc488e8e3
v1ex2celadm02: size: 2.910369873046875T
v1ex2celadm02: status: normal
v1ex2celadm03: name: v1ex2celadm03_FLASHCACHE
v1ex2celadm03: cellDisk: FD_01_v1ex2celadm03,FD_00_v1ex2celadm03
v1ex2celadm03: creationTime: 2015-07-01T20:39:30+01:00
v1ex2celadm03: degradedCelldisks:
v1ex2celadm03: effectiveCacheSize: 2.910369873046875T
v1ex2celadm03: id: b07f6011-1d66-4c3f-a25f-26d1e6b55633
v1ex2celadm03: size: 2.910369873046875T
v1ex2celadm03: status: normal
[root@v1ex2dbadm01 ~]#

Next we validate all the Physical Disks are in the “NORMAL” state before we modify the Flash Cache:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "list physicaldisk attributes name,status"
v1ex2celadm01: 8:0 normal
v1ex2celadm01: 8:1 normal
v1ex2celadm01: 8:2 normal
v1ex2celadm01: 8:3 normal
v1ex2celadm01: 8:4 normal
v1ex2celadm01: 8:5 normal
v1ex2celadm01: 8:6 normal
v1ex2celadm01: 8:7 normal
v1ex2celadm01: 8:8 normal
v1ex2celadm01: 8:9 normal
v1ex2celadm01: 8:10 normal
v1ex2celadm01: 8:11 normal
v1ex2celadm01: FLASH_1_1 normal
v1ex2celadm01: FLASH_2_1 normal
v1ex2celadm01: FLASH_4_1 normal
v1ex2celadm01: FLASH_5_1 normal
v1ex2celadm02: 8:0 normal
v1ex2celadm02: 8:1 normal
v1ex2celadm02: 8:2 normal
v1ex2celadm02: 8:3 normal
v1ex2celadm02: 8:4 normal
v1ex2celadm02: 8:5 normal
v1ex2celadm02: 8:6 normal
v1ex2celadm02: 8:7 normal
v1ex2celadm02: 8:8 normal
v1ex2celadm02: 8:9 normal
v1ex2celadm02: 8:10 normal
v1ex2celadm02: 8:11 normal
v1ex2celadm02: FLASH_1_1 normal
v1ex2celadm02: FLASH_2_1 normal
v1ex2celadm02: FLASH_4_1 normal
v1ex2celadm02: FLASH_5_1 normal
v1ex2celadm03: 8:0 normal
v1ex2celadm03: 8:1 normal
v1ex2celadm03: 8:2 normal
v1ex2celadm03: 8:3 normal
v1ex2celadm03: 8:4 normal
v1ex2celadm03: 8:5 normal
v1ex2celadm03: 8:6 normal
v1ex2celadm03: 8:7 normal
v1ex2celadm03: 8:8 normal
v1ex2celadm03: 8:9 normal
v1ex2celadm03: 8:10 normal
v1ex2celadm03: 8:11 normal
v1ex2celadm03: FLASH_1_1 normal
v1ex2celadm03: FLASH_2_1 normal
v1ex2celadm03: FLASH_4_1 normal
v1ex2celadm03: FLASH_5_1 normal
[root@v1ex2dbadm01 ~]#

You can run the same command with an inverse grep on “normal” to ensure you didn’t miss any disks that are not normal:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "list physicaldisk attributes name,status"|grep -v normal
[root@v1ex2dbadm01 ~]#

Next we drop the Flash Cache to be able to change the attribute:

PLEASE NOTE: Any data that is currently cached in Flash Cache and being served will then need to be served by the hard disks, and a noticeable performance degradation will be observed.  Hence it is recommended to enable Write-Back Flash Cache during a period of reduced workload to reduce the performance impact on the database.

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e drop flashcache 
v1ex2celadm01: Flash cache v1ex2celadm01_FLASHCACHE successfully dropped 
v1ex2celadm02: Flash cache v1ex2celadm02_FLASHCACHE successfully dropped 
v1ex2celadm03: Flash cache v1ex2celadm03_FLASHCACHE successfully dropped 
[root@v1ex2dbadm01 ~]#

Next we set the “flashCacheMode” attribute to “writeback“:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "alter cell flashCacheMode=writeback"
v1ex2celadm01: Cell v1ex2celadm01 successfully altered
v1ex2celadm02: Cell v1ex2celadm02 successfully altered
v1ex2celadm03: Cell v1ex2celadm03 successfully altered
[root@v1ex2dbadm01 ~]#

Next we re-create the Flash Cache, which will now be in WriteBack mode instead of WriteThrough:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e create flashcache all
v1ex2celadm01: Flash cache v1ex2celadm01_FLASHCACHE successfully created
v1ex2celadm02: Flash cache v1ex2celadm02_FLASHCACHE successfully created
v1ex2celadm03: Flash cache v1ex2celadm03_FLASHCACHE successfully created
[root@v1ex2dbadm01 ~]#

Next we check the attribute “flashCacheMode” is actually now “writeback“:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list cell attributes flashcachemode"
v1ex2celadm01: writeback
v1ex2celadm02: writeback
v1ex2celadm03: writeback
[root@v1ex2dbadm01 ~]#

At this point, write I/O will go straight to flash and can then be moved to hard disk if aged out or not required for read caching.  The Flash Cache will be repopulated over time, and performance will return to normal for reads, with additional performance for writes 🙂

You can check the usage increase as Flash Cache repopulates as follows:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e LIST METRICCURRENT FC_BY_USED
 v1oex2celadm01: FC_BY_USED FLASHCACHE 104,838 MB
 v1oex2celadm02: FC_BY_USED FLASHCACHE 104,479 MB
 v1oex2celadm03: FC_BY_USED FLASHCACHE 105,192 MB
[root@v1oex2dbadm01 ~]#

Finally, we validate the grid disk attributes cachingPolicy and cachedby, where we can see only the DATA disk group is being cached by Flash Cache, along with which Flash Disk caches each grid disk:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes name,cachingpolicy,cachedby
v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 none
v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 none
v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 none
[root@v1oex2dbadm01 ~]#

One final note: there is a script provided by Oracle called setWBFC that can do all of this for you; however, version 1.0.0.2.1.20160602 didn’t work for me, as it detected 4 Flash Disks in an eighth rack when it expected 2.  Although there are only 2 in use in an eighth rack, there are 4 physically present, so I believe this is a bug.  I did raise an SR with Oracle Support, which is yet to be concluded.  Below is the output for those who are interested:

[root@v1oex2dbadm01 WBFC]# ./setWBFC.sh
 setWBFC Version: 1.0.0.2.1.20160602
 Usage:
 ./setWBFC.sh -g cell_group_file [-d dbs_group_file ]
 [ -h ] [ -i ] [ -l log_directory ]
 [ -m WriteBack | WriteThrough ] [ -o rolling | non-rolling ]
 [ -p ] [ -s step_number ] [ -t time_out_seconds ]
 [ -x trace_level ] [ -v ]

-g file file that lists cell host names, one per line
 -d file file that lists the database host names, one
 per line. Required for non-rolling.
 -h help, print this information
 -i run in interactive mode
 -l log directory directory path for log files
 -m FC_mode flashcache mode: WriteBack | WriteThrough
 -o exec_mode execution mode: rolling | non-rolling (default)
 -p perform a precheck only
 -s step # (*) specify step number to restart at
 -t timeout sec specify in seconds the amount of time to wait
 for griddisks to come ONLINE - range: [600 - 43200]
 Default: 21600 (6 hours)
 -x trace level # specify trace level for further diagnostics
 -v show version

(*) -- Option not yet implemented.

 [root@v1oex2dbadm01 WBFC]# ./setWBFC.sh -g /opt/oracle.SupportTools/onecommand/cell_group -l /root/v1/WBFC/logs -m WriteBack -o rolling -p
 ./setWBFC.sh: Using log directory '/root/v1/WBFC/logs'
 ./setWBFC.sh: Log File '/root/v1/WBFC/logs/setWBFC_18335_2018-01-17-10:46:26.log' created successfully
 2018-01-17 10:46:26
 Starting ./setWBFC.sh on v1oex2dbadm01
 Version: 1.0.0.2.1.20160602
 Command line options used:
 -g /opt/oracle.SupportTools/onecommand/cell_group
 -o rolling
 -m WriteBack
 -p (Perform pre-req checks only)
 -t 21600
 -x 0

2018-01-17 10:46:26
 Performing pre-req checks.....
 2018-01-17 10:46:26
 Creating baseline inventory for griddisks
 2018-01-17 10:46:27
 Creating baseline inventory for flashdisks
 2018-01-17 10:46:28
 Creating baseline inventory for flashsize
 2018-01-17 10:46:28
 dcli present and in PATH. [PASSED]
 2018-01-17 10:46:28
 Checking cell nodes are valid storage servers...
 2018-01-17 10:46:29
 All cells are valid Exadata storage cells.
 2018-01-17 10:46:29
 Checking Exadata Storage Software Versions...
 2018-01-17 10:46:33
 Software versions of the following cells:
 v1oex2celadm01: 12.1.2.3.5.170418 [PASSED]
 v1oex2celadm02: 12.1.2.3.5.170418 [PASSED]
 v1oex2celadm03: 12.1.2.3.5.170418 [PASSED]

2018-01-17 10:46:33
 Checking Grid Infrastructure Software Version...
 2018-01-17 10:46:38
 Grid Infrastructure version: 12.1.0.2.00 [PASSED]

2018-01-17 10:46:38
 Checking for active ASM operations....
 2018-01-17 10:46:38
 Check for no active ASM operations: [PASSED]
 2018-01-17 10:46:38
 Checking griddisk status across all cells....
 2018-01-17 10:46:39
 All griddisks across all cells have asmdeactivationoutcome = Yes
 All griddisks across all cells are ONLINE
 Griddisk checks: [PASSED]
 2018-01-17 10:46:39
 Checking flash cache status.....
 2018-01-17 10:46:40
 Flashcache status normal: [PASSED]
 2018-01-17 10:46:40
 Checking that all FlashDisks are present...
 2018-01-17 10:46:42
 Cell v1oex2celadm01 has one or more FlashDisk missing. Expecting 2 but found 4

2018-01-17 10:46:42
 FlashDisk validation: [FAILED]
 2018-01-17 10:46:42
 Checking current flash cache mode.....
 2018-01-17 10:46:43
 Flashcache not already in target mode: [PASSED]
 2018-01-17 10:46:43
 Pre-req checks failed with status 7. Exiting....

[root@v1oex2dbadm01 WBFC]#

If this works for you, great, then I would recommend using this method; otherwise, it can be used to at least double-check the pre-requisites, and then you can do it manually as I have shown above 🙂

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to check if Exadata Write-Back Flash Cache is Enabled

What is Exadata Write-Back Flash Cache?

Exadata Write-Back Flash Cache provides the ability to cache not only read I/Os but also write I/Os on the Exadata’s PCI flash on the storage cells.  Exadata storage software 11.2.3.2.1 or higher and Grid Infrastructure and Database software 11.2.0.3.9 or higher are required to use Exadata Write-Back Flash Cache, which is persistent across storage cell restarts.
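
A quick way to verify you meet the minimum versions is sketched below (the cell_group file path is the standard OneCommand location used elsewhere on this blog, and the crsctl path depends on your Grid Infrastructure home):

# Exadata storage software version on all cells
dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root imageinfo -ver

# Grid Infrastructure version (run from the Grid Infrastructure home)
$ORACLE_HOME/bin/crsctl query crs activeversion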

The default since April 2017 for the Oracle Exadata Deployment Assistant (OEDA) is Write-Back Flash Cache when the DATA disk group is HIGH redundancy and the Grid Infrastructure and Database software are:

  • 11.2.0.4.1 or higher
  • 12.1.0.2 or higher
  • 12.2.0.2 or higher

PLEASE NOTE: This option is only applicable to High Capacity as Extreme Flash doesn’t have Hard Disks and therefore Write-Back Flash Cache is explicitly enabled and can’t be disabled.

What are the Performance Benefits of Exadata Write-Back Flash Cache?

Write-Back Flash Cache can significantly improve write-intensive operations, because writing to Flash Cache is significantly faster than writing to hard disks.  Depending on the workload, write performance (IOPS) can be improved by 10x on older generations of Exadata machines (V2 and X2) and 20x on newer generations (X3 onwards, correct at time of writing).

If you are experiencing high write I/O times on the storage cells in AWR reports or storage cell metrics, then you should consider enabling Write-Back Flash Cache to take write operations off the hard disks and move them to Flash Cache.
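
To see the current write latencies on the storage cells, you can query the cell disk metrics; a sketch (the CD_IO_TM_W metric name pattern is from memory, so verify it with LIST METRICDEFINITION on your cells first):

# Average small/large write I/O latency per request on each cell disk
dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list metriccurrent where name like 'CD_IO_TM_W_.*_RQ'"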

See the following My Oracle Support (MOS) Note for more info:
Exadata Write-Back Flash Cache – FAQ (Doc ID 1500257.1)

How to check if Exadata Write-Back Flash Cache is Enabled?

To check if Exadata Write-Back Flash Cache is enabled, run “list cell attributes flashcachemode” on the storage cell using CellCLI as shown below:

[root@v1ex2celadm01 ~]# cellcli
CellCLI: Release 12.1.2.3.5 - Production on Wed Jan 17 10:09:51 GMT 2018

Copyright (c) 2007, 2016, Oracle. All rights reserved.

CellCLI> list cell attributes flashcachemode
 WriteThrough

CellCLI> exit
quitting

[root@v1ex2celadm01 ~]#

If “WriteThrough”, then Write-Back Flash Cache is disabled (writes go straight to hard disk and can then be placed in flash for caching reads if required); if “WriteBack”, then Write-Back Flash Cache is enabled as the name suggests (writes go straight to flash and can then be moved to hard disk if aged out or not required for read caching).

You can also run “list cell detail” using CellCLI as shown below:

[root@v1ex2celadm01 ~]# cellcli
CellCLI: Release 12.1.2.3.5 - Production on Wed Jan 17 10:10:22 GMT 2018

Copyright (c) 2007, 2016, Oracle. All rights reserved.

CellCLI> list cell detail
 name: v1ex2celadm01
 accessLevelPerm: remoteLoginEnabled
 bbuStatus: normal
 cellVersion: OSS_12.1.2.3.5_LINUX.X64_170418
 cpuCount: 16/32
 diagHistoryDays: 7
 eighthRack: TRUE
 fanCount: 8/8
 fanStatus: normal
 flashCacheMode: WriteThrough
 id: xxxxxxxxxx
 interconnectCount: 2
 interconnect1: ib0
 interconnect2: ib1
 iormBoost: 6.4
 ipaddress1: 10.1.11.14/22
 ipaddress2: 10.1.11.15/22
 kernelVersion: 2.6.39-400.294.4.el6uek.x86_64
 locatorLEDStatus: off
 makeModel: Oracle Corporation ORACLE SERVER X5-2L High Capacity
 memoryGB: 95
 metricHistoryDays: 7
 notificationMethod: snmp
 notificationPolicy: critical,warning,clear
 offloadGroupEvents:
 powerCount: 2/2
 powerStatus: normal
 releaseImageStatus: success
 releaseVersion: 12.1.2.3.5.170418
 rpmVersion: cell-12.1.2.3.5_LINUX.X64_170418-1.x86_64
 releaseTrackingBug: 25509078
 rollbackVersion: 12.1.2.3.4.170111
 securityCert: PrivateKey OK
 Certificate: Subject CN=v1ex2celadm01.v1.com,OU=Oracle Exadata,O=Oracle Corporation,L=Redwood City,ST=California,C=US
 Issuer CN=v1ex2celadm01.v1.com,OU=Oracle Exadata,O=Oracle Corporation,L=Redwood City,ST=California,C=US
 snmpSubscriber: host=v1ex2dbadm02.v1.com,port=1830,community=public
 host=v1ex2dbadm01.v1.com,port=1830,community=public
 host=v1ex2dbadm01.v1.com,port=3872,community=public
 host=v1ex2dbadm02.v1.com,port=3872,community=public
 status: online
 temperatureReading: 24.0
 temperatureStatus: normal
 upTime: 105 days, 7:35
 usbStatus: normal
 cellsrvStatus: running
 msStatus: running
 rsStatus: running

CellCLI> exit
quitting

[root@v1ex2celadm01 ~]#

However, the simpler way to check is via dcli, especially when you have lots of storage cells as shown below:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list cell attributes flashcachemode"
v1ex2celadm01: WriteThrough
v1ex2celadm02: WriteThrough
v1ex2celadm03: WriteThrough

Related Posts:
How to Enable Exadata Write-Back Flash Cache

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)