OGB Appreciation Day : Exadata X8M

What is OGB Appreciation Day?

The Oracle Groundbreakers (OGB) Appreciation Day, formerly known as OTN Appreciation Day and ODC Appreciation Day, is a great initiative by Tim Hall aka Oracle-Base.com, where we take the opportunity to say thanks to the Oracle Community, which includes, but is not limited to, ACEs, Java Champions, Ambassadors and all those who have the Groundbreakers spirit #ThanksOGB 🙂

I wonder what the name will be next year 😉

More info on Oracle Groundbreakers Community can be found here:
About Oracle Groundbreakers Community

When is it?

This year, it is on Thursday 10th October 2019, and I have to confess I totally missed it, so my post is a few days late, but I didn’t want to do a disservice to the spirit of the initiative.

You can see my previous post here:
2017 – ODC Appreciation Day : Oracle Exadata Database Machine
2018 – ODC Appreciation Day : Oracle dcli Utility

You can see a summary of previous years’ blog posts here:
2016 – OTN Appreciation Day : Summary
2017 – ODC Appreciation Day 2017 : It’s a Wrap (#ThanksODC)
2018 – ODC Appreciation Day 2018 : It’s a Wrap (#ThanksODC)
2019 – OGB Appreciation Day 2019 : It’s a Wrap (#ThanksOGB) – this year

My Contribution : Exadata X8M

When I was at Oracle Open World 2019 a few weeks ago, Larry Ellison (CTO of Oracle) announced the new Exadata X8M:

[Photo: Larry Ellison announcing the Exadata X8M]

The key point is in-memory performance, utilising persistent memory and RDMA over Converged Ethernet (RoCE), which I will detail later in this blog post.

Larry also boasted that the Exadata X8M storage is 50x faster than AWS and 100x faster than Azure all-flash storage:

[Photo: slide comparing Exadata X8M storage performance with AWS and Azure all-flash storage]

Following the announcement, I attended two more sessions, with Juan Loaiza (Executive Vice President, Mission Critical Database Technologies, Oracle) and with Kothanda Umamageswaran (Senior Vice President, Exadata Development)/Gavin Parish (Senior Principal Product Manager, Exadata Development), who gave more details on the Exadata X8M:

[Photo: Exadata X8M deep-dive session slide]

The key changes are:

  1. 100Gb/sec RoCE internal fabric
  2. 1.5TB Persistent Memory per storage server/cell

[Photo: slide summarising the key Exadata X8M changes]

RoCE Networking

[Photo: RoCE networking slide]

RoCE stands for RDMA (Remote Direct Memory Access) over Converged Ethernet.  From the start, Exadata’s internal fabric had been InfiniBand; however, Oracle stated that Ethernet has caught up with and surpassed InfiniBand, giving 100Gb/sec throughput as opposed to 40Gb/sec with InfiniBand, i.e. 2.5 times faster:

[Photo: slide on RoCE vs InfiniBand throughput]

[Photo: slide on the 100Gb/sec RoCE internal fabric]

RoCE runs the InfiniBand RDMA software on top of Ethernet, so it includes all the existing optimisations and allows for backwards compatibility:

[Photo: slide on the RoCE software stack]

Also mentioned was smart network prioritisation, which uses Class of Service to prioritise critical database messages, such as transaction commits and cache fusion, over backups, etc.:

[Photo: slide on smart network prioritisation]

Another nice addition is instance failure detection through the use of RoCE: if all 4 ports don’t respond, server failure is confirmed and the server is instantly evicted from the cluster:

[Photo: slide on instance failure detection over RoCE]

Persistent Memory

[Photo: persistent memory overview slide]

The Exadata X8M uses Intel Optane DC Persistent Memory, a new silicon technology whose capacity, performance and cost sit between DRAM and flash:

[Photo: slide on Intel Optane DC Persistent Memory]

In the Exadata X8M, the persistent memory is shared, just as disks and flash are, so you get all the benefits of aggregated performance, redundancy, etc.:

[Photo: slide on shared persistent memory]

The benefit of RoCE with persistent memory is the Persistent Memory Data Accelerator, which allows the database to use RDMA instead of I/O, bypassing the network and I/O software, interrupts and context switches:

[Photo: slide on the Persistent Memory Data Accelerator]

Another benefit of persistent memory is the Memory Commit Accelerator, which, like Smart Flash Logging, uses persistent memory as a buffer to further speed up log writes by 8x, with the buffer flushed to flash or disk later on:

[Photo: slide on the Memory Commit Accelerator]

Persistent memory also gets smart capacity management: primary copies are placed on persistent memory and secondary copies on flash, with a secondary automatically moved to persistent memory when the primary is unavailable:

[Photo: slide on smart capacity management of persistent memory]

As if Exadata was not fast enough already, all this innovation has led to the “World’s Fastest Database Machine”, with an astonishing 16 million IOPS at less than 19 microseconds of latency:

[Photo: slide announcing 16 million IOPS at under 19 microseconds]

More information on the Exadata X8M can be found here.

Finally Happy OGB Appreciation Day! #ThanksOGB #ThanksODC #ThanksOTN 🙂

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Oracle Exadata Smart Flash Logging

What is Exadata Smart Flash Logging?

In an OLTP environment, it is crucial to have fast response times to redo log writes, i.e. low latency.  When multiplexing redo logs for high availability, i.e. to protect against hardware failure, redo log writes are only acknowledged when the redo is written to all redo log members, i.e. when the slowest disk completes the write.  By this nature, whenever a disk slows down, even if only for a moment, it can impact redo log performance and throughput.

Flash alone can’t resolve this issue, as flash can also momentarily slow down due to issues with erase cycles or wear leveling, and remember, the acknowledgement is only given once the redo is written to all redo log members.

Exadata Smart Flash Logging is the feature that writes redo to both hard disk and flash, with the acknowledgement given as soon as either completes the write, thus improving response time and throughput.  So if a write to hard disk is slow, the flash will give the quicker acknowledgement, but when flash is experiencing a slowdown due to erase cycles or wear leveling, the hard disk will acknowledge, smoothing out response times.

The flash used by Smart Flash Logging isn’t a permanent store but a temporary one, providing fast response times by holding redo only until it’s safely written to disk.

No changes are required to the redo log configuration, and the feature is transparent to the database and to recovery.

How to enable Smart Flash Logging?

It’s enabled out of the box; for older systems, it’s enabled when applying cell patch version 11.2.2.4, and it also requires Database 11.2.0.2 Bundle Patch 11 or higher.

How to disable Smart Flash Logging?

This shouldn’t be done unless instructed to do so by Oracle Support or Development.

How much flash is used by Smart Flash Logging?

By default just 512MB is used per cell, which should be sufficient for most situations; it’s a small investment for a huge performance benefit.  Statistics record the number of successful writes, as well as the number of unsuccessful writes due to the temporary space being full, in which case the size may need to be increased.  Also, I/O Resource Manager (IORM) can be used to disable Smart Flash Logging for non-critical databases, as sketched below.
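
As a rough sketch of how you might act on this (the metric names and the IORM flashlog attribute are quoted from memory, so cross-check them against the documentation for your storage software version; REPDB is a hypothetical non-critical database), running CellCLI as root on a storage cell:

# Smart Flash Logging write statistics:
#   FL_IO_W            - redo writes serviced by the flash log
#   FL_IO_W_SKIP_BUSY  - writes that skipped the flash log because it was busy
#   FL_IO_W_SKIP_LARGE - large writes that skipped the flash log
cellcli -e "list metriccurrent where name like 'FL_IO_W.*'"

# If writes are regularly skipped due to space, recreate the flash log with a larger size (e.g. 1G):
cellcli -e "drop flashlog"
cellcli -e "create flashlog all size=1g"

# Disable Smart Flash Logging for the hypothetical non-critical database REPDB via IORM:
cellcli -e "alter iormplan dbplan=((name=REPDB, flashlog=off))"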

Do standby redo logs use Smart Flash Logging?

Yes, standby redo logs benefit from Smart Flash Logging just as online redo logs do, as long as cell patch 11.2.2.4 or higher and Database 11.2.0.2 Bundle Patch 11 or higher are applied.

How to check that Smart Flash Logging is configured?

Using CellCLI, run “LIST FLASHLOG DETAIL”; if output is returned with the details as shown below, then Smart Flash Logging is configured:

[root@v1ex1dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list flashlog detail"
 v1ex1celadm01: name: v1ex1celadm01_FLASHLOG
 v1ex1celadm01: cellDisk: FD_00_v1ex1celadm01,FD_01_v1ex1celadm01
 v1ex1celadm01: creationTime: 2015-06-28T17:52:43+01:00
 v1ex1celadm01: degradedCelldisks:
 v1ex1celadm01: effectiveSize: 512M
 v1ex1celadm01: efficiency: 100.0
 v1ex1celadm01: id: 366421ec-bf77-499e-870f-f0cf5390343e
 v1ex1celadm01: size: 512M
 v1ex1celadm01: status: normal
 v1ex1celadm02: name: v1ex1celadm02_FLASHLOG
 v1ex1celadm02: cellDisk: FD_01_v1ex1celadm02,FD_00_v1ex1celadm02
 v1ex1celadm02: creationTime: 2015-06-28T17:52:44+01:00
 v1ex1celadm02: degradedCelldisks:
 v1ex1celadm02: effectiveSize: 512M
 v1ex1celadm02: efficiency: 100.0
 v1ex1celadm02: id: 9f670843-c9cc-4156-a32e-8d23fa79cdb8
 v1ex1celadm02: size: 512M
 v1ex1celadm02: status: normal
 v1ex1celadm03: name: v1ex1celadm03_FLASHLOG
 v1ex1celadm03: cellDisk: FD_01_v1ex1celadm03,FD_00_v1ex1celadm03
 v1ex1celadm03: creationTime: 2015-06-28T17:52:33+01:00
 v1ex1celadm03: degradedCelldisks:
 v1ex1celadm03: effectiveSize: 512M
 v1ex1celadm03: efficiency: 100.0
 v1ex1celadm03: id: 749bada6-8ae2-4c51-8410-97622f9a9532
 v1ex1celadm03: size: 512M
 v1ex1celadm03: status: normal
[root@v1ex1dbadm01 ~]#

For more info:
Exadata Smart Flash Logging FAQ (Doc ID 1372894.1)
Oracle Exadata Whitepaper:  Exadata Smart Flash Cache Features and the Oracle Exadata Database Machine

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to Enable Exadata Write-Back Flash Cache

Please check the following blog post “How to check if Exadata Write-Back Flash Cache is Enabled” for:

  • What is Exadata Write-Back Flash Cache?
  • What are the Performance Benefits of Exadata Write-Back Flash Cache?
  • How to check if Exadata Write-Back Flash Cache is Enabled?
  • Pre-requisites and minimum versions.

You can also get more info from the following My Oracle Support (MOS) note and OTN article:
Exadata Write-Back Flash Cache – FAQ (Doc ID 1500257.1)
OTN Article: Oracle Exadata Database Machine – Write-Back Flash Cache

How to Enable Exadata Write-Back Flash Cache

PLEASE NOTE: Although I have illustrated the steps below, please cross-check with the MOS note to ensure the method matches your setup and that the steps haven’t changed in releases after the time of writing.

With Exadata software 11.2.3.3.1 or higher, it is not required to stop the cellsrv process on the storage cells or to inactivate the grid disks.  If you are on 11.2.3.2.1 to 11.2.3.3.0, then refer to the MOS notes for the additional steps.

It is recommended to enable Write-Back Flash Cache during a period of reduced workload to reduce the performance impact on the database.

Before proceeding with enabling Write-Back Flash Cache, it is recommended to check the caching policy of the grid disks, as we don’t want to enable Write-Back Flash Cache for grid disks that don’t need it, i.e. the RECO and DBFS disk groups:

[root@v1oex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e list griddisk attributes name,cachingpolicy,cachedby
 v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 default
 v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 default
 v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 default
 [root@v1oex2dbadm01 ~]#

As you can see, all the grid disks have default caching policy.  As per the following MOS note, we disable caching for RECO and DBFS disk groups:
Oracle Exadata Database Machine Setup/Configuration Best Practices (Doc ID 1274318.1)

[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm01 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm01,DBFS_DG_CD_03_v1oex2celadm01,DBFS_DG_CD_04_v1oex2celadm01,DBFS_DG_CD_05_v1oex2celadm01 cachingPolicy="none"
 v1oex2celadm01: GridDisk DBFS_DG_CD_02_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_03_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_04_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk DBFS_DG_CD_05_v1oex2celadm01 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm02 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm02,DBFS_DG_CD_03_v1oex2celadm02,DBFS_DG_CD_04_v1oex2celadm02,DBFS_DG_CD_05_v1oex2celadm02 cachingPolicy="none"
 v1oex2celadm02: GridDisk DBFS_DG_CD_02_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_03_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_04_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk DBFS_DG_CD_05_v1oex2celadm02 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm03 -l root cellcli -e alter griddisk DBFS_DG_CD_02_v1oex2celadm03,DBFS_DG_CD_03_v1oex2celadm03,DBFS_DG_CD_04_v1oex2celadm03,DBFS_DG_CD_05_v1oex2celadm03 cachingPolicy="none"
 v1oex2celadm03: GridDisk DBFS_DG_CD_02_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_03_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_04_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk DBFS_DG_CD_05_v1oex2celadm03 successfully altered 
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm01 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm01,RECOC1_CD_01_v1oex2celadm01,RECOC1_CD_02_v1oex2celadm01,RECOC1_CD_03_v1oex2celadm01,RECOC1_CD_04_v1oex2celadm01,RECOC1_CD_05_v1oex2celadm01 cachingPolicy="none"
 v1oex2celadm01: GridDisk RECOC1_CD_00_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_01_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_02_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_03_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_04_v1oex2celadm01 successfully altered
 v1oex2celadm01: GridDisk RECOC1_CD_05_v1oex2celadm01 successfully altered 
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm02 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm02,RECOC1_CD_01_v1oex2celadm02,RECOC1_CD_02_v1oex2celadm02,RECOC1_CD_03_v1oex2celadm02,RECOC1_CD_04_v1oex2celadm02,RECOC1_CD_05_v1oex2celadm02 cachingPolicy="none"
 v1oex2celadm02: GridDisk RECOC1_CD_00_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_01_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_02_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_03_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_04_v1oex2celadm02 successfully altered
 v1oex2celadm02: GridDisk RECOC1_CD_05_v1oex2celadm02 successfully altered
[root@v1oex2dbadm01 ~]# dcli -c v1oex2celadm03 -l root cellcli -e alter griddisk RECOC1_CD_00_v1oex2celadm03,RECOC1_CD_01_v1oex2celadm03,RECOC1_CD_02_v1oex2celadm03,RECOC1_CD_03_v1oex2celadm03,RECOC1_CD_04_v1oex2celadm03,RECOC1_CD_05_v1oex2celadm03 cachingPolicy="none"
 v1oex2celadm03: GridDisk RECOC1_CD_00_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_01_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_02_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_03_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_04_v1oex2celadm03 successfully altered
 v1oex2celadm03: GridDisk RECOC1_CD_05_v1oex2celadm03 successfully altered
[root@v1oex2dbadm01 ~]#

Now when we enable Write-Back Flash Cache, it will not cache the grid disks for the RECO and DBFS disk groups, avoiding the need to flush them to disk and change the policy as a post-step:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes name,cachingpolicy,cachedby
 v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default
 v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default
 v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 none
 v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 none
 v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 none
 v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default
 v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default
 v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 none
 v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 none
 v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 none
 v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default
 v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default
 v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 none
 v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 none
 v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 none
 [root@v1oex2dbadm01 ~]#

Next we check that all the grid disks on all storage cells have asmdeactivationoutcome and asmmodestatus of “Yes” and “ONLINE” respectively:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes asmdeactivationoutcome, asmmodestatus
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm01: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm02: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
v1ex2celadm03: Yes ONLINE
[root@v1ex2dbadm01 ~]#

Next we check that all of the Flash Caches are in the “normal” state and that no flash disks are in a degraded or critical state:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list flashcache detail
v1ex2celadm01: name: v1ex2celadm01_FLASHCACHE
v1ex2celadm01: cellDisk: FD_01_v1ex2celadm01,FD_00_v1ex2celadm01
v1ex2celadm01: creationTime: 2015-07-01T13:39:22+01:00
v1ex2celadm01: degradedCelldisks:
v1ex2celadm01: effectiveCacheSize: 2.910369873046875T
v1ex2celadm01: id: 655bdb7a-8d3b-40e5-88af-cd42843dd3f7
v1ex2celadm01: size: 2.910369873046875T
v1ex2celadm01: status: normal
v1ex2celadm02: name: v1ex2celadm02_FLASHCACHE
v1ex2celadm02: cellDisk: FD_01_v1ex2celadm02,FD_00_v1ex2celadm02
v1ex2celadm02: creationTime: 2015-07-01T06:38:05+01:00
v1ex2celadm02: degradedCelldisks:
v1ex2celadm02: effectiveCacheSize: 2.910369873046875T
v1ex2celadm02: id: 1cc0f7a4-885a-4e23-aec5-b47bc488e8e3
v1ex2celadm02: size: 2.910369873046875T
v1ex2celadm02: status: normal
v1ex2celadm03: name: v1ex2celadm03_FLASHCACHE
v1ex2celadm03: cellDisk: FD_01_v1ex2celadm03,FD_00_v1ex2celadm03
v1ex2celadm03: creationTime: 2015-07-01T20:39:30+01:00
v1ex2celadm03: degradedCelldisks:
v1ex2celadm03: effectiveCacheSize: 2.910369873046875T
v1ex2celadm03: id: b07f6011-1d66-4c3f-a25f-26d1e6b55633
v1ex2celadm03: size: 2.910369873046875T
v1ex2celadm03: status: normal
[root@v1ex2dbadm01 ~]#

Next we validate that all the physical disks are in the “normal” state before we modify the Flash Cache:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "list physicaldisk attributes name,status"
v1ex2celadm01: 8:0 normal
v1ex2celadm01: 8:1 normal
v1ex2celadm01: 8:2 normal
v1ex2celadm01: 8:3 normal
v1ex2celadm01: 8:4 normal
v1ex2celadm01: 8:5 normal
v1ex2celadm01: 8:6 normal
v1ex2celadm01: 8:7 normal
v1ex2celadm01: 8:8 normal
v1ex2celadm01: 8:9 normal
v1ex2celadm01: 8:10 normal
v1ex2celadm01: 8:11 normal
v1ex2celadm01: FLASH_1_1 normal
v1ex2celadm01: FLASH_2_1 normal
v1ex2celadm01: FLASH_4_1 normal
v1ex2celadm01: FLASH_5_1 normal
v1ex2celadm02: 8:0 normal
v1ex2celadm02: 8:1 normal
v1ex2celadm02: 8:2 normal
v1ex2celadm02: 8:3 normal
v1ex2celadm02: 8:4 normal
v1ex2celadm02: 8:5 normal
v1ex2celadm02: 8:6 normal
v1ex2celadm02: 8:7 normal
v1ex2celadm02: 8:8 normal
v1ex2celadm02: 8:9 normal
v1ex2celadm02: 8:10 normal
v1ex2celadm02: 8:11 normal
v1ex2celadm02: FLASH_1_1 normal
v1ex2celadm02: FLASH_2_1 normal
v1ex2celadm02: FLASH_4_1 normal
v1ex2celadm02: FLASH_5_1 normal
v1ex2celadm03: 8:0 normal
v1ex2celadm03: 8:1 normal
v1ex2celadm03: 8:2 normal
v1ex2celadm03: 8:3 normal
v1ex2celadm03: 8:4 normal
v1ex2celadm03: 8:5 normal
v1ex2celadm03: 8:6 normal
v1ex2celadm03: 8:7 normal
v1ex2celadm03: 8:8 normal
v1ex2celadm03: 8:9 normal
v1ex2celadm03: 8:10 normal
v1ex2celadm03: 8:11 normal
v1ex2celadm03: FLASH_1_1 normal
v1ex2celadm03: FLASH_2_1 normal
v1ex2celadm03: FLASH_4_1 normal
v1ex2celadm03: FLASH_5_1 normal
[root@v1ex2dbadm01 ~]#

You can run the same command with an inverse grep on “normal” to ensure you didn’t miss any disks that are not in the normal state:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "list physicaldisk attributes name,status"|grep -v normal
[root@v1ex2dbadm01 ~]#

Next we drop the Flash Cache to be able to change the attribute:

PLEASE NOTE: Any data that is currently cached in Flash Cache and being served will then need to be served by the hard disks, and a noticeable performance degradation will be observed.  Hence it is recommended to enable Write-Back Flash Cache during a period of reduced workload to reduce the performance impact on the database.

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e drop flashcache 
v1ex2celadm01: Flash cache v1ex2celadm01_FLASHCACHE successfully dropped 
v1ex2celadm02: Flash cache v1ex2celadm02_FLASHCACHE successfully dropped 
v1ex2celadm03: Flash cache v1ex2celadm03_FLASHCACHE successfully dropped 
[root@v1ex2dbadm01 ~]#

Next we set the “flashCacheMode” attribute to “writeback“:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e "alter cell flashCacheMode=writeback"
v1ex2celadm01: Cell v1ex2celadm01 successfully altered
v1ex2celadm02: Cell v1ex2celadm02 successfully altered
v1ex2celadm03: Cell v1ex2celadm03 successfully altered
[root@v1ex2dbadm01 ~]#

Next we re-create the Flash Cache, which will now be in WriteBack mode instead of WriteThrough:

[root@v1ex2dbadm01 ~]# dcli -l root -g /opt/oracle.SupportTools/onecommand/cell_group cellcli -e create flashcache all
v1ex2celadm01: Flash cache v1ex2celadm01_FLASHCACHE successfully created
v1ex2celadm02: Flash cache v1ex2celadm02_FLASHCACHE successfully created
v1ex2celadm03: Flash cache v1ex2celadm03_FLASHCACHE successfully created
[root@v1ex2dbadm01 ~]#

Next we check the attribute “flashCacheMode” is actually now “writeback“:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list cell attributes flashcachemode"
v1ex2celadm01: writeback
v1ex2celadm02: writeback
v1ex2celadm03: writeback
[root@v1ex2dbadm01 ~]#

At this point, write I/O will go straight to flash and can then be moved to hard disk if aged out or not required for read caching.  The Flash Cache will be repopulated over time and read performance will return to normal, with additional performance for writes 🙂

You can check the usage increase as Flash Cache repopulates as follows:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e LIST METRICCURRENT FC_BY_USED
 v1oex2celadm01: FC_BY_USED FLASHCACHE 104,838 MB
 v1oex2celadm02: FC_BY_USED FLASHCACHE 104,479 MB
 v1oex2celadm03: FC_BY_USED FLASHCACHE 105,192 MB
[root@v1oex2dbadm01 ~]#
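
While the cache warms up in WriteBack mode, it can also be useful to watch how much of it holds dirty data, i.e. writes not yet de-staged to hard disk; a minimal sketch, assuming the FC_BY_DIRTY metric is available on your storage software version:

# Amount of dirty (not yet de-staged to hard disk) data in Flash Cache, per cell:
dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list metriccurrent FC_BY_DIRTY"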

Finally, we validate the grid disk attributes cachingPolicy and cachedby, where we can see that only the DATA disk group is being cached by Flash Cache, and by which flash disk:

[root@v1oex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e list griddisk attributes name,cachingpolicy,cachedby
v1oex2celadm01: DATAC1_CD_00_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_01_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_02_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_03_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_04_v1oex2celadm01 default FD_01_v1oex2celadm01
v1oex2celadm01: DATAC1_CD_05_v1oex2celadm01 default FD_00_v1oex2celadm01
v1oex2celadm01: DBFS_DG_CD_02_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_03_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_04_v1oex2celadm01 none
v1oex2celadm01: DBFS_DG_CD_05_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_00_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_01_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_02_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_03_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_04_v1oex2celadm01 none
v1oex2celadm01: RECOC1_CD_05_v1oex2celadm01 none
v1oex2celadm02: DATAC1_CD_00_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_01_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_02_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_03_v1oex2celadm02 default FD_01_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_04_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DATAC1_CD_05_v1oex2celadm02 default FD_00_v1oex2celadm02
v1oex2celadm02: DBFS_DG_CD_02_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_03_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_04_v1oex2celadm02 none
v1oex2celadm02: DBFS_DG_CD_05_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_00_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_01_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_02_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_03_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_04_v1oex2celadm02 none
v1oex2celadm02: RECOC1_CD_05_v1oex2celadm02 none
v1oex2celadm03: DATAC1_CD_00_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_01_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_02_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_03_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_04_v1oex2celadm03 default FD_01_v1oex2celadm03
v1oex2celadm03: DATAC1_CD_05_v1oex2celadm03 default FD_00_v1oex2celadm03
v1oex2celadm03: DBFS_DG_CD_02_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_03_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_04_v1oex2celadm03 none
v1oex2celadm03: DBFS_DG_CD_05_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_00_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_01_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_02_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_03_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_04_v1oex2celadm03 none
v1oex2celadm03: RECOC1_CD_05_v1oex2celadm03 none
[root@v1oex2dbadm01 ~]#

As a final note, there is a script provided by Oracle called setWBFC that can do all of this for you; however, version 1.0.0.2.1.20160602 didn’t work for me, as it detected 4 flash disks in our eighth rack when it expected 2.  Although only 2 are in use in an eighth rack, there are 4 physically present, so I believe this is a bug.  I did raise an SR with Oracle Support, which is yet to be concluded.  Below is the output for those who are interested:

[root@v1oex2dbadm01 WBFC]# ./setWBFC.sh
 setWBFC Version: 1.0.0.2.1.20160602
 Usage:
 ./setWBFC.sh -g cell_group_file [-d dbs_group_file ]
 [ -h ] [ -i ] [ -l log_directory ]
 [ -m WriteBack | WriteThrough ] [ -o rolling | non-rolling ]
 [ -p ] [ -s step_number ] [ -t time_out_seconds ]
 [ -x trace_level ] [ -v ]

-g file file that lists cell host names, one per line
 -d file file that lists the database host names, one
 per line. Required for non-rolling.
 -h help, print this information
 -i run in interactive mode
 -l log directory directory path for log files
 -m FC_mode flashcache mode: WriteBack | WriteThrough
 -o exec_mode execution mode: rolling | non-rolling (default)
 -p perform a precheck only
 -s step # (*) specify step number to restart at
 -t timeout sec specify in seconds the amount of time to wait
 for griddisks to come ONLINE - range: [600 - 43200]
 Default: 21600 (6 hours)
 -x trace level # specify trace level for further diagnostics
 -v show version

(*) -- Option not yet implemented.

 [root@v1oex2dbadm01 WBFC]# ./setWBFC.sh -g /opt/oracle.SupportTools/onecommand/cell_group -l /root/v1/WBFC/logs -m WriteBack -o rolling -p
 ./setWBFC.sh: Using log directory '/root/v1/WBFC/logs'
 ./setWBFC.sh: Log File '/root/v1/WBFC/logs/setWBFC_18335_2018-01-17-10:46:26.log' created successfully
 2018-01-17 10:46:26
 Starting ./setWBFC.sh on v1oex2dbadm01
 Version: 1.0.0.2.1.20160602
 Command line options used:
 -g /opt/oracle.SupportTools/onecommand/cell_group
 -o rolling
 -m WriteBack
 -p (Perform pre-req checks only)
 -t 21600
 -x 0

2018-01-17 10:46:26
 Performing pre-req checks.....
 2018-01-17 10:46:26
 Creating baseline inventory for griddisks
 2018-01-17 10:46:27
 Creating baseline inventory for flashdisks
 2018-01-17 10:46:28
 Creating baseline inventory for flashsize
 2018-01-17 10:46:28
 dcli present and in PATH. [PASSED]
 2018-01-17 10:46:28
 Checking cell nodes are valid storage servers...
 2018-01-17 10:46:29
 All cells are valid Exadata storage cells.
 2018-01-17 10:46:29
 Checking Exadata Storage Software Versions...
 2018-01-17 10:46:33
 Software versions of the following cells:
 v1oex2celadm01: 12.1.2.3.5.170418 [PASSED]
 v1oex2celadm02: 12.1.2.3.5.170418 [PASSED]
 v1oex2celadm03: 12.1.2.3.5.170418 [PASSED]

2018-01-17 10:46:33
 Checking Grid Infrastructure Software Version...
 2018-01-17 10:46:38
 Grid Infrastructure version: 12.1.0.2.00 [PASSED]

2018-01-17 10:46:38
 Checking for active ASM operations....
 2018-01-17 10:46:38
 Check for no active ASM operations: [PASSED]
 2018-01-17 10:46:38
 Checking griddisk status across all cells....
 2018-01-17 10:46:39
 All griddisks across all cells have asmdeactivationoutcome = Yes
 All griddisks across all cells are ONLINE
 Griddisk checks: [PASSED]
 2018-01-17 10:46:39
 Checking flash cache status.....
 2018-01-17 10:46:40
 Flashcache status normal: [PASSED]
 2018-01-17 10:46:40
 Checking that all FlashDisks are present...
 2018-01-17 10:46:42
 Cell v1oex2celadm01 has one or more FlashDisk missing. Expecting 2 but found 4

2018-01-17 10:46:42
 FlashDisk validation: [FAILED]
 2018-01-17 10:46:42
 Checking current flash cache mode.....
 2018-01-17 10:46:43
 Flashcache not already in target mode: [PASSED]
 2018-01-17 10:46:43
 Pre-req checks failed with status 7. Exiting....

[root@v1oex2dbadm01 WBFC]#

If this works for you, great, and I would recommend using this method; otherwise it can at least be used to double-check the pre-requisites, after which you can do it manually as I have shown above 🙂

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to check if Exadata Write-Back Flash Cache is Enabled

What is Exadata Write-Back Flash Cache?

Exadata Write-Back Flash Cache provides the ability to cache not only read I/Os but also write I/Os to the Exadata’s PCI flash on the storage cells.  Exadata storage software 11.2.3.2.1 or higher and Grid Infrastructure and Database software 11.2.0.3.9 or higher are required to use Exadata Write-Back Flash Cache, which is persistent across storage cell restarts.
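
As a quick sanity check of the storage software side of that requirement, you can report the active image version on every cell; a minimal sketch using dcli, assuming the usual cell_group file used later in this post:

# Report the active Exadata storage software (image) version on each cell:
dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root imageinfo -ver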

The default since April 2017 for the Oracle Exadata Deployment Assistant (OEDA) is Write-Back Flash Cache when the DATA disk group is HIGH redundancy and the Grid Infrastructure and Database software are:

  • 11.2.0.4.1 or higher
  • 12.1.0.2 or higher
  • 12.2.0.2 or higher

PLEASE NOTE: This option is only applicable to High Capacity, as Extreme Flash doesn’t have hard disks; therefore Write-Back Flash Cache is explicitly enabled and can’t be disabled.

What are the Performance Benefits of Exadata Write-Back Flash Cache?

Write-Back Flash Cache can significantly improve write-intensive operations, because writing to Flash Cache is significantly faster than writing to hard disks.  Depending on the workload, write performance (IOPS) can be improved by 10x on the older generations of Exadata machines (V2 and X2) and by 20x on the newer generations (X3 onwards), correct at the time of writing.

If you are seeing high write I/O times on the storage cells in AWR reports or storage cell metrics, then you should consider enabling Write-Back Flash Cache to take write operations off the hard disks and move them to Flash Cache; a quick way to check the cell-side write latencies is sketched below.
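
For example, the average latency per small write request on the cell disks can be pulled from the cell metrics; a minimal sketch, assuming the CD_IO_TM_W_SM_RQ metric name (reported in microseconds per request) on your storage software version:

# Average latency per small write request (microseconds), per cell disk:
dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list metriccurrent CD_IO_TM_W_SM_RQ"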

See the following My Oracle Support (MOS) Note for more info:
Exadata Write-Back Flash Cache – FAQ (Doc ID 1500257.1)

How to check if Exadata Write-Back Flash Cache is Enabled?

To check if Exadata Write-Back Flash Cache is enabled, run “list cell attributes flashcachemode” on the storage cell using CellCLI as shown below:

[root@v1ex2celadm01 ~]# cellcli
CellCLI: Release 12.1.2.3.5 - Production on Wed Jan 17 10:09:51 GMT 2018

Copyright (c) 2007, 2016, Oracle. All rights reserved.

CellCLI> list cell attributes flashcachemode
 WriteThrough

CellCLI> exit
quitting

[root@v1ex2celadm01 ~]#

If it’s “WriteThrough”, then Write-Back Flash Cache is disabled (writes go straight to hard disk and can then be placed in flash for read caching if required); if it’s “WriteBack”, then Write-Back Flash Cache is enabled, as the name suggests (writes go straight to flash and can later be moved to hard disk if aged out or not required for read caching).

You can also run “list cell detail” using CellCLI as shown below:

[root@v1ex2celadm01 ~]# cellcli
CellCLI: Release 12.1.2.3.5 - Production on Wed Jan 17 10:10:22 GMT 2018

Copyright (c) 2007, 2016, Oracle. All rights reserved.

CellCLI> list cell detail
 name: v1ex2celadm01
 accessLevelPerm: remoteLoginEnabled
 bbuStatus: normal
 cellVersion: OSS_12.1.2.3.5_LINUX.X64_170418
 cpuCount: 16/32
 diagHistoryDays: 7
 eighthRack: TRUE
 fanCount: 8/8
 fanStatus: normal
 flashCacheMode: WriteThrough
 id: xxxxxxxxxx
 interconnectCount: 2
 interconnect1: ib0
 interconnect2: ib1
 iormBoost: 6.4
 ipaddress1: 10.1.11.14/22
 ipaddress2: 10.1.11.15/22
 kernelVersion: 2.6.39-400.294.4.el6uek.x86_64
 locatorLEDStatus: off
 makeModel: Oracle Corporation ORACLE SERVER X5-2L High Capacity
 memoryGB: 95
 metricHistoryDays: 7
 notificationMethod: snmp
 notificationPolicy: critical,warning,clear
 offloadGroupEvents:
 powerCount: 2/2
 powerStatus: normal
 releaseImageStatus: success
 releaseVersion: 12.1.2.3.5.170418
 rpmVersion: cell-12.1.2.3.5_LINUX.X64_170418-1.x86_64
 releaseTrackingBug: 25509078
 rollbackVersion: 12.1.2.3.4.170111
 securityCert: PrivateKey OK
 Certificate: Subject CN=v1ex2celadm01.v1.com,OU=Oracle Exadata,O=Oracle Corporation,L=Redwood City,ST=California,C=US
 Issuer CN=v1ex2celadm01.v1.com,OU=Oracle Exadata,O=Oracle Corporation,L=Redwood City,ST=California,C=US
 snmpSubscriber: host=v1ex2dbadm02.v1.com,port=1830,community=public
 host=v1ex2dbadm01.v1.com,port=1830,community=public
 host=v1ex2dbadm01.v1.com,port=3872,community=public
 host=v1ex2dbadm02.v1.com,port=3872,community=public
 status: online
 temperatureReading: 24.0
 temperatureStatus: normal
 upTime: 105 days, 7:35
 usbStatus: normal
 cellsrvStatus: running
 msStatus: running
 rsStatus: running

CellCLI> exit
quitting

[root@v1ex2celadm01 ~]#

However, the simpler way to check is via dcli, especially when you have many storage cells, as shown below:

[root@v1ex2dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list cell attributes flashcachemode"
v1ex2celadm01: WriteThrough
v1ex2celadm02: WriteThrough
v1ex2celadm03: WriteThrough

Related Posts:
How to Enable Exadata Write-Back Flash Cache

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

ODC Appreciation Day : Oracle Exadata Database Machine

Those that know me well, will know about my “appreciation” of the “Oracle Exadata Database Machine“, more commonly known as “Exadata” 🙂

So this will be my contribution to ODC Appreciation Day, formerly known as OTN Appreciation Day, a great initiative by Tim Hall aka Oracle-Base.com.

You can see a summary of last year’s blog post here:
OTN Appreciation Day : Summary

The very first Exadata was the V1 model, with the hardware by HP and the software by Oracle.  I still remember being very excited by this in my previous employment at Auto Trader and trying very hard to convince them to get one 🙂

I, of course, became an instant fan of the brawny hardware with smart software, which Oracle labelled as “Hardware and Software optimised together“.

Oracle’s partnership with HP only lasted a year, with Oracle switching to Sun for the V2 model; shortly after, Oracle bought Sun in 2010.  This is when Oracle switched from the V models to the X models, with the initial models being the X2-2 (2 sockets) and X2-8 (8 sockets).

I still remember this old video, “Oracle Exadata. Are You Ready?”, that I played at an internal Auto Trader conference which was about sharing knowledge, interesting new things, etc.

Exadata has come a long way since the initial release, which was aimed at Data Warehousing, to a full-on OLTP, Data Warehouse, mixed-workload and consolidation platform, etc. with “record-breaking” IOPS and scan rates!

My favourite feature is Smart Scan: the ability to offload data-intensive SQL operations from the database servers directly onto the storage servers, mitigating the need to pull lots of data from storage to the database server.  Yes, you can have very fast all-flash storage, but the network to ship all this to the database server becomes the bottleneck, as does the compute needed to filter the data on the database server.  Exadata does the filtering at the storage server, meaning only the rows and columns that are directly relevant to a query are sent to the database servers.

Another is storage indexes, where the minimum and maximum values of a column are stored in memory for each 1MB region of storage, allowing unnecessary I/O to be avoided when it’s known that a region of data can’t meet the predicate condition.  The savings from both features show up in the database statistics, as sketched below.
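
To see how much work these two features save, the cumulative figures are exposed as database statistics; a minimal sketch run from a database node (statistic names quoted from memory, so cross-check them against v$statname on your version):

# System-wide Smart Scan and storage index savings:
sqlplus -s / as sysdba <<'EOF'
set linesize 120 pagesize 50
col name format a60
select name, value
from   v$sysstat
where  name in ('cell physical IO bytes eligible for predicate offload',
                'cell physical IO interconnect bytes returned by smart scan',
                'cell physical IO bytes saved by storage index');
EOF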

I didn’t manage to convince Auto Trader, however I have since been very fortunate in my current employment at Version 1 to have worked on Exadata since 2014 from the Exadata X2-2 through to X5-2.  I do really appreciate these “Engineered Systems” for the Extreme Performance, Reliability and Availability.  The whole concept of being “Engineered” and the whole stack optimised, really works and the fact that all Exadatas are the same hardware makes you appreciate their supportability.  Even patching them with patchmgr is pretty much a doddle these days! 🙂

For more info, visit the following sites:
www.Oracle.com/Exadata
https://en.wikipedia.org/wiki/Oracle_Exadata

Tuesday 10th October 2017

Happy ODC Appreciation Day! #ThanksODC #ThanksOTN 🙂

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)