VMware Expert Database Workshop Program Oracle Edition – Day 3

Day 3 kicked off with another early start at 7am, yawn 🙂

Again, it was a very intense day, with lots of presentations, ending with video interviews of each attendee:

  • Dean Bolton from VLSS talked to us about “License Fortress from VLSS”
    • A very interesting product: VLSS advise you on how to run your Oracle on VMware, then protect you from Oracle with the License Fortress guarantee, which has lawyers ready to defend you in the event of a license compliance dispute, backed by an insurance policy if required.  So the customer is never at risk when they take out “License Fortress from VLSS”.
    • A plug for my current employer, who also offer License Audit Consulting.
  • Chris Rohan from VMware talked to us about “VMware vSphere Core & SDDC – Networking – NSX & VCNS”
  • Marcus Thordal from Brocade talked to us about “Brocade and VMware Technology and the VMware Solutions Lab”
    • Interestingly, Brocade have worked out a way to tag network packets, so you can identify which VM guest is generating the network traffic
    • They also had a good example of how NVMe is driving the need for higher network bandwidth
  • Simon Guyennet from VMware talked to us about “Emerging Products” & “VMware Integrated Containers and Oracle”
    • Lots of NDA stuff, so those interested in this area should keep a lookout, some interesting stuff is coming soon 🙂
  • Mike Adams from VMware talked to us about “The CPBU, vSphere and Friends, and the Experts Program”
    • The key takeaway that was not NDA is VMware on AWS, which is currently available to select customers and will be Generally Available soon 🙂
  • Somu Rajarathinam and Ron Ekins from Pure Storage gave a Technical Session
  • Feidhlim O’Leary from VMware talked to us about “High Availability and Disaster Recovery in the SDDC”
  • Alain Geenrits from Blue Medora talked to us about “Management & Monitoring – Blue Medora and Oracle on vSphere”
  • Daniel Hesselink from License Consulting talked to us about “License Audit with License Consulting”
  • The duo Sudhir Balasubramanian and Mohan Potheri talked to us about “vSphere HA or Oracle RAC, SRM or Data Guard, they are all complementary when Oracle is run in the SDDC”
    • I enjoyed the labs from this duo, with their “good cop, bad cop” style 🙂

The workshop ended with a short video interview, where we were each asked to introduce ourselves and answer a few questions about the workshop and Pure Storage.  I’m not the best at this sort of thing, so I don’t think I’ll end up in the marketing video for the event, but time will tell 🙂

My OCM buddy Yvonne Murphy then gave a few of us an extended tour of the Global Support Services (GSS) whilst we waited for the shuttle back to the hotel.

Then it was a quick chauffeured ride to the airport and a short flight back home to Manchester.

Another great day to conclude the workshop; it increased my knowledge of VMware and gave me a great opportunity to network with Oracle Database Experts from around Europe 🙂

Many thanks to VMware and Pure Storage for organising this workshop and allowing me to be a part of it 🙂

My tweets for the day can be seen here.

The VMware Expert Database Workshop Program hashtag is #VMWORA

My related Blog Posts

VMware Expert Database Workshop Program Oracle Edition
VMware Expert Database Workshop Program Oracle Edition – Day 1
VMware Expert Database Workshop Program Oracle Edition – Day 2

Other related Blog Posts

Tim Hall (Oracle Base) – VMware Expert Database Workshop Program Oracle Edition
Tim Hall (Oracle Base) – VMware Workshop – The Journey Begins
Tim Hall (Oracle Base) – VMware Workshop – Day 1
Tim Hall (Oracle Base) – VMware Workshop – Day 2
Tim Hall (Oracle Base) – VMware Workshop – Day 3
Michael Corey (Columnist) – VMware Experts Program Oracle Edition
Michael Corey (Columnist) – Day 1 VMware Experts Program Oracle Edition
Michael Corey (Columnist) – Day 2 VMware Experts Program Oracle Edition

If you found this blog post useful, please like it and follow me through my various Social Media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

VMware Expert Database Workshop Program Oracle Edition – Day 2

Day 2 kicked off with another early start at 7am; coupled with the late night, I was a bit tired to say the least, but it was all worth it 🙂

Again, it was a very intense day, with lots of presentations and technical deep dives, ending with a lab session:

  • My OCM buddy Yvonne Murphy kicked the day off by talking to us about “The Best Oracle Support Team on Earth – Global Support Services Oracle Team”
  • Dave Welch from House of Brick talked to us about “Oracle on vSphere Licensing”
    • Some interesting Oracle license cases were discussed, of which the main one is available here
  • Jad El-Zein talked to us about “vRealize Automation and Oracle”
  • Andreas Scherr talked to us about “vSphere Core Storage Fundamentals” & “Modern Converged Storage & vSAN & vVols”
    • I was impressed with the vSAN concept of using spare drive bays in an ESXi host for hard disks or SSD drives and/or using spare PCIe slots for flash cards (including NVMe 🙂 )
    • These server-attached storage devices are then pooled together to provide a shared Datastore that has resilience built in using software rather than hardware 🙂
    • It can be all-flash or hybrid, but each ESXi host needs a cache device, i.e. an SSD drive or flash card
    • More info can be found here
  • Valentin Bondzio gave us a very enjoyable deep dive on CPU usage in a virtual environment and how to troubleshoot it 🙂
  • We then finished the day with a lab session with Sudhir Balasubramanian and Mohan Potheri, where, for example, we got to:
    • Create a Virtual Hard Drive
    • Attach an Existing Virtual Hard Drive (useful for RAC clusters 🙂 )
    • Then we played a game to stress test a Pure Storage array; however, I could only get 2Gb a second using 10 sessions, each running a parallel query of degree 10 on the largest object in the database (along the lines of the sketch below).  This was not because of the Pure Storage but because of the I/O queue depth in the VMware stack, which we didn’t get time to change, but it highlighted the point 🙂
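
To give an idea, here is a hypothetical sketch of the kind of parallel query used (the table name is assumed, not the actual lab object):

-- Force a parallel full scan of degree 10 on an assumed large table
select /*+ full(t) parallel(t, 10) */ count(*)
from   big_table t;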

Another great day of the 3-day workshop, in which I gained even more knowledge of VMware, in particular CPU usage and vSAN 🙂

The day ended with a meal and drinks at The Oliver Plunkett, more socialising with Johannes Ahrends, Ron Ekins, Frits Hoogland, Tim Hall and Mohan Potheri.  Then a nice walk back to the hotel with Mohan Potheri, Johannes Ahrends and Martin Klier.  It’s a small world, as I found Martin Klier and I had overlapped with some customers and people 🙂  The socialising then continued in the reception lounge with Carl Dudley, Ron Ekins, Frits Hoogland and Tim Hall.

Many thanks to VMware and Pure Storage, I’m looking forward to the rest of the workshop 🙂

My tweets for the day can be seen here.

The VMware Expert Database Workshop Program hashtag is #VMWORA

My related Blog Posts

VMware Expert Database Workshop Program Oracle Edition
VMware Expert Database Workshop Program Oracle Edition – Day 1

Other related Blog Posts

Tim Hall (Oracle Base) – VMware Expert Database Workshop Program Oracle Edition
Tim Hall (Oracle Base) – VMware Workshop – The Journey Begins
Tim Hall (Oracle Base) – VMware Workshop – Day 1
Tim Hall (Oracle Base) – VMware Workshop – Day 2
Michael Corey (Columnist) – VMware Experts Program Oracle Edition
Michael Corey (Columnist) – Day 1 VMware Experts Program Oracle Edition
Michael Corey (Columnist) – Day 2 VMware Experts Program Oracle Edition

If you found this blog post useful, please like it and follow me through my various Social Media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Adding a Datafile to Tablespace

When monitoring Tablespace Usage (see my Tablespace Usage blog post for more info), there comes a point when you need to add a datafile to a tablespace to allow for growth and, more importantly, to get below a monitoring threshold, for example in OEM or Opsview.

Query to see Current Datafiles State

To see the current state of the datafiles:

set pages 999
set lines 400
col FILE_NAME format a75
select d.TABLESPACE_NAME,
       d.FILE_NAME,
       d.BYTES/1024/1024 SIZE_MB,
       d.AUTOEXTENSIBLE,
       d.MAXBYTES/1024/1024 MAXSIZE_MB,
       d.INCREMENT_BY*(v.BLOCK_SIZE/1024)/1024 INCREMENT_BY_MB
from   dba_data_files d,
       v$datafile v
where  d.FILE_ID = v.FILE#
order by d.TABLESPACE_NAME, d.FILE_NAME;

Output from Current Datafiles State

This is the output you will get from the query:

TABLESPACE_NAME FILE_NAME                                      SIZE_MB AUT MAXSIZE_MB INCREMENT_BY_MB
--------------- ---------------------------------------------- ------- --- ---------- ---------------
VERS_DATA       +DATAC1/VERS/DATAFILE/vers_data.395.891367103    32767 YES      32767            1024
...
VERS_DATA       +DATAC1/VERS/DATAFILE/vers_data.772.947087161     4096 YES      32767            1024
VERS_INDEX      +DATAC1/VERS/DATAFILE/vers_index.434.891367657    3072 YES      32767            1024
VERS_INDX       +DATAC1/VERS/DATAFILE/vers_indx.422.891367801    32767 YES      32767            1024
...
VERS_INDX       +DATAC1/VERS/DATAFILE/vers_indx.783.947764359    10240 YES      32767            1024
SNP_DATA        +DATAC1/VERS/DATAFILE/snp_data.421.891367815      1024 YES      32767            1024
SYSAUX          +DATAC1/VERS/DATAFILE/sysaux.488.891349085       31500 YES 32767.9844              10
SYSAUX          +DATAC1/VERS/DATAFILE/sysaux.577.909753421       12288 YES      12288              10
SYSTEM          +DATAC1/VERS/DATAFILE/system.487.891349083         890 YES 32767.9844              10
UNDOTBS1        +DATAC1/VERS/DATAFILE/undotbs1.489.891349085     30565 YES 32767.9844               5
UNDOTBS2        +DATAC1/VERS/DATAFILE/undotbs2.491.891349091     20555 YES 32767.9844               5
USERS           +DATAC1/VERS/DATAFILE/users.492.891349091            5 YES 32767.9844            1.25

273 rows selected.

SQL>

The ‘…’ represents several lines removed to make the output readable 🙂

Add Datafile to Tablespace

To add a datafile to ‘VERS_DATA’ that is initially 10G, auto-extendable by 1G to a max size of 32GB (32767M):

ALTER TABLESPACE VERS_DATA ADD DATAFILE '+DATAC1' SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;
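
For comparison, if the database were on a filesystem rather than ASM, the same statement would take an explicit file path.  A minimal sketch, with a hypothetical path:

-- Hypothetical non-ASM example; adjust the path and sizes to suit
ALTER TABLESPACE VERS_DATA ADD DATAFILE '/u01/oradata/VERS/vers_data05.dbf' SIZE 10G AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;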

See the Oracle Documentation for more info and syntax:

https://docs.oracle.com/database/121/SQLRF/statements_3002.htm

Once added, you’ll see the new datafile in the output of the above query:

TABLESPACE_NAME FILE_NAME                                      SIZE_MB AUT MAXSIZE_MB INCREMENT_BY_MB
--------------- ---------------------------------------------- ------- --- ---------- ---------------
VERS_DATA       +DATAC1/VERS/DATAFILE/vers_data.395.891367103    32767 YES      32767            1024
...
VERS_DATA       +DATAC1/VERS/DATAFILE/vers_data.772.947087161     4096 YES      32767            1024
VERS_DATA       +DATAC1/VERS/DATAFILE/vers_data.774.947426643    10240 YES      32767            1024
VERS_INDEX      +DATAC1/VERS/DATAFILE/vers_index.434.891367657    3072 YES      32767            1024
VERS_INDX       +DATAC1/VERS/DATAFILE/vers_indx.422.891367801    32767 YES      32767            1024
...
VERS_INDX       +DATAC1/VERS/DATAFILE/vers_indx.783.947764359    10240 YES      32767            1024
SNP_DATA        +DATAC1/VERS/DATAFILE/snp_data.421.891367815      1024 YES      32767            1024
SYSAUX          +DATAC1/VERS/DATAFILE/sysaux.488.891349085       31500 YES 32767.9844              10
SYSAUX          +DATAC1/VERS/DATAFILE/sysaux.577.909753421       12288 YES      12288              10
SYSTEM          +DATAC1/VERS/DATAFILE/system.487.891349083         890 YES 32767.9844              10
UNDOTBS1        +DATAC1/VERS/DATAFILE/undotbs1.489.891349085     30565 YES 32767.9844               5
UNDOTBS2        +DATAC1/VERS/DATAFILE/undotbs2.491.891349091     20555 YES 32767.9844               5
USERS           +DATAC1/VERS/DATAFILE/users.492.891349091            5 YES 32767.9844            1.25

274 rows selected.

SQL>

Related Blog Posts:
Querying Tablespace Usage

If you found this blog post useful, please like it and follow me through my various Social Media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Querying Tablespace Usage

Tablespace management in an Oracle database is important and something a DBA will need to do quite frequently.  Therefore I wrote a query, which I believe I originally got from the internet, and it then evolved as I added in temp tablespace support and a few other things 🙂

Query for Tablespace Usage

Query to obtain Tablespace Usage:

set pages 999
set lines 400
SELECT df.tablespace_name tablespace_name,
       max(df.autoextensible) auto_ext,
       round(df.maxbytes / (1024 * 1024), 2) max_ts_size,
       round((df.bytes - sum(fs.bytes)) / (df.maxbytes) * 100, 2) max_ts_pct_used,
       round(df.bytes / (1024 * 1024), 2) curr_ts_size,
       round((df.bytes - sum(fs.bytes)) / (1024 * 1024), 2) used_ts_size,
       round((df.bytes - sum(fs.bytes)) * 100 / df.bytes, 2) ts_pct_used,
       round(sum(fs.bytes) / (1024 * 1024), 2) free_ts_size,
       nvl(round(sum(fs.bytes) * 100 / df.bytes), 2) ts_pct_free
FROM   dba_free_space fs,
       (select tablespace_name,
               sum(bytes) bytes,
               sum(decode(maxbytes, 0, bytes, maxbytes)) maxbytes,
               max(autoextensible) autoextensible
        from   dba_data_files
        group by tablespace_name) df
WHERE  fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name, df.bytes, df.maxbytes
UNION ALL
SELECT df.tablespace_name tablespace_name,
       max(df.autoextensible) auto_ext,
       round(df.maxbytes / (1024 * 1024), 2) max_ts_size,
       round((df.bytes - sum(fs.bytes)) / (df.maxbytes) * 100, 2) max_ts_pct_used,
       round(df.bytes / (1024 * 1024), 2) curr_ts_size,
       round((df.bytes - sum(fs.bytes)) / (1024 * 1024), 2) used_ts_size,
       round((df.bytes - sum(fs.bytes)) * 100 / df.bytes, 2) ts_pct_used,
       round(sum(fs.bytes) / (1024 * 1024), 2) free_ts_size,
       nvl(round(sum(fs.bytes) * 100 / df.bytes), 2) ts_pct_free
FROM   (select tablespace_name, bytes_used bytes
        from   V$temp_space_header
        group by tablespace_name, bytes_free, bytes_used) fs,
       (select tablespace_name,
               sum(bytes) bytes,
               sum(decode(maxbytes, 0, bytes, maxbytes)) maxbytes,
               max(autoextensible) autoextensible
        from   dba_temp_files
        group by tablespace_name) df
WHERE  fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name, df.bytes, df.maxbytes
ORDER BY 4 DESC;

Output from Tablespace Usage Query

This is the output you will get from the query:

TABLESPACE_NAME AUT MAX_TS_SIZE MAX_TS_PCT_USED CURR_TS_SIZE USED_TS_SIZE TS_PCT_USED FREE_TS_SIZE TS_PCT_FREE
--------------- --- ----------- --------------- ------------ ------------ ----------- ------------ -----------
VERS_DATA       YES     6061895           85.81      5270187   5201788.31        98.7     68398.69           1
VERS_INDX       YES  2391991.98           85.55   2101084.98   2046384.98        97.4        54700           3
SYSAUX          YES    45055.98           73.43        43788     33084.44       75.56     10703.56          24
UNDOTBS1        YES    32767.98           53.95        30565     17678.44       57.84     12886.56          42
VERS_INDEX      YES       32767            6.95         3072      2278.88       74.18       793.13          26
TEMP            YES  1245165.69            4.72       134141        58728       43.78        75413          56
SYSTEM          YES    32767.98            2.69          890       882.19       99.12         7.81           1
UNDOTBS2        YES    32767.98              .2        20555        64.38         .31     20490.63         100
SNP_DATA        YES       32767               0         1024            1          .1         1023         100
USERS           YES    32767.98               0            5         1.38        27.5         3.63          73

10 rows selected.

Column definitions

TABLESPACE_NAME: This is the Tablespace Name.

AUTO_EXT: Whether the datafiles are ‘Auto Extendable’ or not.

Please Note: This is using a max function, so if it shows ‘NO’, then ‘NO’ is true for all datafiles; however, if it shows ‘YES’, then anywhere from one to all of the datafiles could be ‘YES’.
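
Since a single ‘YES’ can hide a mix of settings, here is a quick sketch to list tablespaces with mixed autoextend settings:

select tablespace_name,
       min(autoextensible) min_auto_ext,
       max(autoextensible) max_auto_ext
from   dba_data_files
group by tablespace_name
having min(autoextensible) <> max(autoextensible);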

MAX_TS_SIZE: This is the maximum Tablespace Size if all the datafiles reach their max size.

MAX_TS_PCT_USED: This is the percent of MAX_TS_SIZE reached and is the most important value in the query, as this reflects the true usage before DBA intervention is required.

CURR_TS_SIZE: This is the current size of the Tablespace.

USED_TS_SIZE: This is how much of the CURR_TS_SIZE is used.

TS_PCT_USED: This is the percentage of CURR_TS_SIZE used which, if ‘Auto Extendable’ is on, is a little meaningless.  Use MAX_TS_PCT_USED for actual usage.

FREE_TS_SIZE: This is how much is free in CURR_TS_SIZE.

TS_PCT_FREE: This is how much is free in CURR_TS_SIZE as a percent.

Please Note: All sizes are in Megabytes; this can be changed to Gigabytes by adding a ‘/1024’ to the columns.
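
For example, a minimal sketch of the same idea against dba_data_files alone, reporting current and maximum sizes in Gigabytes:

select tablespace_name,
       round(sum(bytes) / (1024 * 1024 * 1024), 2) curr_ts_size_gb,
       round(sum(decode(maxbytes, 0, bytes, maxbytes)) / (1024 * 1024 * 1024), 2) max_ts_size_gb
from   dba_data_files
group by tablespace_name
order by tablespace_name;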

Related Blog Posts
Adding a datafile to tablespace

If you found this blog post useful, please like it and follow me through my various Social Media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Assess Performance using Calibrate on Exadata

Those of you who are fortunate enough to have an Oracle Exadata Database Machine may wonder whether it meets the IOPS/MBPS figures in the technical specifications.  Well, with the CALIBRATE command in CellCLI, you can run raw performance tests on a cell’s hard disks and flash drives, enabling you to verify the disk/drive performance:

[root@v1ex1celadm01 ~]# cellcli
CellCLI: Release 12.1.2.3.4 - Production on Tue Jun 13 19:02:05 IST 2017

Copyright (c) 2007, 2016, Oracle. All rights reserved.

CellCLI> calibrate force;
Calibration will take a few minutes...
Aggregate random read throughput across all hard disk LUNs: 1823 MBPS
Aggregate random read throughput across all flash disk LUNs: 9973 MBPS
Aggregate random read IOs per second (IOPS) across all hard disk LUNs: 3002
Calibrating hard disks (read only) ...
LUN 0_0 on drive [8:0 ] random read throughput: 152.00 MBPS, and 243 IOPS
LUN 0_1 on drive [8:1 ] random read throughput: 157.00 MBPS, and 246 IOPS
LUN 0_10 on drive [8:10 ] random read throughput: 161.00 MBPS, and 253 IOPS
LUN 0_11 on drive [8:11 ] random read throughput: 157.00 MBPS, and 251 IOPS
LUN 0_2 on drive [8:2 ] random read throughput: 157.00 MBPS, and 244 IOPS
LUN 0_3 on drive [8:3 ] random read throughput: 158.00 MBPS, and 245 IOPS
LUN 0_4 on drive [8:4 ] random read throughput: 156.00 MBPS, and 248 IOPS
LUN 0_5 on drive [8:5 ] random read throughput: 161.00 MBPS, and 250 IOPS
LUN 0_6 on drive [8:6 ] random read throughput: 159.00 MBPS, and 252 IOPS
LUN 0_7 on drive [8:7 ] random read throughput: 158.00 MBPS, and 251 IOPS
LUN 0_8 on drive [8:8 ] random read throughput: 157.00 MBPS, and 251 IOPS
LUN 0_9 on drive [8:9 ] random read throughput: 159.00 MBPS, and 254 IOPS
Calibrating flash disks (read only, note that writes will be significantly slower) ...
LUN 1_1 on drive [FLASH_1_1] random read throughput: 2,157.00 MBPS, and 280525 IOPS
LUN 2_1 on drive [FLASH_2_1] random read throughput: 2,156.00 MBPS, and 274304 IOPS
LUN 4_1 on drive [FLASH_4_1] random read throughput: 2,158.00 MBPS, and 282083 IOPS
LUN 5_1 on drive [FLASH_5_1] random read throughput: 2,160.00 MBPS, and 287786 IOPS
CALIBRATE results are within an acceptable range.
Calibration has finished.

CellCLI>

The FORCE option allows the test to run while CELLSRV is still up, which is acceptable if there is no user workload; it is therefore recommended not to run it during normal operations.  Without FORCE, CELLSRV must be shut down first.
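
If you do want the unforced version, here is a sketch of that sequence on a quiesced cell:

CellCLI> alter cell shutdown services cellsrv
CellCLI> calibrate
CellCLI> alter cell startup services cellsrv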

PLEASE NOTE: This is a single run on a single storage cell; you will need to run it on all storage cells in the Exadata Machine to get the total IOPS/MBPS.  You can use dcli to run this across all the cells 🙂
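
For example, a minimal sketch, assuming a cell_group file listing all your storage cells and SSH equivalence already set up for root:

# cell_group is an assumed file with one storage cell hostname per line
dcli -g cell_group -l root "cellcli -e calibrate force"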

If you found this blog post useful, please like it and follow me through my various Social Media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to easily delete files in the Oracle Cloud using CloudBerry Explorer

So you have some Oracle Cloud storage, which was probably thrown in as a freebie initially by Oracle 🙂  Now your freebie is expiring and you decide you want to retain the service, but now as metered, so you want to delete all the unnecessary files, which in my case are old Oracle database backups.

As you can see here, I have 50k+ files using 897GB:

Oracle Cloud Web Console Details

Here are the files listed in the Web Console:

Oracle Cloud Web Console List Objects

Deleting them one by one isn’t feasible.

So the solution is to use a third-party file explorer, in my case CloudBerry Explorer from CloudBerry Labs:

https://www.cloudberrylab.com/explorer/openstack.aspx

The freeware version is perfectly fine; there’s no need to purchase Pro or use the trial.  Just click ‘Cancel’ on the Register Product dialogue and the application will load.

Once installed, to connect to your Oracle Cloud storage, you can follow this link:

https://www.cloudberrylab.com/blog/how-to-use-cloudberry-explorer-with-oracle-cloud-storage/

However, the key to connecting is to have the username in the format of:

<Your Oracle Cloud Service Instance Name>-<Your Oracle Cloud Identity Domain>:<Your Oracle Cloud User Name>

e.g. zeddbacloud-zeddba:oraclecloudbackup@zeddba.com

Also select the correct ‘Account location’, which will fill in the ‘Authentication Service’:

CloudBerry Login

Leave ‘Keystone’ set to ‘Do not use’.

When you finally manage to get connected, you’ll see something like this:

CloudBerry Explorer

Now you have the freedom to browse your files and delete at your leisure 🙂

If you found this blog post useful, please like it and follow me through my various Social Media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)