Session using a database link hangs on “SQL*Net more data from dblink”

I have a client who recently moved a database server from on-premises to a cloud provider.  A database on this server had a database link to their E-Business Suite database in the Oracle Cloud.  Since the move, any session in the database that used the database link to the E-Business database would hang if the query returned a large dataset.

Below is a select from dual over the database link, which worked:

SQL> set timing on
SQL> set autotrace on
SQL> select * from dual@ebs;

D
-
X

Elapsed: 00:00:00.13

Execution Plan
----------------------------------------------------------
Plan hash value: 272002086

----------------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE|      |     1 |     2 |     2   (0)| 00:00:01 |        |
|   1 |  TABLE ACCESS FULL     | DUAL |     1 |     2 |     2   (0)| 00:00:01 | PWGPSI |
----------------------------------------------------------------------------------------

Note
-----
   - fully remote statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        511  bytes sent via SQL*Net to client
        492  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

But selecting from a table hangs:

SQL> select * from XXX_PER_PEOPLE_F@ebs;

Hangs!......
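On the source side, the waiting session can be found with a query against V$SESSION along the lines of the sketch below (the column list is illustrative rather than the exact query I used):

SELECT sid, username, status, machine, terminal, program,
       event, wait_class, wait_time, seconds_in_wait, state
FROM   v$session
WHERE  username IS NOT NULL
AND    event LIKE 'SQL*Net%dblink%';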

The session in the source database shows the wait “SQL*Net more data from dblink”:

       SID USERNAME       STATUS   MACHINE                         TERMINAL                       PROGRAM                                          EVENT                                        WAIT_CLASS                   WAIT_TIME SECONDS_IN_WAIT STATE
---------- -------------- -------- ------------------------------- ------------------------------ ------------------------------------------------ -------------------------------------------- ---------------------------- --------- --------------- -------------------
       144 XXXXXXXXX      ACTIVE   XXX                             unknown                        sqlplus@xxxxx.xxxxx.xxx (TNS V1-V3)              SQL*Net more data from dblink                Network                      0         341             WAITING

The session in the target database shows the wait “SQL*Net message from client”:

SID USERNAME EVENT                       SQL_ID   MACHINE       PROGRAM                        PROCESS 
--- -------- --------------------------- -------- ------------- ------------------------------ ---------- 
35  XXX      SQL*Net message from client XXX      XXX.XXX.XXX   oracle@XXX.XXX.XXX (TNS V1-V3) 15340

I traced the source session and could see it hanging, waiting for more data from the target (EBS) database:

WAIT #2: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1495920328443464

*** 2018-07-17 11:13:47.934

WAIT #2: nam='SQL*Net message from client' ela= 11336000 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1495920339779586
WAIT #0: nam='single-task message' ela= 179653 p1=0 p2=0 p3=0 obj#=-1 tim=1495920339960679
WAIT #0: nam='SQL*Net message to dblink' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920339989423
WAIT #0: nam='SQL*Net message from dblink' ela= 29064 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340018526
WAIT #0: nam='SQL*Net message to dblink' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340018982
WAIT #0: nam='SQL*Net message from dblink' ela= 30534 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340049542
WAIT #0: nam='SQL*Net message to dblink' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340049793
WAIT #0: nam='SQL*Net message from dblink' ela= 42171 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340091996
WAIT #0: nam='SQL*Net more data from dblink' ela= 4 driver id=1413697536 #bytes=17 p3=0 obj#=-1 tim=1495920340092053
WAIT #0: nam='SQL*Net message to dblink' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340092253
WAIT #0: nam='SQL*Net message from dblink' ela= 28448 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340120720
WAIT #0: nam='SQL*Net message to dblink' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340120854
WAIT #0: nam='SQL*Net message from dblink' ela= 30884 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340151761
WAIT #0: nam='SQL*Net message to dblink' ela= 1 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340151816
WAIT #0: nam='SQL*Net message from dblink' ela= 28590 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340180420
WAIT #0: nam='SQL*Net message to client' ela= 2 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1495920340180491
WAIT #0: nam='SQL*Net message from client' ela= 207 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1495920340180713
WAIT #0: nam='SQL*Net message to dblink' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340180787
WAIT #0: nam='SQL*Net message from dblink' ela= 28635 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340209434
WAIT #0: nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1495920340209472
WAIT #0: nam='SQL*Net message from client' ela= 70 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1495920340209554
WAIT #0: nam='SQL*Net message to dblink' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340209601
WAIT #0: nam='SQL*Net message from dblink' ela= 29357 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340238969
WAIT #0: nam='SQL*Net more data from dblink' ela= 6 driver id=1413697536 #bytes=4 p3=0 obj#=-1 tim=1495920340239133
WAIT #0: nam='SQL*Net more data from dblink' ela= 37 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340239196
WAIT #0: nam='SQL*Net more data from dblink' ela= 124 driver id=1413697536 #bytes=2 p3=0 obj#=-1 tim=1495920340239340
WAIT #0: nam='SQL*Net more data from dblink' ela= 37 driver id=1413697536 #bytes=4 p3=0 obj#=-1 tim=1495920340239409
WAIT #0: nam='SQL*Net more data from dblink' ela= 93 driver id=1413697536 #bytes=3 p3=0 obj#=-1 tim=1495920340239528
WAIT #0: nam='SQL*Net more data from dblink' ela= 79 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340239632
WAIT #0: nam='SQL*Net more data from dblink' ela= 28330 driver id=1413697536 #bytes=2 p3=0 obj#=-1 tim=1495920340268062
WAIT #0: nam='SQL*Net more data from dblink' ela= 4 driver id=1413697536 #bytes=2 p3=0 obj#=-1 tim=1495920340268111
WAIT #0: nam='SQL*Net more data from dblink' ela= 168 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340268297
WAIT #0: nam='SQL*Net more data from dblink' ela= 55 driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=1495920340268378
WAIT #0: nam='SQL*Net more data from dblink' ela= 69 driver id=1413697536 #bytes=4 p3=0 obj#=-1 tim=1495920340268483
WAIT #0: nam='SQL*Net more data from dblink' ela= 47 driver id=1413697536 #bytes=4 p3=0 obj#=-1 tim=1495920340268553
Hangs!...
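For reference, a 10046-style trace like the one above can be captured for another session using DBMS_MONITOR; a minimal sketch, where the SID and SERIAL# values are placeholders you would take from V$SESSION:

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 144, serial_num => 12345, waits => TRUE, binds => FALSE);
-- reproduce the hanging query, then switch tracing off:
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 144, serial_num => 12345);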

After raising an SR with My Oracle Support (MOS), both for the database we manage and for the one Oracle manages in the Oracle Cloud, it was concluded that the packet size (Session Data Unit) was larger than that allowed by a network component between the two databases.  They referred to the MOS note:

Query on “bigtable” from remote Client hangs (certain queries or fields) (Doc ID 2104257.1)

“SYMPTOMS

Certain queries are hanging when run from some remote Clients.  However, other (smaller) queries are successful.

This is especially evident on queries that require more than 2kb of data to be returned.
Some examples:

select * from v$database; --> hangs
select count(*) from v$database; --> works

DESCRIBE with large data results --> hangs
DESCRIBE with small data results --> works

This might also show up with Database Links (DBLINK) as well.

CAUSE

A Network “security” device or setting (possibly local especially on a Microsoft Windows machine) is preventing or “altering” larger TCP packets from being transported across the network.
This in turn is causing the Client to wait on the Server for the data from the query, and the Server to wait on the Client (which thinks part of the packet is still on the way).

1. Check for settings like the DF (“Don’t Fragment”) bit being set.
2. Check for ALG SQL settings being enabled.

*Note: these causes are all external to Oracle so provided only as potential causes.

SOLUTION

Workaround
~~~~~~~~~
As a workaround (or test to prove this is or is not the issue) lower the SQL*Net SDU from the default size of 8192 to 1400 (see reference below for more details on this setting):

1. Add the following single line to the sqlnet.ora file on BOTH ends of the communication:
DEFAULT_SDU_SIZE = 1400

2. Restart the Listener(s) servicing the Database in question, make a new connection from the Client, and test the query that was hanging.

3. If this corrects the issue and allows queries to complete, then there is a network / system device or setting causing fragmentation, detention, or alteration of SQL packets mid-stream.”

Oracle Support set the SDU in the sqlnet.ora file on the target database server:
DEFAULT_SDU_SIZE = 1400

More info on DEFAULT_SDU_SIZE can be found here:

Database Net Services Reference -> 5 Parameters for the sqlnet.ora File -> DEFAULT_SDU_SIZE

And then restarted the listener.  I did the same on the source database server, and then the session no longer hung 🙂 :

SQL> select * from XXX_PER_PEOPLE_F@ebs;
...

1069 rows selected.

Elapsed: 00:00:02.57
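As an alternative to lowering DEFAULT_SDU_SIZE globally in sqlnet.ora, the SDU can also be set in the connect descriptor used by the database link, since the negotiated SDU is the minimum of the two sides.  A minimal sketch of a tnsnames.ora entry; the host, port and service name are placeholders:

EBS =
  (DESCRIPTION =
    (SDU = 1400)
    (ADDRESS = (PROTOCOL = TCP)(HOST = ebs-db.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = EBSPROD))
  )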

 

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)


Index Monitoring in Oracle Database

Applies ONLY to Oracle Database 10gR1 through to 12cR1.
In Oracle Database 12cR2, this feature is replaced.  A new blog post on that feature will be posted in due course.

Since Oracle 10g, you can monitor indexes to see whether or not they are being used.  This is very useful because unused indexes consume unnecessary CPU and I/O on DML activity.  It’s therefore recommended to monitor indexes; any unused indexes can then be dropped, freeing not only vital space but also CPU and I/O, resulting in better throughput.

It’s recommended to enable index monitoring for a period that is representative of your full workload cycle.  For example, if you run reports weekly, monthly or even yearly, monitor for long enough to catch all of those activities.

Please Note: It’s important that foreign key constraints have associated indexes to avoid table-level locks on DML.  These indirect uses of indexes will not be detected by index monitoring, so be mindful when dropping indexes that are associated with foreign key constraints.
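To help with that, foreign key constraints whose leading column has no supporting index can be listed with a query along these lines (a minimal sketch that assumes single-column foreign keys):

select c.table_name, c.constraint_name, cc.column_name
from   user_constraints  c
join   user_cons_columns cc on cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'
and    not exists (select 1
                   from   user_ind_columns ic
                   where  ic.table_name      = c.table_name
                   and    ic.column_name     = cc.column_name
                   and    ic.column_position = 1)
order  by 1, 2;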

Enable Index Monitoring

To Enable Index Monitoring, you alter the index as shown below:

ALTER INDEX <index_owner>.<index_name> MONITORING USAGE;
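To enable monitoring for all indexes in a schema in one go, the statements can be generated from the data dictionary; a minimal sketch, where the schema name ZEDDBA is just an example and LOB indexes are excluded:

set heading off pages 0 feedback off
spool enable_monitoring.sql
select 'ALTER INDEX "' || owner || '"."' || index_name || '" MONITORING USAGE;'
from   dba_indexes
where  owner = 'ZEDDBA'
and    index_type <> 'LOB';
spool off
@enable_monitoring.sql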

Checking Index Usage

How you check whether the index has been used depends on the version of Oracle:

Oracle 10g/11g

SELECT * FROM V$OBJECT_USAGE;

Database Reference 10g – V$OBJECT_USAGE
Database Reference 11g – V$OBJECT_USAGE

Please Note: This only shows the indexes belonging to the logged-in user.

To see all indexes in the database regardless of the logged on user:

select do.owner,
       t.name  table_name,
       io.name index_name,
       decode(bitand(i.flags, 65536), 0, 'NO', 'YES') monitoring,
       decode(bitand(ou.flags, 1), 0, 'NO', 'YES') used,
       ou.start_monitoring,
       ou.end_monitoring
from   sys.obj$ io,
       sys.obj$ t,
       sys.ind$ i,
       sys.object_usage ou,
       dba_objects do
where  i.obj#  = ou.obj#
and    io.obj# = ou.obj#
and    t.obj#  = i.bo#
and    i.obj#  = do.object_id
order  by 1, 2, 3;

DISCLAIMER: As this query accesses data dictionary base objects directly, it is not guaranteed to work across versions.

Oracle 12cR1

“The V$OBJECT_USAGE view is deprecated in Oracle Database 12c Release 1 (12.1) and maintained for backward compatibility. Support for this view may be removed in a future release. Oracle recommends that you use the USER_OBJECT_USAGE view instead of the V$OBJECT_USAGE view.”

Database Reference 12cR1 – V$OBJECT_USAGE

For the current logon user:

SELECT * FROM USER_OBJECT_USAGE;

Database Reference 12cR1 – USER_OBJECT_USAGE

For all users:

SELECT * FROM DBA_OBJECT_USAGE;

Database Reference 12cR1 – DBA_OBJECT_USAGE

Disable Index Monitoring

To disable Index Monitoring, you alter the index as shown below:

ALTER INDEX <index_owner>.<index_name> NOMONITORING USAGE;

Demo of Index Monitoring

The content of index_monitoring_demo.sql is below:

-------------------------------------------------------------
--
-- Index Monitoring Demo
--
-- File Name: index_monitoring_demo.sql (blog.zeddba.com)
-- Created: Zahid Anwar (ZedDBA)
-- 
-- Date: 16/07/2018
-- Version: 1.0
--
-- History
--
--1.0 16/07/2018 ZA Initial Script
--
-------------------------------------------------------------
set echo on
set pages 999 lines 400

CREATE TABLE "EMP"
( "ID" NUMBER,
"FIRST_NAME" VARCHAR2(50),
"LAST_NAME" VARCHAR2(50)
);

pause Press Enter

INSERT INTO "EMP" VALUES (1, 'Zahid', 'Anwar');
INSERT INTO "EMP" VALUES (2, 'Scott', 'Tiger');
INSERT INTO "EMP" VALUES (3, 'Joe', 'Bloggs');

commit;

pause Press Enter

set autotrace on
col id format 99
col first_name format a50
col last_Name format a50

select * from emp;

pause Press Enter

create index emp_id on emp (id);

pause Press Enter

select * from emp where id = 1;

pause Press Enter

set autotrace off

col INDEX_NAME format a20
col TABLE_NAME format a20
col MONITORING format a15
col USED format a10
select * from USER_OBJECT_USAGE;

pause Press Enter

alter index emp_id monitoring usage;

pause Press Enter

select * from USER_OBJECT_USAGE;

pause Press Enter

set autotrace on

select * from emp;

pause Press Enter

set autotrace off

select * from USER_OBJECT_USAGE;

pause Press Enter

set autotrace on

select * from emp where id = 1;

pause Press Enter

set autotrace off

select * from USER_OBJECT_USAGE;

pause Press Enter

alter index emp_id nomonitoring usage;

pause Press Enter

select * from USER_OBJECT_USAGE;

pause Press Enter

DROP TABLE "EMP" PURGE;

Below is the output of the script index_monitoring_demo.sql:

SQL> @index_monitoring_demo.sql
SQL> set pages 999 lines 400
SQL>
SQL> CREATE TABLE "EMP"
2 ( "ID" NUMBER,
3 "FIRST_NAME" VARCHAR2(50),
4 "LAST_NAME" VARCHAR2(50)
5 );

Table created.

SQL>
SQL> pause Press Enter
Press Enter

The table EMP is created in my schema ZEDDBA; please ensure you use an appropriate schema with the correct privileges.  Next, we insert 3 rows into the EMP table:

SQL> INSERT INTO "EMP" VALUES (1, 'Zahid', 'Anwar');

1 row created.

SQL> INSERT INTO "EMP" VALUES (2, 'Scott', 'Tiger');

1 row created.

SQL> INSERT INTO "EMP" VALUES (3, 'Joe', 'Bloggs');

1 row created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> pause Press Enter
Press Enter

With the 3 rows inserted and committed, we select from the EMP table with autotrace on, so we can see the execution plan:

SQL> set autotrace on
SQL> col id format 99
SQL> col first_name format a50
SQL> col last_Name format a50
SQL>
SQL> select * from emp;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar
2 Scott Tiger
3 Joe Bloggs


Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 201 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS STORAGE FULL| EMP | 3 | 201 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
12 recursive calls
0 db block gets
28 consistent gets
0 physical reads
0 redo size
784 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Please Note: This database is an Exadata machine, hence the “TABLE ACCESS STORAGE FULL”; otherwise it would be “TABLE ACCESS FULL”.

As we can see, the execution plan was a Full Table Scan (FTS), as this was the only access method available.  Next, we create an index called EMP_ID on the ID column:

SQL> create index emp_id on emp (id);

Index created.

SQL>
SQL> pause Press Enter
Press Enter

Next, we query the table with a predicate on the ID column:

SQL> select * from emp where id = 1;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar


Execution Plan
----------------------------------------------------------
Plan hash value: 458854468

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 67 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| EMP | 1 | 67 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | EMP_ID | 1 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=1)

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
11 recursive calls
0 db block gets
19 consistent gets
0 physical reads
0 redo size
695 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Now, we can see the Execution Plan used the newly created index.  Next, we check if the index was used:

SQL> set autotrace off
SQL>
SQL> col INDEX_NAME format a20
SQL> col TABLE_NAME format a20
SQL> col MONITORING format a15
SQL> col USED format a10
SQL> select * from USER_OBJECT_USAGE;

no rows selected

SQL>
SQL> pause Press Enter
Press Enter

We get no rows back as we didn’t turn on index monitoring, which isn’t on by default.  So next, we enable index monitoring:

SQL> alter index emp_id monitoring usage;

Index altered.

SQL>
SQL> pause Press Enter
Press Enter

Next, we check the index usage, and we can see the index is being monitored but has not yet been used:

SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        YES        NO   07/16/2018 17:53:59

SQL>
SQL> pause Press Enter
Press Enter

Next, we query the table with no predicate to do a FTS:

SQL> set autotrace on
SQL>
SQL> select * from emp;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar
2 Scott Tiger
3 Joe Bloggs


Execution Plan
----------------------------------------------------------
Plan hash value: 3956160932

----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 201 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS STORAGE FULL| EMP | 3 | 201 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
51 recursive calls
0 db block gets
54 consistent gets
0 physical reads
132 redo size
784 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
3 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Next, we check if the index monitoring reflects the usage:

SQL> set autotrace off
SQL>
SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        YES        NO   07/16/2018 17:53:59

SQL>
SQL> pause Press Enter
Press Enter

Next, we query the table with a predicate to do an Index Range Scan:

SQL> set autotrace on
SQL>
SQL> select * from emp where id = 1;

ID FIRST_NAME LAST_NAME
--- -------------------------------------------------- --------------------------------------------------
1 Zahid Anwar


Execution Plan
----------------------------------------------------------
Plan hash value: 458854468

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 67 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| EMP | 1 | 67 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | EMP_ID | 1 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("ID"=1)

Note
-----
- dynamic statistics used: dynamic sampling (level=2)


Statistics
----------------------------------------------------------
13 recursive calls
3 db block gets
24 consistent gets
0 physical reads
876 redo size
695 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL>
SQL> pause Press Enter
Press Enter

Next, we check if the index monitoring reflects the usage:

SQL> set autotrace off
SQL>
SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        YES        YES  07/16/2018 17:53:59

SQL>
SQL> pause Press Enter
Press Enter

Next, we turn off index monitoring, as we know the index has been used:

SQL> alter index emp_id nomonitoring usage;

Index altered.

SQL>
SQL> pause Press Enter
Press Enter

Next, we check to see if monitoring is now disabled:

SQL> select * from USER_OBJECT_USAGE;

INDEX_NAME TABLE_NAME MONITORING USED START_MONITORING END_MONITORING
-------------------- -------------------- --------------- ---------- ------------------- -------------------
EMP_ID     EMP        NO         YES  07/16/2018 17:53:59 07/16/2018 17:54:09

SQL>
SQL> pause Press Enter
Press Enter

Finally, we drop the demo EMP table:

SQL> DROP TABLE "EMP" PURGE;

Table dropped.

SQL>

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Oracle Exadata Smart Flash Logging

What is Exadata Smart Flash Logging?

In an OLTP environment, it is crucial to have fast response times to redo log writes, i.e. low latency.  When multiplexing redo logs for high availability, i.e. to protect against hardware failure, redo log writes are only acknowledged when the redo is written to all redo log members, i.e. when the slowest disk completes the write.  By this nature, whenever a disk slows down, even momentarily, it can impact redo log performance and throughput.

Flash alone can’t resolve this issue, as flash can also momentarily slow down due to erase cycles or wear leveling; remember, the acknowledgement is only given when the redo is written to all redo log members.

Exadata Smart Flash Logging is the feature that writes redo to both hard disk and flash, with the acknowledgement given as soon as either completes the write, thus improving response time and throughput.  So if a write to hard disk is slow, the flash will give a quicker acknowledgement, but when flash is experiencing a slowdown due to erase cycles or wear leveling, the hard disk will acknowledge instead, smoothing out response times.

The flash used by Smart Flash Logging isn’t a permanent store, but a temporary one that provides fast response times by holding redo until it’s safely written to disk.

No changes are required to the redo log configuration, and the feature is transparent to the database and to recovery.

How to enable Smart Flash Logging?

It’s enabled out of the box; for older systems, it’s enabled when applying cell patch version 11.2.2.4, and it also requires Database 11.2.0.2 Bundle Patch 11 or higher.
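For reference, if the flash log is ever missing on a cell (for example after the flash has been reconfigured), it can be recreated with CellCLI; a hedged sketch, using the same cell_group file as in the example further below and the default 512MB size:

dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "create flashlog all"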

How to disable Smart Flash Logging?

This shouldn’t be done unless instructed to do so by Oracle Support or Development.

How much flash is used by Smart Flash Logging?

By default, just 512MB of flash is used per cell, which should be sufficient for most situations.  It’s a small investment for a huge performance benefit.  Statistics record the number of successful writes and of unsuccessful writes due to the temporary space being full, in which case the size may need to be increased.  Also, I/O Resource Manager (IORM) can be used to disable Smart Flash Logging for non-critical databases.
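Those statistics are exposed as cell metrics in the FL_* group and can be listed with CellCLI; a hedged sketch (verify the exact metric names, such as FL_FLASH_FIRST and FL_DISK_FIRST, against your cell software version):

dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list metriccurrent where name like 'FL_.*'"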

Do standby redo logs use Smart Flash Logging?

Yes, standby redo logs benefit from Smart Flash Logging just as online redo logs do, as long as cell patch 11.2.2.4 or higher and Database 11.2.0.2 Bundle Patch 11 or higher are applied.

How to check that Smart Flash Logging is configured?

Using CellCLI, run “LIST FLASHLOG DETAIL”; if output is returned with the details as shown below, then Smart Flash Logging is configured:

[root@v1ex1dbadm01 ~]# dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root cellcli -e "list flashlog detail"
 v1ex1celadm01: name: v1ex1celadm01_FLASHLOG
 v1ex1celadm01: cellDisk: FD_00_v1ex1celadm01,FD_01_v1ex1celadm01
 v1ex1celadm01: creationTime: 2015-06-28T17:52:43+01:00
 v1ex1celadm01: degradedCelldisks:
 v1ex1celadm01: effectiveSize: 512M
 v1ex1celadm01: efficiency: 100.0
 v1ex1celadm01: id: 366421ec-bf77-499e-870f-f0cf5390343e
 v1ex1celadm01: size: 512M
 v1ex1celadm01: status: normal
 v1ex1celadm02: name: v1ex1celadm02_FLASHLOG
 v1ex1celadm02: cellDisk: FD_01_v1ex1celadm02,FD_00_v1ex1celadm02
 v1ex1celadm02: creationTime: 2015-06-28T17:52:44+01:00
 v1ex1celadm02: degradedCelldisks:
 v1ex1celadm02: effectiveSize: 512M
 v1ex1celadm02: efficiency: 100.0
 v1ex1celadm02: id: 9f670843-c9cc-4156-a32e-8d23fa79cdb8
 v1ex1celadm02: size: 512M
 v1ex1celadm02: status: normal
 v1ex1celadm03: name: v1ex1celadm03_FLASHLOG
 v1ex1celadm03: cellDisk: FD_01_v1ex1celadm03,FD_00_v1ex1celadm03
 v1ex1celadm03: creationTime: 2015-06-28T17:52:33+01:00
 v1ex1celadm03: degradedCelldisks:
 v1ex1celadm03: effectiveSize: 512M
 v1ex1celadm03: efficiency: 100.0
 v1ex1celadm03: id: 749bada6-8ae2-4c51-8410-97622f9a9532
 v1ex1celadm03: size: 512M
 v1ex1celadm03: status: normal
[root@v1ex1dbadm01 ~]#

For more info:
Exadata Smart Flash Logging FAQ (Doc ID 1372894.1)
Oracle Exadata Whitepaper:  Exadata Smart Flash Cache Features and the Oracle Exadata Database Machine

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to fix queries on DBA_FREE_SPACE that are slow

I found myself in a situation where OpsView, a monitoring tool, was having difficulty monitoring the tablespaces for a particular pluggable database.

Upon investigation, it was found that the queries against the dictionary view DBA_FREE_SPACE were taking a very long time:

SQL> set timing on
SQL> select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = 'USERS';

NVL(SUM(DFS.BYTES)/1024/1024,0)
-------------------------------
 70.75

Elapsed: 00:00:10.98

There are 60 tablespaces in this pluggable database; the time varied when querying each tablespace, but this was by far where most of the time was spent.

I wrote a PL/SQL block to mimic Opsview as I didn’t want to create an object (procedure) in this customer’s database:

SET SERVEROUTPUT ON
SET TIMING ON
DECLARE
 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 num_out NUMBER;
BEGIN
 FOR ts_name in ts_names
 LOOP
 --sql for used space
 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 --sql for free space
 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 --sql for max
 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 dbms_output.put_line(num_out);
 END LOOP;
END;
/

I ran this and the total time was shocking 😐 :

SQL> --SET SERVEROUTPUT ON
SQL> SET TIMING ON
SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 18 --sql for max
 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
END;
 23 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:21:30.94
SQL>

So I searched My Oracle Support (MOS) and found the following MOS note:
Queries on DBA_FREE_SPACE are Slow (Doc ID 271169.1)

Which states:
“1) In release 10g, the view dba_free_space was modified to also include objects in the recycle bin.

2) Large number of objects in the recyclebin can slow down queries on  dba_free_space.

3) This is a normal behaviour.

4) For release 11g, the view dba_free_space doesn’t contain a hint which in case when there is only few objects in recyclebin, you may want to gather underlying stats of tables/dictionary to get better performance.”

The database indeed did have a lot of objects in the recycle bin (in the pluggable database):

SQL> SELECT count(*) from dba_recyclebin;

 COUNT(*)
----------
 27615

SQL>

With most of them dropped recently:

SQL> select trunc(to_date(DROPTIME,'YYYY-MM-DD:HH24:MI:SS')), count(*) from dba_recyclebin group by trunc(to_date(DROPTIME,'YYYY-MM-DD:HH24:MI:SS'))
  2  order by 1
  3 /

TRUNC(TO_ COUNT(*)
--------- ----------
24-SEP-16 2
...
19-JAN-18 2506
20-JAN-18 4322
21-JAN-18 4321
22-JAN-18 4320
23-JAN-18 4321
24-JAN-18 4321
25-JAN-18 2446

421 rows selected.

SQL>

So I purged the recycle bin (with the customer’s permission) and re-ran the check:

SQL> purge dba_recyclebin;

DBA Recyclebin purged.

Elapsed: 00:06:30.39
SQL> --SET SERVEROUTPUT ON
SQL> SET TIMING ON
SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 18 --sql for max
 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
 23 END;
 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:02:46.25
SQL>

Result: the duration of the PL/SQL block went from 21 minutes to just under 3 minutes.  However, I needed it to go under 2 minutes, as this was the timeout for OpsView.

So I proceeded with the next recommendation in the MOS note, gathering dictionary and fixed object stats (with the customer’s permission), using MOS note:
How to Gather Statistics on Objects Owned by the ‘SYS’ User and ‘Fixed’ Objects (Doc ID 457926.1)

SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

PL/SQL procedure successfully completed.

Elapsed: 00:00:20.49

SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

PL/SQL procedure successfully completed.

Elapsed: 00:04:28.07

SQL> --SET SERVEROUTPUT ON
SQL> SET TIMING ON
SQL> DECLARE
 2 cursor ts_names is select tablespace_name from dba_tablespaces where contents != 'TEMPORARY';
 3 sql_used VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 4 sql_free VARCHAR(200) := 'select nvl(sum(dfs.bytes)/1024/1024,0) from dba_free_space dfs where dfs.tablespace_name = ''text_string''';
 5 sql_max VARCHAR(200) := 'select sum(maxbytes-bytes)/1024/1024 from dba_data_files where AUTOEXTENSIBLE = ''YES'' and maxbytes>bytes and tablespace_name = ''text_string''';
 6 num_out NUMBER;
 7 BEGIN
 8 FOR ts_name in ts_names
 9 LOOP
 10 --sql for used space
 11 EXECUTE IMMEDIATE replace(sql_used, 'text_string', ts_name.tablespace_name) into num_out;
 12 dbms_output.put_line(replace(sql_used, 'text_string', ts_name.tablespace_name));
 13 dbms_output.put_line(num_out);
 14 --sql for free space
 15 EXECUTE IMMEDIATE replace(sql_free, 'text_string', ts_name.tablespace_name) into num_out;
 16 dbms_output.put_line(replace(sql_free, 'text_string', ts_name.tablespace_name));
 17 dbms_output.put_line(num_out);
 18 --sql for max
 19 EXECUTE IMMEDIATE replace(sql_max, 'text_string', ts_name.tablespace_name) into num_out;
 20 dbms_output.put_line(replace(sql_max, 'text_string', ts_name.tablespace_name));
 21 dbms_output.put_line(num_out);
 22 END LOOP;
 23 END;
 24 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:04.53
SQL>

Bingo! The duration of the PL/SQL block went down to 4 seconds 🙂

PLEASE NOTE: This also affects non-pluggable databases; however, in a multitenant database you need to purge the recycle bin wherever the dropped objects are, as the container database and the pluggable databases require independent purges.
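In other words, something along these lines is needed when connected as a suitably privileged common user (the PDB name is just an example):

-- purge dropped objects held in the root container:
ALTER SESSION SET CONTAINER = CDB$ROOT;
PURGE DBA_RECYCLEBIN;

-- then purge each pluggable database separately:
ALTER SESSION SET CONTAINER = PDB1;
PURGE DBA_RECYCLEBIN;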

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

New READ Object Privilege in 12cR1

In writing a blog post about:
Creating a Read Only Database User Account in an Oracle Database

The new “READ” object privilege came to my attention, which is a New Feature in 12.1.0.2:
Changes in Oracle Database 12c Release 1 (12.1.0.2)
READ and SELECT Object Privileges

The “SELECT” object privilege, in addition to querying the table, allows the user to:
LOCK TABLE table_name IN EXCLUSIVE MODE;
SELECT ... FROM table_name FOR UPDATE;

The new “READ” object privilege does not allow the user to lock tables in exclusive mode nor to select from a table for update.

Prior to 12.1.0.2, only the “SELECT” object privilege is available, which allows the locking:

GRANT SELECT ON ... TO ...;

From 12.1.0.2 onwards, the new “READ” object privilege is available, which doesn’t allow the locking:

GRANT READ ON ... TO ...;

This also applies at the system privilege level.  Prior to 12.1.0.2, the “SELECT ANY TABLE” privilege allows the locking:

GRANT SELECT ANY TABLE TO ...;

From 12.1.0.2 onwards, the new “READ ANY TABLE” system privilege is available, which doesn’t allow the locking:

GRANT READ ANY TABLE TO ...;
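A quick way to see the difference is to grant READ on a table and attempt the locking operations; a minimal sketch, where the user and table names are just examples and I’d expect the last two statements to fail with an insufficient privileges error:

GRANT READ ON zeddba.demo TO read_only;

-- connected as READ_ONLY:
SELECT * FROM zeddba.demo;                 -- works
SELECT * FROM zeddba.demo FOR UPDATE;      -- should be rejected
LOCK TABLE zeddba.demo IN EXCLUSIVE MODE;  -- should be rejected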

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Creating a Read Only Database User Account in an Oracle Database

It can be quite common to create a “Read Only” database user account in an Oracle database.  To do this is pretty simple using the principle of least privilege:

CREATE USER READ_ONLY IDENTIFIED BY "password";
GRANT CREATE SESSION TO READ_ONLY;

Expected output:

SQL> CREATE USER READ_ONLY IDENTIFIED BY "password";

User created.

SQL> GRANT CREATE SESSION TO READ_ONLY;

Grant succeeded.

SQL>

Depending on the Oracle version, it’s recommended to use the “READ” object privilege, new in 12.1.0.2, instead of “SELECT” to prevent locking.  See the following blog post for more info:
New READ Object Privilege in 12cR1

PLEASE NOTE: For anyone using an Oracle version prior to 12.1.0.2, replace the object privilege “READ” with “SELECT”.

To read specific tables for a schema:

SET HEADING OFF
SET PAGES 9999 LINES 400
SPOOL TABLE_GRANTS.sql
SELECT 'GRANT READ ON ' || OWNER || '.' || TABLE_NAME || ' TO READ_ONLY;'
FROM DBA_TABLES
WHERE OWNER IN ('ZEDDBA')
ORDER BY 1;
SPOOL OFF
@TABLE_GRANTS.sql
!rm TABLE_GRANTS.sql

Expected output, where you can see the user “READ_ONLY” able to select from the table owned by user “ZEDDBA“:

SQL> SET HEADING OFF
SQL> SET PAGES 9999 LINES 400
SQL> SPOOL TABLE_GRANTS.sql
SQL> SELECT 'GRANT READ ON ' || OWNER || '.' || TABLE_NAME || ' TO READ_ONLY;'
 2 FROM DBA_TABLES
 3 WHERE OWNER IN ('ZEDDBA')
 4 ORDER BY 1;

GRANT READ ON ZEDDBA.DEMO TO READ_ONLY;

SQL> SPOOL OFF
SQL> @TABLE_GRANTS.sql
SP2-0734: unknown command beginning "SQL> SELEC..." - rest of line ignored.

Grant succeeded.

SP2-0734: unknown command beginning "SQL> SPOOL..." - rest of line ignored.
SQL> !rm TABLE_GRANTS.sql

SQL> conn read_only/password
Connected.
SQL> SELECT * FROM ZEDDBA.DEMO;

Read Only example.

SQL> exit

To read all tables in the database:

GRANT READ ANY TABLE TO READ_ONLY;

Expected output, where you can see the user “READ_ONLY” able to select from the table owned by user “ZEDDBA”, even after the revoke, and from the table owned by user “ZEDDBA2”, which hasn’t been explicitly referenced:

SQL> conn zeddba/zeddba
Connected.
SQL> REVOKE READ ON ZEDDBA.DEMO FROM READ_ONLY;

Revoke succeeded.

SQL> GRANT READ ANY TABLE TO READ_ONLY;

Grant succeeded.

SQL> conn read_only/password
Connected.
SQL> SELECT * FROM ZEDDBA.DEMO;

COMMENTS
--------------------------------------------------------------------------------
Read Only example.

SQL> SELECT * FROM ZEDDBA2.DEMO;

COMMENTS
--------------------------------------------------------------------------------
Read Only example.

SQL>

To grant access to dictionary tables:

GRANT SELECT ANY DICTIONARY TO READ_ONLY;

Expected output, where you can see the user “READ_ONLY” able to select from the dictionary, i.e. views starting with DBA_ and V$:

SQL> conn zeddba/zeddba
Connected.
SQL> GRANT SELECT ANY DICTIONARY TO READ_ONLY;

Grant succeeded.

SQL> conn read_only/password
Connected.
SQL> SELECT TABLE_NAME
 2 FROM DBA_TABLES
 3 WHERE OWNER = 'ZEDDBA';

TABLE_NAME
--------------------------------------------------------------------------------
DEMO

SQL> SELECT NAME FROM V$DATABASE;

NAME
---------
ZEDDBA

SQL>

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

ODC Appreciation Day : Oracle Exadata Database Machine

Those that know me well, will know about my “appreciation” of the “Oracle Exadata Database Machine“, more commonly known as “Exadata” 🙂

So this will be my contribution to ODC Appreciation Day, formerly known as OTN Appreciation Day, a great initiative by Tim Hall aka Oracle-Base.com.

You can see a summary of last year’s blog post here:
OTN Appreciation Day : Summary

The very first Exadata was the V1 model, with the hardware by HP and the software by Oracle.  I still remember being very excited by this in my previous employment at Auto Trader and trying very hard to convince them to get one 🙂

I, of course, became an instant fan of the brawny hardware with smart software, which Oracle labelled as “Hardware and Software optimised together”.

Oracle’s partnership with HP only lasted a year, with Oracle switching to Sun for the V2 model; shortly after, Oracle bought Sun in 2010.  This is when Oracle switched from the V models to the X models, with the initial models being the X2-2 (2 sockets) and X2-8 (8 sockets).

I still remember this old video “Oracle Exadata. Are You Ready?” that I played at an internal Auto Trader conference which was about sharing knowledge, interesting new things, etc:

Exadata has come a long way since the initial release, which was aimed at Data Warehousing, to a full-on OLTP, Data Warehouse, mixed-workload and consolidation platform with “record-breaking” IOPS and scan rates!

My favourite feature is Smart Scan: the ability to offload data-intensive SQL operations from the database servers directly into the storage servers, mitigating the need to pull lots of data from storage to the database server.  Yes, you can have very fast all-flash storage, but the network needed to ship all of that data to the database server becomes the bottleneck, as does the compute needed to filter the data on the database server.  Exadata does this filtering at the storage server, meaning only the rows and columns that are directly relevant to a query are sent to the database servers.

Another favourite is storage indexes, where the minimum and maximum values of a column are kept in memory for each 1MB region of storage, allowing unnecessary I/O to be avoided when it’s known that a region cannot satisfy the predicate condition.

I didn’t manage to convince Auto Trader; however, I have since been very fortunate in my current employment at Version 1 to have worked on Exadata since 2014, from the Exadata X2-2 through to the X5-2.  I really do appreciate these “Engineered Systems” for their extreme performance, reliability and availability.  The whole concept of being “Engineered”, with the whole stack optimised together, really works, and the fact that all Exadatas share the same hardware makes you appreciate their supportability.  Even patching them with patchmgr is pretty much a doddle these days! 🙂

For more info, visit the following site:
www.Oracle.com/Exadata
https://en.wikipedia.org/wiki/Oracle_Exadata

Tuesday 10th October 2017

Happy ODC Appreciation Day! #ThanksODC #ThanksOTN 🙂

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)