MRP process getting terminated with error ORA-10485

If you have a Data Guard environment where you’ve just applied a Database Bundle Patch and an OJVM Patch, it’s possible that your Physical Standby will throw the following error:

Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT NODELAY
Wed Oct 11 08:11:57 2017
Media Recovery Log +RECOC1/VER1S/ARCHIVELOG/2017_10_11/thread_1_seq_18251.24912.957080425
MRP0: Background Media Recovery terminated with error 10485
Wed Oct 11 08:11:57 2017
Errors in file /u01/app/oracle/diag/rdbms/ver1s/VER1S2/trace/VER1S2_pr00_220336.trc:
ORA-10485: Real-Time Query cannot be enabled while applying migration redo.

This is because your standby database is “open”, i.e. Active Data Guard (a licensed option), and the Managed Recovery Process (MRP) is trying to apply the redo generated by datapatch, which it cannot do while the database is “open“.
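
A quick way to confirm this (a hedged check, not part of the original post) is to query the standby’s open mode and role in SQL*Plus; on an Active Data Guard standby you would expect “READ ONLY WITH APPLY”:

SQL> select open_mode, database_role from v$database;

If the standby reports “MOUNTED” instead, Real-Time Query is not in effect and the migration redo can be applied.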

See the following MOS note for more info:

MRP process getting terminated with error ORA-10485 (Doc ID 1618485.1):

“ORA-10485: Real-Time Query cannot be enabled while applying migration redo.

The Real-Time Query feature was enabled when an attempt was made to recover through migration redo generated during primary upgrades or downgrades”

The easiest solution is to restart the standby database in “mount” mode, allowing the redo shipped via Data Guard (including the datapatch changes) to be applied, and then restart it again in “open” mode:

DGMGRL> show configuration;

Configuration - ver1p

Protection Mode: MaxPerformance
 Members:
 ver1p - Primary database
 ver1s - Physical standby database
 Error: ORA-16766: Redo Apply is stopped

Fast-Start Failover: DISABLED

Configuration Status:
ERROR (status updated 37 seconds ago)

DGMGRL> show database ver1s;

Database - ver1s

Role: PHYSICAL STANDBY
 Intended State: APPLY-ON
 Transport Lag: 0 seconds (computed 0 seconds ago)
 Apply Lag: 2 hours 24 minutes 33 seconds (computed 1 second ago)
 Average Apply Rate: 99.32 MByte/s
 Real Time Query: OFF
 Instance(s):
 VER1S1
 VER1S2 (apply instance)

Database Error(s):
 ORA-16766: Redo Apply is stopped

Database Status:
ERROR

DGMGRL>

Now restart the database in “mount” mode, allowing the redo shipped via Data Guard to apply the patch:

[oracle@v1ex2dbadm01 ~]$ srvctl status database -d VER1S -v
Instance VER1S1 is running on node v1ex2dbadm01 with online services VER1_BK1,VER1_BK2,VER1_BK3,VER1_BK4. Instance status: Open,Readonly.
Instance VER1S2 is running on node v1ex2dbadm02. Instance status: Open,Readonly.
[oracle@v1ex2dbadm01 ~]$ srvctl config database -d VER1S
Database unique name: VER1S
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_1
Oracle user: oracle
Spfile: +DATAC1/VER1S/PARAMETERFILE/spfileVER1S.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1,RECOC1
Mount point paths:
Services: VER1_BK1,VER1_BK2,VER1_BK3,VER1_BK4
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances: VER1S1,VER1S2
Configured nodes: v1ex2dbadm01,v1ex2dbadm02
Database is administrator managed
[oracle@v1ex2dbadm01 ~]$ srvctl stop database -d VER1S
[oracle@v1ex2dbadm01 ~]$ srvctl start database -d VER1S -o mount

Re-check Data Guard Broker to confirm that the apply lag has cleared and the error status has resolved as expected:

DGMGRL> show database ver1s

Database - ver1s

Role: PHYSICAL STANDBY
 Intended State: APPLY-ON
 Transport Lag: 0 seconds (computed 0 seconds ago)
 Apply Lag: 0 seconds (computed 0 seconds ago)
 Average Apply Rate: 34.62 MByte/s
 Real Time Query: OFF
 Instance(s):
 VER1S1
 VER1S2 (apply instance)

Database Status:
SUCCESS

DGMGRL>

Then restart the database again in “open” mode (Active Data Guard):

[oracle@v1ex2dbadm01 ~]$ srvctl stop database -d VER1S
[oracle@v1ex2dbadm01 ~]$ srvctl start database -d VER1S
[oracle@v1ex2dbadm01 ~]$ srvctl status database -d VER1S -v
Instance VER1S1 is running on node v1ex2dbadm01 with online services VER1_BK1,VER1_BK2,VER1_BK3,VER1_BK4. Instance status: Open,Readonly.
Instance VER1S2 is running on node v1ex2dbadm02. Instance status: Open,Readonly.

Re-check Data Guard Broker to confirm that “Real Time Query” is back on as expected:

DGMGRL> show configuration

Configuration - ver1p

Protection Mode: MaxPerformance
 Members:
 ver1p - Primary database
 ver1s - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 2 seconds ago)

DGMGRL> show database ver1s

Database - ver1s

Role: PHYSICAL STANDBY
 Intended State: APPLY-ON
 Transport Lag: 0 seconds (computed 0 seconds ago)
 Apply Lag: 0 seconds (computed 0 seconds ago)
 Average Apply Rate: 84.35 MByte/s
 Real Time Query: ON
 Instance(s):
 VER1S1
 VER1S2 (apply instance)

Database Status:
SUCCESS

DGMGRL>
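
Optionally (a hedged check, not from the original post), once the standby is open again you can confirm that the datapatch changes have been replicated by querying the SQL patch registry on the standby:

SQL> select patch_id, action, status, action_time from dba_registry_sqlpatch order by action_time;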

 

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

ORA-00322: log 41 of thread 2 is not current copy

On a Data Guard Physical Standby, I noticed the following in the alert log:

Fri Jan 13 08:12:39 2017 
Errors in file /u01/app/oracle/diag/rdbms/v1s/V1S1/trace/V1S1_arc1_399861.trc: 
ORA-00322: log 41 of thread 2 is not current copy 
ORA-00312: online log 41 thread 2: '+RECOC1/V1S/ONLINELOG/group_41.3099.912775569' 
ORA-00322: log 41 of thread 2 is not current copy 
ORA-00312: online log 41 thread 2: '+DATAC1/V1S/ONLINELOG/group_41.807.912775567'

If you read the metalink note:
Physical standby – alert*log shows ORA-322, ORA-312 intermittently (Doc ID 1592057.1)

SYMPTOMS

The alert.log intermittently shows ORA-322 and ORA-312 for SRLs (standby redo logs).

CAUSE

These errors are written due to a very small timing window in which the SRL header is being updated to reflect new content from the primary while, at the same time, the ARCH process is doing its work and sees a controlfile (CF) flag entry that reflects an earlier state.

SOLUTION

Check the alert.log for any further issues. As long as ORA-322 and ORA-312 are not combined with other issues, you can ignore these errors.
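
As a sanity check (a hedged example; the alert log path is inferred from the trace file path shown above), you can scan the standby alert log for any other ORA- errors reported alongside the ORA-322/ORA-312 pairs:

$ grep ORA- /u01/app/oracle/diag/rdbms/v1s/V1S1/trace/alert_V1S1.log | grep -vE "ORA-00322|ORA-00312" | tail -20

If nothing else shows up, the errors can be safely ignored as per the note above.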

Related Issue:
ORA-00314: log 42 of thread 2, expected sequence# 37363 doesn’t match 37361

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

ORA-00314: log 42 of thread 2, expected sequence# 37363 doesn’t match 37361

On a Data Guard Physical Standby, I noticed the following in the alert log:

Mon Jan 30 12:37:22 2017
Errors in file /u01/app/oracle/diag/rdbms/v1s/V1S1/trace/V1S1_arc1_105040.trc:
ORA-00314: log 42 of thread 2, expected sequence# 37363 doesn't match 37361
ORA-00312: online log 42 thread 2: '+RECOC1/V1S/ONLINELOG/group_42.15446.927996415'
ORA-00314: log 42 of thread 2, expected sequence# 37363 doesn't match 37361
ORA-00312: online log 42 thread 2: '+DATAC1/V1S/ONLINELOG/group_42.1051.927996411'

If you read the metalink note:
ORA-00314: LOG 404 OF THREAD 4, EXPECTED SEQUENCE# 33808 DOESN’T MATCH 33543 (Doc ID 1077564.1)

It explains that the standby redo log has a corrupt entry: due to a problem (such as an instance crash or a network issue) while the standby was receiving sequence 33808, the log was corrupted part-way through. The header therefore still holds information for sequence 33808, while the archive sequence# currently being transferred is 33543.
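
To see which sequence each standby redo log header currently claims versus what is actually being shipped (a hedged check, not from the original post), you can query v$standby_log on the standby:

SQL> select group#, thread#, sequence#, status from v$standby_log order by thread#, group#;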

This was confirmed when I checked the trace file:

*** 2017-01-30 12:37:17.760
----- START Event Driven Actions Dump ----
---- END Event Driven Actions Dump ----
----- START DDE Actions Dump -----
Executing SYNC actions
----- START DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (Async) -----
Successfully dispatched
----- END DDE Action: 'DB_STRUCTURE_INTEGRITY_CHECK' (SUCCESS, 0 csec) -----
Executing ASYNC actions
----- END DDE Actions Dump (total 0 csec) -----
Reading mirrors of log member '+DATAC1/V1S/ONLINELOG/group_42.1051.927996411' (thread 2 seq 37363 block 1)
ksfdrfms:Mirror Read file=+DATAC1/V1S/ONLINELOG/group_42.1051.927996411 fob=0x248d0ba68 bufp=0x7f4139596000 blkno=1 nbytes=512
ksfdafReadMirror: Read success from mirror side=1 logical extent number=0 disk=DATAC1_CD_01_V1OEX2CELADM01 path=o/10.1.11.14;10.1.11.15/DATAC1_CD_01_v1oex2celadm01
Mirror I/O done from ASM disk o/10.1.11.14;10.1.11.15/DATAC1_CD_01_v1oex2celadm01
DDE: Problem Key 'ORA 312' was flood controlled (0x1) (no incident)
ORA-00312: online log 42 thread 2: '+DATAC1/V1S/ONLINELOG/group_42.1051.927996411'
Read from log mirror 'DATAC1_CD_01_V1OEX2CELADM01' (thread 2 seq 37363 block 1) found corrupt block
ksfdrnms:Mirror Read file=+DATAC1/V1S/ONLINELOG/group_42.1051.927996411 fob=0x248d0ba68 bufp=0x7f4139596000 nbytes=512

Here we can see the ‘found corrupt block’ message.

Luckily for me, Oracle resolved this itself by making an RFS call for the archived redo log it required and then continuing media recovery 🙂 :

Mon Jan 30 12:37:23 2017
RFS[2]: Selected log 42 for thread 2 sequence 37363 dbid 284911310 branch 891349070
Mon Jan 30 12:38:09 2017
Media Recovery Waiting for thread 2 sequence 37363 (in transit)
Mon Jan 30 12:38:09 2017
Recovery of Online Redo Log: Thread 2 Group 42 Seq 37363 Reading mem 0
 Mem# 0: +DATAC1/V1S/ONLINELOG/group_42.1051.927996411
 Mem# 1: +RECOC1/V1S/ONLINELOG/group_42.15446.927996415

If you’re not so lucky, you can resolve it by following the steps in the metalink note:
ORA-00314: LOG 404 OF THREAD 4, EXPECTED SEQUENCE# 33808 DOESN’T MATCH 33543 (Doc ID 1077564.1)

Which are (a sketch follows this list):
1. Stop recovery
2. Clear the standby redo log
3. Restart recovery
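
A hedged sketch of those three steps in SQL*Plus on the standby, assuming group 42 is the affected standby redo log as per the alert log above (check the MOS note for the exact procedure for your version, and if the broker manages apply, set the database state to APPLY-OFF/APPLY-ON rather than cancelling and restarting recovery manually):

alter database recover managed standby database cancel;                    -- 1. stop recovery
alter database clear logfile group 42;                                     -- 2. clear the corrupt standby redo log
alter database recover managed standby database disconnect from session;   -- 3. restart recovery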

Related Issue:
ORA-00322: log 41 of thread 2 is not current copy

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)