Cloning an Oracle Home

You may wish to clone an Oracle Home, for example, if you have all your databases in a single Oracle Home but want to separate Development from Test.  This could be so you can soak-test Patch Set Updates (PSUs) on Development before applying them to Test and then Production.  Or you might wish to have two Oracle Homes, so you can patch one and then switch all databases to the patched Oracle Home for minimal downtime.

 

Copying the Oracle Home

First you need to copy the Oracle Home at the file level using cp as the root user, as shown below:

[root@v1ex2dbadm01 ~]# cp -Rp /u01/app/oracle/product/12.1.0.2/dbhome_1 /u01/app/oracle/product/12.1.0.2/dbhome_2

Then check that the original Oracle Home and the cloned Oracle Home are the same size:

[root@v1ex2dbadm01 ~]# du -h /u01/app/oracle/product/12.1.0.2/dbhome_1 --max-depth=0
12G /u01/app/oracle/product/12.1.0.2/dbhome_1
[root@v1ex2dbadm01 ~]# du -h /u01/app/oracle/product/12.1.0.2/dbhome_2 --max-depth=0
12G /u01/app/oracle/product/12.1.0.2/dbhome_2
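
For extra assurance, you could also compare the two directory trees; a quick sketch using diff, which should produce no output immediately after the copy:

[root@v1ex2dbadm01 ~]# diff -rq /u01/app/oracle/product/12.1.0.2/dbhome_1 /u01/app/oracle/product/12.1.0.2/dbhome_2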

Then repeat on all the other nodes:

[root@v1ex2dbadm02 ~]# cp -Rp /u01/app/oracle/product/12.1.0.2/dbhome_1 /u01/app/oracle/product/12.1.0.2/dbhome_2
[root@v1ex2dbadm02 ~]# du -h /u01/app/oracle/product/12.1.0.2/dbhome_1 --max-depth=0
12G /u01/app/oracle/product/12.1.0.2/dbhome_1
[root@v1ex2dbadm02 ~]# du -h /u01/app/oracle/product/12.1.0.2/dbhome_2 --max-depth=0
12G /u01/app/oracle/product/12.1.0.2/dbhome_2

 

Cloning the Oracle Home

Now that we have a copy of the Oracle Home, we next need to clone it using the clone.pl Perl script, as shown below:

/usr/bin/perl $ORACLE_HOME/clone/bin/clone.pl \
'-O"CLUSTER_NODES={v1ex2dbadm01,v1ex2dbadm02}"' \
'-O"LOCAL_NODE=v1ex2dbadm01"' ORACLE_BASE=/u01/app/oracle \
ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=OraDB12Home2 '-O-noConfig'

CLUSTER_NODES = All the nodes in the cluster
LOCAL_NODE = The node you are running the command on
ORACLE_BASE = The Oracle Base defined on the server
ORACLE_HOME = The cloned Oracle Home (the ORACLE_HOME environment variable, exported before running the command)
ORACLE_HOME_NAME = The name you wish to give to the cloned Oracle Home

[root@v1ex2dbadm01 ~]# su - oracle
[oracle@v1ex2dbadm01 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
[oracle@v1ex2dbadm01 ~]$ /usr/bin/perl $ORACLE_HOME/clone/bin/clone.pl \
> '-O"CLUSTER_NODES={v1ex2dbadm01,v1ex2dbadm02}"' \
> '-O"LOCAL_NODE=v1ex2dbadm01"' ORACLE_BASE=/u01/app/oracle \
> ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=OraDB12Home2 '-O-noConfig'
./runInstaller -clone -waitForCompletion "CLUSTER_NODES={v1ex2dbadm01,v1ex2dbadm02}" "LOCAL_NODE=v1ex2dbadm01" "ORACLE_BASE=/u01/app/oracle" "ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2" "ORACLE_HOME_NAME=OraDB12Home2" -noConfig -silent -paramFile /u01/app/oracle/product/12.1.0.2/dbhome_2/clone/clone_oraparam.ini
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB. Actual 5392 MB Passed
Checking swap space: must be greater than 500 MB. Actual 24532 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-06-21_02-57-14PM. Please wait ...You can find the log of this install session at:
/u01/app/oraInventory/logs/cloneActions2017-06-21_02-57-14PM.log
 .................................................. 5% Done.
 .................................................. 10% Done.
 .................................................. 15% Done.
 .................................................. 20% Done.
 .................................................. 25% Done.
 .................................................. 30% Done.
 .................................................. 35% Done.
 .................................................. 40% Done.
 .................................................. 45% Done.
 .................................................. 50% Done.
 .................................................. 55% Done.
 .................................................. 60% Done.
 .................................................. 65% Done.
 .................................................. 70% Done.
 .................................................. 75% Done.
 .................................................. 80% Done.
 .................................................. 85% Done.
 ..........
Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of OraDB12Home2 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-06-21_02-57-14PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
 .................................................. 95% Done.

As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh

Execute /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh on the following nodes:
[v1ex2dbadm01]

.................................................. 100% Done.

[oracle@v1ex2dbadm01 ~]$

Next, check that the cloned home is linked with RDS rather than UDP:

[oracle@v1ex2dbadm01 ~]$ /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/skgxpinfo
rds
[oracle@v1ex2dbadm01 ~]$

If it returns udp, then relink using the command below:

cd $ORACLE_HOME/rdbms/lib; ORACLE_HOME=$ORACLE_HOME make -f ins_rdbms.mk ipc_rds ioracle
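
After relinking, re-run skgxpinfo against the cloned home; it should now report rds:

$ORACLE_HOME/bin/skgxpinfo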

Then repeat on all the other nodes, remembering to change LOCAL_NODE:

[root@v1ex2dbadm02 ~]# su - oracle
[oracle@v1ex2dbadm02 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
[oracle@v1ex2dbadm02 ~]$ /usr/bin/perl $ORACLE_HOME/clone/bin/clone.pl \
> '-O"CLUSTER_NODES={v1ex2dbadm01,v1ex2dbadm02}"' \
> '-O"LOCAL_NODE=v1ex2dbadm02"' ORACLE_BASE=/u01/app/oracle \
> ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=OraDB12Home2 '-O-noConfig'
./runInstaller -clone -waitForCompletion "CLUSTER_NODES={v1ex2dbadm01,v1ex2dbadm02}" "LOCAL_NODE=v1ex2dbadm02" "ORACLE_BASE=/u01/app/oracle" "ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2" "ORACLE_HOME_NAME=OraDB12Home2" -noConfig -silent -paramFile /u01/app/oracle/product/12.1.0.2/dbhome_2/clone/clone_oraparam.ini
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB. Actual 10725 MB Passed
Checking swap space: must be greater than 500 MB. Actual 24565 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-06-21_03-05-37PM. Please wait ...You can find the log of this install session at:
/u01/app/oraInventory/logs/cloneActions2017-06-21_03-05-37PM.log
 .................................................. 5% Done.
 .................................................. 10% Done.
 .................................................. 15% Done.
 .................................................. 20% Done.
 .................................................. 25% Done.
 .................................................. 30% Done.
 .................................................. 35% Done.
 .................................................. 40% Done.
 .................................................. 45% Done.
 .................................................. 50% Done.
 .................................................. 55% Done.
 .................................................. 60% Done.
 .................................................. 65% Done.
 .................................................. 70% Done.
 .................................................. 75% Done.
 .................................................. 80% Done.
 .................................................. 85% Done.
 ..........
Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of OraDB12Home2 was successful.
Please check '/u01/app/oraInventory/logs/cloneActions2017-06-21_03-05-37PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
 .................................................. 95% Done.

As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh

Execute /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh on the following nodes:
[v1ex2dbadm02]

.................................................. 100% Done.

[oracle@v1ex2dbadm02 ~]$ /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/skgxpinfo
rds
[oracle@v1ex2dbadm02 ~]$

Next, run root.sh as the root user, as shown below:

[oracle@v1ex2dbadm01 ~]$ exit
logout
[root@v1ex2dbadm01 ~]# /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh
Check /u01/app/oracle/product/12.1.0.2/dbhome_2/install/root_v1ex2dbadm01.v1.com_2017-06-21_15-04-16.log for the output of root script
[root@v1ex2dbadm01 ~]#

Then repeat on all the other nodes:

[oracle@v1ex2dbadm02 ~]$ exit
logout
[root@v1ex2dbadm02 ~]# /u01/app/oracle/product/12.1.0.2/dbhome_2/root.sh
Check /u01/app/oracle/product/12.1.0.2/dbhome_2/install/root_v1ex2dbadm02.v1.com_2017-06-21_15-06-49.log for the output of root script
[root@v1ex2dbadm02 ~]#

Verify that the cloned Oracle Home includes all the nodes in the cluster:

[root@v1ex2dbadm01 ~]# su - oracle
[oracle@v1ex2dbadm01 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
[oracle@v1ex2dbadm01 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME | grep node
Rac system comprising of multiple nodes
Local node = v1ex2dbadm01
Remote node = v1ex2dbadm02
[oracle@v1ex2dbadm01 ~]$

Then repeat on all the other nodes:

[root@v1ex2dbadm02 ~]# su - oracle
[oracle@v1ex2dbadm02 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_2
[oracle@v1ex2dbadm02 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory -oh $ORACLE_HOME | grep node
Rac system comprising of multiple nodes
Local node = v1ex2dbadm02
Remote node = v1ex2dbadm01
[oracle@v1ex2dbadm02 ~]$
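
Optionally, you can also confirm the cloned Oracle Home carries the same patches as the source home. A minimal sketch using opatch lspatches with bash process substitution (assuming OPatch is at the same version in both homes; no output means they match):

[oracle@v1ex2dbadm01 ~]$ diff <(/u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch/opatch lspatches) \
> <(/u01/app/oracle/product/12.1.0.2/dbhome_2/OPatch/opatch lspatches)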

 

Switching Databases to the Cloned Oracle Home

Change the database's Oracle Home using Server Control (srvctl):

[oracle@v1ex2dbadm01 ~]$ srvctl config database -d V1DEV -a
Database unique name: V1DEV
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_1
Oracle user: oracle
Spfile: +DATAC1/V1DEV/PARAMETERFILE/spfileV1DEV.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1,RECOC1
Mount point paths:
Services: v1jobservice
Type: RAC
Start concurrency:
Stop concurrency:
Database is enabled
Database is individually enabled on nodes:
Database is individually disabled on nodes:
OSDBA group: dba
OSOPER group: dba
Database instances: V1DEV1,V1DEV2
Configured nodes: v1ex2dbadm01,v1ex2dbadm02
Database is administrator managed
[oracle@v1ex2dbadm01 ~]$ srvctl modify database -d V1DEV -o /u01/app/oracle/product/12.1.0.2/dbhome_2
[oracle@v1ex2dbadm01 ~]$ srvctl config database -d V1DEV -a
Database unique name: V1DEV
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_2
Oracle user: oracle
Spfile: +DATAC1/V1DEV/PARAMETERFILE/spfileV1DEV.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATAC1,RECOC1
Mount point paths:
Services: v1jobservice
Type: RAC
Start concurrency:
Stop concurrency:
Database is enabled
Database is individually enabled on nodes:
Database is individually disabled on nodes:
OSDBA group: dba
OSOPER group: dba
Database instances: V1DEV1,V1DEV2
Configured nodes: v1ex2dbadm01,v1ex2dbadm02
Database is administrator managed
[oracle@v1ex2dbadm01 ~]$

Next, update /etc/oratab to reflect the new Oracle Home:

[oracle@v1ex2dbadm01 ~]$ vi /etc/oratab
[oracle@v1ex2dbadm01 ~]$ more /etc/oratab | grep dbhome_2
V1DEV1:/u01/app/oracle/product/12.1.0.2/dbhome_2:N # line added by Agent

Then repeat on all the other nodes:

[oracle@v1ex2dbadm02 ~]$ vi /etc/oratab
[oracle@v1ex2dbadm02 ~]$ more /etc/oratab | grep dbhome_2
V1DEV2:/u01/app/oracle/product/12.1.0.2/dbhome_2:N # line added by Agent
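
If you prefer not to edit the file by hand, a sed one-liner along these lines does the same job (a sketch only; adjust the instance name, V1DEV1 or V1DEV2, to match the node you are on):

[oracle@v1ex2dbadm01 ~]$ sed -i 's|^V1DEV1:/u01/app/oracle/product/12.1.0.2/dbhome_1|V1DEV1:/u01/app/oracle/product/12.1.0.2/dbhome_2|' /etc/oratab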

Now we perform a rolling bounce of the database:

[oracle@v1ex2dbadm01 ~]$ srvctl status database -d V1DEV -v
Instance V1DEV1 is running on node v1ex2dbadm01. Instance status: Open,Running from Old Oracle Home.
Instance V1DEV2 is running on node v1ex2dbadm02 with online services v1jobservice. Instance status: Open,Running from Old Oracle Home.
[oracle@v1ex2dbadm01 ~]$ srvctl stop instance -d V1DEV -i V1DEV1 -f
[oracle@v1ex2dbadm01 ~]$ srvctl start instance -d V1DEV -i V1DEV1
[oracle@v1ex2dbadm01 ~]$ srvctl stop instance -d V1DEV -i V1DEV2 -f
[oracle@v1ex2dbadm01 ~]$ srvctl start instance -d V1DEV -i V1DEV2
[oracle@v1ex2dbadm01 ~]$ srvctl status database -d V1DEV -v
Instance V1DEV1 is running on node v1ex2dbadm01. Instance status: Open.
Instance V1DEV2 is running on node v1ex2dbadm02 with online services v1jobservice. Instance status: Open.
[oracle@v1ex2dbadm01 ~]$
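
If you want an operating system level check of which home each instance is now running from, one option (a sketch, assuming the standard Linux procps tools) is to resolve the working directory of the PMON background process, which sits under the Oracle Home's dbs directory:

[oracle@v1ex2dbadm01 ~]$ pwdx $(pgrep -f ora_pmon_V1DEV1)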

 

Related My Oracle Support (MOS) notes:

Master Note For Cloning Oracle Database Server ORACLE_HOME’s Using the Oracle Universal Installer (OUI) (Doc ID 1154613.1)

Cloning An Existing Oracle Database 12c Release 1 (12.1.0.x) RDBMS Installation Using OUI (Doc ID 1493677.1)

Minimal downtime patching via cloning 11gR2 ORACLE_HOME directories (Doc ID 1136544.1)

 

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Assess Performance using Calibrate on Exadata

Those fortunate enough to have an Oracle Exadata Database Machine may wonder whether their Exadata delivers the IOPS/MBPS given in the technical specifications.  With the CALIBRATE command in CellCLI, you can run raw performance tests on a cell’s hard disks and flash drives, enabling you to verify the disk/drive performance:

[root@v1ex1celadm01 ~]# cellcli
CellCLI: Release 12.1.2.3.4 - Production on Tue Jun 13 19:02:05 IST 2017

Copyright (c) 2007, 2016, Oracle. All rights reserved.

CellCLI> calibrate force;
Calibration will take a few minutes...
Aggregate random read throughput across all hard disk LUNs: 1823 MBPS
Aggregate random read throughput across all flash disk LUNs: 9973 MBPS
Aggregate random read IOs per second (IOPS) across all hard disk LUNs: 3002
Calibrating hard disks (read only) ...
LUN 0_0 on drive [8:0 ] random read throughput: 152.00 MBPS, and 243 IOPS
LUN 0_1 on drive [8:1 ] random read throughput: 157.00 MBPS, and 246 IOPS
LUN 0_10 on drive [8:10 ] random read throughput: 161.00 MBPS, and 253 IOPS
LUN 0_11 on drive [8:11 ] random read throughput: 157.00 MBPS, and 251 IOPS
LUN 0_2 on drive [8:2 ] random read throughput: 157.00 MBPS, and 244 IOPS
LUN 0_3 on drive [8:3 ] random read throughput: 158.00 MBPS, and 245 IOPS
LUN 0_4 on drive [8:4 ] random read throughput: 156.00 MBPS, and 248 IOPS
LUN 0_5 on drive [8:5 ] random read throughput: 161.00 MBPS, and 250 IOPS
LUN 0_6 on drive [8:6 ] random read throughput: 159.00 MBPS, and 252 IOPS
LUN 0_7 on drive [8:7 ] random read throughput: 158.00 MBPS, and 251 IOPS
LUN 0_8 on drive [8:8 ] random read throughput: 157.00 MBPS, and 251 IOPS
LUN 0_9 on drive [8:9 ] random read throughput: 159.00 MBPS, and 254 IOPS
Calibrating flash disks (read only, note that writes will be significantly slower) ...
LUN 1_1 on drive [FLASH_1_1] random read throughput: 2,157.00 MBPS, and 280525 IOPS
LUN 2_1 on drive [FLASH_2_1] random read throughput: 2,156.00 MBPS, and 274304 IOPS
LUN 4_1 on drive [FLASH_4_1] random read throughput: 2,158.00 MBPS, and 282083 IOPS
LUN 5_1 on drive [FLASH_5_1] random read throughput: 2,160.00 MBPS, and 287786 IOPS
CALIBRATE results are within an acceptable range.
Calibration has finished.

CellCLI>

 

The FORCE option allows the test to run while CELLSRV is still up, which is acceptable if there is no user workload; it is therefore recommended not to run it during normal operations.  Without FORCE, CELLSRV must be shut down first.

PLEASE NOTE: This is a single run on a single storage cell; you will need to run it on all storage cells in the Exadata machine to get the total IOPS/MBPS.  You can use dcli to run this across all the cells, as shown in the sketch below 🙂
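
For example, a minimal dcli sketch, typically run from a database server with SSH equivalence to the cells; the cell_group file (hypothetical here) simply lists the storage cell hostnames, one per line:

dcli -g cell_group -l root "cellcli -e calibrate force"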

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

SNMP unresponsive on Storage Cell after Exadata Patching

After a round of Exadata patching, the Storage Cells become unresponsive to SNMP for monitoring tools such as Opsview:

check_snmp_sysinfo CRITICAL - Agent not responding, tried SNMP v1 and v2c

This is because the Storage Cells are re-imaged as part of the Exadata patching, as confirmed by imagehistory:

[root@v1ex1celadm01 ~]# imagehistory
Version : 12.1.2.1.1.150316.2
Image activation date : 2015-05-01 16:10:10 -0700
Imaging mode : fresh
Imaging status : success

Version : 12.1.2.1.3.151021
Image activation date : 2015-12-16 05:25:21 +0000
Imaging mode : out of partition upgrade
Imaging status : success

Version : 12.1.2.2.1.160330
Image activation date : 2016-05-12 14:02:43 +0100
Imaging mode : out of partition upgrade
Imaging status : success

Version : 12.1.2.3.2.160721
Image activation date : 2016-10-05 01:45:42 +0100
Imaging mode : out of partition upgrade
Imaging status : success

Version : 12.1.2.3.3.161208
Image activation date : 2017-01-18 02:53:10 +0000
Imaging mode : out of partition upgrade
Imaging status : success

Version : 12.1.2.3.4.170111
Image activation date : 2017-05-23 10:58:24 +0100
Imaging mode : out of partition upgrade
Imaging status : success

[root@v1ex1celadm01 ~]#

This re-imaging wipes out the custom configuration in iptables and snmpd.conf.

To resolve this, add the rocommunity lines for your primary and secondary monitoring servers back into /etc/snmp/snmpd.conf, for example:

rocommunity V12V1 192.168.0.31
rocommunity V12V1 192.168.0.32

So the file looks like this:

[root@v1ex1celadm01 ~]# more /etc/snmp/snmpd.conf
trapcommunity public
trapsink 127.0.0.1 public
rocommunity public 127.0.0.1
rocommunity V12V1 192.168.0.31
rocommunity V12V1 192.168.0.32
rwcommunity public 127.0.0.1

access RWGroup "" any noauth exact all all all
com2sec snmpclient 127.0.0.1 public
group RWGroup v1 snmpclient

pass .1.3.6.1.4.1.4413.4.1 /usr/bin/ucd5820stat
pass .1.3.6.1.4.1.3582 /usr/sbin/lsi_mrdsnmpmain

syscontact Root <root@localhost> (configure /etc/snmp/snmp.local.conf)
syslocation Unknown (edit /etc/snmp/snmpd.conf)

view all included .1 80

[root@v1ex1celadm01 ~]#

Then reload snmpd as the root user:

[root@v1ex1celadm01 ~]# service snmpd reload
Reloading snmpd: [ OK ]
[root@v1ex1celadm01 ~]#

Next, add the primary and secondary monitoring servers to iptables as follows:

[root@v1ex1celadm01 ~]# iptables -I INPUT -s 192.168.0.31 -p udp --dport 161 -j ACCEPT
[root@v1ex1celadm01 ~]# iptables -I INPUT -s 192.168.0.32 -p udp --dport 161 -j ACCEPT

Then make the change permanent by saving the firewall rules:

[root@v1ex1celadm01 ~]# /etc/init.d/iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]
[root@v1ex1celadm01 ~]#
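
To confirm SNMP is reachable again, you can test from one of the monitoring servers with a simple snmpget against the cell (a sketch, assuming the net-snmp client tools are installed there; the OID is sysDescr.0):

snmpget -v2c -c V12V1 v1ex1celadm01 .1.3.6.1.2.1.1.1.0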

Now your monitoring should recover, as SNMP connectivity is restored 🙂 :

SNMP OK - "Linux v1ex1celadm01.v1.com 2.6.39-400.294.1.el6uek.x86_64 #1 SMP Wed Jan 11 08:46:38 PST 2017 x86_64"

 

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

How to easily delete files in the Oracle Cloud using CloudBerry Explorer

So you have some Oracle Cloud storage, which was probably thrown in initially as a freebie by Oracle 🙂  Now the freebie is expiring and you decide to retain the service, but as a metered one, so you want to delete all the unnecessary files, which in my case are old Oracle database backups.

As you can see here, I have 50k-plus files using 897GB:

[Image: Oracle Cloud Web Console Details]

Here are the files listed in the Web Console:

[Image: Oracle Cloud Web Console List Objects]

Deleting them one by one isn’t feasible.

So the solution is to use a third-party file explorer, in my case CloudBerry Explorer from CloudBerry Labs:

https://www.cloudberrylab.com/explorer/openstack.aspx

The freeware version is perfectly fine; there is no need to purchase Pro or use the trial.  Just click ‘Cancel’ on the Register Product dialogue and the application will load.

Once it is installed, you can follow this link to connect to your Oracle Cloud storage:

https://www.cloudberrylab.com/blog/how-to-use-cloudberry-explorer-with-oracle-cloud-storage/

However, the key to connecting is to have the username in the following format:

<Your Oracle Cloud Service Instance Name>-<Your Oracle Cloud Identity Domain>:<Your Oracle Cloud User Name>

e.g. zeddbacloud-zeddba:oraclecloudbackup@zeddba.com

Also select the correct ‘Account location’, which will populate the ‘Authentication Service’:

[Image: CloudBerry Login]

Set ‘Keystone’ to ‘Do not use’.

When you finally manage to get connected, you’ll see something like this:

[Image: CloudBerry Explorer]

Now you have the freedom to browse your files and delete at leisure 🙂

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)