
Saturday, October 13, 2018

Step by step - Rolling Forward a Physical Standby Database using RMAN Incremental Backup

1) Stop the managed recovery process (MRP) on the STANDBY database

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Note:  For the remaining steps, the standby database must be in a MOUNT state.  

2) Determine the SCN of the STANDBY database.

On the standby database, find the SCN which will be used for the incremental backup at the primary database:
You need to use the 'lowest SCN' from the queries below:


SQL> SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN
--------------
3164433


SQL> select min(checkpoint_change#) from v$datafile_header
where file# not in (select file# from v$datafile where enabled = 'READ ONLY');

MIN(CHECKPOINT_CHANGE#)
-----------------------
3162298

Use the lowest SCN from the two queries above; in this example it is SCN 3162298. Therefore, you need to take the incremental backup from SCN 3162298.

3) Take an incremental backup of the PRIMARY database

In RMAN, connect to the primary database and create an incremental backup from the SCN derived in the previous step:

RMAN> BACKUP INCREMENTAL FROM SCN 3162298 DATABASE FORMAT '/tmp/ForStandby_%U' tag 'FORSTANDBY';

4) Transfer all backup sets to STANDBY server

All backups created in step 3 must be transferred from the primary to the standby system. If the backup is written to an NFS location that is also visible to the standby server, this step can be skipped.
  
scp /tmp/ForStandby_* standby:/tmp

5) Catalog the backups in STANDBY controlfile.

In order for the standby database to know about the backups, catalog the backup pieces on the STANDBY database. 

RMAN> CATALOG START WITH '/tmp/ForStandby';

using target database control file instead of recovery catalog
searching for all files that match the pattern /tmp/ForStandby

List of Files Unknown to the Database
=====================================
File Name: /tmp/ForStandby_2lkglss4_1_1
File Name: /tmp/ForStandby_2mkglst8_1_1

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/ForStandby_2lkglss4_1_1
File Name: /tmp/ForStandby_2mkglst8_1_1

6) Recover the STANDBY database with the cataloged incremental backup:

RMAN> RECOVER DATABASE NOREDO;

starting recover at 03-JUN-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=28 devtype=DISK
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: +DATA/mystd/datafile/system.297.688213333
destination for restore of datafile 00002: +DATA/mystd/datafile/undotbs1.268.688213335
destination for restore of datafile 00003: +DATA/mystd/datafile/sysaux.267.688213333
channel ORA_DISK_1: reading from backup piece /tmp/ForStandby_2lkglss4_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/tmp/ForStandby_2lkglss4_1_1 tag=FORSTANDBY
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished recover at 03-JUN-09

7) In RMAN, connect to the PRIMARY database and create a standby control file backup:

RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/ForStandbyCTRL.bck';

8) Copy the standby control file backup to the STANDBY system. 

scp /tmp/ForStandbyCTRL.bck standby:/tmp

9) Capture datafile information in STANDBY database.

We now need to refresh the standby control file from the standby control file backup taken on the primary. However, since the datafile names on the standby are likely different from the primary, first save the names of the datafiles on the standby; we can refer to this list after restoring the control file from the primary backup to verify whether there is any discrepancy. Run the query below on the standby and save the results for later use.

spool datafile_names_step8.txt
set lines 200
col name format a60
select file#, name from v$datafile order by file# ;
spool off

10) From RMAN, connect to STANDBY database and restore the standby control file:

RMAN> SHUTDOWN IMMEDIATE ;
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/ForStandbyCTRL.bck';

Starting restore at 03-JUN-09
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=36 devtype=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
output filename=+DATA/mystd/controlfile/current.257.688583989
Finished restore at 03-JUN-09

11) Shut down the STANDBY database and startup mount:

RMAN> SHUTDOWN;
RMAN> STARTUP MOUNT;

12) Catalog datafiles in STANDBY if location/name of datafiles is different

Note:  We recommend checking the incarnation for primary and standby before completing this step.

example:  RMAN> list incarnation;

Since the control file was restored from the PRIMARY, the datafile locations in the STANDBY control file will be the same as on the PRIMARY database. If the directory structure is different between the standby and the primary, or you are using Oracle Managed Files, cataloging the datafiles on the STANDBY will do the necessary rename operations. If the primary and standby have identical structures and datafile names, this step can be skipped.
Perform the step below on the STANDBY for each disk group where the datafile directory structure differs between primary and standby.

RMAN> CATALOG START WITH '+DATA/mystd/datafile/';

List of Files Unknown to the Database
=====================================
File Name: +data/mystd/DATAFILE/SYSTEM.309.685535773
File Name: +data/mystd/DATAFILE/SYSAUX.301.685535773
File Name: +data/mystd/DATAFILE/UNDOTBS1.302.685535775
File Name: +data/mystd/DATAFILE/SYSTEM.297.688213333
File Name: +data/mystd/DATAFILE/SYSAUX.267.688213333
File Name: +data/mystd/DATAFILE/UNDOTBS1.268.688213335

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +data/mystd/DATAFILE/SYSTEM.297.688213333
File Name: +data/mystd/DATAFILE/SYSAUX.267.688213333
File Name: +data/mystd/DATAFILE/UNDOTBS1.268.688213335

If any datafiles have been added to the Primary AFTER SCN 3162298, they will also have to be restored to the standby host (see Note 1531031.1, Steps to perform for Rolling forward a standby database using RMAN incremental backup when datafile is added to primary) and cataloged as shown above before doing the switch.
To determine whether any files have been added to the Primary since the standby's current SCN:

SQL> SELECT FILE#, NAME FROM V$DATAFILE WHERE CREATION_CHANGE# > 3162298;
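
If the query does return rows, the new datafile(s) must first be backed up on the primary, transferred and cataloged as in steps 4 and 5, and then restored on the standby. A minimal sketch, assuming the newly added file is file# 5 (a hypothetical number):

On the PRIMARY:
RMAN> BACKUP DATAFILE 5 FORMAT '/tmp/ForStandby_df5_%U';

On the STANDBY, after transferring and cataloging the backup piece:
RMAN> RESTORE DATAFILE 5;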

If the above query returns zero rows, you can switch the datafiles. This will rename the datafiles to their correct names at the standby site:

RMAN> SWITCH DATABASE TO COPY;

datafile 1 switched to datafile copy "+DATA/mystd/datafile/system.297.688213333"
datafile 2 switched to datafile copy "+DATA/mystd/datafile/undotbs1.268.688213335"
datafile 3 switched to datafile copy "+DATA/mystd/datafile/sysaux.267.688213333"

Note:
a) At this point, you can compare against the query output from step 9 for any discrepancy (other than newly added datafiles) to ensure all the datafiles are present on the standby.
b) Run the queries from step 2 again to confirm that the incremental apply has moved the datafiles forward. The SCNs should now be higher than their initial values.

13) Configure the STANDBY database to use flashback (optional)

If the STANDBY database needs to be configured for FLASHBACK run the following commands:

SQL> ALTER DATABASE FLASHBACK OFF;
SQL> ALTER DATABASE FLASHBACK ON;

NOTE:  This command may return the following:

ORA-38788 More standby database recovery is needed

Depending on the situation, this is expected. Proceed with the remaining steps.

14) On STANDBY database, clear all standby redo log groups:

SQL> select GROUP# from v$logfile where TYPE='STANDBY' group by GROUP#;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 1;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 2;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;
....

15) On the STANDBY database, start the MRP

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;


Saturday, September 15, 2018

ORA-01722: invalid number while running catupgrd.sql during database upgrade

Recently we encountered this issue while upgrading from 11.2.0.3 to 11.2.0.4.

ISSUE

catupgrd.sql script was failing with the below error:

DOC> run in the old oracle home prior to upgrading a pre-11.2 database: 
DOC> 
DOC> SELECT TO_NUMBER('MUST_BE_SAME_TIMEZONE_FILE_VERSION') 
DOC> * 
DOC> ERROR at line 1: 
DOC> ORA-01722: invalid number 
DOC> 
DOC> 
DOC> o Action: 
DOC> Shutdown database ("alter system checkpoint" and then "shutdown abort"). 
DOC> Revert to the original ORACLE_HOME and start the database. 
DOC> Run pre-upgrade tool against the database. 
DOC> Review and take appropriate actions based on the pre-upgrade 
DOC> output before opening the datatabase in the new software version. 
-- 
The following error is generated if (1) the old release uses a time 
DOC> zone file version newer than the one shipped with the new oracle 
DOC> release and (2) the new oracle home has not been patched yet: 
DOC> 
DOC> SELECT TO_NUMBER('MUST_PATCH_TIMEZONE_FILE_VERSION_ON_NEW_ORACLE_HOME') 
DOC> * 
DOC> ERROR at line 1: 
DOC> ORA-01722: invalid number 
DOC> 
DOC> o Action: 
DOC> Shutdown database ("alter system checkpoint" and then "shutdown abort"). 
DOC> Patch new ORACLE_HOME to the same time zone file version as used 
DOC> in the old ORACLE_HOME. 


SOLUTION

Get the output of the below queries from 11.2.0.3 and 11.2.0.4 homes:

1) SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value 
FROM DATABASE_PROPERTIES 
WHERE PROPERTY_NAME LIKE 'DST_%' 
ORDER BY PROPERTY_NAME; 

2) SELECT version FROM v$timezone_file; 

3) ls -ltr $ORACLE_HOME/oracore/zoneinfo


OUTPUT:

on 11.2.0.3 
========

1. SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value 
FROM DATABASE_PROPERTIES 
WHERE PROPERTY_NAME LIKE 'DST_%' 
ORDER BY PROPERTY_NAME; 

PROPERTY_NAME VALUE 
------------------------------ ---------- 
DST_PRIMARY_TT_VERSION 18 
DST_SECONDARY_TT_VERSION 0 
DST_UPGRADE_STATE NONE 

2. SELECT version FROM v$timezone_file; 

SYS@host1> SELECT version FROM v$timezone_file; 

VERSION 
---------- 
18 

3.   @host1> cd /u01/sq/ora_1/db/11.2.0.3/oracore/zoneinfo/ 
@host1> ls -tlr 
total 14568 
-rw-r--r-- 1 oracle oinstall 6068 Jun 7 2010 readme.txt 
-rw-r--r-- 1 oracle oinstall 31525 Jun 7 2010 timezdif.csv 
-rw-r--r-- 1 oracle oinstall 351525 Aug 18 2011 timezone_9.dat 
-rw-r--r-- 1 oracle oinstall 302100 Aug 18 2011 timezone_8.dat 
-rw-r--r-- 1 oracle oinstall 286815 Aug 18 2011 timezone_7.dat 
-rw-r--r-- 1 oracle oinstall 286217 Aug 18 2011 timezone_6.dat 
-rw-r--r-- 1 oracle oinstall 286310 Aug 18 2011 timezone_5.dat 
-rw-r--r-- 1 oracle oinstall 286264 Aug 18 2011 timezone_4.dat 
-rw-r--r-- 1 oracle oinstall 286651 Aug 18 2011 timezone_3.dat 
-rw-r--r-- 1 oracle oinstall 274900 Aug 18 2011 timezone_2.dat 
-rw-r--r-- 1 oracle oinstall 274427 Aug 18 2011 timezone_1.dat 
-rw-r--r-- 1 oracle oinstall 344448 Aug 18 2011 timezone_14.dat 
-rw-r--r-- 1 oracle oinstall 344425 Aug 18 2011 timezone_13.dat 
-rw-r--r-- 1 oracle oinstall 345024 Aug 18 2011 timezone_12.dat 
-rw-r--r-- 1 oracle oinstall 345356 Aug 18 2011 timezone_11.dat 
-rw-r--r-- 1 oracle oinstall 345637 Aug 18 2011 timezone_10.dat 
-rw-r--r-- 1 oracle oinstall 801410 Aug 18 2011 timezlrg_9.dat 
-rw-r--r-- 1 oracle oinstall 616723 Aug 18 2011 timezlrg_8.dat 
-rw-r--r-- 1 oracle oinstall 601242 Aug 18 2011 timezlrg_7.dat 
-rw-r--r-- 1 oracle oinstall 586750 Aug 18 2011 timezlrg_6.dat 
-rw-r--r-- 1 oracle oinstall 587487 Aug 18 2011 timezlrg_5.dat 
-rw-r--r-- 1 oracle oinstall 531137 Aug 18 2011 timezlrg_4.dat 
-rw-r--r-- 1 oracle oinstall 527717 Aug 18 2011 timezlrg_3.dat 
-rw-r--r-- 1 oracle oinstall 507957 Aug 18 2011 timezlrg_2.dat 
-rw-r--r-- 1 oracle oinstall 493675 Aug 18 2011 timezlrg_1.dat 
-rw-r--r-- 1 oracle oinstall 791430 Aug 18 2011 timezlrg_14.dat 
-rw-r--r-- 1 oracle oinstall 782475 Aug 18 2011 timezlrg_13.dat 
-rw-r--r-- 1 oracle oinstall 785621 Aug 18 2011 timezlrg_12.dat 
-rw-r--r-- 1 oracle oinstall 787272 Aug 18 2011 timezlrg_11.dat 
-rw-r--r-- 1 oracle oinstall 792894 Aug 18 2011 timezlrg_10.dat 
-rwxr-xr-x 1 oracle oinstall 781669 Jun 11 2012 timezlrg_18.dat >>>>>>>>>>>>>>>>> 
-rwxr-xr-x 1 oracle oinstall 341718 Jun 11 2012 timezone_18.dat >>>>>>>>>>>>> 
-rwxr-xr-x 1 oracle oinstall 7150 Jun 11 2012 readme_18.txt 
drwxr-xr-x 2 oracle oinstall 4096 May 6 2013 big 
drwxr-xr-x 2 oracle oinstall 4096 May 6 2013 little 

on 11.2.0.4 
========

1. SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value 
FROM DATABASE_PROPERTIES 
WHERE PROPERTY_NAME LIKE 'DST_%' 
ORDER BY PROPERTY_NAME; 

PROPERTY_NAME VALUE 
------------------------------ ---------- 
DST_PRIMARY_TT_VERSION 18 
DST_SECONDARY_TT_VERSION 0 
DST_UPGRADE_STATE NONE 

2. SQL> SELECT version FROM v$timezone_file; 

no rows selected 

3. @host1> cd /u01/sq/ora_1/db/11.2.0.4/oracore/zoneinfo/ 
@host1> ls -tlr 
total 14568 
-rw-r--r-- 1 oracle oinstall 6068 Jun 7 2010 readme.txt 
-rw-r--r-- 1 oracle oinstall 31525 Jun 7 2010 timezdif.csv 
-rw-r--r-- 1 oracle oinstall 351525 Jul 16 2013 timezone_9.dat 
-rw-r--r-- 1 oracle oinstall 302100 Jul 16 2013 timezone_8.dat 
-rw-r--r-- 1 oracle oinstall 286815 Jul 16 2013 timezone_7.dat 
-rw-r--r-- 1 oracle oinstall 286217 Jul 16 2013 timezone_6.dat 
-rw-r--r-- 1 oracle oinstall 286310 Jul 16 2013 timezone_5.dat 
-rw-r--r-- 1 oracle oinstall 286264 Jul 16 2013 timezone_4.dat 
-rw-r--r-- 1 oracle oinstall 286651 Jul 16 2013 timezone_3.dat 
-rw-r--r-- 1 oracle oinstall 274900 Jul 16 2013 timezone_2.dat 
-rw-r--r-- 1 oracle oinstall 274427 Jul 16 2013 timezone_1.dat 
-rw-r--r-- 1 oracle oinstall 344448 Jul 16 2013 timezone_14.dat 
-rw-r--r-- 1 oracle oinstall 344425 Jul 16 2013 timezone_13.dat 
-rw-r--r-- 1 oracle oinstall 345024 Jul 16 2013 timezone_12.dat 
-rw-r--r-- 1 oracle oinstall 345356 Jul 16 2013 timezone_11.dat 
-rw-r--r-- 1 oracle oinstall 345637 Jul 16 2013 timezone_10.dat 
-rw-r--r-- 1 oracle oinstall 801410 Jul 16 2013 timezlrg_9.dat 
-rw-r--r-- 1 oracle oinstall 616723 Jul 16 2013 timezlrg_8.dat 
-rw-r--r-- 1 oracle oinstall 601242 Jul 16 2013 timezlrg_7.dat 
-rw-r--r-- 1 oracle oinstall 586750 Jul 16 2013 timezlrg_6.dat 
-rw-r--r-- 1 oracle oinstall 587487 Jul 16 2013 timezlrg_5.dat 
-rw-r--r-- 1 oracle oinstall 531137 Jul 16 2013 timezlrg_4.dat 
-rw-r--r-- 1 oracle oinstall 527717 Jul 16 2013 timezlrg_3.dat 
-rw-r--r-- 1 oracle oinstall 507957 Jul 16 2013 timezlrg_2.dat 
-rw-r--r-- 1 oracle oinstall 493675 Jul 16 2013 timezlrg_1.dat 
-rw-r--r-- 1 oracle oinstall 791430 Jul 16 2013 timezlrg_14.dat 
-rw-r--r-- 1 oracle oinstall 782475 Jul 16 2013 timezlrg_13.dat 
-rw-r--r-- 1 oracle oinstall 785621 Jul 16 2013 timezlrg_12.dat 
-rw-r--r-- 1 oracle oinstall 787272 Jul 16 2013 timezlrg_11.dat 
-rw-r--r-- 1 oracle oinstall 792894 Jul 16 2013 timezlrg_10.dat 
-rw-r--r-- 1 oracle oinstall 341718 Sep 18 2013 timezone_17.dat 
-rw-r--r-- 1 oracle oinstall 779003 Sep 18 2013 timezlrg_17.dat 
-rw-r--r-- 1 oracle oinstall 6905 Aug 31 22:38 readme_17.txt 
drwxr-xr-x 2 oracle oinstall 4096 Aug 31 23:04 big 
drwxr-xr-x 2 oracle oinstall 4096 Aug 31 23:04 little 
@host1> 

From the above output, we can see that the result of the 2nd query on 11.2.0.4 is empty (no rows selected) because the version 18 time zone files do not exist in the new 11.2.0.4 home.

Hence, to resolve this issue we copied the time zone files from the 11.2.0.3 home to the new 11.2.0.4 home.
Make sure to copy the version 18 files from the 11.2.0.3 home to the 11.2.0.4 home in the below locations (an example copy command is sketched after this list):

/u01/sq/ora_1/db/11.2.0.4/oracore/zoneinfo/
/u01/sq/ora_1/db/11.2.0.4/oracore/zoneinfo/big
/u01/sq/ora_1/db/11.2.0.4/oracore/zoneinfo/little
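
A minimal sketch of the copy, assuming the homes are at the paths shown above and the version 18 files follow the naming pattern *_18.* (adjust for your environment):

cp /u01/sq/ora_1/db/11.2.0.3/oracore/zoneinfo/*_18.* /u01/sq/ora_1/db/11.2.0.4/oracore/zoneinfo/
cp /u01/sq/ora_1/db/11.2.0.3/oracore/zoneinfo/big/*_18.* /u01/sq/ora_1/db/11.2.0.4/oracore/zoneinfo/big/
cp /u01/sq/ora_1/db/11.2.0.3/oracore/zoneinfo/little/*_18.* /u01/sq/ora_1/db/11.2.0.4/oracore/zoneinfo/little/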

Then we ran the catupgrd.sql script again and it completed successfully.

==========================*****========================================

Tuesday, July 24, 2018

Step by step - How to upgrade from 11.2.0.3 to 11.2.0.4 Grid and RDBMS in RAC & Data Guard Environment

This document covers upgrading databases in a RAC and Data Guard environment. In this upgrade, both the Grid and RDBMS homes are upgraded, using the out-of-place approach (a new home for both GI and DB). There are basically 6 steps to complete this requirement:

Step 1: Prerequisite checks and backups

Step 2: New Grid Binary Installation:

            - Fill in the response file
            - Run the prerequisite checks
            - Install the new GI binary

Step 3: Manual steps performed after Grid binary installation to upgrade the Grid.

Step 4: New RDBMS installation

           - Fill in the response file
           - Run the prerequisite checks
           - Install the new RDBMS binary

Step 5: Manual steps performed after new binary installation to upgrade the RDBMS

Step 6: Upgrade the database on standby side.

Now we will move step by step in detail:

Step 1: Prerequisite checks and backups

1) Check the storage space for the new binaries to be installed. We must have around 20 GB of space available for each home (Grid and RDBMS) on each node.
Also issue df -kh and make sure there is space available in /tmp and / (root).

2) Remove the audit files and any large log files on each node (not deleting them will slow down the upgrade).

$cd $GRID_HOME/rdbms/audit
$rm *.aud

3) Take a full database backup using RMAN and also take a full database export.

4) Backup of Grid and RDBMS binaries using tar (example commands for items 4-7 are sketched after this list).

5) Backup of Oracle Inventory

6) Backup of OCR on each node.

7) Backup of ASM metadata.

8) Secure the Data Guard configuration.
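
A minimal sketch of the backup commands for items 4-7, assuming /backup is a staging directory with enough free space (the home paths and file names are examples only):

For the Grid home, RDBMS home and Oracle Inventory (items 4 and 5), as root:
#tar -cvzf /backup/grid_home_11203.tar.gz /u01/app/11.2.0.3/grid
#tar -cvzf /backup/db_home_11203.tar.gz /u01/sq/ora_2/db/11.2.0.3
#tar -cvzf /backup/oraInventory.tar.gz /u01/app/oraInventory

For the OCR (item 6), as root from the existing Grid home:
#ocrconfig -manualbackup

For the ASM metadata (item 7), as the Grid owner:
$asmcmd md_backup /backup/asm_md_backup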

Step 2: New Grid home installation:

1) Fill in the response file for the new binary home. Edit the response file according to the requirement.

2) Run the installer to install the GRID using this response file.
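
For example, a silent installation using a response file might look like the following (the staging path and response file name are assumptions; adjust for your environment):

$cd /u01/stage/grid
$./runInstaller -silent -responseFile /u01/stage/grid/grid_install.rsp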

Step 3: Grid Upgrade:

Manual Grid upgrade step after new binary installation:

1) On each node, copy listener.ora and sqlnet.ora to new Grid home(11.2.0.4).

2) If necessary, adjust existing ASM instance memory parameter:

     Log in as sysasm:
     SQL>show parameter memory_target

     If the value is smaller than 1536m, then issue the following command:

     SQL>alter system set memory_target=1536m scope=spfile;

    The value 1536m has proven to be sufficient in most environments; the change will not be effective until the next restart.

    SQL>show parameter memory_max_target

   If the value is smaller than 4096m, then issue the following command:

   SQL>alter system set memory_max_target=4096m scope=spfile;

   The value 4096m has proven to be sufficient in most environments; the change will not be effective until the next restart.

3) Stop clusterware and database resources gracefully on each node.

    #crsctl stop crs

    This command will bring down the database as well, so there is no need to shut down the database separately.

    Make sure all CRS daemons are stopped on each node:

  #crsctl stat resource -t

4) On each node, execute rootupgrade.sh script from new 11.2.0.4 home to upgrade clusterware.

    The rootupgrade.sh script can only be run sequentially node by node, and not in parallel between nodes.

  Note: This script takes 15-20 minutes to complete.

While switching to the root user to execute rootupgrade.sh, "su -" or "su - root" provides the full root environment, while sudo, pbrun, "su root", "su" or similar facilities do not. It is recommended to execute rootupgrade.sh with full root access to avoid any issues.
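
For example, assuming the new Grid home is /u01/app/11.2.0.4/grid (a hypothetical path), run on each node as root:

#/u01/app/11.2.0.4/grid/rootupgrade.sh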

5) Once the Grid upgrade script finishes, all the clusterware and database services come up automatically. If not, start them manually.

6) Verify the cluster and database services on each node.

     #crsctl check crs
     #crsctl stat resource -t
     #ocrcheck -local
     $srvctl status database -d dbname

7) After all the nodes are upgraded, update the environment variables to point to the new 11.2.0.4 home.

    The below files need to be updated (an example is shown after the list):

    1) .bash_profile
    2) /etc/oratab
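
    For example, an /etc/oratab entry for the ASM instance pointing to the new home might look like this (the instance name and path are examples only):

    +ASM1:/u01/app/11.2.0.4/grid:N

    and in the Grid owner's .bash_profile:

    export ORACLE_HOME=/u01/app/11.2.0.4/grid
    export PATH=$ORACLE_HOME/bin:$PATH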

Note: Once the Grid is installed and upgraded on the primary servers, do the Grid installation and upgrade on the standby side too. Otherwise, the installer will not allow you to install the new 11.2.0.4 RDBMS home on the standby side.

Step 4: New RDBMS Home Installation

Like the Grid installation, the first thing we need to do is prepare the response file. Fill in all the necessary parameters required to install the new home, and then run the installer using this response file. Install the new RDBMS (11.2.0.4) on the standby as well.
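
Similar to the Grid installation, a silent run of the RDBMS installer might look like this (the staging path and response file name are assumptions):

$cd /u01/stage/database
$./runInstaller -silent -responseFile /u01/stage/database/db_install.rsp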

Step 5: RDBMS Upgrade

1) Before we start the database upgrade, we need to create a restore point.

    SQL>CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

2) Copy the parameter file, password file, tnsnames.ora and sqlnet.ora to the new RDBMS home. For RAC, copy them on all the nodes.

3) Count the invalid objects and make a note of them (see the queries after item 4).

4) Check the free space in the SYSTEM tablespace.
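
      For items 3 and 4, queries like the following can be used:

      SQL>select count(*) from dba_objects where status='INVALID';
      SQL>select sum(bytes)/1024/1024 free_mb from dba_free_space where tablespace_name='SYSTEM';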

5) For RAC only, change the instance to exclusive mode (i.e. cluster_database=false), as shown below.
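
      A minimal example, assuming the database uses an spfile:

      SQL>alter system set cluster_database=false scope=spfile sid='*';

      The change takes effect when the database is restarted with STARTUP UPGRADE in step 8; step 12 reverts it by setting cluster_database=true in the same way.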

6) Gather dictionary stats from existing binary home.

      SQL>execute dbms_stats.gather_dictionary_stats;

7) Execute the following pre-upgrade check script from the existing home to check that the instance fulfills the pre-upgrade requirements.

    SQL>@/u01/sq/ora_2/db/11.2.0.4/rdbms/admin/utlu112i.sql

Fix all the errors reported by this script before proceeding with the actual upgrade.
For RAC, this step needs to be executed only once, not from all the nodes.

8) Start the instance with the STARTUP UPGRADE option in SQL*Plus from the new Oracle home (11.2.0.4).
     For RAC, this step needs to be executed from one node only (not from all nodes). Then run the upgrade script.

     SQL>startup upgrade
   
     SQL>@/u01/sq/ora_2/db/11.2.0.4/rdbms/admin/catupgrd.sql

9) Then restart the instance and run the following sql script:

     SQL>shut immediate

     SQL>startup

     SQL>@/u01/sq/ora_2/db/11.2.0.4/rdbms/admin/catuppst.sql

    (For RAC, this script needs to be executed only from one node)

10) Run the post-upgrade check script from new home.

      SQL>@/u01/sq/ora_2/db/11.2.0.4/rdbms/admin/utlu112s.sql

      (For RAC, this script needs to be executed from only one node)

11) Recompile the objects and compare the invalid object list to ensure there are no new invalid objects.

      SQL>@?/rdbms/admin/utluiobj.sql

      SQL>@?/rdbms/admin/utlrp.sql

      SQL>@?/rdbms/admin/utluiobj.sql

     (for RAC, this script needs to be executed only from one node)

12) For RAC, set the instance back from exclusive mode (cluster_database=true).

13) Update database resources in CRS registry:
   
       $srvctl upgrade database -d dbname -o /u01/sq/ora_2/db/11.2.0.4

14) Startup the remaining instances of the database.

      $srvctl start instance -d dbname -i instance_name

15) After the database is upgraded, update the environment variables so that they point to 11.2.0.4 in the below files:

       /etc/oratab for oracle user
       .bash_profile for oracle user
       listener.ora file

16) If VNCR (valid node checking for registration) is required, add the VNCR parameters in the listener.ora file
      (for RAC, this applies to all nodes). A sketch is shown below.
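
      A minimal listener.ora sketch, assuming the listener is named LISTENER and the node names are examples only:

      VALID_NODE_CHECKING_REGISTRATION_LISTENER = ON
      REGISTRATION_INVITED_NODES_LISTENER = (node1, node2)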


17) Shutdown and restart the database.

18) Ask the application team to verify that everything is running fine for them.

19) Drop the restore point before_upgrade.


STEP 6 - UPGRADE THE DATABASE ON STANDBY SIDE

Perform these steps only if there is a physical standby database associated with the database being upgraded.

1) Copy the below files from the 11.2.0.3 home to the new 11.2.0.4 home:

      tnsnames.ora
      sqlnet.ora
      parameter file
      password file

2) Shutdown the primary database as follows:

    $srvctl stop database -d DBNAME -o immediate

3) Shutdown the standby database as follows:

   $srvctl stop database -d DBNAME

4) Edit the /etc/oratab and listener.ora file to point to the new 11.2.0.4 home

5) Update the OCR configuration

   $srvctl upgrade database -d DBNAME -o /u01/sq/ora_2/db/11.2.0.4

6) Start the standby database in mount mode from new home 11.2.0.4

    $srvctl start database -d DBNAME -o mount

7) Start the primary database in open mode

    $srvctl start database -d DBNAME

8) Start the redo apply on standby database

    SQL>alter database recover managed standby database disconnect from session;

9) Verify the standby alert log file and make sure logs are shipping properly; also check for any archive gap.
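
     For example, the following queries can be run on the standby to check the archive gap and the latest applied sequence per thread:

     SQL>select * from v$archive_gap;
     SQL>select thread#, max(sequence#) from v$archived_log where applied='YES' group by thread#;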


Suggestions are most welcome :)