Friday, July 22, 2016

Removing Old Oracle Homes

The deinstall tool has some bugs, it touches a lot of files, and we do not have enough control over it.
A more controlled way is to simply detach the old Oracle home and then remove its root directory.

Step 1: Set ORACLE_HOME to the old Oracle home and list all the available Oracle homes.
[oracle@host1 product]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/db_1
[oracle@host1 product]$ $ORACLE_HOME/OPatch/opatch lsinventory -all

Oracle Interim Patch Installer version 11.2.0.1.9
Copyright (c) 2011, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/11.2.0.2/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0.2/db_1/oraInst.loc
OPatch version    : 11.2.0.1.9
OUI version       : 11.2.0.2.0
Log file location : /u01/app/oracle/product/11.2.0.2/db_1/cfgtoollogs/opatch/opatch2014-03-26_17-00-52PM.log
Lsinventory Output file location : /u01/app/oracle/product/11.2.0.2/db_1/cfgtoollogs/opatch/lsinv/lsinventory2014-03-26_17-00-52PM.txt
--------------------------------------------------------------------------------
List of Oracle Homes:
  Name          Location
   OraDb11g_home2         /u01/app/oracle/product/11.2.0.2/db_1    ---- This is the old home in the inventory; we need to remove it from the Central Inventory
   agent12g1         /u01/app/oracle/agent11g/core/12.1.0.1.0
   sbin12g1         /u01/app/oracle/agent11g/sbin
   Ora11g_gridinfrahome1         /u01/app/11.2.0.3/grid
   OraDb11g_home1         /u01/app/oracle/product/11.2.0.3/db_1
   agent11g1         /u01/app/oracle/agent11g/agent11g
   agent11g2         /u01/app/oracle/OEM11G/Middleware/agent11g
   common11g2         /u01/app/oracle/OEM11G/Middleware/oracle_common
   webtier11g2         /u01/app/oracle/OEM11G/Middleware/Oracle_WT

There are no Interim patches installed in this Oracle Home.
Rac system comprising of multiple nodes
  Local node = host1
  Remote node = host2
  Remote node = host3
--------------------------------------------------------------------------------
OPatch succeeded.

Step 2: Detach the home that we want to remove from the system.
[oracle@host1 product]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/db_1
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 65535 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'DetachHome' was successful.


Step 3: Double-check whether the home still exists in the inventory (see the example below).
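The listing below can be produced by re-running lsinventory; since the old home is being removed, a hedged example using the OPatch from the remaining 11.2.0.3 home:

[oracle@host1 product]$ /u01/app/oracle/product/11.2.0.3/db_1/OPatch/opatch lsinventory -all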

--------------------------------------------------------------------------------
List of Oracle Homes:
  Name          Location
   agent12g1         /u01/app/oracle/agent11g/core/12.1.0.1.0
   sbin12g1         /u01/app/oracle/agent11g/sbin
   Ora11g_gridinfrahome1         /u01/app/11.2.0.3/grid
   OraDb11g_home1         /u01/app/oracle/product/11.2.0.3/db_1
   agent11g1         /u01/app/oracle/agent11g/agent11g
   agent11g2         /u01/app/oracle/OEM11G/Middleware/agent11g
   common11g2         /u01/app/oracle/OEM11G/Middleware/oracle_common
   webtier11g2         /u01/app/oracle/OEM11G/Middleware/Oracle_WT

Installed Top-level Products (1):

Oracle Database 11g                                                  11.2.0.3.0
There are 1 products installed in this Oracle Home.


Step 4: You are now good to delete the Oracle home using rm -rf. Please note that many files in GRID_HOME will be owned by root, so it is better to do this as the root user.

/bin/rm -rf $OLD_GI_HOME
Before removing the directories, I rename them, bounce all the services (CRS/ASM/databases), and make sure everything is still working: a kind of fool-proofing. A sketch follows.
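A minimal sketch of that fool-proof sequence, using the detached home path from Step 1 (the _old suffix is illustrative, not from the original post):

# as root, after the old home has been detached
# 1. Rename the old home instead of deleting it, so you can back out quickly
mv /u01/app/oracle/product/11.2.0.2/db_1 /u01/app/oracle/product/11.2.0.2/db_1_old

# 2. Bounce the stack and make sure CRS/ASM/databases come up cleanly
crsctl stop crs
crsctl start crs
crsctl stat res -t

# 3. Only after everything checks out, remove the renamed directory
/bin/rm -rf /u01/app/oracle/product/11.2.0.2/db_1_old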

Database Upgrade in a RAC Environment from 11.2.0.3 to 11.2.0.4

Pre Upgrade Steps

SQL> select instance_name,host_name from gv$instance;

INSTANCE_NAME    HOST_NAME
---------------- ----------------------------------------
TESTRAC3         host1.com
TESTRAC4         host2.com

$srvctl stop database -d TESTRAC

$srvctl status database -d TESTRAC

$srvctl start database -d TESTRAC

SQL> select name,open_mode from gv$database;

SQL> select distinct status from dba_objects;

SQL>set lines 200

SQL>select owner,object_name,object_type,status from dba_objects where status='INVALID'
order by 1,3;

SQL>select owner,object_type,count(*) from dba_objects
where status='INVALID'
group by owner,object_type
order by owner;

SQL>@$ORACLE_HOME/rdbms/admin/utlrp.sql
SQL>select distinct status from dba_objects;

SQL>col comp_name format a40

SQL>col status format a30

SQL>col version format a30

SQL>set lines 132 pages 100

SQL>select comp_name,status,version from dba_registry;

$export ORACLE_HOME=/u01/app/oracle/product/11.2.4/dbhome_1

$echo $ORACLE_HOME

$cd $ORACLE_HOME/bin

$pwd

Note: Black out the database targets in OEM before starting the upgrade (see the example below).
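A hedged example of creating an agent-side blackout with emctl (the blackout name and duration are illustrative):

$ emctl start blackout db_upgrade_blackout -nodeLevel -d 4:00
# ... after the upgrade ...
$ emctl stop blackout db_upgrade_blackout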

Note: Please don't change anything in the source database; keep the database up and running until the upgrade begins.

Upgrade Steps

Invoke DBUA from the new home to start the upgrade:

$/u01/app/oracle/product/11.2.4/dbhome_1/bin/dbua

Please note that the DBUA summary page is your last chance to make sure everything looks good before you press the "Finish" button. You can reach this page before your outage window starts, because nothing changes until you press Finish; the recommendation is to walk through all the steps beforehand so that any issues can be fixed before you enter the outage window.

This is the point where the outage window starts; everything before it can be done online.






Verify /etc/oratab

[oracle@host1 bin]$ cat /etc/oratab | grep TESTRAC
TESTRAC:/u01/app/oracle/product/11.2.4/dbhome_1:N # line added by Agent

[oracle@host1 bin]$ . oraenv
ORACLE_SID = [oracle] ? TESTRAC3
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/11.2.4/dbhome_1

SQL> select distinct status from dba_objects;

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

$srvctl stop database -d TESTRAC

$srvctl status database -d TESTRAC

$srvctl start database -d TESTRAC

[oracle@host1 bin]$ srvctl status database -d TESTRAC
Instance TESTRAC3 is running on node host1
Instance TESTRAC4 is running on node host2

SQL>select name,open_mode from gv$database;

SQL>col comp_name format a40

SQL>col status format a30

SQL>col version format a30

SQL>set lines 132 pages 100

SQL>select comp_name,status,version from dba_registry;

SQL>alter session set nls_date_format='mm/dd/yyyy:hh24:mi:ss';

SQL>set lines 200

SQL>col host_name format a40

SQL>select instance_name,host_name,startup_time, status from gv$instance
order by 1;
INSTANCE_NAME    HOST_NAME    STARTUP_T    STATUS
---------------- ------------ ------------ ------------
TESTRAC3         host1.com    12-APR-16    OPEN
TESTRAC4         host2.com    12-APR-16    OPEN

Apply Jan 2016 CPU Patch
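The post shows only the post-install SQL steps; the PSU binary itself must already have been applied to the new home with OPatch before this point. A minimal hedged sketch (the staged patch directory and patch number are placeholders, not from the original post):

$ export ORACLE_HOME=/u01/app/oracle/product/11.2.4/dbhome_1
$ cd /stage/<psu_patch_id>
$ $ORACLE_HOME/OPatch/opatch apply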

$ cd $ORACLE_HOME/rdbms/admin; echo $ORACLE_HOME
$ pwd
/u01/app/oracle/product/11.2.4/dbhome_1/rdbms/admin

$sqlplus "/ as sysdba"

SQL>select name,open_mode from gv$database;

SQL>@catbundle.sql psu apply

SQL>@$ORACLE_HOME/rdbms/admin/utlrp.sql

SQL>select owner,object_name,object_type,status from dba_objects where status='INVALID' order by 1,3;

--Verify PSU Patch Install

SQL>select to_char(ACTION_TIME,'DD-MON-YYYY'),ACTION,VERSION,ID,BUNDLE_SERIES,COMMENTS from
DBA_REGISTRY_HISTORY where comments like '%PSU 11.2.0.4%'
/
exit;

---------------------------------------------Upgrade Completed------------------------------------------------


Manual Upgrade of an 11.2.0.1 Database to 11.2.0.2

Steps in 11.2.0.1 Environment:

1. Install 11.2.0.2 into a separate ORACLE_HOME

2. Running the Pre-Upgrade Information Tool
After installing the 11.2.0.2 software, start the 11.2.0.1 instance from the 11.2.0.1 ORACLE_HOME and run the 11.2.0.2 $ORACLE_HOME/rdbms/admin/utlu112i.sql script against the running 11.2.0.1 instance, spooling the output (see the sketch below).
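A minimal sketch of step 2, spooling the pre-upgrade tool output from the running 11.2.0.1 instance (the 11.2.0.2 home path is illustrative):

$ sqlplus "/ as sysdba"
SQL> spool /tmp/utlu112i_results.log
SQL> @/u01/app/oracle/product/11.2.0.2/db_1/rdbms/admin/utlu112i.sql
SQL> spool off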

3. Review the 11.2.0.2 pre-upgrade script spool file from Step 2 and fix any issues before upgrading to 11.2.0.2.

4. Run the dbupgdiag.sql script to verify that all the components in dba_registry are valid and that there are no invalid data dictionary objects in dba_objects.

5. If the dbupgdiag.sql script reports any invalid objects, run $ORACLE_HOME/rdbms/admin/utlrp.sql (multiple times) to recompile the invalid objects in the database, until there is no change in the number of invalid objects.

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> @utlrp.sql

6. Shut down the database cleanly

Steps in the 11.2.0.2 Environment:

7. Set the ORACLE_HOME environment variable to point to the 11.2.0.2 installation

8. Create pfile from spfile

9. Change cluster_database=false

10. Copy init.ora/spfile and password file (orapw<sid>.ora) from 11.2.0.1 $ORACLE_HOME/dbs to 11.2.0.2 $ORACLE_HOME/dbs
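A minimal sketch of steps 8-10, assuming the old and new homes sit at the paths below (the paths are illustrative; <sid> is your instance name, as in the original text):

SQL> create pfile from spfile;
-- edit the generated pfile and set cluster_database=false

$ cp /u01/app/oracle/product/11.2.0.1/db_1/dbs/init<sid>.ora /u01/app/oracle/product/11.2.0.2/db_1/dbs/
$ cp /u01/app/oracle/product/11.2.0.1/db_1/dbs/orapw<sid> /u01/app/oracle/product/11.2.0.2/db_1/dbs/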

11. Upgrade the database manually
a) Start sqlplus and run the catupgrd.sql script:

sqlplus " / as sysdba "
SQL> spool /tmp/upgrade.log
SQL> startup upgrade
SQL> set echo on
SQL> @catupgrd.sql;
SQL> spool off
SQL> Shutdown immediate


b) Check catupgrd.sql spool file for errors

12. Restart the database in normal mode

13. SQL> @$ORACLE_HOME/rdbms/admin/catuppst.sql;

14. SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql;

15. Run the dbupgdiag.sql script again to verify that all the components in dba_registry are valid and that there are no invalid data dictionary objects in dba_objects.

POST UPGRADE STEPS

16. Upgrade the time zone to version 14 using DBMS_DST (detailed steps below)

17. Upgrade the RMAN Recovery Catalog (a sketch follows).
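The post left this step as a placeholder; a minimal sketch of the standard catalog upgrade, assuming a catalog schema rcat_owner in a catalog database catdb (the connection details are illustrative):

$ rman target / catalog rcat_owner@catdb
RMAN> UPGRADE CATALOG;
RMAN> UPGRADE CATALOG;    -- RMAN asks for the command a second time to confirm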

18. Re-register the database with srvctl.

From old 11.2.0.1 Oracle_Home:
% $ORACLE_HOME/bin/srvctl config database -d db_name -a
% $ORACLE_HOME/bin/srvctl config service -d db_name
% $ORACLE_HOME/bin/srvctl remove database -d db_name

From 11.2.0.2 Oracle_Home:

% $ORACLE_HOME/bin/srvctl add database -d db_name -o <location of 11g home>
% $ORACLE_HOME/bin/srvctl add instance -d db_name -i instance -n node
% $ORACLE_HOME/bin/srvctl add service -d db_name -s service_name

Steps to do for Timezone Upgrade
---Check the current version of the timezone file---
SQL> select version from V$TIMEZONE_FILE;

   VERSION
----------
11

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

PROPERTY_NAME       VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION       11
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE       NONE

---Confirm that $ORACLE_HOME/oracore/zoneinfo contains timezlrg_14.dat, i.e. the version 14 file

--Prepare for upgrade
SQL> exec DBMS_DST.BEGIN_PREPARE(14);

PL/SQL procedure successfully completed.

--Confirm status

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

PROPERTY_NAME       VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION       11
DST_SECONDARY_TT_VERSION       14
DST_UPGRADE_STATE       PREPARE

---Re-running the prepare statement will give the error below.
SQL> exec DBMS_DST.BEGIN_PREPARE(14);
BEGIN DBMS_DST.BEGIN_PREPARE(14); END;

*
ERROR at line 1:
ORA-56920: a prepare or upgrade window or an on-demand or datapump-job loading of a secondary time zone data file is in
an active state
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DST", line 1340
ORA-06512: at line 1

-- truncate logging tables if they exist.

TRUNCATE TABLE SYS.DST$TRIGGER_TABLE;
TRUNCATE TABLE sys.dst$affected_tables;
TRUNCATE TABLE sys.dst$error_table;

-- log affected data

BEGIN
DBMS_DST.FIND_AFFECTED_TABLES
(affected_tables => 'sys.dst$affected_tables',
log_errors => TRUE,
log_errors_table => 'sys.dst$error_table');
END;
/
---Query table to check if any affected table
SQL> SELECT * FROM sys.dst$affected_tables;

no rows selected

SQL>
SQL> SELECT * FROM sys.dst$error_table;

no rows selected

-- End the prepare phase
SQL> EXEC DBMS_DST.END_PREPARE;
A prepare window has been successfully ended.

--Startup the database in upgrade mode. Ensure you have set cluster_database=false

SQL> EXEC DBMS_DST.BEGIN_UPGRADE(14);
An upgrade window has been successfully started.

PL/SQL procedure successfully completed.

-- To confirm whether it has been upgraded to version 14
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

PROPERTY_NAME       VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION       14
DST_SECONDARY_TT_VERSION       11
DST_UPGRADE_STATE       UPGRADE

--Restart the database after removing the cluster_database=false setting; then finish the upgrade as sketched below.
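To actually finish the DBMS_DST upgrade after the restart, the affected user data must still be converted and the upgrade window ended. A minimal sketch of the standard closing steps, based on the documented DBMS_DST API (run after restarting in normal mode):

SQL> VAR numfail NUMBER
SQL> BEGIN
       DBMS_DST.UPGRADE_DATABASE(:numfail,
         parallel                  => TRUE,
         log_errors                => TRUE,
         log_errors_table          => 'SYS.DST$ERROR_TABLE',
         log_triggers_table        => 'SYS.DST$TRIGGER_TABLE',
         error_on_overlap_time     => FALSE,
         error_on_nonexisting_time => FALSE);
       DBMS_OUTPUT.PUT_LINE('Failures: ' || :numfail);
     END;
     /

SQL> VAR fail NUMBER
SQL> BEGIN
       DBMS_DST.END_UPGRADE(:fail);
       DBMS_OUTPUT.PUT_LINE('Failures: ' || :fail);
     END;
     /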

Very important: make sure you can connect using the services.

Database Upgrade in a Data Guard Environment

In Primary Database

SQL> Alter system switch logfile;

SQL> show parameter log_archive_dest_state_2


NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_state_2             string      ENABLE


SQL> show parameter log_archive_dest_2


NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      SERVICE=REMOTE_P1ASFWMP LGWR A
                                                 SYNC VALID_FOR=(ONLINE_LOGFILE
                                                 S,PRIMARY_ROLE) DB_UNIQUE_NAME
                                                 =P1ASFWMP



SQL> alter system set log_archive_dest_state_2=defer scope=both;



Check the standby database alert log and make sure you see "in transit", as below:

Archived Log entry 11243 added for thread 1 sequence 42540 rlc 758314963 ID 0xa9df544e dest 2:
RFS[4]: Opened log for thread 1 sequence 42541 dbid -1444945839 branch 758314963

Tue Mar 18 11:14:09 2014

Media Recovery Log +FRA/p1asfwmp/archivelog/2014_03_18/thread_1_seq_42540.3729.842511625
Media Recovery Waiting for thread 1 sequence 42541 (in transit)

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;


SQL> shutdown immediate


On the primary database:

Set up an X window session (e.g., using Xming) for the DBUA GUI.


Set ORACLE_HOME to the new home:

oracle@host1.com-/home/oracle/>. oraenv
ORACLE_SID = [P1ASFWMP] ? P1ASFWMP
The Oracle base remains unchanged with value /u01/app/oracle
oracle@host1.searshc.com-/home/oracle/>echo $ORACLE_HOME
/u01/app/oracle/product/11.2.4/db_1
oracle@host1.com-/home/oracle/>dbua

Now we have to set up the standby environment:

Copy the tnsnames.ora file to new home



Change /etc/oratab



Start the database from the new Oracle home (a sketch of these three steps follows).
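A minimal sketch of these three steps on the standby, assuming the new home is /u01/app/oracle/product/11.2.4/db_1 (the paths and the SID are illustrative):

$ cp /u01/app/oracle/product/11.2.0.3/db_1/network/admin/tnsnames.ora /u01/app/oracle/product/11.2.4/db_1/network/admin/
$ vi /etc/oratab    # point the SID at the new home, e.g. P1ASFWMP:/u01/app/oracle/product/11.2.4/db_1:N
$ . oraenv          # select the SID so the session picks up the new home
$ sqlplus / as sysdba
SQL> startup mount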




Start the MRP process on the standby:

SQL> alter database recover managed standby database disconnect from session;





On the primary database, now re-enable the log archive destination:

SQL> alter system set log_archive_dest_state_2=enable scope=both;







On the standby alert log you should again see logs arriving "in transit".




Run the query below to make sure the standby has been upgraded too.
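A hedged example of that verification, reusing the dba_registry check from earlier in this post:

SQL> col comp_name format a40
SQL> select comp_name,status,version from dba_registry;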



--------------------------------Completed--------------------------------------------

Tuesday, July 19, 2016

Renaming the database to a new name and reverting it back to its original name

Original database name: BEFORE
Rename database to   : AFTER

Step 1.

Change the cluster_database parameter to FALSE

[oracle@host1 admin]$ export ORACLE_SID=BEFORE1

[oracle@host1 admin]$ sqlplus "/ as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Sep 1 09:38:14 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

USERNAME       INST_NAME    HOST_NAME                           SID   SERIAL#  VERSION    STARTED  SPID       OPID  CPID
-------------- ------------ ----------------------------------- ----- -------- ---------- -------- ---------- ----- ----------
SYS            BEFORE1       host1.intra.searshc.com    236   29       11.2.0.4.0 20140901 7868       35    7867


SQL> alter system set
  2  cluster_database=false scope=spfile sid='*';
System altered.

--------------------------------

STEP 2.

Shut down the database with the srvctl command

$srvctl stop database -d BEFORE

--------------------------------

STEP 3.

Start instance 1 in mount mode

[oracle@host1 admin]$ export ORACLE_SID=BEFORE1
[oracle@host1 admin]$ sqlplus "/ as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Sep 1 09:40:05 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

select
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0

SQL> startup mount
ORACLE instance started.

Total System Global Area 1603411968 bytes
Fixed Size                  2253664 bytes
Variable Size             469765280 bytes
Database Buffers         1124073472 bytes
Redo Buffers                7319552 bytes
Database mounted.
SQL>

---------------------------------

STEP 4.

Run the nid command

[oracle@host1 admin]$ nid target=sys/password@BEFORE DBNAME=AFTER

DBNEWID: Release 11.2.0.4.0 - Production on Mon Sep 1 09:45:33 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to database BEFORE (DBID=2458372709)

Connected to server version 11.2.0

Control Files in database:
    +DATA_DATA1/after/control01.ctl
    +DATA_DATA1/after/control02.ctl

Change database ID and database name BEFORE to AFTER? (Y/[N]) => y

Proceeding with operation
Changing database ID from 2458372709 to 1208276477
Changing database name from BEFORE to AFTER
    Control File +DATA_DATA1/after/control01.ctl - modified
    Control File +DATA_DATA1/after/control02.ctl - modified
    Datafile +DATA_DATA1/after/system01.db - dbid changed, wrote new name
    Datafile +DATA_DATA1/after/sysaux01.db - dbid changed, wrote new name
    Datafile +DATA_DATA1/after/undotbs01.db - dbid changed, wrote new name
    Datafile +DATA_DATA1/after/users01.db - dbid changed, wrote new name
    Datafile +DATA_DATA1/after/undotbs02.db - dbid changed, wrote new name
    Datafile +DATA_DATA1/after/temp01.db - dbid changed, wrote new name
    Control File +DATA_DATA1/after/control01.ctl - dbid changed, wrote new name
    Control File +DATA_DATA1/after/control02.ctl - dbid changed, wrote new name
    Instance shut down

Database name changed to AFTER.
Modify parameter file and generate a new password file before restarting.
Database ID for database AFTER changed to 1208276477.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.

[oracle@host1 admin]$

---------------------------------

STEP 5.

Create pfile from spfile

[oracle@host1 admin]$

[oracle@host1 admin]$ export ORACLE_SID=BEFORE1
[oracle@host1 admin]$ sqlplus "/ as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Sep 1 09:49:14 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

select
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0

SQL> create pfile from SPFILE='+DATA_DATA1/BEFORE/spfilebefore.ora';

File created.

--------------------------------

STEP 6.

Modify the newly created pfile

Update the values below in the parameter file:

AFTER2.__db_cache_size=1124073472
AFTER1.__db_cache_size=1124073472
AFTER2.__java_pool_size=16777216
AFTER1.__java_pool_size=16777216
AFTER2.__large_pool_size=33554432
AFTER1.__large_pool_size=100663296
AFTER1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
AFTER2.__pga_aggregate_target=536870912
AFTER1.__pga_aggregate_target=536870912
AFTER2.__sga_target=1610612736
AFTER1.__sga_target=1610612736
AFTER2.__shared_io_pool_size=0
AFTER1.__shared_io_pool_size=0
AFTER2.__shared_pool_size=419430400
AFTER1.__shared_pool_size=352321536
AFTER2.__streams_pool_size=0
AFTER1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/AFTER/adump'

*.db_name='AFTER'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=AFTERXDB)'
AFTER1.instance_number=1
AFTER2.instance_number=2

AFTER1.thread=1
AFTER2.thread=2
AFTER1.undo_tablespace='UNDOTBS2'
AFTER2.undo_tablespace='UNDOTBS1'

-----------------------------------

Step 7.

Create the dump directory and password file.

$mkdir -p /u01/app/oracle/admin/AFTER/adump

$orapwd file=orapwAFTER password=n4u2ck

-----------------------------------

STEP 8.

Create spfile from pfile.

SQL> create SPFILE='+DATA_DATA1/BEFORE/spfileAFTER.ora' from pfile;

-----------------------------------

Step 9.

Copy the parameter file to the new name, delete all its contents, and point it at the spfile created above:

[oracle@host1 dbs]$ cp initBEFORE1.ora initAFTER1.ora

initAFTER1.ora should now contain only:

SPFILE='+DATA_DATA1/BEFORE/spfileAFTER.ora'

-----------------------------------

Step 10.

Start the AFTER database in mount mode

[oracle@host1 dbs]$export ORACLE_SID=AFTER1

[oracle@host1 dbs]$ sqlplus "/ as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Mon Sep 1 08:21:21 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


USERNAME       INST_NAME    HOST_NAME                           SID   SERIAL#  VERSION    STARTED  SPID       OPID  CPID
-------------- ------------ ----------------------------------- ----- -------- ---------- -------- ---------- ----- ----------
SYS            AFTER1       host1.intra.searshc.com    106   5        11.2.0.4.0 20140901 10444      29    10443


SQL> shut immediate
ORA-01507: database not mounted


ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1603411968 bytes
Fixed Size                  2253664 bytes
Variable Size             469765280 bytes
Database Buffers         1124073472 bytes
Redo Buffers                7319552 bytes
Database mounted.

-----------------------------------------

Step 11.

Open the database in resetlogs mode

SQL> alter database open resetlogs;

Database altered.

----------------------------------------

Step 12.

Change the cluster_database parameter to TRUE

SQL> alter system set
  2  cluster_database=true scope=spfile sid='*';

System altered.

----------------------------------------

Step 13.

Shut down the database

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down

----------------------------------------

Step 14.

Add the services to the SRVCTL

[oracle@host1 dbs]$ srvctl add database -d AFTER -o /u01/app/oracle/product/11.2.4/dbhome_1 -p +DATA_DATA1/BEFORE/spfileafter.ora -y AUTOMATIC -s OPEN -a "RAC5_DATA"
[oracle@host1 dbs]$ srvctl config database -d AFTER
Database unique name: AFTER
Database name:
Oracle home: /u01/app/oracle/product/11.2.4/dbhome_1
Oracle user: oracle
Spfile: +DATA_DATA1/BEFORE/spfileafter.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: AFTER
Database instances:
Disk Groups: RAC5_DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@host1 dbs]$ srvctl add instance -d AFTER -i AFTER2 -n hostb
[oracle@host1 dbs]$ srvctl add instance -d AFTER -i AFTER1 -n host1
[oracle@host1 dbs]$ srvctl config database -d AFTER
Database unique name: AFTER
Database name:
Oracle home: /u01/app/oracle/product/11.2.4/dbhome_1
Oracle user: oracle
Spfile: +DATA_DATA1/BEFORE/spfileafter.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: AFTER
Database instances: AFTER1,AFTER2
Disk Groups: RAC5_DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
[oracle@host1 dbs]$

--------------------------------------------


Step 15.

SQL> alter system set cluster_database=TRUE scope=spfile sid='*';

----------------------------------------------

Step 16.

Start the database with srvctl command

[oracle@host1 dbs]$ srvctl start database -d AFTER

[oracle@host1 dbs]$ srvctl status database -d AFTER
Instance AFTER1 is running on node host1
Instance AFTER2 is running on node hostb

--------------------------------------------


To rename it back to the original name, we need to remove the previous BEFORE database from the configuration using the srvctl command and then follow all the above steps again:


Stop the database and run the below command:

[oracle@host1 dbs]$ srvctl remove database -d BEFORE
Remove the database BEFORE? (y/[n]) y


-------------------------------------------


Now we need to rename it to the original name:

Step 1.

Please verify the local listener and set it:



SQL> show parameter listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------------------------------------------------------------------------------------
listener_networks                    string
local_listener                       string
remote_listener                      string      host.com:1522


SQL> alter database open;

Database altered.

SQL> select instance_name,status from gv$instance;

INSTANCE_NAME    STATUS
---------------- ------------
AFTER1             OPEN

SQL> !tnsping NEW

TNS Ping Utility for Linux: Version 11.2.0.4.0 - Production on 02-SEP-2014 13:18:43

Copyright (c) 1997, 2013, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/product/11.2.4/dbhome_1/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = host.com)(PORT = 1522)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = AFTER)))
OK (0 msec)

SQL> !nslookup host1-vip
Server:         151.149.224.164
Address:        151.149.224.164#53

Name:   host1-vip.intra.searshc.com
Address: 151.149.141.85


SQL> alter system set local_listener='(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =151.149.141.85)(PORT = 1522)))' scope=both;

System altered.

SQL> alter system register;

System altered.


--------------------------------------


NOW FOLLOW THE REMAINING STEPS FROM 1 - 16

Synchronize | Refresh Standby Database From The Primary Database Using RMAN Incremental Backup

We need to synchronize a standby DB from an incremental RMAN backup in these two cases:

1- There is a big gap between the primary database and the standby database, and copying and applying the archives from production to the DR site would take a long time.

2- You lost one or more archive log files on the primary site that are needed for the recovery of the standby database, or NOLOGGING transactions have been run on the primary database.

In general, the advantage of recovering the standby database from an incremental backup instead of from archivelogs is speed: the incremental backup applies only the committed transactions to the standby database, while the archives apply both committed and uncommitted transactions and then roll back the uncommitted ones, which makes the process much longer than recovering from an RMAN incremental backup.

Note: The following demonstration was done on an 11g database [Linux OS platform]...

Step1: On The DR Site:
------  ------------------
Check the current SCN on the standby DB:

DEV > select to_char(current_scn) from v$database;

TO_CHAR(CURRENT_SCN)
------------------------------
1552405397
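A common companion check (not shown in the original post) is the minimum checkpoint SCN across the standby datafile headers, since the incremental backup must start from the lowest SCN still needed:

DEV > select to_char(min(checkpoint_change#)) from v$datafile_header;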


Step 2: On The Primary Site:
-------   ----------------------
Create a standby control file:

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/control/standby_control.ctl';

Move that new standby controlfile to the DR site:
#scp /home/oracle/control/standby_control.ctl oracle@dr:/ora_dr1/backup_inc

Create an incremental backup:

Take an incremental backup starting from the standby SCN (the one we checked in Step 1):
RMAN> Backup incremental from SCN 1552405397 database tag='FOR_STANDBY' format '/backupdisk/%d_%t_%s_%p';

Move the backup pieces to the DR site:
#scp /backupdisk/* oracle@dr:/ora_dr1/backup_inc


Step 3: On The DR Site:
-------  ------------------
Shutdown the standby database:

SQL> Shu immediate;

Rename the original control file of the standby database:
# cp /ora_dr1/control/control01.ctl /ora_dr1/control/control01.ctl.sav

Restore the standby control file we just copied from the primary site:
Make sure to copy it to the right path of the standby DB control file and duplicate it if you have more than one control file path in the standby DB spfile:

# cp /ora_dr1/backup_inc/standby_control.ctl   /ora_dr1/control/control01.ctl

Startup the Standby database with the new controlfile:

SQL> startup mount;

Catalog the incremental backup pieces (which we moved from the primary site) in the standby database's RMAN repository, so the standby DB will consider those backup pieces during the recovery process:

#rman target /

RMAN> catalog start with '/ora_dr1/backup_inc/' noprompt;

Recover the standby database from the incremental backup (from RMAN):


RMAN> RECOVER DATABASE noredo;
Or:
RMAN> RECOVER DATABASE FROM TAG for_standby;

Now your standby database is refreshed from the incremental backup. You can start the managed recovery process on the standby DB:

SQL> recover managed standby database disconnect;

Done.

Wednesday, July 13, 2016

Scenario - Recovering from a datafile wrongly added to the SYSTEM tablespace in the default dbs location in an ASM environment

Recently, while adding a datafile to the SYSTEM tablespace, one of our DBAs made a mistake by inserting a white space between the quote and the plus sign in an ASM-aware environment. Since it was production, we had to restore this wrongly added datafile to the ASM location as soon as possible.

We reproduced this issue in one of our test environments.


Steps to reproduce the issue:

[oracle@dbstnd ~]$ ps -ef | grep pmon
grid     18985     1  0 00:08 ?        00:00:00 asm_pmon_+ASM
oracle   28511     1  0 01:41 ?        00:00:00 ora_pmon_TEST
oracle   28983 28923  0 01:53 pts/5    00:00:00 grep --color=auto pmon

[oracle@dbstnd ~]$ . oraenv
ORACLE_SID = [TEST] ?
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@dbstnd ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Jul 13 01:53:51 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options


SQL> select instance_name,status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
TEST             OPEN

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     1
Next log sequence to archive   2
Current log sequence           2

SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------
+DATA/test/system01.dbf
+DATA/test/sysaux01.dbf
+DATA/test/undotbs01.dbf
+DATA/test/users01.dbf

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
[oracle@dbstnd ~]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Wed Jul 13 01:56:19 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2215202024)

RMAN>  backup database plus archivelog;


Starting backup at 13-JUL-16
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=46 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=2 RECID=1 STAMP=917056597
channel ORA_DISK_1: starting piece 1 at 13-JUL-16
channel ORA_DISK_1: finished piece 1 at 13-JUL-16
piece handle=+DATA/test/backupset/2016_07_13/annnf0_tag20160713t015637_0.268.917056599 tag=TAG20160713T015637 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-JUL-16

Starting backup at 13-JUL-16
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/test/system01.dbf
input datafile file number=00002 name=+DATA/test/sysaux01.dbf
input datafile file number=00003 name=+DATA/test/undotbs01.dbf
input datafile file number=00004 name=+DATA/test/users01.dbf
channel ORA_DISK_1: starting piece 1 at 13-JUL-16
channel ORA_DISK_1: finished piece 1 at 13-JUL-16
piece handle=+DATA/test/backupset/2016_07_13/nnndf0_tag20160713t015639_0.269.917056599 tag=TAG20160713T015639 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 13-JUL-16
channel ORA_DISK_1: finished piece 1 at 13-JUL-16
piece handle=+DATA/test/backupset/2016_07_13/ncsnf0_tag20160713t015639_0.270.917056625 tag=TAG20160713T015639 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-JUL-16

Starting backup at 13-JUL-16
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=2 STAMP=917056626
channel ORA_DISK_1: starting piece 1 at 13-JUL-16
channel ORA_DISK_1: finished piece 1 at 13-JUL-16
piece handle=+DATA/test/backupset/2016_07_13/annnf0_tag20160713t015706_0.272.917056627 tag=TAG20160713T015706 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-JUL-16

RMAN> restore database preview;

Starting restore at 13-JUL-16
using channel ORA_DISK_1


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2       Full    1014.30M   DISK        00:00:16     13-JUL-16
        BP Key: 2   Status: AVAILABLE  Compressed: NO  Tag: TAG20160713T015639
        Piece Name: +DATA/test/backupset/2016_07_13/nnndf0_tag20160713t015639_0.269.917056599
  List of Datafiles in backup set 2
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 931310     13-JUL-16 +DATA/test/system01.dbf
  2       Full 931310     13-JUL-16 +DATA/test/sysaux01.dbf
  3       Full 931310     13-JUL-16 +DATA/test/undotbs01.dbf
  4       Full 931310     13-JUL-16 +DATA/test/users01.dbf

List of Archived Log Copies for database with db_unique_name TEST
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
2       1    3       A 13-JUL-16
        Name: +DATA/test/archivelog/2016_07_13/thread_1_seq_3.271.917056627

Media recovery start SCN is 931310
Recovery must be done beyond SCN 931310 to clear datafile fuzziness
Finished restore at 13-JUL-16

[oracle@dbstnd ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Jul 13 02:00:02 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS


Here we add a datafile with the stray white space:

SQL> ALTER TABLESPACE "SYSTEM" ADD DATAFILE ' +DATA' SIZE 100M;

Tablespace altered.


Now we can see it has been added to the default location, $ORACLE_HOME/dbs. Anything that used this datafile would throw an error, and our backups were also regularly failing because of it.

SQL> select name from v$datafile;

NAME
----------------------------------------------------------------------
+DATA/test/system01.dbf
+DATA/test/sysaux01.dbf
+DATA/test/undotbs01.dbf
+DATA/test/users01.dbf
/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ +DATA

SQL> select file_name,file_id from dba_data_files where tablespace_name='SYSTEM';

FILE_NAME                                              FILE_ID
------------------------------------------------------ -------
+DATA/test/system01.dbf                                      1
/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ +DATA         5


SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options


[oracle@dbstnd ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Jul 13 02:09:44 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area  730714112 bytes
Fixed Size                  2256832 bytes
Variable Size             482345024 bytes
Database Buffers          243269632 bytes
Redo Buffers                2842624 bytes
Database mounted.
SQL>

++ Open a new session:
---------------------
[oracle@dbstnd ~]$ . oraenv
ORACLE_SID = [oracle] ? TEST
The Oracle base has been set to /u01/app/oracle
[oracle@dbstnd ~]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Wed Jul 13 02:11:09 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TEST (DBID=2215202024, not open)


RMAN> copy datafile 5 to '+DATA';

Starting backup at 13-JUL-16
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=26 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ +DATA
output file name=+DATA/test/datafile/system.273.917057511 tag=TAG20160713T021150 RECID=1 STAMP=917057510
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 13-JUL-16


RMAN> switch datafile 5 to copy;

datafile 5 switched to datafile copy "+DATA/test/datafile/system.273.917057511"

Then open the database which is mounted:
---------------------------------------
SQL> alter database open;

Database altered.


SQL> select file_name,file_id from dba_data_files where tablespace_name='SYSTEM';

FILE_NAME                                    FILE_ID
-------------------------------------------- -------
+DATA/test/system01.dbf                            1
+DATA/test/datafile/system.273.917057511           5

Friday, July 8, 2016

DataPump DISABLE_ARCHIVE_LOGGING example in 12c

Using New Oracle Database 12c Feature for DataPump DISABLE_ARCHIVE_LOGGING
This new DataPump feature disables redo logging when loading data into tables and when creating indexes.

For my testing of the new feature I will be importing the SOE schema from the SwingBench tool. The SOE schema is about 2 GB in size.

Test 1

I will first import the normal way.

$impdp parfile=impdp.par

impdp.par:
directory=DATA_PUMP_DIR2
dumpfile=exp_schema_soe%U.dmp
logfile=imp_schema_soe.log
parallel=2

Before the import there were zero archive logs
In the first test we should see redolog and archive logs being generated during the import of the SOE schema.
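Since the screenshots are omitted here, a hedged example of how archive log generation can be observed from SQL (the one-hour window is illustrative):

SQL> select count(*) from v$archived_log where first_time > sysdate - 1/24;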

Import


During the import we can see redo logs and archive logs being generated.


After the import 25 archive logs were generated


The import had an elapsed time of 14 minutes and 41 seconds.

Test 2

I will import using the new feature DISABLE_ARCHIVE_LOGGING

SQL> drop user SOE cascade;

$impdp parfile=impdp_noarchive.par

impdp_noarchive.par:
directory=DATA_PUMP_DIR2
dumpfile=exp_schema_soe%U.dmp
logfile=imp_schema_soe.log
parallel=2
transform=disable_archive_logging:Y

Before the import there were zero archive logs


Import


During the import we see that no archive logs are being generated, since there is no redo generation.

After the import was complete we can see that no new archive logs were generated.

The import had an elapsed time of 10 minutes and 6 seconds.

Note from Oracle Documentation:
With redo logging disabled, the disk space required for redo logs during an Oracle Data Pump import will be smaller. However, to ensure recovery from media failure, the DBA should do an RMAN backup after the import completes.

Even with this parameter specified, there is still redo logging for other operations of Oracle Data Pump. This includes all CREATE and ALTER statements, except CREATE INDEX, and all operations against the master table used by Oracle Data Pump during the import.

This feature reduces the redo log maintenance required of DBAs.

Oracle Database 12c New Features – Part 3

During this Oracle Database 12c new features article series; I shall be extensively exploring some of the miscellaneous, yet very useful, new additions and enhancements introduced in the areas of Clusterware, ASM and RAC database.

Also Read Oracle Database 12c New Features – Part 1 & Part 2

Part 3 covers:

1. Additions/Enhancements in ASM

2. Additions/Enhancements in Grid Infrastructure

3. Additions/Enhancements in Real Application Cluster (database)

1. Additions/Enhancements in Automatic Storage Management (ASM)

In a typical Grid Infrastructure installation, each node has its own ASM instance running, which acts as the storage container for the databases running on that node. There is a single-point-of-failure threat with this setup: if the ASM instance on a node fails, all the databases and instances running on that node are impacted. To avoid the ASM instance being a single point of failure, Oracle 12c provides the Flex ASM feature. Flex ASM is a different concept and architecture altogether: only a smaller number of ASM instances needs to run on a group of servers in the cluster. When an ASM instance fails on a node, Oracle Clusterware automatically starts a surviving (replacement) ASM instance on a different node to maintain availability. In addition, this setup also provides ASM instance load balancing capabilities for the instances running on the nodes. Another advantage of Flex ASM is that it can be configured on a separate node.

When you choose the Flex Cluster option as part of the cluster installation, the Flex ASM configuration is automatically selected, as it is required by a Flex Cluster. You can also have a traditional cluster over Flex ASM. When you decide to use Flex ASM, you must ensure the required networks are available. You can choose the Flex ASM storage option as part of the cluster installation, or use ASMCA to enable Flex ASM in a standard cluster environment.

The following command shows the current ASM mode:


$ ./asmcmd showclustermode

$ ./srvctl config asm

Or connect to the ASM instance and query the INSTANCE_TYPE parameter. If the output value is ASMPROX, then Flex ASM is configured.

Increased ASM storage limits

The ASM storage hard limits on the maximum number of ASM disk groups and on disk size have been drastically increased. In 12cR1, ASM supports 511 ASM disk groups, against 63 ASM disk groups in 11gR2. Also, an ASM disk can now be 32 PB in size, against 20 PB in 11gR2.

Tuning ASM rebalance operations

The new EXPLAIN WORK FOR statement in 12c measures the amount of work required for a given ASM rebalance operation and records the result in the V$ASM_ESTIMATE dynamic view. Using the dynamic view, you can adjust the POWER LIMIT clause to improve the rebalancing operation. For example, if you want to measure the amount of work required for adding a new ASM disk, before actually running the manual rebalance operation, you can use the following:

SQL> EXPLAIN WORK FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;

SQL> SELECT est_work FROM V$ASM_ESTIMATE;

SQL> EXPLAIN WORK SET STATEMENT_ID='ADD_DISK' FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;

SQL> SELECT est_work FROM V$ASM_ESTIMATE WHERE STATEMENT_ID = 'ADD_DISK';

You can adjust the POWER limit based on the output you get from the dynamic view to improve the rebalancing operations (see the example below).
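A hedged example of acting on the estimate by raising the rebalance power (the disk group name and power value are illustrative):

SQL> ALTER DISKGROUP DG_DATA REBALANCE POWER 8;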

ASM Disk Scrubbing

The new ASM disk scrubbing operation on an ASM disk group with normal or high redundancy checks for logical data corruption on all ASM disks of that disk group and, if corruption is detected, repairs it automatically using the ASM mirror disks. Disk scrubbing can be performed on a disk group, a specified disk, or a file, and the impact is very minimal. The following examples demonstrate the disk scrubbing scenario (choose one POWER level):

SQL> ALTER DISKGROUP dg_data SCRUB POWER LOW|HIGH|AUTO|MAX;

SQL> ALTER DISKGROUP dg_data SCRUB FILE '+DG_DATA/MYDB/DATAFILE/filename.xxxx.xxxx'
REPAIR POWER AUTO;

Active Session History (ASH) for ASM

The V$ACTIVE_SESSION_HISTORY dynamic view now provides active session sampling on the ASM instance too. However, the use of the Diagnostics Pack is subject to licensing.

2. Additions/Enhancements in Grid Infrastructure

Oracle 12c supports two types of cluster configuration at the time of Clusterware installation: traditional standard cluster and Flex Cluster. In a traditional standard cluster, all nodes in the cluster are tightly integrated with each other, interact through a private network, and can access the storage directly. The Flex Cluster, on the other hand, introduces two types of nodes arranged in a Hub and Leaf architecture. The nodes in the Hub category are similar to a traditional standard cluster: they are interconnected through a private network and have direct storage read/write access. The Leaf nodes are different: they do not need direct access to the underlying storage; rather, they access the storage/data through the Hub nodes.

You can configure up to 64 Hub nodes, and Leaf nodes can be many more. In an Oracle Flex Cluster, you can have Hub nodes without having Leaf nodes configured, but no Leaf nodes exist without Hub nodes. You can configure multiple Leaf nodes to a single Hub node. In an Oracle Flex Cluster, only Hub nodes have direct access to the OCR/voting disks. When you plan large-scale cluster environments, this is a great feature to use: this sort of setup greatly reduces interconnect traffic and provides room to scale up the cluster beyond a traditional standard cluster.

There are two ways to deploy the Flex Cluster:

While configuring a brand new cluster
Upgrade a standard cluster mode to Flex Cluster

If you are configuring a brand new cluster, you need to choose the type of cluster configuration during step 3: select the Configure a Flex Cluster option, and you will then have to categorize the Hub and Leaf nodes in step 6. Against each node, select the role (Hub or Leaf) and, optionally, a Virtual Hostname.

The following steps are required to convert a standard cluster mode to Flex Cluster mode:

1. Get the current status of the cluster using the following command:

$ ./crsctl get cluster mode status

2. Run the following command as the root user:

$ ./crsctl set cluster mode flex
$ ./crsctl stop crs
$ ./crsctl start crs -wait

3. Change the node role as per your design:

$ ./crsctl get node role config
$ ./crsctl set node role hub|leaf
$ ./crsctl stop crs
$ ./crsctl start crs -wait

Note the following:

You can’t revert back from Flex to Standard cluster mode
Cluster node mode change requires cluster stack stop/start
Ensure GNS is configured with a fixed VIP


OCR backup in ASM disk group

With 12c, the OCR can now be backed up in an ASM disk group. This simplifies access to the OCR backup files across all nodes. In case of an OCR restore, you don't need to worry about which node the latest OCR backup is on: one can simply identify the latest backup stored in ASM from any node and perform the restore easily.

The following example demonstrates how to set the ASM disk group as OCR backup location:

$ ./ocrconfig -backuploc +DG_OCR
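A hedged companion example: during a restore, the available OCR backups (including those in the ASM disk group) can be listed from any node with:

$ ./ocrconfig -showbackup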

IPv6 support

With Oracle 12c, Oracle now supports IPv4 and IPv6 network protocol configuration on the same network. You can now configure the public network (Public/VIP) on IPv4, IPv6, or a combination of the two. However, ensure you use the same set of IP protocol configuration across all nodes in a cluster.

3. Additions/Enhancements in RAC (database)

What-If command evaluation

Using the new what-if command evaluation (-eval) option with srvctl, one can now determine the impact of running a command. This new addition to the srvctl command lets you simulate the command without actually executing it or making any changes to the current system. This is particularly useful when you want to make a change to an existing system and you are not sure of the outcome; the command shows the effect the change would have. The -eval option can also be used with the crsctl command.

For example, if you want to know what will happen if you stop a particular database, you can use the following example:

$ ./srvctl stop database -d MYDB -eval
$ ./crsctl eval modify resource <resource_name> -attr "value"

Miscellaneous srvctl improvements

There are a few new additions to the srvctl command. The following demonstrates the new additions to stop/start database/instance resources on the cluster:

srvctl start database|instance -startoption NOMOUNT|MOUNT|OPEN
srvctl stop database|instance -stopoption NORMAL|TRANSACTIONAL|IMMEDIATE|ABORT

The next article will focus on the top developer-oriented features of 12c.