
Monday, March 21, 2016

EMD upload error: uploadXMLFiles skipped :: OMS version not checked yet.

I strongly believe that if you've deployed agents a couple of times, you've encountered the following error after the installation:
[oracle@dg2 bin]$ ./emctl upload agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
—————————————————————
EMD upload error: uploadXMLFiles skipped :: OMS version not checked yet..

The first thing to do is check the log file. Switch to the following directory and tail the log file:
cd /u01/oracle/product/10.2.0/agent10g/sysman/log
tail -f emagent.trc

Here’s the output from the log file:
2012-07-26 03:46:33 Thread-51022736 ERROR pingManager: nmepm_pingReposURL: Did not receive a response header from repository
2012-07-26 03:47:08 Thread-51022736 ERROR pingManager: nmepm_pingReposURL: Did not receive a response header from repository
2012-07-26 03:47:08 Thread-51022736 ERROR pingManager: nmepm_pingReposURL: Did not receive a response header from repository

I checked the status of the agent and saw that it was running:
[oracle@dg2 bin]$ ./emctl status agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
—————————————————————
Agent Version : 10.2.0.3.0
OMS Version : 10.2.0.3.0
Protocol Version : 10.2.0.2.0
Agent Home : /u01/oracle/product/10.2.0/agent10g
Agent binaries : /u01/oracle/product/10.2.0/agent10g
Agent Process ID : 27772
Parent Process ID : 27755
Agent URL : http://dg2:3872/emd/main/
Repository URL : http://dg1:4889/em/upload/
Started at : 2012-07-26 03:42:04
Started by user : oracle
Last Reload : 2012-07-26 03:42:04
Last successful upload : (none)
Last attempted upload : (none)
Total Megabytes of XML files uploaded so far : 0.00
Number of XML files pending upload : 21
Size of XML files pending upload(MB) : 3.25
Available disk space on upload filesystem : 58.86%
Last attempted heartbeat to OMS : 2012-07-26 03:45:22
Last successful heartbeat to OMS : unknown
—————————————————————
Agent is Running and Ready
[oracle@dg2 bin]$
[oracle@dg2 bin]$

So I decided to clear the files that could not be uploaded to Grid Control from the following directories:
$AGENT_HOME/sysman/emd/collection
$AGENT_HOME/sysman/emd/state
$AGENT_HOME/sysman/emd/recv
$AGENT_HOME/sysman/emd/upload
$AGENT_HOME/sysman/emd/lastupld.xml
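Before touching a live agent home, it can help to rehearse the cleanup. The sketch below is a minimal, hedged rehearsal against a throwaway mock of the directory layout above (created with mktemp, not a real agent install); adapt the paths, and stop the agent first, before doing this for real.

```shell
# Mock rehearsal of clearing the pending-upload backlog.
# AGENT_HOME here is a temp directory, NOT a real agent home.
AGENT_HOME=$(mktemp -d)
mkdir -p "$AGENT_HOME/sysman/emd/upload" \
         "$AGENT_HOME/sysman/emd/state"  \
         "$AGENT_HOME/sysman/emd/recv"
# simulate stuck XML files waiting for upload
touch "$AGENT_HOME/sysman/emd/upload/a.xml" \
      "$AGENT_HOME/sysman/emd/upload/b.xml" \
      "$AGENT_HOME/sysman/emd/lastupld.xml"
# remove only the queued files, never the directories themselves
find "$AGENT_HOME/sysman/emd/upload" -type f -name '*.xml' -delete
rm -f "$AGENT_HOME/sysman/emd/lastupld.xml"
echo "pending: $(find "$AGENT_HOME/sysman/emd/upload" -name '*.xml' | wc -l)"
```

On a real host the same idea applies to the state, recv, and collection directories, but only with the agent stopped.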

Then I stopped the agent, secured it, cleared its state with the clearstate command, started the agent again, and triggered an upload.

[oracle@dg2 bin]$ ./emctl stop agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Stopping agent …. stopped.

[oracle@dg2 bin]$ ./emctl secure agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Enter Agent Registration password :
Agent is already stopped… Done.
Securing agent… Started.
Requesting an HTTPS Upload URL from the OMS… Done.
Requesting an Oracle Wallet and Agent Key from the OMS…
Done.
Check if HTTPS Upload URL is accessible from the agent… Done.
Configuring Agent for HTTPS in CENTRAL_AGENT mode… Done.
EMD_URL set in /u01/oracle/product/10.2.0/agent10g/sysman/config/emd.properties
Securing agent… Successful.
[oracle@dg2 bin]$

[oracle@dg2 bin]$ ./emctl clearstate agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
EMD clearstate completed successfully
[oracle@dg2 bin]$ ./emctl start agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
Starting agent ….. started.

[oracle@dg2 bin]$ ./emctl upload agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
—————————————————————
EMD upload completed successfully

[oracle@dg2 bin]$ ./emctl upload agent
Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
—————————————————————
EMD upload completed successfully
[oracle@dg2 bin]$

Sunday, March 20, 2016

Step by step how to apply PSU patch in Oracle

STEP 1: Check the operating system bit version (32-bit or 64-bit).

 [oracle@host1 ~]$ uname -a

 Linux host11.intra.searshc.com 2.6.18-308.1.1.el5xen #1 SMP Fri Feb 17 17:11:09 EST 2012 x86_64 x86_64 x86_64 GNU/Linux.

Here we can see it is a Linux operating system with bit version x86_64.

If it is an AIX server, use the command below to check the bit version:

host03-oracle:/home/oracle(PROD)-->getconf -a | grep KERN
KERNEL_BITMODE:                         64
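As a hedged, portable shortcut for this step, `getconf LONG_BIT` reports the word size directly on both Linux and AIX, so a small sketch like the one below covers either platform:

```shell
# One portable way to confirm the OS word size before picking a download.
BITS=$(getconf LONG_BIT)        # prints 32 or 64
MACHINE=$(uname -m)             # e.g. x86_64
echo "kernel reports ${BITS}-bit (${MACHINE})"
```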


STEP 2: Check the Oracle database version on the home where we have to apply the patch.

There are various methods to check the database version. I use the lsinventory command because it shows many more details apart from the database version.

[oracle@host1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory

Oracle Interim Patch Installer version 11.2.0.3.5
Copyright (c) 2013, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/11.2.0.2/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0.2/db_1/oraInst.loc
OPatch version    : 11.2.0.3.5
OUI version         : 11.2.0.2.0
Log file location : /u01/app/oracle/product/11.2.0.2/db_1/cfgtoollogs/opatch/opatch2016-03-21_00-14-10AM_1.log

Lsinventory Output file location : /u01/app/oracle/product/11.2.0.2/db_1/cfgtoollogs/opatch/lsinv/lsinventory2016-03-21_00-14
-10AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Database 11g                                                  11.2.0.2.0
There are 1 products installed in this Oracle Home.
-----------------------------------------------------------------------
-----------------------------------------------------------------------

The above is only partial output; you can see a lot of other details as well.
Here you can notice two things:
1. OPatch version    : 11.2.0.3.5
2. Oracle Database 11g : 11.2.0.2.0

I will cover the OPatch version in a later step. Here we will concentrate only on the Oracle database version.

So from the above two steps we have found that the operating system bit version is x86_64 and the database version is 11.2.0.2.0.

STEP 3: Now check in the patch advisory document which patch Oracle has released for 11.2.0.2.0. Oracle publishes a patch advisory document for each quarter's patches.


STEP 4: Download the patch from My Oracle Support (formerly MetaLink). While downloading, make sure you are downloading the patch for the correct operating system bit version. Here we have to download the patch for operating system bit version x86_64 and database version 11.2.0.2.0.


STEP 5: Copy the downloaded zip file to the server and unzip it there using the unzip command.

STEP 6: You can also unzip the file on your local system and open the README file that it contains.

STEP 7: The prerequisites section of the README states the OPatch version required to apply this patch. It clearly mentions whether a higher version of OPatch is needed. If so, download the new OPatch and extract it into the Oracle home; make sure you rename or remove the old OPatch directory first. Afterwards, verify the new OPatch version using the lsinventory command; the reported OPatch version should have changed. Now we are good to go.


STEP 8: Count the invalid objects and make a note of the number.


STEP 9: If the listener is running from the home we are going to patch, stop it; otherwise there is no need to stop it. Often it runs from the Grid home, in which case it can stay up.

STEP 10: If the server is monitored by OEM Grid Control, put it into a blackout.

STEP 11: Check the free space on the mount point that the Oracle home resides on.
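A quick hedged sketch of this space check: `df -Pk` against the home's path reports the available kilobytes on its filesystem. ORACLE_HOME defaults to the current directory here so the snippet runs anywhere; point it at the real home before patching.

```shell
# Report free space on the filesystem holding an Oracle home.
OH=${ORACLE_HOME:-$PWD}                      # fall back to $PWD for the demo
AVAIL_KB=$(df -Pk "$OH" | awk 'NR==2 {print $4}')
echo "free space under $OH: ${AVAIL_KB} KB"
```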

STEP 12: Shut down all the databases running from that home.

STEP 13: Go to the directory where you have moved and unzipped the patch:

[oracle@host1 tmp]$ cd p20760997_112030_Linux-x86-64
[oracle@host1 p20760997_112030_Linux-x86-64]$ ls -ltr
total 4
drwxr-xr-x 43 ralam0 ldap-users 4096 Feb 11  2013 20760997
[oracle@host1 p20760997_112030_Linux-x86-64]$ cd 20760997

STEP 14: Now issue the following command:

$ORACLE_HOME/OPatch/opatch apply

   
             ------------------------------------------------------------------------------
             -------------------------------------------------------------------------------
             --------------------------------------------------------------------------------

OPatch succeeded


STEP 15: Start up the database and run the catbundle script for each database running under that home.

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba
SQL> STARTUP
SQL> @catbundle.sql psu apply
     
STEP 16: Run the utlrp.sql script to validate the objects.

SQL>@utlrp.sql

STEP 17: Count the invalid objects and make sure the number is less than or equal to the count taken before patching.
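The before/after comparison in steps 8 and 17 can be sketched as below. The two counts would come from `select count(*) from dba_objects where status='INVALID';` taken before and after patching; here they are stand-in shell values, not real query output.

```shell
# Compare invalid-object counts taken before and after patching.
check_invalids() {   # usage: check_invalids <count_before> <count_after>
  if [ "$2" -le "$1" ]; then echo "OK"; else echo "WARNING"; fi
}
check_invalids 12 12   # same count after patching: OK
check_invalids 12 15   # more invalids than before: WARNING, investigate
```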




Step by step how to apply CPU(SPU) Patch in Oracle

STEP 1: Check the operating system bit version (32-bit or 64-bit).

 [oracle@host1 ~]$ uname -a

 Linux host11.intra.searshc.com 2.6.18-308.1.1.el5xen #1 SMP Fri Feb 17 17:11:09 EST 2012 x86_64 x86_64 x86_64 GNU/Linux.

Here we can see it is a Linux operating system with bit version x86_64.

If it is an AIX server, use the command below to check the bit version:

host03-oracle:/home/oracle(PROD)-->getconf -a | grep KERN
KERNEL_BITMODE:                         64


STEP 2: Check the Oracle database version on the home where we have to apply the patch.

There are various methods to check the database version. I use the lsinventory command because it shows many more details apart from the database version.

[oracle@host1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory

Oracle Interim Patch Installer version 11.2.0.3.5
Copyright (c) 2013, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/11.2.0.2/db_1
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/11.2.0.2/db_1/oraInst.loc
OPatch version    : 11.2.0.3.5
OUI version         : 11.2.0.2.0
Log file location : /u01/app/oracle/product/11.2.0.2/db_1/cfgtoollogs/opatch/opatch2016-03-21_00-14-10AM_1.log

Lsinventory Output file location : /u01/app/oracle/product/11.2.0.2/db_1/cfgtoollogs/opatch/lsinv/lsinventory2016-03-21_00-14
-10AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Database 11g                                                  11.2.0.2.0
There are 1 products installed in this Oracle Home.
-----------------------------------------------------------------------
-----------------------------------------------------------------------

The above is only partial output; you can see a lot of other details as well.
Here you can notice two things:
1. OPatch version    : 11.2.0.3.5
2. Oracle Database 11g : 11.2.0.2.0

I will cover the OPatch version in a later step. Here we will concentrate only on the Oracle database version.

So from the above two steps we have found that the operating system bit version is x86_64 and the database version is 11.2.0.2.0.

STEP 3: Now check in the patch advisory document which patch Oracle has released for 11.2.0.2.0. Oracle publishes a patch advisory document for each quarter's patches.


STEP 4: Download the patch from My Oracle Support (formerly MetaLink). While downloading, make sure you are downloading the patch for the correct operating system bit version. Here we have to download the patch for operating system bit version x86_64 and database version 11.2.0.2.0.


STEP 5: Copy the downloaded zip file to the server and unzip it there using the unzip command.

STEP 6: You can also unzip the file on your local system and open the README file that it contains.

STEP 7: The prerequisites section of the README states the OPatch version required to apply this patch. It clearly mentions whether a higher version of OPatch is needed. If so, download the new OPatch and extract it into the Oracle home; make sure you rename or remove the old OPatch directory first. Afterwards, verify the new OPatch version using the lsinventory command; the reported OPatch version should have changed. Now we are good to go.


STEP 8: Count the invalid objects and make a note of the number.


STEP 9: If the listener is running from the home we are going to patch, stop it; otherwise there is no need to stop it. Often it runs from the Grid home, in which case it can stay up.

STEP 10: If the server is monitored by OEM Grid Control, put it into a blackout.

STEP 11: Check the free space on the mount point that the Oracle home resides on.

STEP 12: Shut down all the databases running from that home.

STEP 13: Go to the directory where you have moved and unzipped the patch:

[oracle@host1  tmp]$ cd p14841437_112020_Linux-x86-64
[oracle@host1 p14841437_112020_Linux-x86-64]$ ls -ltr
total 4
drwxr-xr-x 43 ralam0 ldap-users 4096 Feb 11  2013 14841437
[oracle@host1 p14841437_112020_Linux-x86-64]$ cd 14841437
[oracle@host1 14841437]$ ls -ltr
total 164
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 11830776
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 11830777
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 12586486
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 12586488
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 12586489
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 12586491
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 12586492
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 12586493
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 12586494
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 12586495
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 12586496
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 12846268
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 12846269
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 13386082
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 13769501
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 13769502
drwxr-xr-x 4 ralam0 ldap-users 4096 Feb 11  2013 13769503

STEP 14: Now issue the following command:

$ORACLE_HOME/OPatch/opatch napply -skip_subset -skip_duplicate

This command will show output like the following:

Oracle Database 11g                                                  11.2.0.2.0
There are 1 products installed in this Oracle Home.


Interim patches (3) :

Patch  13528551     : applied on Thu Mar 22 05:19:12 CDT 2012
Unique Patch ID:  14454920
    Created on 6 Jan 2012, 05:04:26 hrs PST8PDT
    Bugs fixed:
      13528551

Patch  13499128     : applied on Thu Mar 22 05:19:05 CDT 2012
Unique Patch ID:  14454920
    Created on 6 Jan 2012, 04:58:55 hrs PST8PDT
    Bugs fixed:
    13499128

Patch  13466801     : applied on Thu Mar 22 05:19:00 CDT 2012
Unique Patch ID:  14454920
  Created on 6 Jan 2012, 04:50:15 hrs PST8PDT
    Bugs fixed:
      13466801

             ------------------------------------------------------------------------------
             -------------------------------------------------------------------------------
             --------------------------------------------------------------------------------

OPatch succeeded


STEP 15: Start up the database and run the catbundle script for each database running under that home.

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba
SQL> STARTUP
SQL> @catbundle.sql cpu apply

Below is the log of the catbundle command:
SQL> @?/rdbms/admin/catbundle.sql cpu apply
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Generating apply and rollback scripts...
Check the following file for errors:
/u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/catbundle/catbundle_CPU_IKBTEST1_GENERATE_2012Mar22_05_22_13.log
Apply script: /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/catbundle_CPU_IKBTEST1_APPLY.sql
Rollback script: /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin/catbundle_CPU_IKBTEST1_ROLLBACK.sql
PL/SQL procedure successfully completed.
Executing script file...
SQL> COLUMN spool_file NEW_VALUE spool_file NOPRINT
SQL> SELECT '/u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/catbundle/' || 'catbundle_CPU_' || name || '_APPLY_' || TO_CHAR(SYSDATE, 'YYYYMonDD_hh24_mi_ss', 'NLS_DATE_LANGUAGE=''AMERICAN''') || '.log' AS spool_file FROM v$database;
            ------------------------------------------------------------------------------------------------------------
            ------------------------------------------------------------------------------------------------------------
            ------------------------------------------------------------------------------------------------------------
            ------------------------------------------------------------------------------------------------------------
SQL> SPOOL &spool_file
SQL> exec dbms_registry.set_session_namespace('SERVER')
PL/SQL procedure successfully completed.
SQL> ALTER SESSION SET current_schema = SYS;
Session altered.
SQL> PROMPT Updating registry...
Updating registry...
SQL> INSERT INTO registry$history
  2    (action_time, action,
  3     namespace, version, id,
  4     bundle_series, comments)
  5  VALUES
  6    (SYSTIMESTAMP, 'APPLY',
  7     SYS_CONTEXT('REGISTRY$CTX','NAMESPACE'),
  8     '11.2.0.3',
  9     1,
  10     'CPU',
  11     'CPUJan2012');
1 row created.
SQL>  COMMIT;
Commit complete.
SQL> SPOOL off
SQL> SET echo off
Check the following log file for errors:
/u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/catbundle/catbundle_CPU_IKBTEST1_APPLY_2012Mar22_05_22_14.log

STEP 16: Run the utlrp.sql script to validate the objects.

SQL>@utlrp.sql

STEP 17: Count the invalid objects and make sure the number is less than or equal to the count taken before patching.





Resolving ORA-01000 open cursor error

We are frequently getting the Maximum Open Cursors exceeded error, and we want to resolve the error.
Solution:

One of the first things we need to do when we receive the ORA-01000: “maximum open cursors exceeded” error is to check the value of the initialization parameter open_cursors.

We can view the current limit for open cursors by issuing the following command:

SQL> sho parameter open_cursors
NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------
open_cursors                         integer     300

The parameter OPEN_CURSORS sets the maximum number of cursors a session can have open at once. We specify this parameter to control the number of open cursors. Keeping the parameter’s value too low will result in a session receiving the ORA-01000 error. There’s no harm in specifying a very large value for the OPEN_CURSORS parameter (unless we expect all sessions to simultaneously max out their cursors, which is unlikely), so we can usually resolve cursor-related errors simply by raising the parameter value
to a large number. However, we may sometimes find that raising the value of the open_cursors parameter doesn’t “fix” the problem. In such cases, investigate which processes are using the open cursors by issuing the following query:

SQL> select a.value, s.username,s.sid,s.serial#,s.program,s.inst_id
from gv$sesstat a,gv$statname b,gv$session s
where a.statistic# = b.statistic# and s.sid=a.sid
and b.name='opened cursors current'

The GV$OPEN_CURSOR (or the V$OPEN_CURSOR) view shows all the cursors that each user session has currently opened and parsed, or cached. We can issue the following query to identify the sessions with a high number of opened and parsed or cached cursors.

SQL> select saddr, sid, user_name, address,hash_value,sql_id, sql_text
from gv$open_cursor
where sid in
(select sid from v$open_cursor
group by sid having count(*) > &threshold);

The query lists all sessions with an open cursor count greater than the threshold we specify. This way, we can limit the query’s output and focus just on the sessions that have opened, parsed, or cached a large number of cursors.

We can get the actual SQL code and the open cursor count for a specific session by issuing the following query:

SQL> select sql_id,substr(sql_text,1,50) sql_text, count(*)
from gv$open_cursor where sid=81
group by sql_id,substr(sql_text,1,50)
order by sql_id;

The output shows the SQL code for all open cursors in the session with SID 81. We can examine all SQL statements with a high open cursor count to see why the session keeps a large number of cursors open.

How It Works:

If our application is not closing open cursors, then setting the OPEN_CURSORS parameter to a higher value won't really help us. We may momentarily resolve the issue, but we're likely to run into it again a little later. If the application layer never closes the ref cursors created by the PL/SQL code, the database will simply hang on to the server resources for the used cursors. We must fix the application logic so it closes the cursors; the problem isn't really in the database.

If we're using a Java application deployed on an application server such as Oracle WebLogic Server, the WebLogic Server's JDBC connection pools provide open database connections for applications. Any prepared statement in each of these connections will use a cursor. Multiple application server instances and multiple JDBC connection pools mean that the database needs to support all those cursors. If multiple requests share the same session ID, the open cursor problem may be due to implicit cursors; the only solution then is to close the connection after each request.

A cursor leak is when the database opens cursors but doesn't close them. We can run a 10046 trace for a session to find out whether it's closing its cursors:

SQL> alter session set events '10046 trace name context forever, level 12';

If we notice that the same SQL statement is associated with different cursors, it means that the application isn't closing its cursors. If the application doesn't close its cursors after opening them, Oracle assigns a different cursor number to the next SQL statement it executes; if the cursor is closed instead, Oracle will reuse the same cursor number for the next cursor it assigns. Thus, if we see the item PARSING IN CURSOR #nnnn progressively increase in the 10046 trace output, it means that the application is not closing its cursors. Note that while leaving cursors open may be due to a faulty application design, developers may also intentionally leave cursors open to reduce soft parsing, or when they use the session cursor cache.

We can use the SESSION_CACHED_CURSORS initialization parameter to set the maximum number of cached closed cursors for each session. The default setting is 50. We can use this parameter to prevent a session from opening an excessive number of cursors, thereby filling the library cache or forcing excessive hard parses. Repeated parse calls for a SQL statement lead Oracle to move the session cursor for that statement into the session cursor cache. The database then satisfies subsequent parse calls by using the cached cursor instead of re-opening the cursor.

When we re-execute a SQL statement, Oracle will first try to find a parsed version of that statement in the shared pool; if it finds one, a soft parse occurs. Oracle is forced to perform the much more expensive hard parse if it doesn't find the parsed version of the statement in the shared pool. While a soft parse is much less expensive than a hard parse, a large number of soft parses can still affect performance, because they require CPU usage and library cache latches. To reduce the number of soft parses, Oracle caches the recently closed cursors of each session in a local session cursor cache; Oracle stores any cursor for which a minimum of three parse calls were made, thus avoiding having to cache every single session cursor, which would fill up the cursor cache.

The default value of 50 for the SESSION_CACHED_CURSORS initialization parameter may be too low for many databases. We can check whether the database is bumping against the maximum limit for session-cached cursors by issuing the following statement:

SQL> select max(value) from v$sesstat
2 where statistic# in (select statistic# from v$statname
3* where name = 'session cursor cache count');
MAX(VALUE)
----------
49

The query shows the maximum number of session cursors that have been cached in the past. Since this number (49) is virtually the same as the default value (or the value we've set) for the SESSION_CACHED_CURSORS parameter, we should set the parameter to a larger value. Session cursor caches use the shared pool. If we're using automatic memory management, there's nothing for us to do after we raise the SESSION_CACHED_CURSORS parameter; the database will bump up the shared pool size if necessary. We can find out how many cursors each session has in its session cursor cache by issuing the following query:

SQL> select a.value,s.username,s.sid,s.serial#
2 from v$sesstat a, v$statname b,v$session s
3 where a.statistic#=b.statistic# and s.sid=a.sid
4* and b.name='session cursor cache count';
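The headroom check described above can be sketched as a tiny rule: if the maximum 'session cursor cache count' ever observed is at (or within one of) the SESSION_CACHED_CURSORS setting, the parameter is likely too low. The values below are the document's example numbers, not live query output.

```shell
# Flag when the session cursor cache is bumping against its limit.
cache_check() {   # usage: cache_check <parameter_value> <max_used>
  if [ "$2" -ge $(( $1 - 1 )) ]; then
    echo "raise SESSION_CACHED_CURSORS"
  else
    echo "OK"
  fi
}
cache_check 50 49    # 49 used against a limit of 50: raise the parameter
cache_check 200 49   # plenty of headroom: OK
```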

Resolving ORA-04031 (streams pool) error while exporting

When I executed a Data Pump export (expdp) job on an 11.2.0.2 database, it failed with the error message below:
ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_FULL_01 for user SYS
ORA-06512: at “SYS.DBMS_SYS_ERROR”, line 95
ORA-06512: at “SYS.KUPV$FT_INT”, line 600
ORA-39080: failed to create queues “KUPC$C_1_20070823095248” and “KUPC$S_1_20070823095248” for Data Pump job ORA-06512: at “SYS.DBMS_SYS_ERROR”, line 95
ORA-06512: at “SYS.KUPC$QUE_INT”, line 1580
ORA-04031: unable to allocate 4194344 bytes of shared memory (“streams pool”,”unknown object”,”streams pool”,”fixed allocation callback
When we look at the last line, many people (like me) wonder why it mentions the streams pool and what the relation is between the streams pool and expdp.
Here is the background…
Data Pump jobs (expdp or impdp), when initiated, use buffer queues in order to transfer the data directly from the master table to the dump file. Instead of creating new buffer queues, Data Pump operations try to use the existing queues in the streams pool memory area.
Normally, the streams pool holds buffer queues that store in-memory messages supporting capture processes, apply processes, XStream outbound servers, and XStream inbound servers.
Ideally, the streams pool size is managed (increased/decreased) automatically when ASMM or AMM is enabled. But sometimes, even with automatic memory management, STREAMS_POOL_SIZE may not have been set and may be sitting at 0. This is the cause of the error above.
Solution:
Set the STREAMS_POOL_SIZE parameter to a value greater than 0. Usually 48M is a reasonable minimum; the value should be larger for a big database or one with a heavy workload.

Direct path Vs Conventional path export

We might be using both direct and conventional path exports regularly, but here are some more interesting facts about the direct path approach…

1. Oracle introduced direct path export in version 7.3.

2. Normally, export follows the same path as a SELECT statement: data from disk is copied to the buffer cache and then written to the dump file. When we use direct path by specifying DIRECT=Y in the export command, Oracle copies data directly from disk to the PGA, and from there it is written to the dump file.

This is the reason why direct path is faster than conventional path.

3. To improve the performance of direct path still further, we can use the RECORDLENGTH parameter in export. Valid values for RECORDLENGTH are multiples of the OS block size or multiples of DB_BLOCK_SIZE. If we don't specify RECORDLENGTH but still use DIRECT=Y, Oracle takes the default OS block size (in most environments, 1024 bytes).

Example: direct path performance can improve by up to 50% when used with a RECORDLENGTH of 64 KB.

So far we have seen the advantages; now let's talk about the restrictions of using direct path.

1. Direct path does not work for LOBs if your export release is 8.1.4 or lower; Oracle will skip the row export in that case. From 8i onwards, even if you use direct path on LOBs, the export is silently switched to the conventional path.

Because of this, I always use direct path in my daily usage, which makes exports faster; of course, only for 9i databases :-)

2. The QUERY and BUFFER parameters cannot be used along with DIRECT=Y in the export command.

3. Direct path exports the data only if the NLS_LANG variable matches the database character set. If they differ, it throws the following error:

EXP-00041: Export done in server’s UTF8, different from user’s character set WE8ISO8859P1
EXP-00000: Export terminated unsuccessfully.

Hope this information will help you to understand better about Direct path.

How to use PARALLEL parameter in Datapump

Many times we may observe that Data Pump runs slowly even with the PARALLEL option. There is, however, a method to calculate the value for the PARALLEL parameter, and the information below helps us do that.
Data Pump uses two methods to export/import data:
1. Direct path
2. External tables
It is up to Data Pump to decide which path to use; some tables may be exported/imported through direct path and others through external tables within the same Data Pump job.
1. The value of the PARALLEL parameter in Data Pump can be set to more than one only in Enterprise Edition.
2. PARALLEL will not work effectively on jobs with a lot of metadata.
3. As we know, in Data Pump a master process controls the entire export/import through worker processes. These worker processes start parallel execution (PX) processes to do the actual work. We can increase or decrease parallelism at any moment using the interactive prompt.
4. If the worker processes are active, then even if we decrease the PARALLEL value, the change will not take effect until the work reaches a completion point. An increase in parallel processes, however, takes effect immediately.
5. For Data Pump Export, the value specified for the PARALLEL parameter should be less than or equal to the number of files in the dump file set (if you specified dump files explicitly and did not use the %U option).
HOW EXPORT WITH PARALLEL WORKS
1. The master process starts multiple worker processes. At least 2 worker processes are created for an export: one for metadata and the other for table data.
2. You might have observed that Data Pump export gives an estimated data size (even if we don't specify the ESTIMATE option). This is used to calculate the number of parallel execution (PX) processes.
3. Next, the columns of the tables are examined to decide whether to use the direct path or the external tables method.
Note: direct path doesn't support a PARALLEL value of more than one.
4. If the external tables method is chosen, Data Pump determines the maximum number of PX processes that can work on the table data. It does this by dividing the estimated size of the table data by 250 MB and rounding the result down. If the result is zero or one, PX processes are not used.
Example: if the data size is 1000 MB, dividing by 250 MB gives 4, so the data worker gets 4 corresponding PX processes. With one worker process for metadata and one for data, the total number of processes is 6.
Note: PX process information is not shown when we check through the STATUS command, so in the above example we will see only 2 processes instead of 6.
5. Even if we give a higher PARALLEL value, Oracle calculates worker and PX processes as above.
Example: if we give PARALLEL=10 in the above example, Oracle still uses only 6.
Note: remember that Oracle will not scale up the processes if we specify too low a PARALLEL value, so we should give a generous value in order to get the maximum benefit.
HOW IMPORT WITH PARALLEL WORKS
The PARALLEL parameter works a bit differently in Import because there are various dependencies and everything must be done in order.
Data Pump Import processes the database objects in the following order:
1. The first worker begins to load all the metadata: the tablespaces, schemas, etc., until all the tables are created.
2. Once the tables are created, all the workers start loading data.
3. Once the table data is loaded, the first worker returns to loading metadata again, e.g. for indexes and other objects. The rest of the workers sit idle until the first worker has loaded all the metadata.
Note: One worker creates all the indexes, but it uses PX processes up to the PARALLEL value, so the indexes get created faster.
Thus, an import job can be started with PARALLEL=10 and the user will still see only one worker being utilized at certain points during job execution. No other workers or PX processes will be working until all the tables are created. When the tables are created, a burst of workers and possibly PX processes executes in parallel until the data is loaded, after which the worker processes become idle again.
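To illustrate, a minimal import command using this behavior might look like the sketch below (the directory, dump file and log file names are hypothetical):
impdp system DIRECTORY=dp_dir DUMPFILE=full_%U.dmp PARALLEL=10 LOGFILE=imp_full.log
Even with PARALLEL=10, you would see only one worker busy during the metadata phases, as described above.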
Before starting any export/import, it is better to run with the ESTIMATE_ONLY parameter first. Divide the reported size by 250 MB and decide the PARALLEL value based on the result.
Finally, when using the PARALLEL option, keep the points below in mind:
a. Set the degree of parallelism to two times the number of CPUs, then tune from there.
b. For Data Pump Export, the PARALLEL parameter value should be less than or equal to the number of dump files.
c. For Data Pump Import, the PARALLEL parameter value should not be much larger than the number of files in the dump file set.
For more details, you can refer to MOS doc 365459.1
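Putting the above together, a sketch of such an export might look like this (the schema, directory and file names are made up for illustration):
expdp system SCHEMAS=scott DIRECTORY=dp_dir ESTIMATE_ONLY=y
Suppose the estimate comes back as roughly 1000 MB; 1000/250 = 4, so:
expdp system SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott_%U.dmp PARALLEL=4 LOGFILE=scott_exp.log
The %U substitution variable lets Data Pump create as many dump files as it needs, which satisfies the rule that PARALLEL should not exceed the number of dump files.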
Hope this post will help you (and me) next time we perform parallel Data Pump operations…

Granting permissions on v$ views

Sometimes we get a requirement to provide access to dynamic performance views like v$session, v$process etc. I have seen this when an application team wants to capture session information from their web pages.

In such situations, the application team will ask you to grant SELECT on those views. When you try this just like a normal grant statement, you will get the following error:

SQL> grant select on v$session to PAST1;
grant select on v$session to PAST1
                *
ERROR at line 1:
ORA-02030: can only select from fixed tables/views

The reason is that V$SESSION is a synonym for the view V_$SESSION, so you need to grant SELECT on the underlying view instead of the synonym:

SQL> grant select on v_$session to PAST1;

Grant succeeded.
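
If you are unsure which underlying view a given v$ synonym points to, you can check DBA_SYNONYMS first:

SQL> select table_owner, table_name from dba_synonyms
  2  where owner='PUBLIC' and synonym_name='V$SESSION';

TABLE_OWNER                    TABLE_NAME
------------------------------ ------------------------------
SYS                            V_$SESSION

Then grant SELECT on that table_owner.table_name combination.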

You need to follow the same approach for all other v$ views. For data dictionary views, however, you can grant permissions directly:

SQL> grant select on dba_users to PAST1;

Grant succeeded.

Note : For security reasons, we should never encourage granting permissions on the Oracle data dictionary unless it is really required.

Finding all the privileges and roles granted to a user

To find all the privileges and roles granted to a user, three dictionary views come in very handy:

1. dba_sys_privs
2. dba_tab_privs
3. dba_role_privs

I queried all 3 views and then verified the result with Toad for a user called "USERNAME".
The results matched exactly.

SQL> select GRANTEE, PRIVILEGE from dba_sys_privs where GRANTEE='USERNAME';
GRANTEE                        PRIVILEGE
------------------------------ ----------------------------------------
USERNAME                        CREATE PUBLIC SYNONYM
USERNAME                        CREATE SYNONYM
USERNAME                        UNLIMITED TABLESPACE
USERNAME                        CREATE SEQUENCE
USERNAME                        CREATE ANY SYNONYM
USERNAME                        CREATE MATERIALIZED VIEW
USERNAME                        CREATE TRIGGER
USERNAME                        CREATE TABLE
USERNAME                        CREATE SESSION
USERNAME                        CREATE DATABASE LINK
USERNAME                        CREATE PROCEDURE
USERNAME                        CREATE VIEW

12 rows selected.

SQL> select GRANTEE, PRIVILEGE from dba_tab_privs where GRANTEE='USERNAME';

no rows selected

SQL> select GRANTEE, GRANTED_ROLE from dba_role_privs where GRANTEE='USERNAME';

GRANTEE                        GRANTED_ROLE
------------------------------ ------------------------------
USERNAME                        CONNECT
USERNAME                        RESOURCE
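
If you want all three in a single listing, the views can be combined with UNION ALL (using the same sample user):

SQL> select 'SYSTEM PRIVILEGE' grant_type, privilege granted
  2  from dba_sys_privs where grantee='USERNAME'
  3  union all
  4  select 'OBJECT PRIVILEGE', privilege||' ON '||owner||'.'||table_name
  5  from dba_tab_privs where grantee='USERNAME'
  6  union all
  7  select 'ROLE', granted_role
  8  from dba_role_privs where grantee='USERNAME';

Keep in mind that roles can themselves contain privileges; to expand them, query dba_sys_privs and dba_tab_privs again with GRANTEE set to the role name.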

Resolving ORA-01031 while configuring EM 10g

EMCTL is one service where we never know whether it will run fine or not… :-)
Just kidding. Here is one such problem with Enterprise Manager configuration, along with its solution.
Problem:
When trying to configure EM for 10g, or trying to drop or recreate the repository, you may get the error below:
[oracle@issgascop218 ~]$ emca -deconfig dbcontrol db -repos drop
STARTED EMCA at Aug 24, 2010 7:51:44 AM
EM Configuration Assistant, Version 10.2.0.1.0 Production
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Enter the following information:
Database SID: PASQ
Listener port number: 1521
Password for SYS user:
Password for SYSMAN user:
Do you wish to continue? [yes(Y)/no(N)]: y
Aug 24, 2010 7:51:54 AM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /opt/oracle/product/10.2.0/cfgtoollogs/emca/PASQ/emca_2010-08-24_07-51-44-AM.log.
Aug 24, 2010 7:51:55 AM oracle.sysman.emcp.util.DBControlUtil stopOMS
INFO: Stopping Database Control (this may take a while) …
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMReposConfig stopDBMSJobs
WARNING: Error initializing SQL connection. SQL operations cannot be performed
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMReposConfig invoke
WARNING: Unable to remove DBMS jobs.
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMReposConfig dropRepository
INFO: Dropping the EM repository (this may take a while) …
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.util.PlatformInterface executeCommand
WARNING: Error executing /opt/oracle/product/10.2.0/sysman/admin/emdrep/bin/RepManager -connect (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=issgascop218)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=PASQ.issgascop218.global.iss.biz))) -repos_user SYSMAN -action drop -verbose -output_file /opt/oracle/product/10.2.0/cfgtoollogs/emca/PASQ/emca_repos_drop_2010-08-24_07-51-57-AM.log
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMReposConfig invoke
SEVERE: Error dropping the repository
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMReposConfig invoke
INFO: Refer to the log file at /opt/oracle/product/10.2.0/cfgtoollogs/emca/PASQ/emca_repos_drop_<date>.log for more details.
Aug 24, 2010 7:51:57 AM oracle.sysman.emcp.EMConfig perform
SEVERE: Error dropping the repository
Refer to the log file at /opt/oracle/product/10.2.0/cfgtoollogs/emca/PASQ/emca_2010-08-24_07-51-44-AM.log for more details.
Could not complete the configuration. Refer to the log file at /opt/oracle/product/10.2.0/cfgtoollogs/emca/PASQ/emca_2010-08-24_07-51-44-AM.log for more details.
When I opened the log file, I found the following:
[24-08-2010 07:51:57] Enter SYS user’s password :
[24-08-2010 07:51:57]
[24-08-2010 07:51:57] Enter repository user password :
[24-08-2010 07:51:57]
[24-08-2010 07:51:57] Getting temporary tablespace from database…
[24-08-2010 07:51:57] Could not connect to SYS/(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=issgascop218)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=PASQ.issgascop218.global.iss.biz))): ORA-01031: insufficient privileges (DBD ERROR: OCISessionBegin)
One of the possible reasons for ORA-01031 is that the OS user is not added to either the oinstall or dba group. But in my case it was there.
After some searching, I found that the reason was a missing PASSWORD FILE. Creating the password file solved the issue:
[oracle@issgascop218 ~]$ cd $ORACLE_HOME/dbs
[oracle@issgascop218 dbs]$ orapwd file=orapw$ORACLE_SID password=oracle entries=1 force=y
[oracle@issgascop218 dbs]$ ls -ltr
total 16096
-rw-r-----  1 oracle dba    8385 Sep 11  1998 init.ora
-rw-r-----  1 oracle dba   12920 May  3  2001 initdw.ora
-rw-rw----  1 oracle dba      24 Mar 22  2007 lkPAS
-rw-rw----  1 oracle dba    1552 Mar 22  2007 hc_PAS.dat
-rw-rw----  1 oracle dba    1552 May  9  2007 hc_PASQ.dat
-rw-rw----  1 oracle dba      24 May  9  2007 lkPASQ
-rw-r-----  1 oracle dba    2430 May  9  2007 initPASQ.ora
-rw-r-----  1 oracle dba    2560 Dec 28  2007 spfilePAS.oraold
-rw-r-----  1 oracle dba    2393 Dec 28  2007 initPAS.ora
-rw-r-----  1 oracle dba    1536 Apr 30 11:54 orapwPAS
-rw-r-----  1 oracle dba    2560 Jul  1 22:00 spfilePAS.ora
-rw-r-----  1 oracle dba    3584 Jul 12 22:00 spfilePASQ.ora
-rw-r-----  1 oracle dba 7716864 Aug 23 18:30 snapcf_PASQ.f
-rw-r-----  1 oracle dba 8601600 Aug 23 18:57 snapcf_PAS.f
-rw-r-----  1 oracle dba    1536 Aug 24 07:54 orapwPASQ
[oracle@issgascop218 dbs]$ emca -config dbcontrol db -repos recreate
The last few lines of the output from the above command:
INFO: Repository successfully created
Aug 24, 2010 8:01:56 AM oracle.sysman.emcp.util.DBControlUtil startOMS
INFO: Starting Database Control (this may take a while) …
Aug 24, 2010 8:03:32 AM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: Database Control started successfully
Aug 24, 2010 8:03:33 AM oracle.sysman.emcp.EMDBPostConfig performConfiguration
INFO: >>>>>>>>>>> The Database Control URL is http://issgascop218:5500/em <<<<<<<<<<<
Enterprise Manager configuration completed successfully
So, whenever you get ORA-01031 while performing some action on EM, please check whether this solution works.
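
To verify that the password file is actually in use after creating it, you can check the REMOTE_LOGIN_PASSWORDFILE parameter and the V$PWFILE_USERS view:

SQL> show parameter remote_login_passwordfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile            string      EXCLUSIVE

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP
------------------------------ ----- -----
SYS                            TRUE  TRUE

If V$PWFILE_USERS returns no rows, the instance is not reading the password file.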

User not able to login through OEM

Have you ever tried to log in to OEM as a normal user like SCOTT or HR and ended up with the message below?

The application requires more database privileges than you have currently been granted. Click on Help to get more version specific information.

Unfortunately, when you click on HELP, it may not give proper details about which privilege is missing.

When checked at the DB level, the user may well have all the required privileges to connect to the database, but login to OEM is still impacted.

The solution is to grant SELECT_CATALOG_ROLE to that user:

sqlplus / as sysdba

SQL> grant select_catalog_role to schemaname;

For example, if you are facing the issue with the SCOTT schema, issue the command below:

SQL> grant select_catalog_role to scott;

After the above grant, the user should be able to log in to OEM.

But if you still face the issue even after granting the above role, you may need to grant SELECT on the dictionary views to that user. Do remember that with this grant the user can retrieve dictionary information from the Oracle database, so grant it only if it is really required:

SQL> grant select any dictionary to scott;

Hope this helps…

Upgrade OEM 11g to 12C - Two System Approach


Pre-requisites before you start the upgrade:

Once the pre-requisite patches (PSU and 12c pre-upgrade patch) are applied, log in to the OEM console.
Go to the 11g OMS deployment page and select the UPGRADE TYPE.
You also need to restore a backup of the OMS repository (the 11g OMS database) on a different host, which will be used for the 12c upgrade.
Make sure the database name stays the same; otherwise the upgrade will fail.

ENTER HOST AND PORTS, WHERE YOU ARE GOING TO INSTALL NEW 12c OMS


ALSO PROVIDE THE TIME OF THE BACKUP, THE ONE YOU USED TO RESTORE THE DATABASE.

On the new host start the 12c installer

[oracle@Host11G em12c]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 400 MB.   Actual 3611 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 65535 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 65536    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-07-12_06-24-41AM. Please wait ...[oracle@ushofsvpracb


PROVIDE MIDDLEWARE HOME LOCATION WHERE YOUR WEBLOGIC 10.3.5 AND ABOVE SERVER IS RUNNING. 
Also enter your Agent Base directory and Host name.


If you hit the above error, you need to modify the following parameters in the repository database.

SQL> conn / as sysdba
Connected.
SQL> alter system set job_queue_processes=0;
System altered.

SQL> exec dbms_scheduler.disable('SYS.GATHER_STATS_JOB');
BEGIN dbms_scheduler.disable('SYS.GATHER_STATS_JOB'); END;

*
ERROR at line 1:
ORA-27476: "SYS.GATHER_STATS_JOB" does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 4407
ORA-06512: at "SYS.DBMS_SCHEDULER", line 2737
ORA-06512: at line 1

One has to run the following pre-requisite checks and modify the parameters on the restored OMS repository database.

SQL> conn / as sysdba
Connected.
SQL> alter system set job_queue_processes=0;
System altered.

SQL> show parameter nls_length

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
nls_length_semantics                 string      BYTE
SQL> grant execute on dbms_random to dbsnmp;
Grant succeeded.

SQL> grant execute on dbms_random to sysman;
Grant succeeded.

SQL> alter user oracle_ocm account unlock;
User altered.

SQL> sho parameter process
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes                      integer     1
cell_offload_processing              boolean     TRUE
db_writer_processes                  integer     2
gcs_server_processes                 integer     0
global_txn_processes                 integer     1
job_queue_processes                  integer     0
log_archive_max_processes            integer     4
processes                            integer     500
processor_group_name                 string
SQL> show parameter stream
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
streams_pool_size                    big integer 0

SQL> alter system set streams_pool_size=300m;
System altered.

 SQL> alter database add logfile group 4 '+DATA_PSFT_HR'  size 300M;
 Database altered.

SQL> alter database add logfile group 5  '+DATA_PSFT_HR'  size 300M;
 Database altered.

SQL>  alter database add logfile group 6 '+DATA_PSFT_HR'  size 300M;
 Database altered.

SQL> alter database drop logfile group 1;
 Database altered.

SQL> alter database drop logfile group 2;
 Database altered.

SQL> alter database drop logfile group 3;
 Database altered.
Now select the plugins required for the 12c OMS. 


Enter the WebLogic server credentials.

Now one needs to enter the old OMS repository connection string. A link to the old OMS repository is created and used to sync data from the old OMS to the new OMS.


This information is also available at:

                /u01/app/oracle/Middleware/oms/install/setupinfo.txt

See below for information pertaining to your Enterprise Manager installation:

Use the following URL to access:

                1. Enterprise Manager Cloud Control URL: https://Host11G.searshc.com:7799/em
                2. Admin Server URL: https://Host11G.searshc.com:7101/console

The following details need to be provided during the additional OMS install:

                1. Admin Server Hostname: Host11G.searshc.com
                2. Admin Server Port: 7101

 NOTE:
 An encryption key has been generated to encrypt sensitive data in the Management Repository. If this key is lost, all encrypted data in the Repository becomes unusable. Please run following command to backup the Management Service configuration including the emkey and keep the backup on another system:
 emctl exportconfig oms -dir <backup location>

Since the OMS configuration also needs to be backed up, export it as follows and keep it somewhere safe.
Note - This is one of the components needed as part of the OMS backup strategy; the others are the OMS_HOME, the GC_INST home and the Fusion MIDDLEWARE_HOME.

[oracle@Host11G OEM12c]$ cd ../Middleware/oms/bin/
[oracle@Host11G bin]$ ./emctl exportconfig oms -dir /u01/app/oracle/OEM12c/emctl_confg_exp_DONOTDELETE
Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation.  All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
ExportConfig started...
<Jul 15, 2013 11:18:33 AM CDT> <Info> <Security> <BEA-090905> <Disabling CryptoJ JCE Provider self-integrity check for better startup performance. To enable this check, specify -Dweblogic.security.allowCryptoJDefaultJCEVerification=true>
Machine is Admin Server host. Performing Admin Server backup...
Exporting emoms properties...
Exporting secure properties...

Export has determined that the OMS is not fronted
by an SLB. The local hostname was NOT exported.
The exported data can be imported on any host but
resecure of all agents will be required. Please
see the EM Advanced Configuration Guide for more
details.

Exporting configuration for pluggable modules...
Preparing archive file...
Backup has been written to file:/01/app/oracle/OEM12c/emctl_confg_exp_DONOTDELETE/opf_ADMIN_20130715_111837.bka


The export file contains sensitive data.
Please ensure that it is kept secure.

ExportConfig completed successfully!
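
Should the OMS ever need to be recovered from this backup, the counterpart command is importconfig (the .bka file name below is the one generated above):

[oracle@Host11G bin]$ ./emctl importconfig oms -file /u01/app/oracle/OEM12c/emctl_confg_exp_DONOTDELETE/opf_ADMIN_20130715_111837.bka

As always, check the EM Advanced Configuration Guide for the exact recovery procedure for your release.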