
Monday, June 27, 2016

Step by step - How to refresh UAT/DEV/QA/TEST database schemas from PRODUCTION database

Schema Refresh

Schema refresh is one of the routine tasks in a DBA's life. Whenever the application team wants to test something against production data, they first test it in the dev/uat/qa/test database.

For this they need the latest copy of the production data in the dev/uat/qa/test database, which we can provide with a schema refresh.

Follow the steps below on the source and target database sides. Let's assume we are refreshing the USER1 schema from the production database to the development database.

Source Database Side :

Step 1: Check the Schema size in the source database.

SQL> select owner, sum(bytes)/1024/1024/1024  "GB"  from dba_segments where owner='USER1';

Step 2: Check the tablespace assigned to the USER1 schema.

SQL> select default_tablespace from dba_users where username='USER1';

Step 3: Check the size of the tablespace assigned to the USER1 schema.
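
For example, a query along these lines reports the allocated and free space of that tablespace (the tablespace name USERS is only an illustration; use the value returned in Step 2):

SQL> select tablespace_name, sum(bytes)/1024/1024/1024 "Alloc GB" from dba_data_files where tablespace_name='USERS' group by tablespace_name;

SQL> select tablespace_name, sum(bytes)/1024/1024/1024 "Free GB" from dba_free_space where tablespace_name='USERS' group by tablespace_name;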

Step 4: Take a count of the schema objects; this will be useful after the refresh to compare the source and target schemas.

SQL> select object_type, count(*) from dba_objects where owner='USER1' group by object_type;

Step 5: Before the export, check the size of the mount point where we are going to store the export dump files; if the mount point does not have sufficient space, the export job will fail. So make sure we have enough space in the mount point.
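
For example, on Linux the free space of the dump destination can be checked with df (the path is only an example):

$ df -h /u01/app/oracle/admin/proddb1/dpdump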

Step 6: Create a Data Pump directory at the database level for the export. If we do not create one, expdp uses the default directory DATA_PUMP_DIR, so if we rely on the default we need to make sure it is mapped to the correct path.
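
A minimal sketch of creating a dedicated directory and verifying the default one (the directory name EXP_DIR and the path are illustrative):

SQL> create or replace directory EXP_DIR as '/u02/export/proddb1';
SQL> grant read, write on directory EXP_DIR to system;

SQL> select directory_name, directory_path from dba_directories where directory_name='DATA_PUMP_DIR';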

Step 7: Now take the export of the schema (or schemas).

            $ nohup expdp \'/ as sysdba\' directory=DATA_PUMP_DIR
              dumpfile=proddb1_8August_%U.dmp logfile=proddb1_8August.log
              schemas=USER1 parallel=8 &


Step 8: Copy the dump files from the source server to the destination server. We usually use the scp command, but any other server-to-server copy method works. It is always better to copy the dump files directly into the location that will be used as the impdp directory path; otherwise we will have to copy them again later.
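
For example (the host name and paths are illustrative):

$ scp /u01/app/oracle/admin/proddb1/dpdump/proddb1_8August_*.dmp oracle@devserver:/u01/app/oracle/admin/devdb1/dpdump/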

We have completed all the steps on the source database side; now we move to the target database side.

Target Database side :

Step 1: Check if we have enough space available in the data pump directory path to hold the dump files coming from the source database server.

Step 2: Check if we have enough space available in the tablespace of the user that is going to be refreshed. It is always better to check the tablespace size of that user in the production database and add that much space on the target before the refresh starts.
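
For example, to add space to the schema's tablespace on the target (datafile names, paths and sizes are illustrative):

SQL> alter tablespace USERS add datafile '/u03/oradata/devdb1/users02.dbf' size 10G;

or resize an existing datafile:

SQL> alter database datafile '/u03/oradata/devdb1/users01.dbf' resize 20G;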

Step 3: It is always recommended to take an export backup of the schema on the target side as well before refreshing it.

               $ nohup expdp \'/ as sysdba\' directory=DATA_PUMP_DIR
                  dumpfile=devdb1_8August_%U.dmp logfile=devdb1_8August.log
                  schemas=USER1 parallel=8 &

Step 4: Drop the schema objects on the target side only, i.e. on the development database. Double-check before dropping anything in the database. Make sure we drop only the schema objects, not the complete schema. Some people drop the complete schema, but that is not recommended.
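
One way to do this is to spool DROP statements for the schema's objects and review them before running them; the sketch below covers tables and a few common object types and should be adjusted to your environment:

SQL> select 'drop table USER1.' || table_name || ' cascade constraints purge;' from dba_tables where owner='USER1';

SQL> select 'drop ' || object_type || ' USER1.' || object_name || ';' from dba_objects where owner='USER1' and object_type in ('VIEW','SEQUENCE','SYNONYM','PROCEDURE','FUNCTION','PACKAGE');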


Step 5: Import the data into the target schema (schema refresh).

               $ nohup impdp \'/ as sysdba\' directory=DATA_PUMP_DIR
                  dumpfile=proddb1_8August_%U.dmp logfile=imp_8August.log
                  schemas=USER1 parallel=8 &
       
Note: While the import is running, it is always better to monitor both the database alert log and the import log file. Sometimes an error is reported in the alert log but not captured in the import log, so monitor both.

Step 6: Gather the statistics on the schema which we just refreshed.
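
For example (the degree setting is just an illustration; adjust to your system):

SQL> exec dbms_stats.gather_schema_stats(ownname => 'USER1', cascade => TRUE, degree => 4);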

Step 7: Count the objects and compare the counts with the production database. Make sure everything was imported properly before confirming completion to the application team.
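
The same query used in Step 4 on the source can be run on the target and the counts compared, and invalid objects can be checked as well:

SQL> select object_type, count(*) from dba_objects where owner='USER1' group by object_type order by object_type;

SQL> select object_name, object_type from dba_objects where owner='USER1' and status='INVALID';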

Suggestions are most welcome :)

Step by step - Test/Development Database Refresh From Production Procedure

The following note describes the generic procedure to be followed to refresh an Energy Test or Development environment database from an Energy production database backup.

In this example, we are refreshing the DEVE72 database from a backup taken of the PRD22 database.

The assumption here is that the required RMAN production backup has already been copied from the production server via scp, restored from tape, or placed on an NFS share accessible from both machines.

On the target machine the backups have been restored to the location /u02/backup/DEVE72.


Procedure:

  • Shut down the DEVE72 database if it is already running


[oracle@DEVE72 DEVE72]$ ps -ef |grep pmon
  oracle 12701 29275   0 15:36:00 pts/3       0:00 grep pmon
  oracle  7377  2235   0   May 19 ?          84:59 ora_pmon_DEVE72

[oracle@DEVE72 DEVE72]$ echo $ORACLE_SID
DEVE72


[oracle@DEVE72 DEVE72]$ sqlplus sys as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jun 17 15:36:22 2010

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

  • Delete the database data files, redo log files and control files

Note:

**Ensure we are connected to the right server and are in the appropriate directory location**


[oracle@DEVE72 DEVE72]$ hostname
DEVE72

[oracle@DEVE72 DEVE72]$ pwd
/u03/oradata/DEVE72

[oracle@DEVE72 DEVE72]$ rm *.dbf

[oracle@DEVE72 DEVE72]$ cd /u04/oradata/DEVE72

[oracle@DEVE72 DEVE72]$ ls
control2.ctl  redo01a.log   redo02a.log   redo03a.log

[oracle@DEVE72 DEVE72]$ rm *.ctl
[oracle@DEVE72 DEVE72]$ rm *.log
[oracle@DEVE72 DEVE72]$ cd /u05/oradata/DEVE72
[oracle@DEVE72 DEVE72]$ ls
control3.ctl  redo01b.log   redo02b.log   redo03b.log


  • Copy the current init.ora parameter file of the DEVE72 database  and create a parameter file with the name of the source production database (PRD22)

[oracle@DEVE72 ~]$ cd $ORACLE_HOME/dbs

[oracle@DEVE72 dbs]$ pwd
/u01/app/oracle/product/10.2.0/db_1/dbs

[oracle@DEVE72 dbs]$ cp initDEVE72.ora initprd22.ora

  • Make the following changes to the initprd22.ora

*.db_name='prd22'


  • Set the environment to reflect the source production database and start the instance in NOMOUNT mode

[oracle@DEVE72 dbs]$ export ORACLE_SID=prd22
[oracle@DEVE72 dbs]$ sqlplus sys as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Mon Jun 21 12:58:32 2010

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Enter password:
Connected to an idle instance.


SQL> startup nomount pfile=$ORACLE_HOME/dbs/initprd22.ora
ORACLE instance started.

Total System Global Area 3154116608 bytes
Fixed Size                  2043904 bytes
Variable Size             637538304 bytes
Database Buffers         2499805184 bytes
Redo Buffers               14729216 bytes
SQL> quit


  • Restore the control file from the backup location

The control file backup exists in the format "c-<DBID>-<DATE>-<BACKUP SEQUENCE NUMBER>".

Select the controlfile appropriate to the point in time from which we wish to restore the database.

[oracle@DEVE72 DEVE72]$ rman target /

Recovery Manager: Release 10.2.0.4.0 - Production on Fri Jun 18 11:05:20 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database: prd22 (not mounted)

RMAN> restore controlfile from '/u02/backup/DEVE72/c-4031762323-20100616-00';

Starting restore at 18-JUN-10
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=538 devtype=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:06
output filename=/u03/oradata/DEVE72/control1.ctl
output filename=/u04/oradata/DEVE72/control2.ctl
output filename=/u05/oradata/DEVE72/control3.ctl
Finished restore at 18-JUN-10


  • Mount the database

RMAN> alter database mount
2> ;

database mounted
released channel: ORA_DISK_1


  • Catalog the RMAN backup sets which have been copied from the source production database

RMAN> catalog start with '/u02/backup/DEVE72';

searching for all files that match the pattern /u02/backup/DEVE72

List of Files Unknown to the Database
=====================================
File Name: /u02/backup/DEVE72/c-4031762323-20100616-00
File Name: /u02/backup/DEVE72/21lgatsk_1_1
File Name: /u02/backup/DEVE72/c-2263349373-20100419-00
...
...

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u02/backup/DEVE72/c-4031762323-20100616-00
File Name: /u02/backup/DEVE72/21lgatsk_1_1
File Name: /u02/backup/DEVE72/2flgdi89_1_1
...
...

List of Files Which Where Not Cataloged
=======================================
File Name: /u02/backup/DEVE72/c-2263349373-20100419-00
  RMAN-07518: Reason: Foreign database file DBID: 2263349373  Database Name: DEVE72


Note – ignore any errors reported for files that are not cataloged


  • Determine the last archivelog sequence included in the backup. We will be recovering the database until this particular sequence number.

Look for the string "List of archived logs"

RMAN> list backup of archivelog all;

 List of Archived Logs in backup set 69
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    79      7970987    16-JUN-10 7973402    16-JUN-10

In this case, the last archived log backed up is sequence number 79. If we wish to recover the database up to and including the last archived log that was backed up, we increment that sequence number by 1, so in this case 79 + 1 = 80.

  • Create the following files in the location “/u02/backup/{DB_NAME}”

vi rman_head

RUN
{
# allocate a disk channel
        ALLOCATE CHANNEL d1 DEVICE TYPE disk;

# rename the datafiles and online redo logs



vi rman_tail

# Do a SET UNTIL to prevent recovery of the online logs
 SET UNTIL SEQUENCE 80;

# restore the database and switch the datafile names
    RESTORE DATABASE;
            SWITCH DATAFILE ALL;

# recover the database
    RECOVER DATABASE;
}

vi generate_datafiles.sql

set head off pages 0 feed off echo off verify off
set lines 200
spool rename_datafiles.lst
select 'SET NEWNAME FOR DATAFILE ' || FILE# || ' TO ''' || '/u03/oradata/&1/' || substr(name,instr(name,'/',-1)+1) || ''';' from v$datafile;
spool off
exit;

vi generate_logfiles.sql

set head off pages 0 feed off echo off
set lines 200
spool rename_logfiles.lst
SELECT 'SQL "ALTER DATABASE RENAME FILE '''''||  MEMBER ||'''''' ||chr(10)||'to ''''' || member || '''''" ;' FROM V$LOGFILE;
spool off
exit;


  • Generate data file rename script

While in the directory /u02/backup/{DB_NAME}, connect as sys as sysdba via a SQL*Plus session and run the generate_datafiles.sql script.

The generate_datafiles.sql script accepts a parameter which is the target database name.

SQL> @generate_datafiles DEVE72

It will create a file called rename_datafiles.lst. The contents of this file will look like this:

SET NEWNAME FOR DATAFILE 1 TO '/u03/oradata/DEVE72/system01.dbf';
SET NEWNAME FOR DATAFILE 2 TO '/u03/oradata/DEVE72/undotbs01.dbf';
SET NEWNAME FOR DATAFILE 3 TO '/u03/oradata/DEVE72/sysaux01.dbf';
….
….

  • Generate redo log file rename script

While in the directory /u02/backup/{DB_NAME}, connect as sys as sysdba via a SQL*Plus session and run the generate_logfiles.sql script.

SQL> @generate_logfiles.sql

It will create a file called rename_logfiles.lst

Edit the rename_logfiles.lst file and change the second (target) path in each line from prd22 to DEVE72, so that the files are renamed to the DEVE72 locations:


SQL "ALTER DATABASE RENAME FILE ''/u04/oradata/prd22/redo01a.log'' to ''/u04/oradata/DEVE72/redo01a.log''" ;

SQL "ALTER DATABASE RENAME FILE ''/u05/oradata/prd22/redo01b.log'' to ''/u05/oradata/DEVE72/redo01b.log''" ;

SQL "ALTER DATABASE RENAME FILE ''/u04/oradata/prd22/redo02a.log'' to ''/u04/oradata/DEVE72/redo02a.log''" ;

SQL "ALTER DATABASE RENAME FILE ''/u05/oradata/prd22/redo02b.log'' to ''/u05/oradata/DEVE72/redo02b.log''" ;

SQL "ALTER DATABASE RENAME FILE ''/u04/oradata/prd22/redo03a.log'' to ''/u04/oradata/DEVE72/redo03a.log''" ;

SQL "ALTER DATABASE RENAME FILE ''/u05/oradata/prd22/redo03b.log'' to ''/u05/oradata/DEVE72/redo03b.log''" ;


  • Specify the archive log sequence until which recovery will be performed

Edit the rman_tail file and change the line "SET UNTIL SEQUENCE 80;" to use the appropriate archive log sequence noted in the earlier step.

  • Prepare the RMAN restore and recover database script

[oracle@DEVE72 DEVE72]$ cat rman_head rename_datafiles.lst rename_logfiles.lst rman_tail > rman_recovery.rcv


  • Connect to the target database via RMAN and execute the rman_recovery.rcv script

[oracle@DEVE72 dbs]$ rman target /

Recovery Manager: Release 10.2.0.4.0 - Production on Mon Jun 21 13:04:04 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database (not started)

RMAN> @rman_recovery.rcv


Note:

At this stage, we can continue to recover the database and keep it in sync with the source production database by manually applying the archive log files which are copied from the production server to the log archive destination of the test database on the target server.

We can do this via SQL*PLUS connected as SYS by issuing the command

RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE

When there are no more archive log files to apply, we enter CANCEL

  • Open the database with RESETLOGS

After the RMAN script has successfully run and recovered the database until the last archive log sequence, we will now open the database using the ALTER DATABASE OPEN RESETLOGS command executed either via RMAN or from SQL*PLUS connected as SYS.

SQL> alter database open resetlogs;

Database altered.

  • Temporary Tablespace Reconfiguration

After the restore, we will note that the temporary tablespace files still point to the source production database, because the tempfiles were not renamed when we renamed the database data files in the earlier step.

Obtain the name of the current tempfile -

SQL> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u03/oradata/prd22/temp01.dbf

Drop the tempfile-

SQL> ALTER DATABASE TEMPFILE '/u03/oradata/prd22/temp01.dbf' drop including datafiles;

Database altered.

Add a new tempfile for the refreshed database in the appropriate location-

SQL> ALTER TABLESPACE temp ADD TEMPFILE '/u03/oradata/DEVE72/temp01.dbf' size 2G;

Tablespace altered.

  • At this stage we change the passwords, if required, for the SYS, SYSTEM, or any other database accounts.

  • Change the database name using nid

We will now shutdown the database and then mount it.

We will then run the nid utility to change the database name – we need to provide the appropriate password for the user SYS and the new value we want for the database name.

[oracle@DEVE72 dbs]$ nid target=sys dbname=DEVE72

DBNEWID: Release 10.2.0.4.0 - Production on Fri Jun 18 13:55:14 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Password:
Connected to database PRD22 (DBID=4031762323)

Connected to server version 10.2.0

Control Files in database:
    /u03/oradata/DEVE72/control1.ctl
    /u04/oradata/DEVE72/control2.ctl
    /u05/oradata/DEVE72/control3.ctl

Change database ID and database name PRD22 to DEVE72? (Y/[N]) => Y

Proceeding with operation
Changing database ID from 4031762323 to 2271553224
Changing database name from PRD22 to DEVE72
    Control File /u03/oradata/DEVE72/control1.ctl - modified
    Control File /u04/oradata/DEVE72/control2.ctl - modified
    Control File /u05/oradata/DEVE72/control3.ctl - modified
    Datafile /u03/oradata/DEVE72/system01.dbf - dbid changed, wrote new name
    Datafile /u03/oradata/DEVE72/undotbs01.dbf - dbid changed, wrote new name
    Datafile /u03/oradata/DEVE72/sysaux01.dbf - dbid changed, wrote new name
    Datafile /u03/oradata/DEVE72/users01.dbf - dbid changed, wrote new name

...
...
...

    Datafile /u03/oradata/DEVE72/COGNOSPAD_CLOB01.dbf - dbid changed, wrote new name
    Datafile /u03/oradata/DEVE72/temp01.dbf - dbid changed, wrote new name
    Control File /u03/oradata/DEVE72/control1.ctl - dbid changed, wrote new name
    Control File /u04/oradata/DEVE72/control2.ctl - dbid changed, wrote new name
    Control File /u05/oradata/DEVE72/control3.ctl - dbid changed, wrote new name
    Instance shut down

Database name changed to DEVE72.
Modify parameter file and generate a new password file before restarting.
Database ID for database DEVE72 changed to 2271553224.
All previous backups and archived redo logs for this database are unusable.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.


At this stage the database has been shut down; we now need to mount it and open it with RESETLOGS after the name change.

Note:

We will now set the environment to the target database (until this stage, ORACLE_SID had been set to the production database value).

[oracle@DEVE72 backup]$ export ORACLE_SID=DEVE72

[oracle@DEVE72 backup]$ sqlplus sys as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Mon Jun 21 14:12:00 2010

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Enter password:
Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 3154116608 bytes
Fixed Size                  2043904 bytes
Variable Size             637538304 bytes
Database Buffers         2499805184 bytes
Redo Buffers               14729216 bytes


SQL> alter database open resetlogs;

Database altered.

  • Post Database Refresh Tasks

Create import and export directories required for Data Pump

/u02/export/{DB_NAME} - export_dir
/u02/import/{DB_NAME} - import_dir
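
A minimal sketch for DEVE72 (the directory names export_dir and import_dir follow the convention above; adjust paths as required):

$ mkdir -p /u02/export/DEVE72 /u02/import/DEVE72

SQL> create or replace directory export_dir as '/u02/export/DEVE72';
SQL> create or replace directory import_dir as '/u02/import/DEVE72';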


RMAN: run the appropriate script to register the database in the RMAN catalog where required.

Sunday, June 26, 2016

Working with Mongo Shell

The mongo shell is an interactive interface to MongoDB. You can use the mongo shell to query and update data as well as perform administrative operations.

The mongo shell is a component of the MongoDB distributions. Once you have installed and have started MongoDB, connect the mongo shell to your running MongoDB instance.

Start the mongo Shell

To start the mongo shell and connect to your MongoDB instance running on localhost with default port:

1. At a prompt in a terminal window, go to your <mongodb installation dir>:

cd <mongodb installation dir>

2. Type ./bin/mongo to start mongo:

./bin/mongo

If you have added the <mongodb installation dir>/bin to the PATH environment variable, you can just type mongo instead of ./bin/mongo.

When you run mongo without any arguments, the mongo shell will attempt to connect to the MongoDB instance running on the localhost interface on port 27017. To specify a different host or port number, as well as other options, see examples of starting up mongo and mongo reference which provides details on the available options.

Typically users invoke the shell with the mongo command at the system prompt. Consider the following examples for other scenarios.

Core Options

mongo

--shell
Enables the shell interface. If you invoke the mongo command and specify a JavaScript file as an argument, or use --eval to specify JavaScript on the command line, the --shell option provides the user with a shell prompt after the file finishes executing.

--nodb
Prevents the shell from connecting to any database instances. Later, to connect to a database within the shell, see Opening New Connections.

--norc
Prevents the shell from sourcing and evaluating ~/.mongorc.js on start up.

--quiet
Silences output from the shell during the connection process.

--port <port>
Specifies the port where the mongod or mongos instance is listening. If --port is not specified, mongo attempts to connect to port 27017.

--host <hostname>
Specifies the name of the host machine where the mongod or mongos is running. If this is not specified, mongo attempts to connect to a MongoDB process running on the localhost.

To connect to a replica set, specify the replica set name and a seed list of set members. Use the following form:

<replSetName>/<hostname1><:port>,<hostname2><:port>,<...>

For TLS/SSL connections (--ssl), mongo verifies that the hostname of the mongod or mongos to which you are connecting matches the CN or SAN of the mongod or mongos's --sslPEMKeyFile certificate. If the hostname does not match the CN/SAN, mongo will fail to connect.

--eval <javascript>
Evaluates a JavaScript expression that is specified as an argument. mongo does not load its own environment when evaluating code. As a result many options of the shell environment are not available.

--username <username>, -u <username>
Specifies a username with which to authenticate to a MongoDB database that uses authentication. Use in conjunction with the --password and --authenticationDatabase options.

--password <password>, -p <password>
Specifies a password with which to authenticate to a MongoDB database that uses authentication. Use in conjunction with the --username and --authenticationDatabase options. To force mongo to prompt for a password, enter the --password option as the last option and leave out the argument.

--help, -h
Returns information on the options and use of mongo.

--version
Returns the mongo release number.

--verbose
Increases the verbosity of the output of the shell during the connection process.

--disableJavaScriptJIT
New in version 3.2.
Disables use of the JavaScript engine’s JIT compiler.

<file.js>
Specifies a JavaScript file to run and then exit. Generally this should be the last option specified.

To connect to a database on a remote host using authentication and a non-standard port, use the following form:

mongo --username <user> --password <pass> --host <host> --port 28015

Alternatively, consider the following short form:
mongo -u <user> -p <pass> --host <host> --port 28015

Replace <user>, <pass>, and <host> with the appropriate values for your situation and substitute or omit the --port as needed.

Authentication Options

--authenticationDatabase <dbname>

Specifies the database in which the user is created. See Authentication Database.

If you do not specify a value for --authenticationDatabase, mongo uses the database specified in the connection string.

--authenticationMechanism <name>
Specifies the authentication mechanism the client uses to authenticate to the mongod or mongos instance.


Working with the mongo Shell

To display the database you are using, type db:

>db

The operation should return test, which is the default database. To switch databases, issue the use <db> helper, as in the following example:

>use <database>


Format Printed Results

The db.collection.find() method returns a cursor to the results; however, in the mongo shell, if the returned cursor is not assigned to a variable using the var keyword, the cursor is automatically iterated up to 20 times to print the first 20 documents that match the query. The mongo shell then prompts you to type it to iterate over the next 20 documents.
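
For example, assigning the cursor to a variable with var suppresses the automatic printing, and the cursor can then be iterated manually (myCollection is just a placeholder collection name):

> var myCursor = db.myCollection.find()
> while (myCursor.hasNext()) { printjson(myCursor.next()) }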

To format the printed result, you can add the .pretty() to the operation, as in the following:

db.myCollection.find().pretty()

Tab Completion and Other Keyboard Shortcuts

The mongo shell supports keyboard shortcuts. For example,
Use the up/down arrow keys to scroll through command history. See .dbshell documentation for more information on the .dbshell file.

Use <Tab> to autocomplete or to list the completion possibilities, as in the following example which uses <Tab> to complete the method name starting with the letter 'c':
db.myCollection.c<Tab>

Because there are many collection methods starting with the letter 'c', the <Tab> will list the various methods that start with 'c'.

Exit the Shell

To exit the shell, type quit() or use the <Ctrl-c> shortcut.

Saturday, June 25, 2016

What is collection in MongoDB, how to create and use it

Collections: MongoDB stores documents in collections. Collections are similar to tables in relational databases like Oracle.

Create a Collection

If a collection does not exist, MongoDB creates the collection when you first store data for that collection.

db.myNewCollection2.insert( { x: 1 } )

db.myNewCollection3.createIndex( { y: 1 } )

Both the insert() and the createIndex() operations create their respective collection if they do not already exist.

Creating a collection using db.createCollection() method 

MongoDB provides the db.createCollection() method to explicitly create a collection with various options, such as setting the maximum size or the document validation rules. If you do not need these options, you do not have to create the collection explicitly, since MongoDB creates new collections when you first store data for them.

Because MongoDB creates a collection implicitly when the collection is first referenced in a command, this method is used primarily for creating new collections that use specific options. For example, you use db.createCollection() to create a capped collection, or to create a new collection that uses document validation. db.createCollection() is also used to pre-allocate space for an ordinary collection.

Syntax:

db.createCollection( <name>,
                     { capped: <boolean>,
                       autoIndexId: <boolean>,
                       size: <number>,
                       max: <number>,
                       storageEngine: <document>,
                       validator: <document>,
                       validationLevel: <string>,
                       validationAction: <string>,
                       indexOptionDefaults: <document> } )

Parameter (Type): Description

name (string): The name of the collection to create.

options (document): Optional. Configuration options for creating a capped collection or for preallocating space in a new collection.

capped (boolean): Optional. To create a capped collection, specify true. If you specify true, you must also set a maximum size in the size field.

autoIndexId (boolean): Optional. Specify false to disable the automatic creation of an index on the _id field.

size (number): Optional. Specify a maximum size in bytes for a capped collection. Once a capped collection reaches its maximum size, MongoDB removes the older documents to make space for the new documents. The size field is required for capped collections and ignored for other collections.

max (number): Optional. The maximum number of documents allowed in the capped collection. The size limit takes precedence over this limit. If a capped collection reaches the size limit before it reaches the maximum number of documents, MongoDB removes old documents. If you prefer to use the max limit, ensure that the size limit, which is required for a capped collection, is sufficient to contain the maximum number of documents.

usePowerOf2Sizes (boolean): Optional. Available for the MMAPv1 storage engine only.

noPadding (boolean): Optional. Available for the MMAPv1 storage engine only.

storageEngine (document): Optional. Available for the WiredTiger storage engine only. New in version 3.0. Allows users to specify configuration to the storage engine on a per-collection basis when creating a collection. The value of the storageEngine option should take the following form: { <storage-engine-name>: <options> }. Storage engine configuration specified when creating collections is validated and logged to the oplog during replication, to support replica sets with members that use different storage engines.

validator (document): Optional. Allows users to specify validation rules or expressions for the collection. For more information, see Document Validation.

validationLevel (string): Optional. Determines how strictly MongoDB applies the validation rules to existing documents during an update.

validationAction (string): Optional. Determines whether to error on invalid documents or just warn about the violations but allow invalid documents to be inserted.


Examples


>use test
switched to db test


>db.createCollection("mycollection")
{ "ok" : 1 }


>db.tutorialspoint.insert({"name" : "tutorialspoint"})
>show collections
mycol
mycollection
system.indexes
tutorialspoint
>

Create a Capped Collection

Capped collections have maximum size or document counts that prevent them from growing beyond maximum thresholds. All capped collections must specify a maximum size and may also specify a maximum document count. MongoDB removes older documents if a collection reaches the maximum size limit before it reaches the maximum document count. Consider the following example:

db.createCollection("log", { capped : true, size : 5242880, max : 5000 } )

This command creates a collection named log with a maximum size of 5 megabytes and a maximum of 5000 documents.

The following command simply pre-allocates a 2-gigabyte, uncapped collection named people:

db.createCollection("people", { size: 2147483648 } )



What is a database in MongoDB, how to create and use it

Database: In MongoDB, a database holds collections of documents.

MongoDB does not provide a separate command to create a database. You do not need to create it manually, because MongoDB creates it on the fly the first time you save data into a collection in that database.

The use Command

In MongoDB, use DATABASE_NAME is used to create a database. The command creates a new database if it does not exist; otherwise it switches to the existing database.

Syntax:

The basic syntax of the use DATABASE statement is as follows:

>use DATABASE_NAME

Example:

If you want to create a database named mydb, the use DATABASE statement would be as follows:

>use mydb
switched to db mydb

To check your currently selected database, use the command db:

>db
mydb

To check your list of databases, use the command show dbs:

>show dbs
local     0.78125GB
test      0.23012GB

The newly created database (mydb) is not present in the list. To display it, you need to insert at least one document into it:

>db.movie.insert({"name":"tutorials point"})
>show dbs
local      0.78125GB
mydb       0.23012GB
test       0.23012GB

In MongoDB the default database is test. If you have not created any database, collections are stored in the test database.

Thursday, June 23, 2016

What is snapshot standby database, conversion from standby to snapshot and from snapshot to standby

Up to Oracle 10g, we had two types of standby databases, namely the physical standby database and the logical standby database.

From Oracle 11g, we got two more options, Active Data Guard (ADG) and the snapshot standby database.

Oracle Active Data Guard: the standby database can simultaneously be in read-only mode and recovery mode, and is mostly used for reporting purposes.

A snapshot standby is an updatable version of the standby database.

What is snapshot standby database?

A snapshot standby database is useful when we want to clone a production database for testing: we can convert an existing physical standby database, which is as close as possible to the production database in terms of data, into a snapshot standby database, do the required testing on it, and then convert it back to a physical standby database. A snapshot standby database is updatable and provides a replicated version of the primary database that can be used for development and testing purposes.

We can always refresh the snapshot standby database to be in sync with the primary by converting it back to a physical standby database; we can then convert it to a snapshot again, do more testing, and resynchronize with the primary. This cycle can be repeated any number of times. Snapshot standby databases are best used in scenarios where the benefit of having a temporary, updatable snapshot of the primary database justifies the additional administrative complexity and the increased time to recover from a primary database failure.

A snapshot standby database receives and archives redo data from the primary database, but the redo is not applied. Redo received from the primary is applied when the snapshot standby database is converted back into a physical standby database, after all local updates to the snapshot standby are discarded. So the data from the primary database is always protected; it provides data protection and disaster recovery just like a physical standby database.

Introduced in Oracle 11gR1, a snapshot standby database is a physical standby that is temporarily disconnected from the Data Guard configuration and can be used as a fully updatable stand-alone database. As it is open in READ WRITE mode, any kind of testing or change can be done, including features such as Database Replay.

Converting Physical Standby Database into Snapshot Standby Database

There are three situations in which we cannot convert a physical standby to a snapshot standby:

a) If it is the only standby of a Maximum Protection configuration.
b) If it is the target of a current switchover operation.
c) If it is the designated failover database for Fast Start Failover (FSFO).

Be aware of the worst-case scenario: if this is your only standby database and you have to fail over to it, the failover will take longer, because the standby first has to be flashed back and caught up before the transition can complete. It is always recommended to have a Data Guard configuration with at least two physical standby databases, where one is always kept as a physical standby and the other can be used and tested as a snapshot standby.

Perform the following steps to convert a physical standby database into a snapshot standby database:

Step 1: Check if Flashback is enabled or not.

This conversion is achieved using Flashback Database, but the physical standby database does not need to have Flashback Database explicitly enabled; even if flashback is not enabled explicitly, the conversion will work.

i) SQL> select DB_UNIQUE_NAME, OPEN_MODE, DATABASE_ROLE from v$database;
NAME OPEN_MODE DATABASE_ROLE
---------------------------------------- ---------- ----------------
PHYSTD2 MOUNTED PHYSICAL STANDBY

ii) SQL> Show parameter db_recovery_file_dest
NAME TYPE VALUE
--------------------------------------------------------------- ----------- ------------------------------
db_recovery_file_dest string /u01/oradata/phystd2/FRA/
db_recovery_file_dest_size big integer 5G

iii) SQL> select flashback_on from v$database;
FLASHBACK
---------
YES

Step 2: Stop Redo Apply, if it is active.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Step 3: If the standby database is a RAC setup, we need to shut down all instances except the one on which we will run the conversion commands.

Step 4: Ensure that the database is mounted.

Step 5: Perform the conversion from physical standby to snapshot standby database:
The conversion from physical standby to snapshot standby database and vice versa can be done in three ways.

I). Manual method. This can be done by issuing the below SQL command in the physical standby database. Once it is converted the database can be opened in read/write mode.
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

II). Data Guard Broker
A physical standby database that is managed by the Data Guard broker can be converted into a snapshot standby database using either DGMGRL or Oracle Enterprise Manager.

DGMGRL> show configuration;

   Configuration - fsfo_config

   Protection Mode:     MaxAvailability

   Databases:
     prima - Primary database
     phystd1 - Physical standby database
     phystd2 - Physical standby database

   Fast-Start Failover: DISABLED

   Current status:
    SUCCESS

The DGMGRL command that converts the database is:
DGMGRL> CONVERT DATABASE standby_unique_name TO SNAPSHOT STANDBY

DGMGRL> CONVERT DATABASE phystd2 TO SNAPSHOT STANDBY
DGMGRL> show configuration;

Configuration - fsfo_config

Protection Mode:     MaxAvailability

Databases:
 prima - Primary database
 phystd1 - Physical standby database
 phystd2 - Snapshot standby database

Fast-Start Failover: DISABLED

Current status:
SUCCESS

III). Enterprise Manager (If the physical standby is managed by DGMGRL)

Step 6: The database is dismounted during this conversion and we have to restart the database, which will be the snapshot standby database.
SQL> startup

Step 7: Check for database role
SQL> Select DB_UNIQUE_NAME, OPEN_MODE, DATABASE_ROLE from v$database;
NAME OPEN_MODE DATABASE_ROLE
-------------------------------------- ---------- -------------------------------
PHYSTD2 READ WRITE SNAPSHOT STANDBY

Meanwhile it keeps receiving redo from the primary, which will be applied automatically when the snapshot standby database is converted back into a physical standby database.

When we convert a physical standby database to a snapshot standby database:
Redo gap detection and resolution continue to work as before.
A snapshot standby database cannot be the target of a switchover or failover; it must first be converted back into a physical standby database before a role transition to it can be performed.
In case of a role transition of one of the other standby databases (failover or switchover), the snapshot standby database continues to receive redo from the new primary.
Even if the primary database has a new incarnation (because of Flashback Database or an OPEN RESETLOGS), the snapshot standby database continues to receive redo from the primary.

Converting Snapshot Standby Database into Physical Standby Database
Oracle uses a guaranteed restore point and the Flashback Database feature to accomplish this conversion of a snapshot standby database back to a physical standby database. Restore point and Flashback Database concepts were first introduced in Oracle 10g. When we convert the physical standby database to a snapshot standby database, a guaranteed restore point is created. This guaranteed restore point is used to flash the snapshot standby database back to its original state. The name of this guaranteed restore point begins with 'SNAPSHOT_STANDBY_REQUIRED_'.

When the database is converted back, it is flashed back to the guaranteed restore point, put back into the full Data Guard configuration, and the redo is applied eventually catching the standby database back up to the primary. Once this is completed, the guaranteed restore point is removed automatically. Using the guaranteed restore point the snapshot standby database will be flashed back to its original state and subsequently the restore point will be dropped. After the conversion the redo will be applied which were received by the snapshot standby database.

Any operation that cannot be reversed using Flashback Database technology will prevent a snapshot standby from being converted back to a physical standby.

A snapshot standby database must be opened at least once in read-write mode before it can be converted into a physical standby database.

Perform the following steps to convert a snapshot standby database into a physical standby database:

Step 1: Check for current database role
SQL> Select DB_UNIQUE_NAME, OPEN_MODE, DATABASE_ROLE from v$database;
NAME      OPEN_MODE DATABASE_ROLE
----------------------------------------- ---------- ----------------
PHYSTD2 READ WRITE SNAPSHOT STANDBY

SQL> select name, guarantee_flashback_database from v$restore_point;
NAME                                                                       GUA
----------------------------------------------------------------- -------
SNAPSHOT_STANDBY_REQUIRED_08/18/2012 17:13:31   YES

Step 2: If the standby database is a RAC setup, we need to shut down all instances except the one on which we will run the conversion commands.

Step 3: Ensure that the database is mounted.

Step 4: Do the conversion of snapshot standby database to physical standby database.

I). Manual method.
Issue the following SQL statement to convert the snapshot standby back to the physical standby:
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

II). Data Guard Broker
DGMGRL> show configuration;

Configuration - fsfo_config

Protection Mode:     MaxAvailability

Databases:
 prima - Primary database
 phystd1 - Physical standby database
 phystd2 - Snapshot standby database

Fast-Start Failover: DISABLED

Current status:
SUCCESS

The DGMGRL command that converts the database is:
DGMGRL> CONVERT DATABASE standby_unique_name TO PHYSICAL STANDBY

DGMGRL> CONVERT DATABASE phystd2 TO PHYSICAL STANDBY
DGMGRL> show configuration;

Configuration - fsfo_config

Protection Mode:     MaxAvailability

Databases:
 prima - Primary database
 phystd1 - Physical standby database
 phystd2 - Physical standby database

Fast-Start Failover: DISABLED

Current status:
SUCCESS

III). Enterprise Manager (If the physical standby is managed by DGMGRL).

Step 5: Start database in mount, if not already mounted.
SQL> alter database mount standby database;

Step 6: Recover Physical Standby database
SQL> recover managed standby database disconnect from session;

Step 7: Check for database role
SQL> select DB_UNIQUE_NAME, OPEN_MODE, DATABASE_ROLE from v$database;
NAME OPEN_MODE DATABASE_ROLE
--------------------------------- ---------- ----------------
PHYSTD2 MOUNTED PHYSICAL STANDBY

Resolve archive gap between PRIMARY and STANDBY

A physical standby database stays in sync with the primary by continuously applying archive logs shipped from the primary database.

When the number of logs missing on the standby is small (fewer than 10-15), we can simply ship the missing archive log files from the primary database to the standby using scp/FTP and then register the log files on the standby to resolve the gap.

Use the query below to find the archive gap on the standby:

SELECT ARCH.THREAD# "Thread",
       ARCH.SEQUENCE# "Last Sequence Received",
       APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
     (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#
ORDER BY 1;

Thread  Last Sequence Received  Last Sequence Applied  Difference
------  ----------------------  ---------------------  ----------
1       9545                    9535                   10

SQL> SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED='YES';

MAX(SEQUENCE#)
--------------
9535

COPY ARCHIVELOG FILE TO STANDBY FROM PRIMARY:

$ scp log_file_n.arc oracle@standby:/log_location/log_file_n.arc

REGISTER LOGFILE AT STANDBY:

SQL> alter database register logfile '/log_location/log_file_n.arc';
logfile registered

Repeat the same process for all the log files that are missing on the standby.
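
To locate the missing log files on the primary, a query along these lines can be used (the sequence range comes from the gap query above):

SQL> select name from v$archived_log where thread#=1 and sequence# between 9536 and 9545 and name is not null;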

Resolve huge archive gap between PRIMARY and STANDBY

A physical standby database stays in sync with the primary by continuously applying archive logs shipped from the primary database.

When the gap on the standby is huge (say more than 500 logs), one option is to rebuild the standby database from scratch.

But what if the database size is in terabytes and a rebuild would take too long? As an enhancement from 10g onwards, an incremental backup created with BACKUP INCREMENTAL ... FROM SCN can be used to roll the standby database forward instead.

Use the query below to find the archive gap on the standby:

SELECT ARCH.THREAD# "Thread",
       ARCH.SEQUENCE# "Last Sequence Received",
       APPL.SEQUENCE# "Last Sequence Applied",
       (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
     (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
      WHERE (THREAD#, FIRST_TIME) IN (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE ARCH.THREAD# = APPL.THREAD#
ORDER BY 1;

Thread  Last Sequence Received  Last Sequence Applied  Difference
------  ----------------------  ---------------------  ----------
1       8254                    7954                   300

Find the SCN on the PRIMARY:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
242671761

Find the SCN on the STANDBY:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
223771173

You can clearly see there is a difference.

Stop and shutdown the managed standby apply process:

SQL> alter database recover managed standby database cancel;

Database altered.

Shutdown the standby database

SQL> shut immediate

On the primary, take an incremental backup starting from the standby's current SCN (223771173):

RMAN> run {
        allocate channel c1 type disk format '/backup/%U.bkp';
        backup incremental from scn 223771173 database;
      }
On the primary, create a new standby controlfile:

SQL> alter database create standby controlfile as '/backup/for_standby.ctl';

Database altered.

Copy the standby controlfile to the STANDBY server, place it at the locations specified by the control_files parameter, and bring up the standby instance in nomount status with the standby controlfile:

SQL> startup nomount

SQL> alter database mount standby database;

Connect to RMAN on the STANDBY and catalog the backup files using the commands below:

$ rman target=/

RMAN> catalog start with '/backup';

PERFORM RECOVER:

RMAN> recover database;

Start managed recovery process:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

Check the SCNs on the primary and the standby; they should now be close to each other.