
Friday, July 8, 2016

Oracle Database 12c New Features – Part 3

In this Oracle Database 12c new features article series, I shall be exploring some of the miscellaneous, yet very useful, new additions and enhancements introduced in the areas of Clusterware, ASM, and the RAC database.

Also Read Oracle Database 12c New Features – Part 1 & Part 2

Part 3 covers:

1. Additions/Enhancements in ASM

2. Additions/Enhancements in Grid Infrastructure

3. Additions/Enhancements in Real Application Cluster (database)

1. Additions/Enhancements in Automatic Storage Management (ASM)

In a typical Grid Infrastructure installation, each node runs its own ASM instance, which acts as the storage container for the databases running on that node. This setup carries a single point-of-failure risk: if the ASM instance on a node fails, all the databases and instances running on that node are impacted. To avoid this, Oracle 12c provides the Flex ASM feature. Flex ASM is an altogether different concept and architecture: only a small number of ASM instances (three by default) need to run on a group of servers in the cluster. When an ASM instance fails on a node, Oracle Clusterware automatically starts a replacement ASM instance on a different node to maintain availability. In addition, this setup also provides ASM instance load balancing capabilities for the database instances running on the nodes. Another advantage of Flex ASM is that it can be configured on a separate node.
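
As a quick illustration, the number of ASM instances that Oracle Clusterware keeps running (the ASM cardinality) can be checked and changed with srvctl; the count value below is only an example:

$ ./srvctl status asm -detail
$ ./srvctl modify asm -count 3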

When you choose the Flex Cluster option as part of the cluster installation, the Flex ASM configuration is selected automatically, as it is required by a Flex Cluster. You can also run a traditional standard cluster over Flex ASM. When you decide to use Flex ASM, you must ensure the required networks are available. You can choose the Flex ASM storage option as part of the cluster installation, or use ASMCA to enable Flex ASM in a standard cluster environment, as sketched below.
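
The following is a sketch of enabling Flex ASM on an existing standard cluster with ASMCA in silent mode; the interface name, subnet, and listener port are placeholders for your own environment, and ASMCA generates a script that must be run as root to complete the conversion:

$ ./asmca -silent -convertToFlexASM -asmNetworks eth1/192.168.10.0 -asmListenerPort 1521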

The following commands show the current ASM mode:


$ ./asmcmd showclustermode

$ ./srvctl config asm

Or connect to the ASM instance and query the INSTANCE_TYPE parameter. If the output value is ASMPROX, then Flex ASM is configured.
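
For example, from a SQL*Plus session connected to the ASM instance; on a Flex ASM (proxy) instance the value returned is ASMPROX:

SQL> SHOW PARAMETER INSTANCE_TYPE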

Increased ASM storage limits

The hard limits on the maximum number of ASM disk groups and on ASM disk size have been drastically increased. In 12cR1, ASM supports 511 disk groups, against 63 in 11gR2. Also, an ASM disk can now be 32PB in size, against 20PB in 11gR2.

Tuning ASM rebalance operations

The new EXPLAIN WORK FOR statement in 12c measures the amount of work required for a given ASM rebalance operation and reports the result in the V$ASM_ESTIMATE dynamic view. Using this view, you can adjust the POWER limit to improve the rebalance operation. For example, if you want to measure the amount of work required for adding a new ASM disk before actually running the manual rebalance operation, you can use the following:

SQL> EXPLAIN WORK FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;

SQL> SELECT est_work FROM V$ASM_ESTIMATE;

SQL> EXPLAIN WORK SET STATEMENT_ID='ADD_DISK' FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;

SQL> SELECT est_work FROM V$ASM_ESTIMATE WHERE STATEMENT_ID = 'ADD_DISK';

You can adjust the POWER limit based on the output you get from the dynamic view to improve the rebalance operation.
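
If the estimated work is high, you can then run the rebalance explicitly with a higher power value; the value 8 below is only illustrative (POWER accepts values from 0 to 1024 in 12c, subject to the disk group compatibility setting):

SQL> ALTER DISKGROUP DG_DATA REBALANCE POWER 8;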

ASM Disk Scrubbing

The new ASM disk scrubbing operation, available on an ASM disk group with normal or high redundancy, checks all ASM disks of that disk group for logical data corruption and, if any is detected, repairs it automatically using the ASM mirror copies. Disk scrubbing can be performed on a disk group, on a specified disk, or on a file, and its performance impact is minimal. The following examples demonstrate disk scrubbing scenarios:

SQL> ALTER DISKGROUP dg_data SCRUB POWER LOW|HIGH|AUTO|MAX;

SQL> ALTER DISKGROUP dg_data SCRUB FILE '+DG_DATA/MYDB/DATAFILE/filename.xxxx.xxxx'
REPAIR POWER AUTO;
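
Scrubbing can also target a single disk of the disk group; the disk name below is hypothetical:

SQL> ALTER DISKGROUP dg_data SCRUB DISK DATA_0005 REPAIR POWER LOW;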

Active Session History (ASH) for ASM

The V$ACTIVE_SESSION_HISTORY dynamic view now provides active session sampling on ASM instances too. However, the use of this feature is subject to the Diagnostics Pack license.
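
As a simple example, the following query run against the ASM instance lists the most recent samples; the columns are the same as in the database version of the view:

SQL> SELECT sample_time, session_id, event
     FROM V$ACTIVE_SESSION_HISTORY
     ORDER BY sample_time DESC;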

2. Additions/Enhancements in Grid Infrastructure

Oracle 12c supports two types of cluster configuration at the time of Clusterware installation: Traditional Standard Cluster and Flex Cluster. In a traditional standard cluster, all nodes in the cluster are tightly integrated with each other, interact through a private network, and can access the storage directly. The Flex Cluster, on the other hand, introduces two types of nodes arranged in a Hub-and-Leaf architecture. The nodes in the Hub category are similar to those of a traditional standard cluster: they are interconnected with each other through a private network and have direct read/write access to the storage. The Leaf nodes are different from the Hub nodes: they don’t need direct access to the underlying storage; rather, they access the storage/data through the Hub nodes.

You can configure up to 64 Hub nodes, while the number of Leaf nodes can be much larger. In an Oracle Flex Cluster, you can have Hub nodes without any Leaf nodes configured, but no Leaf nodes can exist without Hub nodes, and multiple Leaf nodes can be attached to a single Hub node. Only the Hub nodes have direct access to the OCR and voting disks. When you plan large-scale cluster environments, this is a great feature to use: it greatly reduces interconnect traffic and provides room to scale the cluster well beyond a traditional standard cluster.

There are two ways to deploy the Flex Cluster:

Configure a brand new cluster as a Flex Cluster
Upgrade an existing standard cluster to a Flex Cluster

If you are configuring a brand new cluster, choose the cluster type during step 3 by selecting the Configure a Flex Cluster option; you will then have to categorize the Hub and Leaf nodes on step 6. Against each node, select its role, Hub or Leaf, and optionally a Virtual Hostname too.

The following steps are required to convert a standard cluster mode to Flex Cluster mode:

1. Get the current status of the cluster using the following command:

$ ./crsctl get cluster mode status

2. Run the following commands as the root user:

$ ./crsctl set cluster mode flex
$ ./crsctl stop crs
$ ./crsctl start crs -wait

3. Change the node role as per your design:

$ ./crsctl get node role config
$ ./crsctl set node role hub|leaf
$ ./crsctl stop crs
$ ./crsctl start crs -wait

Note the following:

You can’t revert from Flex to Standard cluster mode
Cluster node mode change requires a cluster stack stop/start
Ensure GNS is configured with a fixed VIP
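
You can verify the GNS configuration, including its VIP, with the following command:

$ ./srvctl config gns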


OCR backup in ASM disk group

With 12c, the OCR can now be backed up in an ASM disk group. This simplifies access to the OCR backup files across all nodes. In the case of an OCR restore, you don’t need to worry about which node holds the latest OCR backup: you can simply identify the latest backup stored in ASM from any node and perform the restore easily.

The following example demonstrates how to set the ASM disk group as OCR backup location:

$ ./ocrconfig -backuploc +DG_OCR
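
To identify the latest backup from any node, you can then list the existing OCR backups:

$ ./ocrconfig -showbackup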

IPv6 support

With Oracle 12c, Oracle now supports IPv4 and IPv6 network protocol configurations on the same network. You can now configure the public network (Public/VIP) on IPv4, IPv6, or a combination of the two. However, ensure you use the same IP protocol configuration across all nodes in a cluster.
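
You can review the resulting network configuration, including the protocol in use, with the following command:

$ ./srvctl config network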

3. Additions/Enhancements in RAC (database)

What-If command evaluation

Using the new What-If command evaluation (-eval) option with srvctl, one can now determine the impact of running a command. This new addition lets you simulate a command without it actually being executed or making any changes to the current system. It is particularly useful when you want to make a change to an existing system and are not sure of the outcome: the command reports the effect the change would have. The -eval option can also be used with the crsctl command.

For example, if you want to know what will happen if you stop a particular database, you can use the following example:

$ ./srvctl stop database -d MYDB -eval
$ ./crsctl eval modify resource <resource_name> -attr "value"

Miscellaneous srvctl improvements

There are a few new additions to the srvctl command. The following demonstrates the new options to start and stop database and instance resources on the cluster; a usage example follows below:

srvctl start database|instance -startoption NOMOUNT|MOUNT|OPEN
srvctl stop database|instance -stopoption NORMAL|TRANSACTIONAL|IMMEDIATE|ABORT
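
For example, to start a database in mount mode (the database name MYDB is a placeholder):

$ ./srvctl start database -d MYDB -startoption MOUNT
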
The next article will focus on the top developer-oriented features of 12c.
