RAC Background Processes

RAC Processes

A RAC instance has a few additional background processes beyond those found in a single-instance database. These processes are primarily used to maintain database coherency among the instances. They manage what are called the global resources:
1) LMON: Global Enqueue Service Monitor
2) LMD: Global Enqueue Service Daemon
3) LMS: Global Cache Service processes
4) LCK: Lock process
5) DIAG: Diagnosability process

At the cluster level, you find the main processes of Oracle Clusterware. They provide a standard cluster interface on all platforms and perform high-availability operations. You find these processes on each node of the cluster:
1) CRSD and RACGIMON: Are engines for high-availability operations
2) OCSSD: Provides access to node membership and group services
3) EVMD: Scans the callout directory and invokes callouts in reaction to detected events
4) OPROCD: Is a process monitor for the cluster (not used on Linux and Windows)

Global Resources Coordination


Cluster operations require synchronization among all instances to control shared access to resources. RAC uses the Global Resource Directory (GRD) to record information about how resources are used within a cluster database. The Global Cache Services (GCS) and Global Enqueue Services (GES) manage the information in the GRD.
Each instance maintains a part of the GRD in its System Global Area (SGA). The GCS and GES nominate one instance to manage all information about a particular resource. This instance is called the resource master. Also, each instance knows which instance masters which resource. Maintaining cache coherency is an important part of RAC activity. Cache coherency is the technique of keeping multiple copies of a block consistent between different Oracle instances. GCS implements cache coherency by using what is called the Cache Fusion algorithm.
The GES manages all non-Cache Fusion interinstance resource operations and tracks the status of all Oracle enqueuing mechanisms. The primary resources the GES controls are dictionary cache locks and library cache locks. The GES also performs deadlock detection for all deadlock-sensitive enqueues and resources.
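The GCS workload that results from this coordination can be observed from any instance. As an illustrative sketch (statistic names can vary slightly between Oracle versions), the GV$SYSSTAT view exposes per-instance Cache Fusion block-transfer counters:
SQL> SELECT inst_id, name, value
  2  FROM gv$sysstat
  3  WHERE name IN ('gc cr blocks received', 'gc current blocks received')
  4  ORDER BY inst_id, name;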

Global Cache Coordination: Example


The scenario described in the slide assumes that the data block has been changed, or dirtied, by the first instance. Furthermore, only one copy of the block exists clusterwide, and the content of the block is represented by its SCN.

1. The second instance attempting to modify the block submits a request to the GCS.
2. The GCS transmits the request to the holder. In this case, the first instance is the holder.
3. The first instance receives the message and sends the block to the second instance. The first instance retains the dirty buffer for recovery purposes. This dirty image of the block is also called a past image of the block. A past image block cannot be modified further.
4. On receipt of the block, the second instance informs the GCS that it holds the block.
Note: The data block is not written to disk before the resource is granted to the second instance.

 

Write to Disk Coordination: Example


The scenario described in the slide illustrates how an instance can perform a checkpoint at any time or replace buffers in the cache in response to free buffer requests. Because multiple versions of the same data block with different changes can exist in the caches of instances in the cluster, a write protocol managed by the GCS ensures that only the most current version of the data is written to disk. It must also ensure that all previous versions are purged from the other caches. A write request for a data block can originate in any instance that has the current or past image of the block. In this scenario, assume that the first instance, holding a past image buffer, requests that the Oracle server write the buffer to disk:

1. The first instance sends a write request to the GCS.
2. The GCS forwards the request to the second instance, which is the holder of the current version of the block.
3. The second instance receives the write request and writes the block to disk.
4. The second instance records the completion of the write operation with the GCS.
5. After receipt of the notification, the GCS orders all past image holders to discard their past images. These past images are no longer needed for recovery.
Note: In this case, only one I/O is performed to write the most current version of the block to disk.
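As a sketch of how such transfers can be monitored (column names as of 10g; check the reference for your version), V$INSTANCE_CACHE_TRANSFER reports blocks transferred to the local instance from each other instance:
SQL> SELECT instance, class, cr_block, current_block
  2  FROM v$instance_cache_transfer
  3  WHERE cr_block > 0 OR current_block > 0;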

Redo Transport Services


Redo transport services control the automated transfer of redo data from the primary database to one or more standby destinations.

Configuring Destinations with the LOG_ARCHIVE_DEST_n Parameter

The LOCATION and SERVICE attributes describe either a local disk location or an Oracle Net service name that represents a standby destination to which redo transport services will transmit redo data. Alternatively, the LOCATION attribute can be set to USE_DB_RECOVERY_FILE_DEST to archive to the flash recovery area.
For every LOG_ARCHIVE_DEST_n initialization parameter that is defined, a corresponding LOG_ARCHIVE_DEST_STATE_n parameter is set. This parameter can be set to ENABLE or DEFER to enable or defer archive shipping to that destination.
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_2='SERVICE=boston'
LOG_ARCHIVE_DEST_STATE_2=ENABLE
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST ARCH MANDATORY REOPEN=5 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
Query the V$ARCHIVE_DEST view to see current settings of the LOG_ARCHIVE_DEST_n initialization parameter.
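For example, a quick sanity check of the configured destinations might look like this (a sketch; V$ARCHIVE_DEST has many more columns):
SQL> SELECT dest_id, status, target, destination, error
  2  FROM v$archive_dest
  3  WHERE status <> 'INACTIVE';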

How to Send Redo Data

On the primary database, Oracle Data Guard uses archiver processes (ARCn) or the log writer process (LGWR) to collect transaction redo data and transmit it to standby destinations.
The following sections describe how ARCn and LGWR transmit redo data to standby sites.

Using Archiver Processes (ARCn) to Archive Redo Data
By default, redo transport services use ARCn processes to archive the online redo log files on the primary database. ARCn archival processing supports only the maximum performance level of data protection in Data Guard configurations. The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the maximum number of ARCn processes; the default is four archiver processes.
Because ARCn processing is the default archival behavior, specifying the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter is optional.
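Because LOG_ARCHIVE_MAX_PROCESSES is dynamic, the number of archiver processes can be changed without a restart; for example (the value 6 is purely illustrative):
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=6 SCOPE=BOTH;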
Using the Log Writer Process (LGWR) to Archive Redo Data
The LGWR process writes synchronously to the local online redo log files at the same time it transmits redo data to the remote destination. Using the LGWR process differs from ARCn processing: instead of waiting for the online redo log to switch at the primary database and then writing the entire archived redo log at the remote destination all at once, the LGWR process selects a standby redo log file at the standby site that reflects the log sequence number (and size) of the current online redo log file of the primary database. Then, as redo is generated at the primary database, it is also transmitted to the remote destination. The transmission to the remote destination is either synchronous or asynchronous, depending on whether the SYNC or the ASYNC attribute is set on the LOG_ARCHIVE_DEST_n parameter. Synchronous LGWR processing is required for the maximum protection and maximum availability modes of data protection in Data Guard configurations.
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago'
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR SYNC NET_TIMEOUT=30'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE

LGWR ASYNC Archival Processing

LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago'
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
When the LGWR and ASYNC attributes are specified, the log writer process writes to the local online redo log file, while the network server (LNSn) processes (one for each destination) asynchronously transmit the redo to remote destinations. The LGWR process continues processing the next request without waiting for the LNS network I/O to complete.
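One way to watch either transport mode in practice (a sketch; run on the standby) is to query V$MANAGED_STANDBY, which shows the RFS processes receiving redo and the apply processes consuming it:
SQL> SELECT process, status, thread#, sequence#, block#
  2  FROM v$managed_standby;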

Providing for Secure Redo Data Transmission

Redo transport services use authenticated network sessions to transfer redo data. These sessions are authenticated using the SYS user password contained in the password file. All databases in the Data Guard configuration must use a password file, and the SYS password contained in this password file must be identical on all systems.

Specifying Role-Based Destinations with the VALID_FOR Attribute
The VALID_FOR attribute enables you to configure destination attributes for both the primary and standby database roles in one server parameter file (SPFILE), so that your Data Guard configuration operates properly after a role transition. This simplifies switchovers and failovers by removing the need to enable and disable the role-specific parameter files after a role transition.
To configure these factors for each LOG_ARCHIVE_DEST_n destination, you specify this attribute with a pair of keywords: VALID_FOR=(redo_log_type,database_role). The redo_log_type keyword identifies the destination as valid for archiving ONLINE_LOGFILE, STANDBY_LOGFILE, or ALL_LOGFILES. The database_role keyword identifies the role the current database must be in for the destination to be valid: PRIMARY_ROLE, STANDBY_ROLE, or ALL_ROLES. It is therefore recommended that you define a VALID_FOR attribute for each destination so that your Data Guard configuration operates properly, including after a role transition.

Specify Unique Names for Primary and Standby Databases

The DB_UNIQUE_NAME attribute enables you to specify unique database names when you configure destinations. Together, the DB_UNIQUE_NAME attribute of the LOG_ARCHIVE_DEST_n parameter and the DG_CONFIG attribute of the LOG_ARCHIVE_CONFIG parameter specify the unique name of each database in the Data Guard configuration.
DB_NAME=chicago
DB_UNIQUE_NAME=chicago
LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)'
LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES)'
LOG_ARCHIVE_DEST_2='SERVICE=boston LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'

Choosing a Data Protection Mode

Maximum Protection Mode
This protection mode ensures that no data loss will occur if the primary database fails. To provide this level of protection, the redo data needed to recover each transaction must be written to both the local online redo log and to the standby redo log on at least one standby database before the transaction commits. To ensure data loss cannot occur, the primary database shuts down if a fault prevents it from writing its redo stream to at least one remote standby redo log.
The maximum protection mode requires that at least one standby instance have a standby redo log and that the LGWR, SYNC, and AFFIRM attributes be used on the LOG_ARCHIVE_DEST_n parameter for this destination.

Maximum Availability Mode

This protection mode provides the highest level of data protection that is possible without compromising the availability of the primary database. Like maximum protection mode, a transaction will not commit until the redo needed to recover that transaction is written to the local online redo log and to at least one remote standby redo log. Unlike maximum protection mode, the primary database does not shut down if a fault prevents it from writing its redo stream to a remote standby redo log. Instead, the primary database operates in maximum performance mode until the fault is corrected and all gaps in redo log files are resolved. When all gaps are resolved, the primary database automatically resumes operating in maximum availability mode.
This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.
Like maximum protection mode, the maximum availability mode requires that you:
•Configure standby redo log files on at least one standby database.
•Set the SYNC, LGWR, and AFFIRM attributes of the LOG_ARCHIVE_DEST_n parameter for at least one standby database.

Maximum Performance Mode

This protection mode (the default) provides the highest level of data protection that is possible without affecting the performance of the primary database. This is accomplished by allowing a transaction to commit as soon as the redo data needed to recover that transaction is written to the local online redo log. The primary database’s redo data stream is also written to at least one standby database, but that redo stream is written asynchronously with respect to the commitment of the transactions that create the redo data.
When network links with sufficient bandwidth are used, this mode provides a level of data protection that approaches that of maximum availability mode with minimal impact on primary database performance.
The maximum performance mode enables you to either set the LGWR and ASYNC attributes, or set the ARCH attribute, on the LOG_ARCHIVE_DEST_n parameter for the standby database destination. If the primary database fails, you can reduce the amount of data that has not been received on the standby destination by using the LGWR and ASYNC attributes.
Table 5-2 Minimum Requirements for Data Protection Modes

                            Maximum Protection   Maximum Availability   Maximum Performance
Redo archival process       LGWR                 LGWR                   LGWR or ARCH
Network transmission mode   SYNC                 SYNC                   SYNC or ASYNC with LGWR; SYNC with ARCH
Disk write option           AFFIRM               AFFIRM                 AFFIRM or NOAFFIRM
Standby redo log required?  Yes                  Yes                    No, but recommended
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=chicago
2> OPTIONAL LGWR SYNC AFFIRM
3> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
4> DB_UNIQUE_NAME=chicago';

To upgrade the protection mode, perform these steps:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston
2> OPTIONAL LGWR SYNC AFFIRM
3> VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
4> DB_UNIQUE_NAME=boston';
SQL> SELECT PROTECTION_MODE, PROTECTION_LEVEL FROM V$DATABASE;

PROTECTION_MODE PROTECTION_LEVEL
——————— ———————
MAXIMUM AVAILABILITY MAXIMUM AVAILABILITY

Where standby log files will be written

Typically, when redo data is received from the primary database, the redo data is written to archived redo log files that are stored in the directory you specify with the LOCATION attribute of the LOG_ARCHIVE_DEST_n parameter. Alternatively, you can specify the STANDBY_ARCHIVE_DEST initialization parameter on the standby database to indicate an alternate directory where the archived redo log files are to be stored when received from the primary database.
Using the Fetch Archive Log (FAL) to Resolve Archive Gaps
The fetch archive log (FAL) client and server resolve gaps detected in the range of archived redo log files generated at the primary database and received at the physical standby database.
• The FAL client requests the transfer of archived redo log files automatically.
• The FAL server services the FAL requests coming from the FAL client.
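On a physical standby, gap resolution is typically configured with the FAL_SERVER and FAL_CLIENT initialization parameters; as a sketch (the service names follow the chicago/boston examples above):
SQL> ALTER SYSTEM SET FAL_SERVER='chicago' SCOPE=BOTH;
SQL> ALTER SYSTEM SET FAL_CLIENT='boston' SCOPE=BOTH;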

Difference Between LOG_ARCHIVE_DEST and LOG_ARCHIVE_DEST_1


LOG_ARCHIVE_DEST is applicable only if the database is running in ARCHIVELOG mode. The LOG_ARCHIVE_DEST parameter is used to specify the archiving location, and the location it specifies must be local. With this parameter we can archive to at most two local locations: a primary and a secondary destination (using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST).
The LOG_ARCHIVE_DEST_n initialization parameter defines up to ten destinations (n = 1, 2, ... 10) in Oracle 10g and up to thirty-one (n = 1, 2, ... 31) in Oracle 11g, each of which must specify either the LOCATION or the SERVICE attribute to indicate where to archive the redo data. All other attributes are optional. We set the attributes of the LOG_ARCHIVE_DEST_n initialization parameter to control different aspects of how redo transport services transfer redo data from a production or primary database destination to another (standby) database destination. For every LOG_ARCHIVE_DEST_n initialization parameter that we define, we must specify a corresponding LOG_ARCHIVE_DEST_STATE_n parameter, which specifies whether the corresponding destination is currently enabled or disabled.
LOG_ARCHIVE_DEST is incompatible with the LOG_ARCHIVE_DEST_n parameters and must be defined as the null string ('') when any LOG_ARCHIVE_DEST_n parameter has a value other than a null string.

In Oracle Enterprise Edition, the LOG_ARCHIVE_DEST parameter is deprecated in favor of the LOG_ARCHIVE_DEST_n parameters. If Oracle Enterprise Edition is not installed, or it is installed but no LOG_ARCHIVE_DEST_n parameters are specified, then the LOG_ARCHIVE_DEST parameter is valid. If LOG_ARCHIVE_DEST is set and we need more than two archive locations, follow these steps:
SQL> alter system set log_archive_dest='' scope=both;
SQL> shutdown immediate
SQL> startup
SQL> alter system set log_archive_dest_1='location=C:\archive';
SQL> alter system set log_archive_dest_2='location=D:\oracle\archive';

11g Data Guard Broker DGMGRL Configuration quick steps


This note describes the commands used to create a Data Guard broker configuration using the command-line dgmgrl interface. This can also be done via the Enterprise Manager Grid Control GUI. The assumption is that a physical standby database is in place, redo transport and redo apply have already been configured, and the primary and standby databases are in sync.

Primary Database: genoa1_js

Standby Database: genoa1_fc

On both Primary as well as Standby database start the Data Guard Broker process

SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE SCOPE=BOTH;
System altered.

Edit the listener.ora on both nodes to add a static entry for DGMGRL

This is to prevent the ORA-12154 error which can be observed on startup of the standby database after performing a switchover.

Ensure that the GLOBAL_DBNAME is set to db_unique_name_DGMGRL.db_domain

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = genoa1_js_dgmgrl)
      (ORACLE_HOME = /u01/oracle/product/11.1.0/db_1)
      (SID_NAME = genoa1)
    )
  )

Create the configuration

genoa1:/u01/oracle> dgmgrl
DGMGRL for IBM/AIX RISC System/6000: Version 11.1.0.7.0 - 64bit Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/xxxx
Connected.
DGMGRL> CREATE CONFIGURATION 'GenesisDR'
> AS
> PRIMARY DATABASE IS 'genoa1_js'
> CONNECT IDENTIFIER IS 'genoa1_js'
> ;
Configuration "GenesisDR" created with primary database "genoa1_js"

Add the Standby database to the configuration

DGMGRL> ADD DATABASE 'genoa1_fc'
> AS CONNECT IDENTIFIER IS 'genoa1_fc'
> ;
Database "genoa1_fc" added

DGMGRL> SHOW CONFIGURATION

Configuration
  Name:                GenesisDR
  Enabled:             NO
  Protection Mode:     MaxAvailability
  Databases:
    genoa1_js - Primary database
    genoa1_fc - Physical standby database

Fast-Start Failover: DISABLED

Current status for "GenesisDR":
DISABLED

Enable the configuration

DGMGRL> ENABLE CONFIGURATION
Enabled.

DGMGRL> SHOW CONFIGURATION

Configuration
  Name:                GenesisDR
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Databases:
    genoa1_js - Primary database
    genoa1_fc - Physical standby database

Fast-Start Failover: DISABLED

Current status for "GenesisDR":
SUCCESS

View the Standby and Primary database properties

DGMGRL> show database genoa1_js
DGMGRL> show database verbose genoa1_fc

Change the properties of a configured database

DGMGRL> EDIT DATABASE 'genoa1_js' SET PROPERTY 'LogXptMode'='SYNC';
Property "LogXptMode" updated
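Once LogXptMode is SYNC on both databases, the broker can also be used to raise the protection mode; a sketch:

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;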

Data Guard Protection Modes


Maximum Availability

In this protection mode, transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one synchronized standby database. If the primary database cannot write its redo stream to at least one synchronized standby database, it operates as if it were in maximum performance mode, to preserve primary database availability, until it is again able to write its redo stream to a synchronized standby database.

This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.

Maximum Performance

This protection mode provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log. Redo data is also written to one or more standby databases, but this is done asynchronously with respect to transaction commitment, so primary database performance is unaffected by delays in writing redo data to the standby database(s).

This protection mode offers slightly less data protection than maximum availability mode and has minimal impact on primary database performance. This is the default protection mode.

Maximum Protection

This protection mode ensures that zero data loss occurs if a primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one synchronized standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions, if it cannot write its redo stream to at least one synchronized standby database.

Because this data protection mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used to protect a primary database that runs in maximum protection mode to prevent a single standby database failure from causing the primary database to shut down.

Table 5-1 Required Redo Transport Attributes for Data Protection Modes

Attribute           Maximum Availability   Maximum Performance   Maximum Protection
Disk write option   AFFIRM                 NOAFFIRM              AFFIRM
Transmission mode   SYNC                   ASYNC                 SYNC
Database name       DB_UNIQUE_NAME         DB_UNIQUE_NAME        DB_UNIQUE_NAME

Create Oracle ASM disks on AIX

Below are the steps for creating ASM disks at the OS level and in the Oracle ASM instance, on AIX 5.3 and above.

Source: Internet

Step-1) Ask the storage admin to provide LUNs of the specified size to be used as OCR/voting/ASM disks.

Step-2) The storage team will provide the LUN IDs of the disks. Run the following commands (as the root user) to discover the new disks:
1)   $cfgmgr    # discovers newly added disks on the server
2)   $datapath query essmap | grep -w <LUN_ID>

Step-3) The hdisk name is now known from the output of the command above. Use its major (20) and minor (152) numbers to create a character device, i.e. the ASM disk:
$cd /dev
$mknod asmdisk01 c 20 152

Step-4) Change the ownership of the ASM disk to the oracle user.
$chown oracle:dba /dev/asmdisk01

Step-5) /dev/asmdisk01 now appears as a CANDIDATE disk in the ASM instance.
Log in to the server as the oracle user, set the Oracle ASM/Grid environment variables, and check:
$ sqlplus / as sysasm
SQL> select header_status,path from v$asm_disk;

Step-6) Create asm diskgroup using this disk.
SQL> create diskgroup DG_VG01 external redundancy disk '/dev/asmdisk01';
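To confirm the diskgroup was created and mounted, a query such as the following can be run in the ASM instance (a sketch):
SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;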

Note: Commands to check properties of disks on AIX
1) $lsattr -El hdisk1

If we are using Oracle RAC/Grid, we should verify that the "reserve_policy" attribute of each hdisk is set to "no_reserve"; otherwise the disks will not be visible/shared on all nodes of the cluster, and this may lead to database corruption.

2) We can use the command below to list the LUN IDs of multiple disks in a single command.

ls -ltr | grep "`ls -ltr /dev/asmdisk243 | cut -c 39-45`" | grep " hdisk" | awk '{print $10}' | awk '{print "lsattr -El ",$1," |grep lun_id"}' | sh

In the above command we may need to replace awk '{print $10}' with awk '{print $9}' depending on the major/minor number length (if the command fails).
3) We can use the command below to find the size of raw disks on AIX:
    $pcmpath query essmap | grep path0 | egrep 'lun-id1 | lun-id2'
    Example:
    $pcmpath query essmap | grep path0 | egrep '75PL83115D2 | 75PL83115D3'

Steps To Migrate a Database From Non-ASM to ASM

Prerequisite: an ASM instance should be created and up and running.
1) Check the database version and status

SQL> select INSTANCE_NAME,VERSION,DATABASE_STATUS from v$instance;

INSTANCE_NAME VERSION DATABASE_STATUS
—————- —————– —————–
tempdb 11.2.0.3.0 ACTIVE

2) Check that the ASM disk created earlier is visible from the instance

SQL> select NAME,STATE,TOTAL_MB,PATH from v$asm_disk;
NAME STATE TOTAL_MB PATH
—————————— ——– ———- ———-
DISK1 NORMAL 4777 ORCL:DISK1

Migrating the SPFILE from Non-asm to ASM
1) If you are using an spfile, take a backup of it using RMAN so it can be restored to an ASM disk
RMAN>BACKUP AS BACKUPSET SPFILE;

2) Now shutdown the database and start it in Mount stage

3) Now restore the spfile from the backup to ASM disk as below
RMAN> RESTORE SPFILE TO '+DATA/spfilelearndb.ora';

4) Note that the spfile parameter still points to the file system instead of the ASM disk
SQL> show parameter spfile
NAME TYPE VALUE
———————————— ———– ——————————
spfile string /opt/oracle/product/11.2.0/dbhome_1/dbs/spfilelearndb.ora

5) Shutdown the instance

6) Delete spfile<sid>.ora and init<sid>.ora from the $ORACLE_HOME/dbs directory, create a new init<sid>.ora with the following line of content, and start the instance:

vi initlearndb.ora
SPFILE='+DATA/spfilelearndb.ora'

SQL> Startup

SQL> show parameter spfile
NAME TYPE VALUE
———————————— ———– ——————————
spfile string +DATA/spfilelearndb.ora

Migrating Control files from Non-ASM to ASM
1) These are the current control files in non-asm

SQL> show parameter control_files
control_files string /u01/data/learndb/control01.ctl, /u01/data/learndb/control02.ctl

SQL> shutdown immediate

SQL> startup nomount

Move these control files to ASM using RMAN
RMAN>restore controlfile to ‘+DATA’ from ‘/u01/data/learndb/control01.ctl’;

2) Update the spfile with new control file location

SQL> alter system set control_files='+DATA/LEARNDB/CONTROLFILE/current.256.787200593' scope=spfile;

3) Now shutdown the instance and startup in mount stage and check for the control file location
SQL> show parameter control_files

NAME TYPE VALUE
———————————— ———– ——————————
control_files string +DATA/learndb/controlfile/current.256.787200593
Migrating the Datafiles from Non-ASM to ASM

1) Keep the database in mount stage

2) Use RMAN “BACKUP AS COPY” method to migrate the datafiles from filesystem to ASM

RMAN> BACKUP AS COPY DATABASE FORMAT '+DATA';

3) Switch the database to the copy created using the following RMAN command

RMAN> SWITCH DATABASE TO COPY;

datafile 1 switched to datafile copy "+DATA/learndb/datafile/system.258.787201633"
datafile 2 switched to datafile copy "+DATA/learndb/datafile/sysaux.257.787201553"
datafile 3 switched to datafile copy "+DATA/learndb/datafile/undotbs1.259.787201713"
datafile 4 switched to datafile copy "+DATA/learndb/datafile/users.261.787201725"

4) Migrate the tempfile to ASM using RMAN

RMAN> run
2> {
3> set newname for tempfile '/u01/data/learndb/temp01.dbf' to '+DATA';
4> switch tempfile all;
5> }

5) Now open the Database

RMAN> ALTER DATABASE OPEN;

database opened

6) You can now check the datafiles created in ASM

SQL> select FILE_NAME from dba_data_files;

FILE_NAME
——————————————————————————–
+DATA/learndb/datafile/users.261.787201725
+DATA/learndb/datafile/undotbs1.259.787201713
+DATA/learndb/datafile/sysaux.257.787201553
+DATA/learndb/datafile/system.258.787201633
Migrating the Redo log files from Non-ASM to ASM

1) Identify the currently available redo log files using the following command

SQL> SELECT a.group#, b.member, a.status FROM v$log a, v$logfile b WHERE a.group#=b.group#;
GROUP# MEMBER STATUS
———- ——— ———–
3 /u01/data/learndb/redo03.log INACTIVE
2 /u01/data/learndb/redo02.log CURRENT
1 /u01/data/learndb/redo01.log INACTIVE

2) Add the new logfiles to ASM using following command

ALTER DATABASE ADD LOGFILE MEMBER '+DATA' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '+DATA' TO GROUP 2;
ALTER DATABASE ADD LOGFILE MEMBER '+DATA' TO GROUP 3;

3) Now drop the old logfiles. A redo log member can only be dropped after it has been archived and its group is INACTIVE. If needed, switch logfiles multiple times until the logfile is ready for dropping.
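As a sketch (the file names follow the earlier listing; verify that the group owning each member is INACTIVE first), the old file-system members can be dropped like this:
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER DATABASE DROP LOGFILE MEMBER '/u01/data/learndb/redo01.log';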
Now your database is migrated to ASM.