Monday, 14 February 2011

Convert a single instance to RAC manually


Manual Method

Following are the single instance details

ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1

ORACLE_SID=prod

Datafile location = /u03/oradata/prod. /u03 is an OCFS shared file system, so we are going to keep the datafiles at the same location while converting to RAC.

If your files are on some other disk or slice that is not a shared file system, you need to copy them to shared storage and then rename the files when you mount the instance.

Database version = 10g R2 (10.2.0.1.0)

Steps to convert single instance to RAC


Step 1) Install clusterware on the nodes on which you want to setup RAC

For detailed steps on installing clusterware on the nodes, you can refer to my previous post – http://avdeo.com/oracle-real-application-cluster-10g/

or you can refer to any other post on the internet. Basically you need to set up the IP addresses and other OS-related files and variables before you can install the clusterware.

Your clusterware version must be greater than or equal to the single instance RDBMS version. Make sure you do this step correctly.

Following are cluster installation details

Cluster name : crs

Cluster install location : /u01/app/oracle/product/10.2.0/crs

OCR File location : /u03/oracrs/ocr.ora

Voting disk location : /u03/oracrs/vote.crs

Step 2) Install Oracle Database 10g Real Application Cluster software

Again for this step, you can refer to my previous post – http://avdeo.com/oracle-real-application-cluster-10g/ or any post on the internet.

Just install the software. The RDBMS software version must be the same as your single instance RDBMS software version.

Following are the details of installation

RAC ORACLE_HOME=/u01/app/oracle/product/10.2.0/db

Number of instances = 2

Node names for 2 instances = ocvmrh2103, ocvmrh2190

/u01 is an individual filesystem and is not mounted commonly on both nodes. This is a separate ORACLE_HOME architecture.

Step 3) Take a backup of the single instance database and restore it to the shared file system location

This step is not required in my case as I created the database on a shared filesystem to begin with, purely for demo purposes. In a real-world scenario, you need to copy the datafiles to the shared filesystem.
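
If your datafiles are not already on shared storage, one simple approach is a cold copy after a clean shutdown. This is only a sketch: /u02/oradata/prod is a hypothetical original location, and an RMAN backup/restore is an equally valid alternative. The paths recorded in the controlfile are fixed later, in step 9, once the database is mounted.

SQL> shutdown immediate;
bash-3.00$ cp /u02/oradata/prod/*.dbf /u03/oradata/prod/
bash-3.00$ cp /u02/oradata/prod/*.log /u03/oradata/prod/
bash-3.00$ cp /u02/oradata/prod/*.ctl /u03/oradata/prod/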

Step 4) Copy the init.ora file of the single instance and add the following parameters

bash-3.00$ cp initprod.ora /tmp/initprod.ora

In my case the database name is “prod” and I am converting this single instance database to a 2-node RAC, so instance 1 becomes prod1 and instance 2 becomes prod2.

So add following parameters to /tmp/initprod.ora file

*.cluster_database = TRUE
*.cluster_database_instances = 2
*.undo_management=AUTO
prod1.undo_tablespace=UNDOTBS1
prod1.instance_name=prod1
prod1.instance_number=1
prod1.thread=1
prod1.local_listener=listener_ocvmrh2103
prod2.instance_name=prod2
prod2.instance_number=2
prod2.local_listener=listener_ocvmrh2190
prod2.thread=2
prod2.undo_tablespace=UNDOTBS2
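
The local_listener entries above assume matching aliases in tnsnames.ora on each node. A minimal sketch is shown below; the VIP host names and the port are assumptions, so adjust them to your environment.

LISTENER_OCVMRH2103 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = ocvmrh2103-vip)(PORT = 1521))

LISTENER_OCVMRH2190 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = ocvmrh2190-vip)(PORT = 1521))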

Step 5) Change the location of the controlfiles in the above /tmp/initprod.ora file

In my case the controlfiles are present in /u03, which is an OCFS shared filesystem, so I don't have to change the location of the controlfiles in my init.ora.

In case you have moved the controlfiles along with the datafiles to the shared filesystem location, you need to change the controlfile path in the init.ora file copied to /tmp, as shown below.
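
For example, the entry in /tmp/initprod.ora would look like this (the controlfile names are assumptions; only the /u03/oradata/prod path comes from this setup):

*.control_files='/u03/oradata/prod/control01.ctl','/u03/oradata/prod/control02.ctl','/u03/oradata/prod/control03.ctl'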

Step 6) Create SPFILE from PFILE

SQL> select name from v$database;

NAME
---------
PROD

SQL> create spfile='/u03/oradata/prod/spfileprod.ora' from pfile='/tmp/initprod.ora';

File created.

SQL>

Step 7) Copy spfile to the RAC ORACLE_HOME/dbs location of instance 1 and create pfile

bash-3.00$ cp spfileprod.ora /u01/app/oracle/product/10.2.0/db/dbs/spfileprod.ora
bash-3.00$ pwd
/u01/app/oracle/product/10.2.0/db/dbs
bash-3.00$ cat initprod1.ora
spfile='/u01/app/oracle/product/10.2.0/db/dbs/spfileprod.ora'

Step 8) Create new password file for prod1 instance under RAC oracle home

bash-3.00$ orapwd file=orapwprod1 password=welcome1

Step 9) Start database in mount stage and rename datafiles and redo log files to new shared location

In my case, since the datafiles and online redo logs are already at the same shared location, I don't need to do this step. However, in a real-world scenario this step is required; a sketch of the rename commands appears after the mount output below.

Make sure that your ORACLE_HOME variable is set to the RAC ORACLE_HOME:

bash-3.00$ echo $ORACLE_HOME
/u01/app/oracle/product/10.2.0/db
bash-3.00$ echo $ORACLE_SID
prod1
SQL> startup mount pfile=initprod1.ora
ORACLE instance started.

Total System Global Area  838860800 bytes
Fixed Size                  1222168 bytes
Variable Size             213912040 bytes
Database Buffers          620756992 bytes
Redo Buffers                2969600 bytes
Database mounted.
SQL>
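
If you copied the files from a non-shared location (as in the step 3 sketch), rename each datafile and online redo log to its new path while the database is mounted. The /u02/oradata/prod path and the file names below are assumptions; repeat the command for every datafile and redo log member before moving on.

SQL> alter database rename file '/u02/oradata/prod/system01.dbf' to '/u03/oradata/prod/system01.dbf';
SQL> alter database rename file '/u02/oradata/prod/redo01.log' to '/u03/oradata/prod/redo01.log';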

Step 10) Add a second redo thread to the database, which will be used by instance 2

SQL> alter database add logfile thread 2 group 4 ('/u03/oradata/prod/redo2_01.dbf') size 50M, group 5 ('/u03/oradata/prod/redo2_02.dbf') size 50M, group 6 ('/u03/oradata/prod/redo2_03.dbf') size 50M;

Database altered.

SQL> ALTER DATABASE OPEN;

Database altered.

SQL> alter database enable public thread 2;

Database altered.

Step 11) Create undo tablespace for instance 2

The name of the undo tablespace should be the same as you specified in the init.ora file in step 4 above.

SQL> CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '/u03/oradata/prod/undotbs2_01.dbf' size 25M;

Tablespace created.

Step 12) Run $ORACLE_HOME/rdbms/admin/catclust.sql to create cluster database specific views within the existing instance 1

SQL> @?/rdbms/admin/catclust.sql
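
With the cluster views in place, a quick sanity check is to query gv$instance; at this point it should list only instance 1 (prod1).

SQL> select inst_id, instance_name, status from gv$instance;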

Step 13) On the second node, set ORACLE_HOME and ORACLE_SID for instance 2

bash-3.00$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db
bash-3.00$ export ORACLE_SID=prod2

Create initprod2.ora on the second node similar to node 1. In this case you have to copy the spfile to the second node as well. Alternatively, you can keep the spfile in a shared location (/u03 in my case) and put the same path in initprod2.ora.

bash-3.00$ pwd
/u01/app/oracle/product/10.2.0/db/dbs
bash-3.00$ ls -lrt spfileprod.ora
-rw-r-----  1 oracle oinstall 3584 Feb 19 12:36 spfileprod.ora
bash-3.00$ cat initprod2.ora
spfile='/u01/app/oracle/product/10.2.0/db/dbs/spfileprod.ora'

Step 14) Create new password file for instance 2

bash-3.00$ orapwd file=orapwprod2 password=welcome1

Step 15) Start the second instance

SQL> startup pfile=initprod2.ora
ORACLE instance started.

Total System Global Area  838860800 bytes
Fixed Size                  1222168 bytes
Variable Size             213912040 bytes
Database Buffers          620756992 bytes
Redo Buffers                2969600 bytes
Database mounted.
Database opened.

You might face some issues while starting the second instance, as the bdump, udump and cdump directory locations will still point to the single instance ORACLE_HOME, which is not present on node 2.

Also, you might hit the following error:

SQL> startup
ORA-09925: Unable to create audit trail file
Linux Error: 2: No such file or directory
Additional information: 9925
SQL> Disconnected

Make sure you alter the following parameters to a valid location (see the sketch after this list) and copy spfileprod.ora to node 2 again:

audit_file_dest
background_dump_dest
user_dump_dest
core_dump_dest
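
A sketch of the changes, run from instance 1; the admin directory paths below are assumptions, so pick any directories that exist on both nodes, creating them on node 2 if needed.

SQL> alter system set audit_file_dest='/u01/app/oracle/admin/prod/adump' scope=spfile sid='*';
SQL> alter system set background_dump_dest='/u01/app/oracle/admin/prod/bdump' scope=spfile sid='*';
SQL> alter system set user_dump_dest='/u01/app/oracle/admin/prod/udump' scope=spfile sid='*';
SQL> alter system set core_dump_dest='/u01/app/oracle/admin/prod/cdump' scope=spfile sid='*';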

Step 16) Add the converted database to the cluster

Move the spfile to a common location such as /u03/oradata/prod and modify both pfiles so that they refer to the same spfile and there are not two copies, for example:
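
bash-3.00$ cat /u01/app/oracle/product/10.2.0/db/dbs/initprod1.ora
spfile='/u03/oradata/prod/spfileprod.ora'

initprod2.ora on node 2 gets the same single line. With the spfile on shared storage, register the database and instances with the clusterware: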

bash-3.00$ srvctl add database -d prod -o /u01/app/oracle/product/10.2.0/db -p /u03/oradata/prod/spfileprod.ora
bash-3.00$ srvctl add instance -d prod -i prod1 -n OCVMRH2103
bash-3.00$ srvctl add instance -d prod -i prod2 -n OCVMRH2190
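
Once registered, the database can be managed through srvctl. A quick check (a sketch; shut down the manually started instances first so that srvctl can start them under clusterware control):

bash-3.00$ srvctl start database -d prod
bash-3.00$ srvctl status database -d prod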
