Converting Oracle Applications to RAC with ASM
Continuing my series on Oracle Real Application Clusters, in this post I will walk through converting an existing Oracle Applications instance to use RAC with Automatic Storage Management (ASM). This document also holds good for a new installation, since Rapid Install does not support a RAC installation by default: you would first install Oracle Applications as a simple installation and then convert the database to use Real Application Clusters.
Some prerequisites you need to have in place before proceeding with the conversion:
- A shared storage device which is accessible from both the nodes (an NFS mount is not supported).
- A pair of free IP addresses which will be used for your Virtual IP (VIP) configuration.
- At least two network cards: one for your public IP (the VIP also runs on the public interface) and one for your private interconnect.
- At least two separate disk partitions on your shared storage; a quick check to confirm both nodes see them is shown after this list.
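Before continuing, it is worth confirming that both nodes actually see the same shared partitions. A minimal check, assuming the device names used later in this post (/dev/sdc1 for OCFS2 and /dev/sdd1 for ASM; substitute your own):
Run as root, on lxa and then again on lxb
# fdisk -l
# ls -l /dev/sdc1 /dev/sdd1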
The main steps involved in this conversion are:
- Installation of Oracle Applications (if you do not already have one installed).
- Set up the Virtual IPs and Private IPs on both your Nodes.
- Set up SSH on both the nodes.
- Install and Configure OCFS2.
- Install Oracle Clusterware 10g (10.2.0.1).
- Install 10g Release 2 Software (10.2.0.1).
- Install 10g Release 2 Companion software (required for Oracle Applications).
- Upgrade Oracle Clusterware to 10.2.0.2.
- Upgrade Oracle Database Software to 10.2.0.2.
- Install ASMLib on both the Nodes.
- Create ASM instances on both the Nodes.
- Upgrade the Oracle Applications Database to 10.2.0.2.
- Convert the Database to RAC with ASM.
- Configure Application Tier with the RAC instance.
- Configure the Database Tier with the RAC instance.
Conventions
Servers: lxa.appsdbablog.com and lxb.appsdbablog.com
Environment: Oracle Applications 11.5.10.2 installed on lxa
Port Pool: 14
Operating system: Red Hat AS Linux
Installation of Oracle Applications
In my current post I am assuming that you already have an Oracle Applications 11.5.10.2 instance set up with the following patches installed:
TXK (FND & ADX) AUTOCONFIG ROLLUP PATCH O (December 2006) 5478710
11.5.10 INTEROP PATCH FOR 10GR2 4653225
ATG Rollup 4
Also check that you have rsh installed on both your nodes:
rpm -qa | grep -i rsh
rsh-0.17-25.4
Check User IDs
The user ID and username of the oracle user must be the same on both the nodes:
lxa$ id
uid=501(orasam) gid=101(dba) groups=101(dba)
lxb$ id
uid=501(orasam) gid=101(dba) groups=101(dba)
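If the IDs do not match, one way to create the user with explicit IDs on the second node (a sketch only; the uid/gid values are the ones from this environment, and changing IDs on an existing installation also requires fixing file ownership):
# groupadd -g 101 dba
# useradd -u 501 -g dba orasam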
Setup SSH on both the nodes
You must set up SSH on both the nodes
Node lxa
$ mkdir ~/.ssh
$ chmod 755 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/orasam/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/orasam/.ssh/id_rsa.
Your public key has been saved in /home/orasam/.ssh/id_rsa.pub.
The key fingerprint is:
b7:2e:d0:c6:bc:ea:94:57:90:6b:67:b9:10:b7:fe:fe orasam@lxa.appsdbablog.com
$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/orasam/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/orasam/.ssh/id_dsa.
Your public key has been saved in /home/orasam/.ssh/id_dsa.pub.
The key fingerprint is:
5f:91:a2:42:20:67:0d:76:c1:e0:d5:1f:9b:10:f7:a8 orasam@lxa.appsdbablog.com
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Node lxb
$ mkdir ~/.ssh
$ chmod 755 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/orasam/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/orasam/.ssh/id_rsa.
Your public key has been saved in /home/orasam/.ssh/id_rsa.pub.
The key fingerprint is:
6b:1f:19:3c:13:91:e1:f7:fb:db:91:86:5c:a4:36:a6 orasam@lxb.appsdbablog.com
$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/orasam/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/orasam/.ssh/id_dsa.
Your public key has been saved in /home/orasam/.ssh/id_dsa.pub.
The key fingerprint is:
c8:81:8f:db:88:fb:3e:ac:1f:39:05:66:94:43:18:fb orasam@lxb.appsdbablog.com
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
On node lxb, append node lxa's public keys to lxb's authorized_keys:
$ ssh orasam@lxa cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh orasam@lxa cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'lxa (172.16.128.156)' can't be established.
RSA key fingerprint is b9:73:31:69:9a:19:58:f4:ff:cb:a2:2e:c1:f8:76:40.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'lxa.appsdbablog.com' (RSA) to the list of known hosts.
orasam@lxa's password:
On node lxa, append node lxb's public keys to lxa's authorized_keys:
$ ssh orasam@lxb cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh orasam@lxb cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'lxb (172.16.128.155)' can't be established.
RSA key fingerprint is c2:eb:fa:0b:6a:bb:a7:87:0e:ae:83:8e:23:00:96:ed.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'lxb.appsdbablog.com' (RSA) to the list of known hosts.
After this you must be able to ssh, as the oracle user, to both the short node name and nodename.domainname.com without being prompted for a password.
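A quick way to verify, run from each node (hostnames as used throughout this post):
$ ssh lxa date
$ ssh lxa.appsdbablog.com date
$ ssh lxb date
$ ssh lxb.appsdbablog.com date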
Establish User Equivalence
Node lxa
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Identity added: /home/orasam/.ssh/id_rsa (/home/orasam/.ssh/id_rsa)
Identity added: /home/orasam/.ssh/id_dsa (/home/orasam/.ssh/id_dsa)
Node lxb
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Identity added: /home/orasam/.ssh/id_rsa (/home/orasam/.ssh/id_rsa)
Identity added: /home/orasam/.ssh/id_dsa (/home/orasam/.ssh/id_dsa)
Installing OCFS2
Download the correct version of the OCFS2 software for your server.
Get the server kernel version
# uname -r
2.6.9-42.ELsmp
Get the architecture information
# rpm -qf /boot/vmlinuz-`uname -r` --queryformat "%{ARCH}\n"
i686
You must also install the appropriate versions of the ocfs2-tools and ocfs2console packages before installing the OCFS2 kernel module:
# rpm -ivh ocfs2-tools-1.2.3-1.i386.rpm
Preparing… ########################################### [100%]
1:ocfs2-tools ########################################### [100%]
# rpm -ihv ocfs2console-1.2.3-1.i386.rpm
Preparing… ########################################### [100%]
1:ocfs2console ########################################### [100%]
# rpm -ihv ocfs2-2.6.9-42.EL-1.2.4-2.i686.rpm
Preparing… ########################################### [100%]
1:ocfs2-2.6.9-42.EL ########################################### [100%]
Install these RPMs on both the nodes as the root user.
Run ocfs2console:
# ocfs2console
Verify the configuration of both nodes.
Propagate the changes to the other node via the console.
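After propagation, /etc/ocfs2/cluster.conf on both nodes should list both nodes. A sketch of what it typically looks like, assuming this post's hostnames and private interconnect IPs (verify your own file rather than copying this):
node:
        ip_port = 7777
        ip_address = 10.0.0.4
        number = 0
        name = lxa
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 10.0.0.5
        number = 1
        name = lxb
        cluster = ocfs2
cluster:
        node_count = 2
        name = ocfs2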
# /etc/init.d/o2cb enable
Writing O2CB configuration: OK
O2CB cluster ocfs2 already online
Execute this on both the nodes
Create the mount directory for the OCFS2 file system on both the nodes:
# mkdir /samocfs
Create an OCFS2 file system:
#mkfs.ocfs2 -b 4K -C 32K -N 4 -L /samocfs /dev/sdc1
mkfs.ocfs2 1.2.3
Overwriting existing ocfs2 partition.
Proceed (y/N): y
Filesystem label=/samocfs
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=1011613696 (30872 clusters) (246976 blocks)
1 cluster groups (tail covers 30872 clusters, rest cover 30872 clusters)
Journal size=16777216
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 0 block(s)
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful
Mount the OCFS2 filesystem on both the nodes:
# mount -o datavolume,nointr -t ocfs2 /dev/sdc1 /samocfs
Verify from ocfs2console.
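So that the file system comes back after a reboot, you can also add an entry to /etc/fstab on both nodes (a sketch using this post's device and mount point; the _netdev option defers mounting until networking and the o2cb service are up):
/dev/sdc1    /samocfs    ocfs2    _netdev,datavolume,nointr    0 0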
Setup Public IP, Private IP and Virtual IPs
Set up the public, private and virtual IPs for both your nodes if this was not already done at the time of the OS installation. The ifconfig output on both the nodes should look something like this:
Node lxa
# ifconfig
eth0 Link encap:Ethernet HWaddr 00:11:43:FD:CE:EA
inet addr:185.12.14.123 Bcast:172.16.143.255 Mask:255.255.240.0
inet6 addr: fe80::211:43ff:fefd:ceea/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:477854 errors:0 dropped:0 overruns:0 frame:0
TX packets:47913 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:88717091 (84.6 MiB) TX bytes:4238884 (4.0 MiB)
Base address:0xdcc0 Memory:dfbe0000-dfc00000
eth1 Link encap:Ethernet HWaddr 00:11:43:FD:CE:EB
inet addr:10.0.0.4 Bcast:10.0.7.255 Mask:255.255.248.0
inet6 addr: fe80::211:43ff:fefd:ceeb/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:14022 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2441537 (2.3 MiB) TX bytes:654 (654.0 b)
Base address:0xdc80 Memory:dfbc0000-dfbe0000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:1255 errors:0 dropped:0 overruns:0 frame:0
TX packets:1255 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1303798 (1.2 MiB) TX bytes:1303798 (1.2 MiB)
Node lxb
# ifconfig
eth0 Link encap:Ethernet HWaddr 00:11:43:FD:C5:DE
inet addr:185.12.14.124 Bcast:172.16.143.255 Mask:255.255.240.0
inet6 addr: fe80::211:43ff:fefd:c5de/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:293476 errors:0 dropped:0 overruns:0 frame:0
TX packets:106814 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:48389945 (46.1 MiB) TX bytes:9336369 (8.9 MiB)
Base address:0xdcc0 Memory:dfbe0000-dfc00000
eth1 Link encap:Ethernet HWaddr 00:11:43:FD:C5:DF
inet addr:10.0.0.5 Bcast:10.0.7.255 Mask:255.255.248.0
inet6 addr: fe80::211:43ff:fefd:c5df/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:17813 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3094360 (2.9 MiB) TX bytes:654 (654.0 b)
Base address:0xdc80 Memory:dfbc0000-dfbe0000
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:2103 errors:0 dropped:0 overruns:0 frame:0
TX packets:2103 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2442621 (2.3 MiB) TX bytes:2442621 (2.3 MiB)
Your /etc/hosts file should look similar to this:
185.12.14.123 lxa.appsdbablog.com lxa
185.12.14.124 lxb.appsdbablog.com lxb
10.0.0.4 lxa-priv.appsdbablog.com lxa-priv
10.0.0.5 lxb-priv.appsdbablog.com lxb-priv
185.12.14.125 lxa-vip.appsdbablog.com lxa-vip
185.12.14.126 lxb-vip.appsdbablog.com lxb-vip
Installing Oracle Clusterware
Start the runInstaller as the oracle user
Specify an ORACLE_HOME name and location for CRS
Make sure all prerequisite checks pass
Specify the details of the other node in the cluster here
Specify the interfaces you want to use for your private and public IPs
Specify the location for your OCR file
Specify the location for your voting disk
Review the summary screen
OUI will start the installation now
At the end of the installation run root.sh on both your nodes. An example from one of my nodes is below:
# ./root.sh
WARNING: directory '/u01/sam/crs/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/sam/crs/oracle/product' is not owned by root
WARNING: directory '/u01/sam/crs/oracle' is not owned by root
WARNING: directory '/u01/sam/crs' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/sam/crs/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/sam/crs/oracle/product' is not owned by root
WARNING: directory '/u01/sam/crs/oracle' is not owned by root
WARNING: directory '/u01/sam/crs' is not owned by root
assigning default hostname lxa for node 1.
assigning default hostname lxb for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: lxa lxa-priv lxa
node 2: lxb lxb-priv lxb
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /samocfs/samvd1
Now formatting voting device: /samocfs/samvd2
Now formatting voting device: /samocfs/samvd3
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
lxa
CSS is inactive on these nodes.
lxb
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
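Once root.sh has run on both nodes, you can confirm the clusterware is up using the standard utilities in CRS_HOME/bin (olsnodes -n lists the cluster nodes with their node numbers, and crsctl check crs reports the health of the CRS daemons):
$ ./olsnodes -n
$ ./crsctl check crs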
Installing ASMLib
Download the ASMLib RPMs from
http://www.oracle.com/technology/software/tech/linux/asmlib
# rpm -Uhv oracleasm-support-2.0.3-1.i386.rpm
Preparing… ########################################### [100%]
1:oracleasm-support ########################################### [100%]
# rpm -Uhv oracleasm-2.6.9-42.ELsmp-2.0.3-1.i686.rpm
Preparing… ########################################### [100%]
1:oracleasm-2.6.9-42.ELsm########################################### [100%]
# rpm -Uhv oracleasmlib-2.0.2-1.i386.rpm
Preparing… ########################################### [100%]
1:oracleasmlib ########################################### [100%]
Install these RPMs on the other node as well.
Configure ASMLib
Run the ASMLib configuration on both the nodes.
Node lxa
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: orasam
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
Repeat the configuration on the other node as well.
Label the ASM disks
You mark disks for use by ASMLib by running the following command as root from one of the cluster nodes:
# /etc/init.d/oracleasm createdisk VOL1 /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk: [ OK ]
Verify that ASMLib has marked the disks
# /etc/init.d/oracleasm listdisks
VOL1
Scan the ASM Disks on both the Nodes
Node lxa
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
Node lxb
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
Install the 10g Release 2 Software
Choose an ORACLE_HOME location and name
Choose the other node of the cluster
Choose to install only the software
Create a new listener by running the netca command
Name the listener LISTENER_SAM
Install Database components from Oracle 10g (10.2.0.1) Companion CD
Upgrade Oracle Cluster Ready Services and Database Software to 10.2.0.2
Apply the 4547817 patchset to your 10g DB home and 10g CRS home
Shutdown CRS Services
Shut down the CRS services, set your ORACLE_HOME to your CRS home, and run the runInstaller from the patch directory as the oracle user. Since it is a cluster installation, the patch will be applied to both nodes of the cluster.
[root@lxa db_1]# /etc/init.d/init.cssd stop
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
Shut down the CRS daemons on both the nodes before applying the patch:
ORACLE_HOME=/u01/sam/crs/oracle/product/10.2.0/crs/
export ORACLE_HOME
./runInstaller
At the end of the patchset installation you will be prompted to run root102.sh on both the nodes of the cluster; after it completes successfully, it will start up the CRS daemons automatically.
An example is shown below for one of the nodes:
# ./root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/sam/crs/oracle/product/10.2.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/sam/crs/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/sam/crs/oracle/product' is not owned by root
WARNING: directory '/u01/sam/crs/oracle' is not owned by root
WARNING: directory '/u01/sam/crs' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 0
Used space (kbytes) : 1988
Available space (kbytes) : 4294965308
ID : 1646789190
Device/File Name : /samocfs/samocr1
Device/File integrity check succeeded
Device/File Name : /samocfs/samocr2
Device/File integrity check succeeded
Cluster registry integrity check succeeded
Startup will be queued to init within 90 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
This may take a while on some systems.
.
.
.
10202 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: lxa lxa-priv lxa
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Upgrading the 10g database 10.2.0.1 home
Set the ORACLE_HOME and stop the listener running from this home:
ORACLE_HOME=/u01/sam/db/oracle/product/10.2.0/db_1;
export ORACLE_HOME;
lsnrctl stop listener_sam
Start the runInstaller from the patch directory, review the installation and proceed.
After the installation completes, run root.sh on both the nodes as the root user.
Upgrade the Oracle 9i (9.2.0.6) Database to 10g (10.2.0.2)
Copy the 10g ORACLE_HOME/rdbms/admin/utlu102i.sql to /tmp and run it against the database as SYSDBA, spooling the output.
Review the output of this spool before proceeding with the upgrade.
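A minimal way to run the pre-upgrade information tool and capture its output (the spool file name is just an example):
SQL> spool /tmp/utlu102i.log
SQL> @/tmp/utlu102i.sql
SQL> spool off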
Execute the dbca from $ORACLE_HOME/bin
Select the database to be upgraded
Specify that it is not in a cluster as of now
Specify the location for your SYSAUX tablespace
Specify the number of parallel jobs for compiling invalid objects after the upgrade
Choose whether you would like to back up your database
Review the pre-upgrade summary
DBCA will start the upgrade now
Review the Post Upgrade Results
At the end of the upgrade execute cr9idata.pl (it is located under $ORACLE_HOME/nls/data/old):
$ perl cr9idata.pl
Creating directory //u01/sam/db/oracle/product/10.2.0/db_1/nls/data/9idata …
Copying files to //u01/sam/db/oracle/product/10.2.0/db_1/nls/data/9idata…
glob failed (child exited with status 127) at cr9idata.pl line 148.
glob failed (child exited with status 127) at cr9idata.pl line 162.
Copy finished.
Please reset environment variable ORA_NLS10 to /u01/sam/db/oracle/product/10.2.0/db_1/nls/data/9idata!
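For example, using the path printed above:
$ ORA_NLS10=/u01/sam/db/oracle/product/10.2.0/db_1/nls/data/9idata
$ export ORA_NLS10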
Create the ASM Instance
Start the installation using "runInstaller" from the "database" directory
Choose a new home for your ASM instance
Choose the nodes of your cluster
Select the option to configure an ASM instance and supply the passwords
Create your ASM disk group (DATA in this post) from the disks you labeled earlier
Review the pre-installation summary and proceed
At the end of the installation run the root.sh on both the nodes.
Verify the ASM instances have been added to CRS
Execute the crs_stat command from CRS_HOME/bin:
$ ./crs_stat
NAME=ora.lxb.ASM2.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on lxb
NAME=ora.lxb.LISTENER_LXB.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on lxb
NAME=ora.lxb.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on lxb
NAME=ora.lxb.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on lxb
NAME=ora.lxb.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on lxb
NAME=ora.lxa.ASM1.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on lxa
NAME=ora.lxa.LISTENER_lxa.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on lxa
NAME=ora.lxa.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on lxa
NAME=ora.lxa.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on lxa
NAME=ora.lxa.vip
TYPE=application
TARGET=ONLINE
STATE=ONLINE on lxa
Convert your database to RAC
Verify your ASM disk groups are mounted and accessible from both the nodes
Set your ORACLE_HOME to your ASM home and log in as sys
SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;
NAME                           STATE       TOTAL_MB    FREE_MB
------------------------------ ----------- ---------- ----------
DATA                           MOUNTED          76300      76207
Set your ORACLE_HOME to the upgraded 10g 10.2.0.2 home
cd $ORACLE_HOME/assistants/rconfig/sampleXMLs
Edit the ConvertToRAC.xml file with your instance details; a trimmed sketch of the fields follows.
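For orientation only, the elements you typically edit in the sample file look roughly like this (a trimmed sketch; the element names are my recollection of the 10.2 sample, so treat them as assumptions and edit the actual copy shipped in your ORACLE_HOME rather than pasting this):
<n:Convert verify="YES">
  <n:SourceDBHome>/u01/sam/db/oracle/product/10.2.0/db_1</n:SourceDBHome>
  <n:TargetDBHome>/u01/sam/db/oracle/product/10.2.0/db_1</n:TargetDBHome>
  <n:SourceDBInfo SID="SAM"/>
  <n:NodeList>
    <n:Node name="lxa"/>
    <n:Node name="lxb"/>
  </n:NodeList>
  <n:InstancePrefix>SAM</n:InstancePrefix>
  <n:SharedStorage type="ASM">
    <n:TargetDatabaseArea>+DATA</n:TargetDatabaseArea>
    <n:TargetFlashRecoveryArea>+DATA</n:TargetFlashRecoveryArea>
  </n:SharedStorage>
</n:Convert>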
[orasam@lxa bin]$ ./rconfig /u01/sam/db/oracle/product/10.2.0/db_1/assistan
Converting Database SAM to Cluster Database. Target Oracle Home : /u01/sam/db/or
Setting Data Files and Control Files
Adding Database Instances
Adding Redo Logs
Enabling threads for all Database Instances
Setting TEMP tablespace
Adding UNDO tablespaces
Adding Trace files
Setting Flash Recovery Area
Updating Oratab
Creating Password file(s)
Configuring Listeners
Configuring related CRS resources
Adding NetService entries
Starting Cluster Database
Starting Listeners
<?xml version="1.0" ?>
<RConfig>
<ConvertToRAC>
<Convert>
<Response>
<Result code="0" >
Operation Succeeded
</Result>
</Response>
<ReturnValue type="object">
<Oracle_Home>
/u01/sam/db/oracle/product/10.2.0/db_1
</Oracle_Home>
<SIDList>
<SID>SAM1</SID>
<SID>SAM2</SID>
</SIDList> </ReturnValue>
</Convert>
</ConvertToRAC></RConfig>
Execute autoconfig on the Application Tier
Make sure you are at least able to connect to one of your RAC instances before this step; if you are not, try manually tweaking your tnsnames.ora temporarily, as sketched below.
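A hedged example of a temporary tnsnames.ora entry pointing at the first instance (with port pool 14 the database port would conventionally be 1535, i.e. 1521 + 14, but use whatever port your listener actually runs on):
SAM1=
(DESCRIPTION=
(ADDRESS=(PROTOCOL=tcp)(HOST=lxa.appsdbablog.com)(PORT=1535))
(CONNECT_DATA=(SERVICE_NAME=SAM)(INSTANCE_NAME=SAM1))
)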
# su - applsam
$ cd $COMMON_TOP/admin/scripts/SAM
$./adconfig.sh contextfile=/u01/sam/samappl/admin/SAM_lxa.xml
Generate the appsutil directory for the new Oracle Home
Log in as the application user and execute $AD_TOP/bin/admkappsutil.pl to generate appsutil.zip for the database tier.
$ perl admkappsutil.pl
Starting the generation of appsutil.zip
Log file located at /u01/sam/samappl/admin/log/MakeAppsUtil_03100821.log
output located at /u01/sam/samappl/admin/out/appsutil.zip
MakeAppsUtil completed successfully.
Copy the appsutil.zip to the 10g ORACLE_HOME on the database tier and unzip it.
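For example, using the paths from the MakeAppsUtil output above (scp shown in case your application and database tiers are on different hosts):
$ scp /u01/sam/samappl/admin/out/appsutil.zip orasam@lxa:/u01/sam/db/oracle/product/10.2.0/db_1
$ cd /u01/sam/db/oracle/product/10.2.0/db_1
$ unzip -o appsutil.zip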
Generate the context file for your Database Tier
perl adbldxml.pl tier=db appsuser=apps appspasswd=apps
Deregister the current configuration
$ perl $ORACLE_HOME/appsutil/bin/adgentns.pl appspass=apps contextfile=/u01/sam/db/oracle/product/10.2.0/db_1/appsutil/SAM1_lxa.xml -removeserver
##########################################################################
Generate Tns Names
##########################################################################
Classpath :
/u01/sam/db/oracle/product/10.2.0/db_1/jre/1.4.2/lib/rt.jar:/u01/sam/db/oracle/product/10.2.0/db_1/jdbc/lib/ojdbc14.jar:/u01/
sam/db/oracle/product/10.2.0/db_1/appsutil/java/xmlparserv2.zip:/u01/sam/db/oracle/product/10.2.0/db_1/appsutil/java:/u01/sam
/db/oracle/product/10.2.0/db_1/jlib/netcfg.jar
Loading ORACLE_HOME environment from /u01/sam/db/oracle/product/10.2.0/db_1
Logfile: /u01/sam/db/oracle/product/10.2.0/db_1/appsutil/log/SAM1_lxa/03130015/NetServiceHandler.log
adgentns.pl exiting with status 0
ERRORCODE = 0 ERRORCODE_END
Run Autoconfig on the Database Tier
From the 10g ORACLE_HOME/appsutil/bin directory, execute AutoConfig on the database tier by running the adconfig.pl script, as sketched below.
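A sketch of the invocation (the context file name is the one generated by adbldxml.pl above, and appspass is whatever your APPS password actually is):
$ cd $ORACLE_HOME/appsutil/bin
$ perl adconfig.pl contextfile=$ORACLE_HOME/appsutil/SAM1_lxa.xml appspass=apps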
At this stage both your application tier and database tier are configured to be used with a RAC instance. As the concluding steps you can configure your listener to use virtual names and configure application load balancing.