I put together this blog post to help with the checks and changes that are required on a RAC database after a domain name / IP change. We work through every item of MOS note 1059776.1 (listed in the References section at the bottom) in order. Item 3 says: "If a domain appears in the $CRS_HOME/install/params.crs or $GRID_HOME/crs/install/crsconfig_params file, CRS/Grid must be reinstalled; otherwise grid patch applications will fail."
*** Since a domain is used in the SCAN_NAME line of the output below, we have to reinstall CRS/Grid, carefully following MOS note 1276975.1 (also listed in the References section) ***

oracle@racdbsrv02p  cat crsconfig_params
...
NODELIST=racdbsrv01p,racdbsrv02p
NETWORKS="en0"/10.10.200.0:public,"en1"/10.10.100.0:cluster_interconnect
SCAN_NAME=racdbserver.my_old.domain_name
SCAN_PORT=1521
GPNP_PA=
OCFS_CONFIG=
...
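The item-3 check above can be sketched as a simple grep. Everything here works on a throwaway sample file, since the real file lives under $GRID_HOME/crs/install; the sample path and the reported message are my own illustration, not from the note:

```shell
# Hypothetical sketch of the item-3 check: a dot in SCAN_NAME means a domain is
# embedded in the Grid configuration. The sample file stands in for
# $GRID_HOME/crs/install/crsconfig_params (and $CRS_HOME/install/params.crs).
params=/tmp/crsconfig_params_sample
cat > "$params" <<'EOF'
SCAN_NAME=racdbserver.my_old.domain_name
SCAN_PORT=1521
EOF
if grep -q '^SCAN_NAME=[^.]*\..*' "$params"; then
  echo "domain present in SCAN_NAME -> Grid must be reinstalled (Doc ID 1276975.1)"
else
  echo "no domain in SCAN_NAME"
fi
```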

RAC DB node names:

racdbsrv01p
racdbsrv02p

old domain name: my_old.domain_name
new domain name: my_new.domain_name

# Prod RAC SCAN IPs:
IP                                      Full hostname
10.10.10.11, 10.10.10.12, 10.10.10.13   racdbserver.my_old.domain_name

# Prod RAC virtual IPs:
IP            Full hostname                         Hostname
10.10.10.14   racdbsrv01p-vip.my_old.domain_name    racdbsrv01p-vip
10.10.10.15   racdbsrv02p-vip.my_old.domain_name    racdbsrv02p-vip

 
* I take backups of the Grid and Oracle homes (each command runs on its own node);

tar cvf - /oracle/app/grid | gzip > /setup/node1_grid_home_bck.tar.gz
tar cvf - /oracle/app/grid | gzip > /setup/node2_grid_home_bck.tar.gz
tar cvf - /oracle/app/oracle/product/11.2.0/dbhome_1 | gzip > /setup/node1_oracle_home_bck.tar.gz
tar cvf - /oracle/app/oracle/product/11.2.0/dbhome_1 | gzip > /setup/node2_oracle_home_bck.tar.gz
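Before going further it is worth verifying that the tar.gz backups are actually readable. A small sketch with throwaway paths (the real archives are the node*_*_bck.tar.gz files above):

```shell
# Sketch: verify a gzip'ed tar backup before relying on it. The /tmp paths are
# stand-ins for the real archives such as /setup/node1_grid_home_bck.tar.gz.
src=/tmp/backup_demo_src
mkdir -p "$src" && echo "dummy" > "$src/file.txt"
tar cf - "$src" 2>/dev/null | gzip > /tmp/demo_bck.tar.gz
# gzip -t checks archive integrity; tar tzf checks the table of contents reads cleanly.
gzip -t /tmp/demo_bck.tar.gz &&
tar tzf /tmp/demo_bck.tar.gz >/dev/null &&
echo "backup archive OK"
```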

* My existing ASM disks;

root@racdbsrv01p /dev # ls -la *rhdiskASM*
crw------- 1 oracle dba 19, 2 Mar 17 14:43 rhdiskASMd01
crw------- 1 oracle dba 19, 3 Mar 17 14:54 rhdiskASMd02
crw------- 1 oracle dba 19, 4 Mar 17 14:52 rhdiskASMd03
crw------- 1 oracle dba 19, 5 Mar 17 15:01 rhdiskASMd04
crw------- 1 oracle dba 19, 6 Mar 17 15:01 rhdiskASMd05
crw------- 1 oracle dba 19, 15 Mar 17 15:01 rhdiskASMr01
crw------- 1 oracle dba 19, 16 Mar 17 15:01 rhdiskASMr02

* I prepare the following ASM disks so that the Grid reinstall can place its own OCR files on them;

root@racdbsrv01p /dev # ls -la *rhdiskOCR*
crw------- 1 oracle dba 19, 22 Feb 03 10:46 rhdiskOCRd01
crw------- 1 oracle dba 19, 23 Feb 03 10:48 rhdiskOCRd02
crw------- 1 oracle dba 19, 24 Feb 03 10:49 rhdiskOCRd03

* I check the current state of the CRS services;

root@racdbsrv01p / # . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /oracle/app/oracle
root@racdbsrv01p / # crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS 
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
 ONLINE ONLINE racdbsrv01p 
 ONLINE ONLINE racdbsrv02p 
ora.LISTENER.lsnr
 ONLINE ONLINE racdbsrv01p 
 ONLINE ONLINE racdbsrv02p 
ora.RECO.dg
 ONLINE ONLINE racdbsrv01p 
 ONLINE ONLINE racdbsrv02p 
ora.asm
 ONLINE ONLINE racdbsrv01p Started 
 ONLINE ONLINE racdbsrv02p Started 
ora.gsd
 OFFLINE OFFLINE racdbsrv01p 
 OFFLINE OFFLINE racdbsrv02p 
ora.net1.network
 ONLINE ONLINE racdbsrv01p 
 ONLINE ONLINE racdbsrv02p 
ora.ons
 ONLINE ONLINE racdbsrv01p 
 ONLINE ONLINE racdbsrv02p 
ora.registry.acfs
 ONLINE ONLINE racdbsrv01p 
 ONLINE ONLINE racdbsrv02p 
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
 1 ONLINE ONLINE racdbsrv01p 
ora.LISTENER_SCAN2.lsnr
 1 ONLINE ONLINE racdbsrv02p 
ora.LISTENER_SCAN3.lsnr
 1 ONLINE ONLINE racdbsrv02p 
ora.cvu
 1 ONLINE ONLINE racdbsrv02p 
ora.racdbsrv01p.vip
 1 ONLINE ONLINE racdbsrv01p 
ora.racdbsrv02p.vip
 1 ONLINE ONLINE racdbsrv02p 
ora.fecisprd.db
 1 ONLINE ONLINE racdbsrv01p Open 
 2 ONLINE ONLINE racdbsrv02p Open 
ora.oc4j
 1 ONLINE ONLINE racdbsrv02p 
ora.scan1.vip
 1 ONLINE ONLINE racdbsrv01p 
ora.scan2.vip
 1 ONLINE ONLINE racdbsrv02p 
ora.scan3.vip
 1 ONLINE ONLINE racdbsrv02p 
root@racdbsrv01p / #

* Since the DNS records have been created, I can resolve the new domain from both nodes;

oracle@racdbsrv01p <> nslookup racdbserver.my_new.domain_name
Server: 10.10.10.111
Address: 10.10.10.111#53
Name: racdbserver.my_new.domain_name
Address: 10.10.10.13
Name: racdbserver.my_new.domain_name
Address: 10.10.10.11
Name: racdbserver.my_new.domain_name
Address: 10.10.10.12
oracle@racdbsrv02p <> nslookup racdbserver.my_new.domain_name
Server: 10.10.10.111
Address: 10.10.10.111#53
Name: racdbserver.my_new.domain_name
Address: 10.10.10.12
Name: racdbserver.my_new.domain_name
Address: 10.10.10.13
Name: racdbserver.my_new.domain_name
Address: 10.10.10.11
root@racdbsrv01p /oracle/app/grid/bin # nslookup racdbsrv02p-vip
Server: 10.10.10.111
Address: 10.10.10.111#53
Name: racdbsrv02p-vip.my_new.domain_name
Address: 10.10.10.15
root@racdbsrv01p /oracle/app/grid/bin # nslookup racdbsrv01p-vip
Server: 10.10.10.111
Address: 10.10.10.111#53
Name: racdbsrv01p-vip.my_new.domain_name
Address: 10.10.10.14
root@racdbsrv02p /oracle/app/grid/bin # nslookup racdbsrv01p-vip
Server: 10.10.10.111
Address: 10.10.10.111#53
Name: racdbsrv01p-vip.my_new.domain_name
Address: 10.10.10.14
root@racdbsrv02p /oracle/app/grid/bin # nslookup racdbsrv02p-vip
Server: 10.10.10.111
Address: 10.10.10.111#53
Name: racdbsrv02p-vip.my_new.domain_name
Address: 10.10.10.15
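The nslookup checks above can also be looped over every name that must resolve in the new domain; this loop and the results file are my own sketch (names taken from the example environment):

```shell
# Sketch: check that every cluster name resolves in the new domain.
# The name list mirrors the example environment; replace with your own.
names="racdbserver.my_new.domain_name
racdbsrv01p-vip.my_new.domain_name
racdbsrv02p-vip.my_new.domain_name"
: > /tmp/dns_check_results          # start with an empty results file
for h in $names; do
  if nslookup "$h" >/dev/null 2>&1; then
    echo "OK   $h" >> /tmp/dns_check_results
  else
    echo "FAIL $h" >> /tmp/dns_check_results
  fi
done
cat /tmp/dns_check_results
```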

* I take a full RMAN backup, and an incremental (or archivelog) backup right before the database goes down.
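The backup step could look something like the following RMAN fragment; the tags and the compressed-backupset option are illustrative, not from the original post:

```
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE TAG 'pre_domain_change';
RMAN> BACKUP ARCHIVELOG ALL TAG 'pre_domain_change_arch';
```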

* I disable the database jobs.

* As root on both nodes, I take an export of the OCR with ocrconfig;

root@racdbsrv01p /oracle/app/grid/bin # ./ocrconfig -export ocrconf_17032017

* I record the current status for later comparison;

root@racdbsrv01p /oracle/app/grid/bin # ./srvctl status scan
root@racdbsrv01p /oracle/app/grid/bin # ./srvctl status scan_listener
root@racdbsrv01p /oracle/app/grid/bin # ./srvctl config nodeapps -a

* We deconfigure Grid as root on both nodes; pay attention to the parts I highlighted;

root@racdbsrv01p /setup # cd /oracle/app/grid/crs/install
root@racdbsrv01p /oracle/app/grid/crs/install # ./rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: ./crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
This may take several minutes. Please wait ...
0518-307 odmdelete: 1 objects deleted.
0518-307 odmdelete: 1 objects deleted.
0518-307 odmdelete: 1 objects deleted.
Successfully deconfigured Oracle clusterware stack on this node
root@racdbsrv02p /oracle/app/grid/crs/install # ./rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: ./crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
This may take several minutes. Please wait ...
0518-307 odmdelete: 1 objects deleted.
0518-307 odmdelete: 1 objects deleted.
0518-307 odmdelete: 1 objects deleted.
Successfully deconfigured Oracle clusterware stack on this node
oracle@racdbsrv01p  ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /oracle/app/oraInventory/logs/
############ ORACLE DEINSTALL & DECONFIG TOOL START ############
######################### CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /oracle/app/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /oracle/app/oracle
Checking for existence of central inventory location /oracle/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home 
The following nodes are part of this cluster: racdbsrv01p,racdbsrv02p
Checking for sufficient temp space availability on node(s) : 'racdbsrv01p,racdbsrv02p'
## [END] Install check configuration ##
Traces log file: /oracle/app/oraInventory/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "racdbsrv01p"[racdbsrv01p-vip]
 > 
The following information can be collected by running "/sbin/ifconfig -a" on node "racdbsrv01p"
Enter the IP netmask of Virtual IP "10.10.10.14" on node "racdbsrv01p"[255.255.255.0]
 > 
Enter the network interface name on which the virtual IP address "10.10.10.14" is active
 > 
en0
Enter an address or the name of the virtual IP used on node "racdbsrv02p"[racdbsrv02p-vip]
 > 
The following information can be collected by running "/sbin/ifconfig -a" on node "racdbsrv02p"
Enter the IP netmask of Virtual IP "10.10.10.15" on node "racdbsrv02p"[255.255.255.0]
 > 
Enter the network interface name on which the virtual IP address "10.10.10.15" is active[en0]
 > 
Enter an address or the name of the virtual IP[]
 > 
Network Configuration check config START
Network de-configuration trace file location: /oracle/app/oraInventory/logs/netdc_check2017-01-27_01-21-22-PM.log
Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /oracle/app/oraInventory/logs/asmcadc_check2017-01-27_01-22-09-PM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Is OCR/Voting Disk placed in ASM y|n [n]: +DATA
Specify the ASM Diagnostic Destination [ ]: /oracle/app/oracle
Specify the diskstring []: /dev/rhdiskASMd*
Specify the diskgroups that are managed by this ASM instance []: +DATA,+RECO
De-configuring ASM will drop the diskgroups at cleanup time. Do you want deconfig tool to drop the diskgroups y|n [y]: n
######################### CHECK OPERATION END #########################
####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: 
The cluster node(s) on which the Oracle home deinstallation will be performed are:racdbsrv01p,racdbsrv02p
Oracle Home selected for deinstall is: /oracle/app/grid
Inventory Location where the Oracle home registered is: /oracle/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2017-01-27_01-17-34-PM.out'
Any error messages from this session will be written to: '/oracle/app/oraInventory/logs/deinstall_deconfig2017-01-27_01-17-34-PM.err'
######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /oracle/app/oraInventory/logs/asmcadc_clean2017-01-27_01-27-11-PM.log
ASM Clean Configuration START
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /oracle/app/oraInventory/logs/netdc_clean2017-01-27_01-27-18-PM.log
De-configuring RAC listener(s): LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
De-configuring listener: LISTENER
 Stopping listener: LISTENER
 Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN3
 Stopping listener: LISTENER_SCAN3
 Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN2
 Stopping listener: LISTENER_SCAN2
 Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN1
 Stopping listener: LISTENER_SCAN1
 Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.
De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.
De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.
De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.
De-configuring backup files on all nodes...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
---------------------------------------->
The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Run the following command as the root user or the administrator on node "racdbsrv02p".
/tmp/deinstall2017-01-27_01-17-20PM/perl/bin/perl -I/tmp/deinstall2017-01-27_01-17-20PM/perl/lib -I/tmp/deinstall2017-01-27_01-17-20PM/crs/install /tmp/deinstall2017-01-27_01-17-20PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-01-27_01-17-20PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Run the following command as the root user or the administrator on node "racdbsrv01p".
/tmp/deinstall2017-01-27_01-17-20PM/perl/bin/perl -I/tmp/deinstall2017-01-27_01-17-20PM/perl/lib -I/tmp/deinstall2017-01-27_01-17-20PM/crs/install /tmp/deinstall2017-01-27_01-17-20PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-01-27_01-17-20PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Press Enter after you finish running the above commands

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Press Enter after you finish running the above commands

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.
Press Enter after you finish running the above commands

-> In another terminal, as root on nodes 1 and 2, we run the rootcrs.pl commands with the .rsp files from the directory given above:

root@racdbsrv01p / # /tmp/deinstall2017-01-27_01-17-20PM/perl/bin/perl -I/tmp/deinstall2017-01-27_01-17-20PM/perl/lib -I/tmp/deinstall2017-01-27_01-17-20PM/crs/install /tmp/deinstall2017-01-27_01-17-20PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-01-27_01-17-20PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Using configuration parameter file: /tmp/deinstall2017-01-27_01-17-20PM/response/deinstall_Ora11g_gridinfrahome1.rsp
Adding Clusterware entries to /etc/inittab
/crs/install/inittab does not exist.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware #
################################################################
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
This may take several minutes. Please wait ...
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, A file or directory in the path name does not exist.) for command /etc/ohasd deinstall
Successfully deconfigured Oracle clusterware stack on this node
root@racdbsrv02p / # /tmp/deinstall2017-01-27_01-17-20PM/perl/bin/perl -I/tmp/deinstall2017-01-27_01-17-20PM/perl/lib -I/tmp/deinstall2017-01-27_01-17-20PM/crs/install /tmp/deinstall2017-01-27_01-17-20PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2017-01-27_01-17-20PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2017-01-27_01-17-20PM/response/deinstall_Ora11g_gridinfrahome1.rsp
****Unable to retrieve Oracle Clusterware home.
Start Oracle Clusterware stack and try again.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/ocr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Modify failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Delete failed, or completed with errors.
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
################################################################
# You must kill processes or reboot the system to properly #
# cleanup the processes started by Oracle clusterware #
################################################################
This may take several minutes. Please wait ...
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Either /etc/oracle/olr.loc does not exist or is not readable
Make sure the file exists and it has read and execute access
Failure in execution (rc=-1, 256, A file or directory in the path name does not exist.) for command /etc/ohasd deinstall
Successfully deconfigured Oracle clusterware stack on this node

* The new domain name can now be seen in the crsconfig_params file;

root@racdbsrv01p /oracle/app/grid/crs/install # cat crsconfig_params
...
SCAN_NAME=racdbserver.my_new.domain_name
...

* Just in case, I move the following files aside as backups (the reinstall will recreate them);

root@racdbsrv01p / # mv /opt/ORCLfmap /opt/ORCLfmap_17032017
root@racdbsrv01p / # mv /etc/oraInst.loc /etc/oraInst_17032017_loc

root@racdbsrv02p / # mv /etc/oraInst.loc /etc/oraInst_17032017_loc
root@racdbsrv02p / # mv /opt/ORCLfmap /opt/ORCLfmap_17032017

* We install CRS/Grid;

oracle@racdbsrv01p  echo $ORACLE_HOME
/oracle/app/grid
oracle@racdbsrv01p  echo $ORACLE_SID
+ASM1
oracle@racdbsrv01p  cd /setup/db_and_grid/grid
oracle@racdbsrv01p  ./runInstaller

clustername: racdbserver
scan name: racdbserver.my_new.domain_name
racdbsrv01p-vip.my_new.domain_name 
racdbsrv02p-vip.my_new.domain_name


root@racdbsrv01p / # /oracle/app/oraInventory2/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory2.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory2 to dba.
The execution of the script is complete.

root@racdbsrv02p / # /oracle/app/oraInventory2/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory2.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory2 to dba.
The execution of the script is complete.

root@racdbsrv01p /tmp # /oracle/app/grid/root.sh
Performing root user operation for Oracle 11g 
The following environment variables are set as:
 ORACLE_OWNER= oracle
 ORACLE_HOME= /oracle/app/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/grid/crs/install/crsconfig_params
Creating trace directory
Installing Trace File Analyzer
User oracle has the required capabilities to run CSSD in realtime mode
OLR initialization - successful
 root wallet
 root wallet cert
 root cert export
 peer wallet
 profile reader wallet
 pa wallet
 peer wallet keys
 pa wallet keys
 peer cert request
 pa cert request
 peer cert
 pa cert
 peer root cert TP
 profile reader root cert TP
 pa root cert TP
 peer pa cert TP
 pa peer cert TP
 profile reader pa cert TP
 profile reader peer cert TP
 peer user cert
 pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'racdbsrv01p'
CRS-2676: Start of 'ora.mdnsd' on 'racdbsrv01p' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'racdbsrv01p'
CRS-2676: Start of 'ora.gpnpd' on 'racdbsrv01p' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racdbsrv01p'
CRS-2672: Attempting to start 'ora.gipcd' on 'racdbsrv01p'
CRS-2676: Start of 'ora.cssdmonitor' on 'racdbsrv01p' succeeded
CRS-2676: Start of 'ora.gipcd' on 'racdbsrv01p' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racdbsrv01p'
CRS-2672: Attempting to start 'ora.diskmon' on 'racdbsrv01p'
CRS-2676: Start of 'ora.diskmon' on 'racdbsrv01p' succeeded
CRS-2676: Start of 'ora.cssd' on 'racdbsrv01p' succeeded
ASM created and started successfully.
Disk Group OCR created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 8aeb6a05c0b24f7bbf9c9532b47c54e1.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 8aeb6a05c0b24f7bbf9c9532b47c54e1 (/dev/rhdiskOCRd01) [OCR]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'racdbsrv01p'
CRS-2676: Start of 'ora.asm' on 'racdbsrv01p' succeeded
CRS-2672: Attempting to start 'ora.OCR.dg' on 'racdbsrv01p'
CRS-2676: Start of 'ora.OCR.dg' on 'racdbsrv01p' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded


root@racdbsrv02p / # /oracle/app/grid/root.sh
Performing root user operation for Oracle 11g 
The following environment variables are set as:
 ORACLE_OWNER= oracle
 ORACLE_HOME= /oracle/app/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/app/grid/crs/install/crsconfig_params
Creating trace directory
Installing Trace File Analyzer
User oracle has the required capabilities to run CSSD in realtime mode
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node racdbsrv01p, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
root@racdbsrv01p /oracle/app/grid/bin # ./crsctl check cluster -all 
**************************************************************
racdbsrv01p:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racdbsrv02p:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
root@racdbsrv01p /oracle/app/grid/bin # ./crsctl stat res -t
oracle@racdbsrv01p  echo $ORACLE_SID
+ASM1
oracle@racdbsrv01p  sqlplus / as sysasm
SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 27 16:23:54 2017
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> SELECT STATE, NAME FROM V$ASM_DISKGROUP;
STATE NAME
----------- ------------------------------
MOUNTED OCR
DISMOUNTED DATA
DISMOUNTED RECO
SQL> alter diskgroup DATA mount;
Diskgroup altered.
SQL> alter diskgroup RECO mount;
Diskgroup altered.
SQL> SELECT STATE, NAME FROM V$ASM_DISKGROUP;
STATE NAME
----------- ------------------------------
MOUNTED OCR
MOUNTED DATA
MOUNTED RECO
SQL> exit

* Check the listener with lsnrctl status;

oracle@racdbsrv01p  lsnrctl stat
LSNRCTL for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production on 27-JAN-2017 17:25:12
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for IBM/AIX RISC System/6000: Version 11.2.0.4.0 - Production
Start Date 27-JAN-2017 17:23:33
Uptime 0 days 0 hr. 1 min. 39 sec
Trace Level off
Security ON: Local OS Authentication
SNMP ON
Listener Parameter File /oracle/app/grid/network/admin/listener.ora
Listener Log File /oracle/app/oracle/diag/tnslsnr/racdbsrv01p/listener/alert/log.xml
Listening Endpoints Summary...
 (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
 (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.10.18)(PORT=1521)))
 (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.10.19)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
 Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "PRDTST" has 1 instance(s).
 Instance "PRDTST1", status READY, has 1 handler(s) for this service...
Service "PRDTSTXDB" has 1 instance(s).
 Instance "PRDTST1", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@racdbsrv01p

* I replace the domain name in the HOST entries of tnsnames.ora with the new one;

oracle@racdbsrv01p  vi tnsnames.ora
"tnsnames.ora" 12 lines, 354 characters 
# tnsnames.ora Network Configuration File: /oracle/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

PRDTST =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = racdbserver.my_new.domain_name)(PORT = 1521))
 (CONNECT_DATA =
 (SERVER = DEDICATED)
 (SERVICE_NAME = PRDTST)
 )
 )
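Instead of editing by hand, the same substitution can be scripted. A sketch on a throwaway copy (GNU sed's -i shown; on AIX, write to a temp file and move it back instead):

```shell
# Sketch: swap the old domain for the new one in a copy of tnsnames.ora.
# The /tmp path is a stand-in for $ORACLE_HOME/network/admin/tnsnames.ora.
tns=/tmp/tnsnames_sample.ora
cat > "$tns" <<'EOF'
PRDTST =
 (DESCRIPTION =
 (ADDRESS = (PROTOCOL = TCP)(HOST = racdbserver.my_old.domain_name)(PORT = 1521))
 (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = PRDTST))
 )
EOF
cp "$tns" "$tns.bak"                                  # keep a backup before touching it
sed -i 's/my_old\.domain_name/my_new.domain_name/g' "$tns"
diff "$tns.bak" "$tns" || true                        # review the change before reloading
```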
SQL> show parameter listener
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string
remote_listener string racdbserver.my_old.domain_name:1521

SQL> alter system set remote_listener="racdbserver.my_new.domain_name:1521" sid='*' scope=both;
System altered.

SQL> alter system register;
System altered.

SQL> show parameter listener
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
listener_networks string
local_listener string
remote_listener string racdbserver.my_new.domain_name:1521


SQL> alter system set local_listener='(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=racdbsrv01p-vip.my_new.domain_name)(PORT=1521)))' scope=BOTH sid='PRDTST1';

System altered.

SQL> 
SQL> alter system set local_listener='(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=racdbsrv02p-vip.my_new.domain_name)(PORT=1521)))' scope=BOTH sid='PRDTST2';

System altered.

SQL>

* We have reinstalled CRS/Grid and re-set the listener definitions. Because removing and reinstalling Grid also removed all previously applied patches, don't forget to apply the latest patch as the final step.

I hope this is useful…

REFERENCES:

  • How to Change the Domain Name for a RAC Database Server (Doc ID 1059776.1)
  • How to Reinstall Oracle Grid Infrastructure Without Disturbing RDBMS Installation (Doc ID 1276975.1)