Delete the Instance from the Oracle RAC Database
Verify that all the instances are up and running.
[oracle@gract3 ~]$ srvctl status database -d cdbn
Instance cdbn1 is running on node gract1
Instance cdbn2 is running on node gract2
Instance cdbn3 is running on node gract3
Check the resources running on node gract3:
[grid@gract3 ~]$ crs | egrep 'gract3|STATE|--'
Resource NAME TARGET STATE SERVER STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.ACFS_DG1.ACFS_VOL1.advm ONLINE ONLINE gract3 Volume device /dev/asm/acfs_vol1-443 is online,STABLE
ora.ACFS_DG1.dg ONLINE ONLINE gract3 STABLE
ora.ASMNET1LSNR_ASM.lsnr ONLINE ONLINE gract3 STABLE
ora.DATA.dg ONLINE ONLINE gract3 STABLE
ora.LISTENER.lsnr ONLINE ONLINE gract3 STABLE
ora.acfs_dg1.acfs_vol1.acfs ONLINE ONLINE gract3 mounted on /u01/acfs/acfs-vol1,STABLE
ora.net1.network ONLINE ONLINE gract3 STABLE
ora.ons ONLINE ONLINE gract3 STABLE
ora.proxy_advm ONLINE ONLINE gract3 STABLE
Resource NAME INST TARGET STATE SERVER STATE_DETAILS
--------------------------- ---- ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN2.lsnr 1 ONLINE ONLINE gract3 STABLE
ora.MGMTLSNR 1 ONLINE ONLINE gract3 169.254.145.224 192.168.2.113,STABLE
ora.asm 2 ONLINE ONLINE gract3 Started,STABLE
ora.cdbn.db 3 ONLINE ONLINE gract3 Open,STABLE
ora.cvu 1 ONLINE ONLINE gract3 STABLE
ora.gract3.vip 1 ONLINE ONLINE gract3 STABLE
ora.mgmtdb 1 ONLINE ONLINE gract3 Open,STABLE
ora.scan2.vip 1 ONLINE ONLINE gract3 STABLE
Verify the current OCR backups using the command ocrconfig -showbackup:
[grid@gract3 ~]$ ocrconfig -showbackup
gract1 2014/08/14 17:07:49 /u01/app/12102/grid/cdata/gract/backup00.ocr 0
gract1 2014/08/14 13:07:45 /u01/app/12102/grid/cdata/gract/backup01.ocr 0
gract1 2014/08/14 09:07:40 /u01/app/12102/grid/cdata/gract/backup02.ocr 0
gract1 2014/08/13 09:07:14 /u01/app/12102/grid/cdata/gract/day.ocr 0
gract1 2014/08/09 18:45:09 /u01/app/12102/grid/cdata/gract/week.ocr 0
gract1 2014/08/09 14:38:36 /u01/app/12102/grid/cdata/gract/backup_20140809_143836.ocr 0
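If you want a backup that reflects the state immediately before the node removal, you can also trigger a manual OCR backup. A minimal sketch, run as root from the Grid Infrastructure home (not part of the original session):
[root@gract1 ~]# $GRID_HOME/bin/ocrconfig -manualbackup
The new backup will then appear in the ocrconfig -showbackup listing above.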
Ensure that all the instances are registered with the SCAN listener (here LISTENER_SCAN2):
[grid@gract3 ~]$ lsnrctl status LISTENER_SCAN2
LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 14-AUG-2014 17:03:42
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN2
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 09-AUG-2014 14:10:46
Uptime 5 days 2 hr. 52 min. 56 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/12102/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/diag/tnslsnr/gract3/listener_scan2/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN2)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.191)(PORT=1521)))
Services Summary...
Service "cdbn" has 3 instance(s).
Instance "cdbn1", status READY, has 1 handler(s) for this service...
Instance "cdbn2", status READY, has 1 handler(s) for this service...
Instance "cdbn3", status READY, has 1 handler(s) for this service...
Service "cdbnXDB" has 3 instance(s).
Instance "cdbn1", status READY, has 1 handler(s) for this service...
Instance "cdbn2", status READY, has 1 handler(s) for this service...
Instance "cdbn3", status READY, has 1 handler(s) for this service...
Service "gract" has 1 instance(s).
Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
Start DBCA from a node other than the one that you are removing and select
-->"Real Application Clusters"
--> "Instance Management"
--> "Delete Instance".
--> Accept the alert windows to delete the instance.
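If you prefer to avoid the GUI, DBCA can do the same in silent mode. A sketch of the equivalent call for this cluster, run from gract1 or gract2 (the SYS password is a placeholder):
[oracle@gract1 ~]$ dbca -silent -deleteInstance -nodeList gract3 -gdbName cdbn -instanceName cdbn3 -sysDBAUserName sys -sysDBAPassword <sys_password>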
Verify that the instance has been deleted and its thread is disabled by querying gv$instance and v$thread.
SQL> select INST_ID,INSTANCE_NUMBER,INSTANCE_NAME,HOST_NAME from gv$instance;
INST_ID INSTANCE_NUMBER INSTANCE_NAME HOST_NAME
---------- --------------- ---------------- ------------------------------
1 1 cdbn1 gract1.example.com
2 2 cdbn2 gract2.example.com
SQL> select THREAD# , STATUS, INSTANCE from v$thread;
THREAD# STATUS INSTANCE
---------- ------ ------------------------------
1 OPEN cdbn1
2 OPEN cdbn2
Verify that the thread for the deleted instance has been disabled. If it is still enabled, disable it as follows:
SQL> ALTER DATABASE DISABLE THREAD 3;
--> No need to run the above command - THREAD# 3 is already disabled
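To check explicitly whether a thread is still enabled, query the ENABLED column of v$thread; a thread that has been disabled reports DISABLED:
SQL> select THREAD#, STATUS, ENABLED, INSTANCE from v$thread;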
Delete the Node from the Cluster
If there is a listener in the Oracle Home on the RAC node that you are deleting, you must disable and stop it before deleting the Oracle RAC software, as in the following commands:
$ srvctl disable listener -l <listener_name> -n <NodeToBeDeleted>
$ srvctl stop listener -l <listener_name> -n <NodeToBeDeleted>
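For example, if gract3 ran a local listener named LISTENER_GRACT3 (a hypothetical name, used here only for illustration), the calls would be:
$ srvctl disable listener -l LISTENER_GRACT3 -n gract3
$ srvctl stop listener -l LISTENER_GRACT3 -n gract3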
Checking listeners:
[grid@gract3 ~]$ ps -elf | grep tns
0 S grid 11783 1 0 80 0 - 42932 ep_pol Aug12 ? 00:00:05 /u01/app/12102/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
0 S grid 23099 1 0 80 0 - 42960 ep_pol Aug09 ? 00:00:14 /u01/app/12102/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
0 S grid 23140 1 0 80 0 - 43080 ep_pol Aug09 ? 00:00:17 /u01/app/12102/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
0 S grid 23162 1 0 80 0 - 43034 ep_pol Aug09 ? 00:00:38 /u01/app/12102/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
--> No need to run the above commands as all listeners run from GRID_HOME
Run the following command from the $ORACLE_HOME/oui/bin directory on the node that you are deleting to update the inventory on that node:
[oracle@gract3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=gract3 -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4198 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
Remove the Oracle RAC software by running the following command on the node to be deleted from the $ORACLE_HOME/deinstall directory:
[oracle@gract3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/oracle/product/12102/racdb
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/12102/grid
The following nodes are part of this cluster: gract3,gract2,gract1
Checking for sufficient temp space availability on node(s) : 'gract3'
## [END] Install check configuration ##
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2014-08-14_05-36-26-PM.log
Network Configuration check config END
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2014-08-14_05-36-37-PM.log
Use comma as separator when specifying list of values as input
Specify the list of database names that are configured locally on this node for this Oracle home.
Local configurations of the discovered databases will be removed [cdbn]:
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check8896.log
Oracle Configuration Manager check END
######################### DECONFIG CHECK OPERATION END #########################
####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/12102/grid
The following nodes are part of this cluster: gract3,gract2,gract1
The cluster node(s) on which the Oracle home deinstallation will be performed are:gract3
Oracle Home selected for deinstall is: /u01/app/oracle/product/12102/racdb
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-14_05-35-57-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-14_05-35-57-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2014-08-14_05-40-55-PM.log
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2014-08-14_05-40-55-PM.log
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean8896.log
Oracle Configuration Manager clean END
######################### DECONFIG CLEAN OPERATION END #########################
####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
#######################################################################
############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2014-08-14_05-32-27PM/response/deinstall_2014-08-14_05-35-57-PM.rsp
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############
####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-14_05-35-57-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-14_05-35-57-PM.err'
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to gract3
Setting CLUSTER_NODES to gract3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2014-08-14_05-32-27PM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/oracle/product/12102/racdb' from the central inventory on the local node : Done
Failed to delete the directory '/u01/app/oracle/product/12102/racdb'. The directory is in use.
Delete directory '/u01/app/oracle/product/12102/racdb' on the local node : Failed <<<<
The Oracle Base directory '/u01/app/oracle' will not be removed on local node.
The directory is in use by Oracle Home '/u01/app/oracle/product/121/racdb'.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2014-08-14_05-32-27PM' on node 'gract3'
## [END] Oracle install clean ##
######################### DEINSTALL CLEAN OPERATION END #########################
####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/oracle/product/12102/racdb' from the central inventory on the local node.
Failed to delete directory '/u01/app/oracle/product/12102/racdb' on the local node.
Oracle Universal Installer cleanup was successful.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL TOOL END #############
Update the node list on the remaining nodes, as in the following example:
gract1:
[root@gract3 Desktop]# ssh gract1
[root@gract1 ~]# su - oracle
-> Active ORACLE_SID: cdbn1
[oracle@gract1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=gract1,gract2
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4695 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@gract1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory
..
Rac system comprising of multiple nodes
Local node = gract1
Remote node = gract2
Verify whether the node to be deleted is active by running the following command from the $CRS_HOME/bin directory:
[grid@gract1 ~]$ olsnodes -s -t
gract1 Active Unpinned
gract2 Active Unpinned
gract3 Active Unpinned
On gract2:
[root@gract2 ~]# su - oracle
-> Active ORACLE_SID: cdbn2
[oracle@gract2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=gract1,gract2
[oracle@gract2 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory
..
Rac system comprising of multiple nodes
Local node = gract2
Remote node = gract1
[root@gract2 ~]# su - grid
[grid@gract2 ~]$ olsnodes -s -t
gract1 Active Unpinned
gract2 Active Unpinned
gract3 Active Unpinned
Disable the Oracle Clusterware applications and daemons running on the node.
Run the rootcrs.pl script as root from the $CRS_HOME/crs/install directory on the node to be deleted (if it is the last node, use the -lastnode option; see the example after the log below) as follows:
[root@gract3 Desktop]# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force
Using configuration parameter file: /u01/app/12102/grid/crs/install/crsconfig_params
Network 1 exists
Subnet IPv4: 192.168.1.0/255.255.255.0/eth1, dhcp
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node gract1
VIP IPv4 Address: -/gract1-vip/192.168.1.160
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
..
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes:
ONS is individually disabled on nodes:
PRCC-1017 : ons was already stopped on gract3
PRCR-1005 : Resource ora.ons is already stopped
PRKO-2440 : Network resource is already stopped.
PRKO-2313 : A VIP named gract3 does not exist.
CRS-2797: Shutdown is already in progress for 'gract3', waiting for it to complete
CRS-2797: Shutdown is already in progress for 'gract3', waiting for it to complete
CRS-4133: Oracle High Availability Services has been stopped.
2014/08/14 18:16:26 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
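For reference, had gract3 been the last node in the cluster, the same deconfiguration would carry the -lastnode flag:
[root@gract3 Desktop]# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force -lastnode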
From any node that you are not deleting, run the following command from the $CRS_HOME/bin directory as root to delete the node from the cluster:
[root@gract1 ~]# $GRID_HOME/bin/crsctl delete node -n gract3
CRS-4661: Node gract3 successfully deleted.
Update the node list on the node to be deleted (gract3) by running the following command from the $CRS_HOME/oui/bin directory:
[grid@gract3 ~]$ $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME CLUSTER_NODES=gract3 -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4964 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
Update the node list on the remaining nodes by running the following command from $CRS_HOME/oui/bin on each of the remaining nodes in the cluster:
on gract1:
[grid@gract1 ~]$ $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME CLUSTER_NODES={gract1,gract2} -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4582 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[grid@gract1 ~]$ $GRID_HOME/OPatch/opatch lsinventory
..
Patch level status of Cluster nodes :
Patching Level Nodes
-------------- -----
0 gract2,gract1
on gract2:
[grid@gract2 ~]$ $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME CLUSTER_NODES={gract1,gract2} -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 4977 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[grid@gract2 ~]$ $GRID_HOME/OPatch/opatch lsinventory
..
Patch level status of Cluster nodes :
Patching Level Nodes
-------------- -----
0 gract2,gract1
Deinstall the Oracle Clusterware home from the node that you want to delete:
[grid@gract3 ~]$ $GRID_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/12102/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
## [END] Install check configuration ##
Traces log file: /u01/app/oraInventory/logs//crsdc_2014-08-15_08-35-48AM.log
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2014-08-15_08-35-48-AM.log
Specify all Oracle Restart enabled listeners that are to be de-configured. Enter .(dot) to deselect all.
[ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2014-08-15_08-36-29-AM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: n
ASM was not detected in the Oracle Home
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2014-08-15_08-36-46-AM.log
Database Check Configuration END
######################### DECONFIG CHECK OPERATION END #########################
####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The following nodes are part of this cluster: null
The cluster node(s) on which the Oracle home deinstallation will be performed are:null
Oracle Home selected for deinstall is: /u01/app/12102/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following Oracle Restart enabled listener(s) will be de-configured: ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
ASM was not detected in the Oracle Home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-15_08-35-46-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-15_08-35-46-AM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2014-08-15_08-36-48-AM.log
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2014-08-15_08-36-48-AM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2014-08-15_08-36-48-AM.log
De-configuring Oracle Restart enabled listener(s): ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
De-configuring listener: ASMNET1LSNR_ASM
Stopping listener: ASMNET1LSNR_ASM
Warning: Failed to stop listener. Listener may not be running.
Deleting listener: ASMNET1LSNR_ASM
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: MGMTLSNR
Stopping listener: MGMTLSNR
Warning: Failed to stop listener. Listener may not be running.
Deleting listener: MGMTLSNR
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER
Stopping listener: LISTENER
Warning: Failed to stop listener. Listener may not be running.
Deleting listener: LISTENER
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN3
Stopping listener: LISTENER_SCAN3
Warning: Failed to stop listener. Listener may not be running.
Deleting listener: LISTENER_SCAN3
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN2
Stopping listener: LISTENER_SCAN2
Warning: Failed to stop listener. Listener may not be running.
Deleting listener: LISTENER_SCAN2
Listener deleted successfully.
Listener de-configured successfully
De-configuring listener: LISTENER_SCAN1
Stopping listener: LISTENER_SCAN1
Warning: Failed to stop listener. Listener may not be running.
Deleting listener: LISTENER_SCAN1
Listener deleted successfully.
Listener de-configured successfully.
De-configuring Listener configuration file...
Listener configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
######################### DECONFIG CLEAN OPERATION END #########################
####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Following Oracle Restart enabled listener(s) were de-configured successfully: ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Oracle Restart is stopped and de-configured successfully.
#######################################################################
############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2014-08-15_08-33-16AM/response/deinstall_2014-08-15_08-35-46-AM.rsp
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############
####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-15_08-35-46-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2014-08-15_08-35-46-AM.err'
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to gract3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2014-08-15_08-33-16AM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/12102/grid' from the central inventory on the local node : Done
..
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
Clean install operation removing temporary directory '/tmp/deinstall2014-08-15_08-33-16AM' on node 'gract3'
## [END] Oracle install clean ##
######################### DEINSTALL CLEAN OPERATION END #########################
####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/12102/grid' from the central inventory on the local node.
Failed to delete directory '/u01/app/12102/grid' on the local node.
Oracle Universal Installer cleanup was successful.
Run 'rm -r /opt/ORCLfmap' as root on node(s) 'gract3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL TOOL END #############
Check the cluster and resource status of our 2-node cluster:
[grid@gract2 ~]$ olsnodes -s -t
gract1 Active Unpinned
gract2 Active Unpinned
[grid@gract2 ~]$ crs
***** Local Resources: *****
Resource NAME TARGET STATE SERVER STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.ACFS_DG1.ACFS_VOL1.advm ONLINE ONLINE gract1 Volume device /dev/asm/acfs_vol1-443 is online,STABLE
ora.ACFS_DG1.ACFS_VOL1.advm ONLINE ONLINE gract2 Volume device /dev/asm/acfs_vol1-443 is online,STABLE
ora.ACFS_DG1.dg ONLINE ONLINE gract1 STABLE
ora.ACFS_DG1.dg ONLINE ONLINE gract2 STABLE
ora.ASMNET1LSNR_ASM.lsnr ONLINE ONLINE gract1 STABLE
ora.ASMNET1LSNR_ASM.lsnr ONLINE ONLINE gract2 STABLE
ora.DATA.dg ONLINE ONLINE gract1 STABLE
ora.DATA.dg ONLINE ONLINE gract2 STABLE
ora.LISTENER.lsnr ONLINE ONLINE gract1 STABLE
ora.LISTENER.lsnr ONLINE ONLINE gract2 STABLE
ora.acfs_dg1.acfs_vol1.acfs ONLINE ONLINE gract1 mounted on /u01/acfs/acfs-vol1,STABLE
ora.acfs_dg1.acfs_vol1.acfs ONLINE ONLINE gract2 mounted on /u01/acfs/acfs-vol1,STABLE
ora.net1.network ONLINE ONLINE gract1 STABLE
ora.net1.network ONLINE ONLINE gract2 STABLE
ora.ons ONLINE ONLINE gract1 STABLE
ora.ons ONLINE ONLINE gract2 STABLE
ora.proxy_advm ONLINE ONLINE gract1 STABLE
ora.proxy_advm ONLINE ONLINE gract2 STABLE
***** Cluster Resources: *****
Resource NAME INST TARGET STATE SERVER STATE_DETAILS
--------------------------- ---- ------------ ------------ --------------- -----------------------------------------
ora.LISTENER_SCAN1.lsnr 1 ONLINE ONLINE gract2 STABLE
ora.LISTENER_SCAN2.lsnr 1 ONLINE ONLINE gract1 STABLE
ora.LISTENER_SCAN3.lsnr 1 ONLINE ONLINE gract1 STABLE
ora.MGMTLSNR 1 ONLINE ONLINE gract2 169.254.111.246 192.168.2.112,STABLE
ora.asm 1 ONLINE ONLINE gract1 Started,STABLE
ora.asm 2 ONLINE OFFLINE - STABLE
ora.asm 3 ONLINE ONLINE gract2 Started,STABLE
ora.cdbn.db 1 ONLINE ONLINE gract1 Open,STABLE
ora.cdbn.db 2 ONLINE ONLINE gract2 Open,STABLE
ora.cdbn.db 3 OFFLINE OFFLINE - Instance Shutdown,STABLE
ora.cvu 1 ONLINE ONLINE gract2 STABLE
ora.gns 1 ONLINE ONLINE gract1 STABLE
ora.gns.vip 1 ONLINE ONLINE gract1 STABLE
ora.gract1.vip 1 ONLINE ONLINE gract1 STABLE
ora.gract2.vip 1 ONLINE ONLINE gract2 STABLE
ora.hanfs.export 1 ONLINE ONLINE gract1 STABLE
ora.havip_id.havip 1 ONLINE ONLINE gract1 STABLE
ora.mgmtdb 1 ONLINE ONLINE gract2 Open,STABLE
ora.oc4j 1 ONLINE ONLINE gract1 STABLE
ora.scan1.vip 1 ONLINE ONLINE gract2 STABLE
ora.scan2.vip 1 ONLINE ONLINE gract1 STABLE
ora.scan3.vip 1 ONLINE ONLINE gract1 STABLE
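As a final sanity check, you can also verify the clusterware stack itself on the surviving nodes (CRS, CSS and EVM should report online for gract1 and gract2):
[grid@gract2 ~]$ crsctl check cluster -all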