OEL 6.4, VirtualBox 4.2 Installation
Check the following link for Linux/VirtualBox installation details: http://www.oracle-base.com/articles/11g/oracle-db-11gr2-rac-installation-on-oracle-linux-6-using-virtualbox.php
- Install VirtualBox Guest Additions
- Install the preinstall package: # yum install oracle-rdbms-server-11gR2-preinstall
- Update the installation: # yum update
- Install Wireshark: # yum install wireshark wireshark-gnome
- Install cluvfy as user grid – download here and extract the files under the grid user
- Extract the grid software to the folder grid and install the RPM from the folder grid/rpm:
# cd /media/sf_kits/Oracle/11.2.0.4/grid/rpm
# rpm -iv cvuqdisk-1.0.9-1.rpm
Preparing packages for installation...
Using default group oinstall to install package cvuqdisk-1.0.9-1
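If you want to double-check that the package really landed, a quick query is enough ( a minimal check, run as root ):
# rpm -q cvuqdisk
Repeat the query later on every cluster node - cluvfy verifies that cvuqdisk is installed everywhere.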
Setup User Accounts
NOTE: Oracle recommends using different users for the installation of the Grid Infrastructure (GI) and the Oracle RDBMS home. The GI will be installed in a separate Oracle base, owned by the user 'grid'. After the grid install the GI home will be owned by root and inaccessible to unauthorized users.
Create OS groups using the commands below. Enter these commands as the 'root' user:
# /usr/sbin/groupadd -g 501 oinstall
# /usr/sbin/groupadd -g 502 dba
# /usr/sbin/groupadd -g 504 asmadmin
# /usr/sbin/groupadd -g 506 asmdba
# /usr/sbin/groupadd -g 507 asmoper
Create the users that will own the Oracle software using the commands:
# /usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
# /usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle
$ id grid
uid=501(grid) gid=54321(oinstall) groups=54321(oinstall),504(asmadmin),506(asmdba),507(asmoper)
$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),501(vboxsf),506(asmdba),54322(dba)
For the C shell (csh or tcsh), add the following lines to the /etc/csh.login file:
if ( $USER == "oracle" || $USER == "grid" ) then
    limit maxproc 16384
    limit descriptors 65536
endif
Create directories:
To create the Oracle Inventory directory, enter the following commands as the root user:
# mkdir -p /u01/app/oraInventory
# chown -R grid:oinstall /u01/app/oraInventory
Creating the Oracle Grid Infrastructure Home Directory
# mkdir -p /u01/app/grid
# chown -R grid:oinstall /u01/app/grid
# chmod -R 775 /u01/app/grid
# mkdir -p /u01/app/11204/grid
# chown -R grid:oinstall /u01/app/11204/grid
# chmod -R 775 /u01/app/11204/grid
Creating the Oracle Base Directory
To create the Oracle Base directory, enter the following commands as the root user:
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle
Creating the Oracle RDBMS Home Directory
To create the Oracle RDBMS Home directory, enter the following commands as the root user:
# mkdir -p /u01/app/oracle/product/11204/racdb
# chown -R oracle:oinstall /u01/app/oracle/product/11204/racdb
# chmod -R 775 /u01/app/oracle/product/11204/racdb
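Before moving on it is worth verifying the accounts and directory ownership; a minimal check run as root ( adjust the paths if you chose different locations ):
# id grid
# id oracle
# ls -ld /u01/app/oraInventory /u01/app/grid /u01/app/11204/grid
# ls -ld /u01/app/oracle /u01/app/oracle/product/11204/racdb
The directories should be owned by grid:oinstall ( GI ) and oracle:oinstall ( RDBMS ) with mode 775, matching the chown/chmod commands above.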
Configure DNS/DHCP server running on its own VM
Please read the following link to set up the BIND/DHCP server.
Configure local network for first VM: grac41
Network/DNS setup using VirtualBox:
eth0 - NAT : used for VPN connection to company network/local router ( DHCP )
eth1 - Host-Only Adapter : public ( grac1: 192.168.1.101, grac2: 192.168.1.102, grac3: 192.168.1.103, .. )
eth2 - Internal : private cluster interconnect ( grac1int: 192.168.2.101, grac2int: 192.168.2.102, grac3int: 192.168.2.103, .. )
Modify the eth0 device using Network Manager ( see /etc/sysconfig/network-scripts/ifcfg-eth0 ):
Go to the IPv4 settings -> change Method to: Automatic (DHCP) addresses only ( -> now we can modify Nameservers/Search )
  Nameservers: 192.168.1.50, 192.135.82.44, 192.168.1.1 ( order matters - your local name server comes first ! )
  Search: example.com,grid.example.com,de.oracle.com
Details: nameserver settings:
  192.135.82.44 : corporate name server
  192.168.1.1   : local LTE router
  192.168.1.50  : DNS name server used for GNS delegation ( GNS NS: 192.168.1.55 )
Domains:
  example.com      : our local domain
  grid.example.com : our GNS subdomain
  de.oracle.com    : corporate domain
After the above setup the network devices and /etc/resolv.conf should look like:
# ifconfig | egrep 'HWaddr|Bcast'
eth0      Link encap:Ethernet  HWaddr 08:00:27:A8:27:BD
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
eth1      Link encap:Ethernet  HWaddr 08:00:27:1E:7D:B0
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
eth2      Link encap:Ethernet  HWaddr 08:00:27:97:59:C3
          inet addr:192.168.2.101  Bcast:192.168.2.255  Mask:255.255.255.0
# cat /etc/resolv.conf
# Generated by NetworkManager
search example.com grid.example.com de.oracle.com
nameserver 192.135.82.44
nameserver 192.168.1.1
nameserver 192.168.1.50
Check local DNS resolution:
# nslookup grac41
Name:    grac41.example.com
Address: 192.168.1.101
# nslookup 192.168.1.101
101.1.168.192.in-addr.arpa    name = grac41.example.com.
# nslookup grac41int.example.com
Name:    grac41int.example.com
Address: 192.168.2.101
# nslookup 192.168.2.101
Server:   192.168.1.50
Address:  192.168.1.50#53
101.2.168.192.in-addr.arpa    name = grac41int.example.com
Check corporate DNS resolution:
# nslookup supsunhh3
Non-authoritative answer:
Name:    supsunhh3.de.oracle.com
Address: xxxxxxx
Configure your hostname by modifying /etc/sysconfig/network:
NETWORKING=yes
HOSTNAME=grac41.example.com
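For reference, a static configuration for the public interface eth1 could look like the sketch below. This is an assumed sample based on the addressing scheme above - the HWADDR/UUID lines generated by the installer are omitted and the values must be adapted per node:
# /etc/sysconfig/network-scripts/ifcfg-eth1 ( grac41, assumed sample )
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.1.101
NETMASK=255.255.255.0
The private interconnect eth2 is configured the same way with 192.168.2.101/255.255.255.0. No gateway is set on eth1/eth2 because the default route is provided by the NAT device eth0.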
Run cluvfy commands to test Hardware, OS, GNS and pre-CRS installation status
Cluvfy commands to run before cloning, using cluster member grac41 only:
Post-check for hardware and OS:
$ ./bin/cluvfy stage -post hwos -n grac41 -verbose
Pre-check for CRS installation:
$ ./bin/cluvfy comp sys -pre crs -n grac41 -verbose
Check GNS ( note: 192.168.1.55 is the IP address of our GNS name server ):
$ ./bin/cluvfy comp gns -precrsinst -domain grid.example.com -vip 192.168.1.55 -verbose -n grac41
Configure ASM Devices for first VM: grac41
Create ASM disks
VBoxManage createhd --filename M:\VM\GRAC_OEL64_11204\asm1_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\GRAC_OEL64_11204\asm2_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\GRAC_OEL64_11204\asm3_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage createhd --filename M:\VM\GRAC_OEL64_11204\asm4_10G.vdi --size 10240 --format VDI --variant Fixed
VBoxManage storageattach grac41 --storagectl "SATA" --port 1 --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm1_10G.vdi
VBoxManage storageattach grac41 --storagectl "SATA" --port 2 --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm2_10G.vdi
VBoxManage storageattach grac41 --storagectl "SATA" --port 3 --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm3_10G.vdi
VBoxManage storageattach grac41 --storagectl "SATA" --port 4 --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm4_10G.vdi
VBoxManage modifyhd M:\VM\GRAC_OEL64_11204\asm1_10G.vdi --type shareable
VBoxManage modifyhd M:\VM\GRAC_OEL64_11204\asm2_10G.vdi --type shareable
VBoxManage modifyhd M:\VM\GRAC_OEL64_11204\asm3_10G.vdi --type shareable
VBoxManage modifyhd M:\VM\GRAC_OEL64_11204\asm4_10G.vdi --type shareable
Reboot the system and format the devices.
Please read the following link.
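Formatting here means creating a single primary partition on each shared disk. A minimal sketch for the first disk, assuming the new VDIs appear as /dev/sdb .. /dev/sde ( partition them on grac41 only; the partitions become visible to the other node because the disks are shared ):
# fdisk /dev/sdb
   n        ( new partition )
   p        ( primary, partition number 1 )
   <Enter>  ( accept default first cylinder )
   <Enter>  ( accept default last cylinder - use the whole disk )
   w        ( write partition table and exit )
# partprobe /dev/sdb
Repeat for /dev/sdc, /dev/sdd and /dev/sde so that /dev/sdb1 .. /dev/sde1 exist for the udev rules in the next step.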
Setup UDEV rules
Please read the following link.
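For orientation, a rule file producing stable device names like /dev/asmdisk1_udev_sdb1 ( the names used by cluvfy and the installer below ) could look roughly like this. The file name and the RESULT string are assumptions - take the scsi_id value from your own disk ( /sbin/scsi_id -g -u -d /dev/sdb ) and add one rule per partition:
# /etc/udev/rules.d/99-oracle-asmdevices.rules ( sketch, one line per disk )
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="<scsi_id of asm1_10G.vdi>", NAME="asmdisk1_udev_sdb1", OWNER="grid", GROUP="asmadmin", MODE="0660"
After adding the rules, reload them with start_udev and check that ls -l /dev/asmdisk* shows grid:asmadmin ownership with mode 0660.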
Run cluvfy commands to test Hardware, OS, GNS and pre-CRS installation status
Configure the 2nd node: add the udev rules and attach the shared devices.
VBoxManage storageattach grac42 --storagectl "SATA" --port 1 --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm1_10G.vdi
VBoxManage storageattach grac42 --storagectl "SATA" --port 2 --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm2_10G.vdi
VBoxManage storageattach grac42 --storagectl "SATA" --port 3 --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm3_10G.vdi
VBoxManage storageattach grac42 --storagectl "SATA" --port 4 --device 0 --type hdd --medium M:\VM\GRAC_OEL64_11204\asm4_10G.vdi
Run cluvfy:
$ ./bin/cluvfy stage -pre crsinst -asm -presence local -asmgrp asmadmin -asmdev /dev/asmdisk1_udev_sdb1,/dev/asmdisk2_udev_sdc1,/dev/asmdisk3_udev_sdd1,/dev/asmdisk4_udev_sde1 -networks eth1:192.168.1.0:PUBLIC/eth2:192.168.2.0:cluster_interconnect -n grac41,grac42
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "grac41"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) grac41,grac42
TCP connectivity check passed for subnet "192.168.2.0"
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) grac41,grac42
TCP connectivity check passed for subnet "192.168.1.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Checking ASMLib configuration.
Check for ASMLib configuration passed.
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "grac42:/usr,grac42:/var,grac42:/etc,grac42:/sbin,grac42:/tmp"
Free disk space check passed for "grac41:/usr,grac41:/var,grac41:/etc,grac41:/sbin,grac41:/tmp"
Check for multiple users with UID value 501 passed
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Group existence check passed for "asmadmin"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" passed
Membership check for user "grid" in group "asmadmin" passed
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
..
Package existence check passed for "nfs-utils"
Checking availability of ports "23792,23791" required for component "Oracle Remote Method Invocation (ORMI)"
Port availability check passed for ports "23792,23791"
Checking availability of ports "6200,6100" required for component "Oracle Notification Service (ONS)"
Port availability check passed for ports "6200,6100"
Checking availability of ports "2016" required for component "Oracle Notification Service (ONS) Enterprise Manager support"
Port availability check passed for ports "2016"
Checking availability of ports "1521" required for component "Oracle Database Listener"
Port availability check passed for ports "1521"
Checking availability of ports "8888" required for component "Oracle Containers for J2EE (OC4J)"
Port availability check passed for ports "8888"
Check for multiple users with UID value 0 passed
Current group ID check passed
Starting check for consistency of primary group of root user
Check for consistency of root user's primary group passed
Package existence check passed for "cvuqdisk"
Checking Devices for ASM...
Checking for shared devices...
  Device                                Device Type
  ------------------------------------  ------------------------
  /dev/asmdisk4_udev_sde1               Disk
  /dev/asmdisk2_udev_sdc1               Disk
  /dev/asmdisk3_udev_sdd1               Disk
  /dev/asmdisk1_udev_sdb1               Disk
Checking consistency of device owner across all nodes...
Consistency check of device owner for "/dev/asmdisk2_udev_sdc1" PASSED
Consistency check of device owner for "/dev/asmdisk4_udev_sde1" PASSED
Consistency check of device owner for "/dev/asmdisk3_udev_sdd1" PASSED
Consistency check of device owner for "/dev/asmdisk1_udev_sdb1" PASSED
Checking consistency of device group across all nodes...
Consistency check of device group for "/dev/asmdisk2_udev_sdc1" PASSED
Consistency check of device group for "/dev/asmdisk4_udev_sde1" PASSED
Consistency check of device group for "/dev/asmdisk3_udev_sdd1" PASSED
Consistency check of device group for "/dev/asmdisk1_udev_sdb1" PASSED
Checking consistency of device permissions across all nodes...
Consistency check of device permissions for "/dev/asmdisk2_udev_sdc1" PASSED
Consistency check of device permissions for "/dev/asmdisk4_udev_sde1" PASSED
Consistency check of device permissions for "/dev/asmdisk3_udev_sdd1" PASSED
Consistency check of device permissions for "/dev/asmdisk1_udev_sdb1" PASSED
Checking consistency of device size across all nodes...
Consistency check of device size for "/dev/asmdisk2_udev_sdc1" PASSED
Consistency check of device size for "/dev/asmdisk4_udev_sde1" PASSED
Consistency check of device size for "/dev/asmdisk3_udev_sdd1" PASSED
Consistency check of device size for "/dev/asmdisk1_udev_sdb1" PASSED
UDev attributes check for ASM Disks started...
Checking udev settings for device "/dev/asmdisk1_udev_sdb1"
Checking udev settings for device "/dev/asmdisk2_udev_sdc1"
Checking udev settings for device "/dev/asmdisk3_udev_sdd1"
Checking udev settings for device "/dev/asmdisk4_udev_sde1"
UDev attributes check passed for ASM Disks
Devices check for ASM passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed
Checking integrity of file "/etc/resolv.conf" across nodes
"domain" and "search" entries do not coexist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
The DNS response time for an unreachable node is within acceptable limit on all nodes
Check for integrity of file "/etc/resolv.conf" passed
Time zone consistency check passed
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking daemon "avahi-daemon" is not configured and running
Daemon not configured check passed for process "avahi-daemon"
Daemon not running check passed for process "avahi-daemon"
Starting check for Reverse path filter setting ...
Check for Reverse path filter setting passed
Starting check for /dev/shm mounted as temporary file system ...
Check for /dev/shm mounted as temporary file system passed
Pre-check for cluster services setup was successful.
Run the installer with the following parameters
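The GUI installer is started as the grid user from the staged software; a minimal sketch, assuming the software was extracted under /media/sf_kits/Oracle/11.2.0.4/grid as in the cvuqdisk step above ( the OUI log below shows the share mounted under a slightly different name ):
$ cd /media/sf_kits/Oracle/11.2.0.4/grid
$ ./runInstaller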
From OUI log: /tmp/OraInstall2013-09-12_10-45-12AM/installActions2013-09-12_10-45-12AM.log
--------------------------------------------------------------------------------
Global Settings
--------------------------------------------------------------------------------
- Disk Space : required 5.5 GB available 28.16 GB
- Install Option : Install and Configure Oracle Grid Infrastructure for a Cluster
- Oracle base for Oracle Grid Infrastructure : /u01/app/grid
- Grid home : /u01/app/11204/grid
- Source Location : /media/sf_mykits/Oracle/11.2.0.4/grid/grid/install/../stage/products.xml
- Privileged Operating System Groups : asmdba (OSDBA), asmoper (OSOPER), asmadmin (OSASM)
--------------------------------------------------------------------------------
Inventory information
--------------------------------------------------------------------------------
- Inventory location : /u01/app/oraInventory
- Central inventory (oraInventory) group : oinstall
--------------------------------------------------------------------------------
Grid Infrastructure Settings
--------------------------------------------------------------------------------
- Cluster Name : grac4
- Local Node : grac41
- Remote Nodes : grac42
- GNS Subdomain : grac.example.com
- GNS VIP Address : 192.168.1.58
- Single Client Access Name (SCAN) : grac4-scan.grac.example.com
- SCAN Port : 1521
- Public Interfaces : eth1
- Private Interfaces : eth2
--------------------------------------------------------------------------------
Storage Information
--------------------------------------------------------------------------------
- Storage Type : Oracle ASM
- ASM Disk Group : DATA
- Storage Redundancy : NORMAL
- Disks Selected : /dev/asmdisk1_udev_sdb1,/dev/asmdisk2_udev_sdc1,/dev/asmdisk3_udev_sdd1,/dev/asmdisk4_udev_sde1
--------------------------------------------------------------------------------
Run root.sh at grac41
# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
# /u01/app/11204/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11204/grid
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11204/grid/crs/install/crsconfig_params
Creating trace directory
Installing Trace File Analyzer
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'grac41'
CRS-2676: Start of 'ora.mdnsd' on 'grac41' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'grac41'
CRS-2676: Start of 'ora.gpnpd' on 'grac41' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'grac41'
CRS-2672: Attempting to start 'ora.gipcd' on 'grac41'
CRS-2676: Start of 'ora.cssdmonitor' on 'grac41' succeeded
CRS-2676: Start of 'ora.gipcd' on 'grac41' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'grac41'
CRS-2672: Attempting to start 'ora.diskmon' on 'grac41'
CRS-2676: Start of 'ora.diskmon' on 'grac41' succeeded
CRS-2676: Start of 'ora.cssd' on 'grac41' succeeded
ASM created and started successfully.
Disk Group DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 10c81d1ce5a14fb6bf35cbb22fff3ebf.
Successful addition of voting disk 98010612be6b4fc9bf3bc1b186d8758d.
Successful addition of voting disk 9688bec3914d4f70bfc959664ddd8584.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                 File Name                  Disk group
--  -----    -----------------                 ---------                  ----------
 1. ONLINE   10c81d1ce5a14fb6bf35cbb22fff3ebf  (/dev/asmdisk1_udev_sdb1)  [DATA]
 2. ONLINE   98010612be6b4fc9bf3bc1b186d8758d  (/dev/asmdisk2_udev_sdc1)  [DATA]
 3. ONLINE   9688bec3914d4f70bfc959664ddd8584  (/dev/asmdisk3_udev_sdd1)  [DATA]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'grac41'
CRS-2676: Start of 'ora.asm' on 'grac41' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'grac41'
CRS-2676: Start of 'ora.DATA.dg' on 'grac41' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Run root.sh at grac42
# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
# /u01/app/11204/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11204/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11204/grid/crs/install/crsconfig_params
Creating trace directory
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node grac41, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
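Before the full cluvfy post-check, a quick sanity check of the stack on both nodes can be done with the standard clusterware tools ( a minimal sketch using the GI home path from this setup ):
$ /u01/app/11204/grid/bin/crsctl check cluster -all    ( CRS, CSS and EVM should be online on grac41 and grac42 )
$ /u01/app/11204/grid/bin/olsnodes -n -s               ( node names, node numbers and status )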
Verify the CRS installation by running cluvfy with:
$ ./bin/cluvfy stage -post crsinst -n grac41,grac42
Performing post-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "grac41"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) grac41,grac42
TCP connectivity check passed for subnet "192.168.1.0"
Check: Node connectivity using interfaces on subnet "192.168.2.0"
Node connectivity passed for subnet "192.168.2.0" with node(s) grac41,grac42
TCP connectivity check passed for subnet "192.168.2.0"
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.2.0".
Subnet mask consistency check passed.
Node connectivity check passed
Checking multicast communication...
Checking subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.2.0" for multicast communication with multicast group "224.0.0.251" passed.
Check of multicast communication passed.
Time zone consistency check passed
Checking Cluster manager integrity...
Checking CSS daemon...
Oracle Cluster Synchronization Services appear to be online.
Cluster manager integrity check passed
UDev attributes check for OCR locations started...
UDev attributes check passed for OCR locations
UDev attributes check for Voting Disk locations started...
UDev attributes check passed for Voting Disk locations
Default user file creation mask check passed
Checking cluster integrity...
Cluster integrity check passed
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+DATA" is available on all the nodes
NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Checking CRS integrity...
Clusterware version consistency passed.
CRS integrity check passed
Checking node application existence...
Checking existence of VIP node application (required)
VIP node application check passed
Checking existence of NETWORK node application (required)
NETWORK node application check passed
Checking existence of GSD node application (optional)
GSD node application is offline on nodes "grac41,grac42"
Checking existence of ONS node application (optional)
ONS node application check passed
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "grac4-scan.grid4.example.com"...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses...
Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Checking OLR integrity...
Check of existence of OLR configuration file "/etc/oracle/olr.loc" passed
Check of attributes of OLR configuration file "/etc/oracle/olr.loc" passed
WARNING:
This check does not verify the integrity of the OLR contents. Execute 'ocrcheck -local' as a privileged user to verify the contents of OLR.
OLR integrity check passed
Checking GNS integrity...
The GNS subdomain name "grid4.example.com" is a valid domain name
Checking if the GNS VIP belongs to same subnet as the public network...
Public network subnets "192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0" match with the GNS VIP "192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0, 192.168.1.0"
GNS VIP "192.168.1.59" resolves to a valid IP address
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resolved IP addresses are reachable
GNS resource configuration check passed
GNS VIP resource configuration check passed.
GNS integrity check passed
Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
User "grid" is not part of "root" group. Check passed
Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed
Oracle Cluster Time Synchronization Services check passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.
Post-check for cluster services setup was successful.
Check CRS status after installation
# my_crs_stat
NAME TARGET STATE SERVER STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.DATA.dg ONLINE ONLINE grac41
ora.DATA.dg ONLINE ONLINE grac42
ora.LISTENER.lsnr ONLINE ONLINE grac41
ora.LISTENER.lsnr ONLINE ONLINE grac42
ora.asm ONLINE ONLINE grac41 Started
ora.asm ONLINE ONLINE grac42 Started
ora.gsd OFFLINE OFFLINE grac41
ora.gsd OFFLINE OFFLINE grac42
ora.net1.network ONLINE ONLINE grac41
ora.net1.network ONLINE ONLINE grac42
ora.ons ONLINE ONLINE grac41
ora.ons ONLINE ONLINE grac42
ora.registry.acfs ONLINE ONLINE grac41
ora.registry.acfs ONLINE ONLINE grac42
ora.LISTENER_SCAN1.lsnr ONLINE ONLINE grac42
ora.LISTENER_SCAN2.lsnr ONLINE ONLINE grac41
ora.LISTENER_SCAN3.lsnr ONLINE ONLINE grac41
ora.cvu ONLINE ONLINE grac41
ora.gns ONLINE ONLINE grac41
ora.gns.vip ONLINE ONLINE grac41
ora.grac41.vip ONLINE ONLINE grac41
ora.grac42.vip ONLINE ONLINE grac42
ora.oc4j ONLINE ONLINE grac41
ora.scan1.vip ONLINE ONLINE grac42
ora.scan2.vip ONLINE ONLINE grac41
ora.scan3.vip ONLINE ONLINE grac41
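Note: my_crs_stat ( used above ) is a site-local wrapper script, not a standard Oracle utility - an assumption here. The same information can be pulled directly from the clusterware tools:
$ /u01/app/11204/grid/bin/crsctl stat res -t           ( target/state of all resources, grouped by type )
$ /u01/app/11204/grid/bin/crsctl stat res ora.gns      ( details of a single resource )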
Create database ..