Tuesday, November 9, 2021

Oracle RAC: DBT-10002 DBCA does not support this operation in a Grid Infrastructure Oracle Home

This post explains the error I got when trying to create an Oracle 19C database through DBCA, and the mistake I made. The error is pretty straightforward and self-explanatory, yet at the same time it felt vague.


When I saw the error, I thought my grid_env or .bashrc profile had GI_HOME defined in it. That is what is described in Oracle Note "DBCA Fails to Start with DBT-10002 (Doc ID 2646840.1)".


However, no GI_HOME could be found in my PATH or anywhere else:

[oracle@rac2 ~]$ echo $GI_HOME

[oracle@rac2 ~]$ unset GI_HOME

[oracle@rac2 ~]$ echo $PATH
/u01/app/oracle/product/19.3/db_1/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/oracle/.local/bin:/home/oracle/bin
[oracle@rac2 ~]$
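
To be thorough, the shell profiles can also be checked for a stray GI_HOME definition. A minimal check; the file names below are assumptions (grid_env and db_env are my own wrapper scripts, and their locations are placeholders):

[oracle@rac2 ~]$ grep -i GI_HOME ~/.bash_profile ~/.bashrc ~/grid_env ~/db_env 2>/dev/null
[oracle@rac2 ~]$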



As it turned out, I had accidentally run DBCA using an absolute path into the Grid Infrastructure home (/u01/app/version/grid/bin) instead of from my Oracle database software (media) directory. Once I executed DBCA from that directory, it worked fine. Note: I have a customized db_env, which is why I was using an absolute path for DBCA.
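
A quick way to avoid this mistake is to confirm which dbca is being picked up and to call it explicitly from the database home rather than the Grid home. A sketch, assuming the database home shown in the PATH output above:

[oracle@rac2 ~]$ which dbca        # should resolve to the database home bin, not .../grid/bin
[oracle@rac2 ~]$ /u01/app/oracle/product/19.3/db_1/bin/dbca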


Friday, November 5, 2021

Oracle RAC: oracleasm listdisks not showing any disks

 

Oracle 19C/Oracle Linux 7.4

After rebooting my newly set up Oracle RAC 19C, CRS wasn't starting. After some basic checks, I realized that "oracleasm listdisks" was not showing any disks at all on either node. It turned out that the IPs on my VMs had changed and no longer matched the IPs that Openfiler had granted access to. To correct the problem, I needed to set static IPs back to the addresses Openfiler allows, since that is where my ASM disks reside (Openfiler acts as the SAN).


[oracle@rac02 ~]$ crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.


[root@rac02 ~]# oracleasm listdisks
[root@rac02 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@rac02 ~]# oracleasm listdisks
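
Since the shared disks live on Openfiler, it is also worth confirming the nodes can still reach the storage at all. A minimal check, assuming Openfiler presents the ASM disks over iSCSI (typical for this kind of lab) and using a placeholder address for the Openfiler box:

[root@rac02 ~]# ping -c 2 <openfiler-ip>        # basic reachability of the storage network
[root@rac02 ~]# iscsiadm -m session             # should list the Openfiler target; errors out if no session exists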

After setting the IPs back to static and restarting the network (a sketch of that follows), rescan the ASM disks on both nodes. Yes, on both nodes; my second node was not seeing the shared disk either.
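
For reference, this is roughly what pinning the address looks like on Oracle Linux 7. A sketch only; the interface name is a placeholder, and the address should be whichever IP Openfiler has actually been granted access to:

[root@rac02 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1    # set BOOTPROTO=static, IPADDR=<allowed-ip>, NETMASK, ONBOOT=yes
[root@rac02 ~]# systemctl restart network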

node1
[root@rac01 ~]# oracleasm listdisks
[root@rac01 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DISK01"
[root@rac01 ~]# oracleasm listdisks
DISK01
[root@rac01 ~]#

node2
[root@rac02 ~]# oracleasm listdisks
[root@rac02 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DISK01"
[root@rac02 ~]# oracleasm listdisks
DISK01
[root@rac02 ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
    

The CRS stack on the second node needs to be started manually as well.

Use "crsctl start crs" or "crsctl start cluster -all" 
or
[root@rac02 ~]# /u01/app/19*/grid/bin/crsctl start res ora.crsd -init
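
Once CRS is back, a quick sanity check with the standard clusterware commands confirms the resources are online:

[root@rac02 ~]# /u01/app/19*/grid/bin/crsctl check crs
[root@rac02 ~]# /u01/app/19*/grid/bin/crsctl stat res -t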


Monday, November 1, 2021

Oracle RAC: PRCC-1108: Invalid VIP address xxx.xxx.xx.xx because the specified IP address is reachable

As the error described. It turned out my VIP IP is pingable which is not supposed to be. It should have been hidden. After double-checking a few rounds, I finally spotted that I made a typo of VIP IP and it is exactly the IP I had for the SCAN. Changing, it fixed my issue. So, look around, it may be somewhere in all the nodes /etc/hosts file that IP has been used and pingable.