Friday, November 5, 2021

Oracle RAC: oracleasm listdisks not showing any disks

 

Oracle 19c / Oracle Linux 7.4

Upon rebooting my newly set up Oracle RAC 19c cluster, CRS wasn't starting. After some basic checks, I realized that "oracleasm listdisks" was not showing any disks at all on either node. It turned out the IP addresses of my VMs had changed and no longer matched the addresses Openfiler had granted access to. To correct the problem, I needed to set the VMs back to the static IPs that Openfiler allows, since that is where my ASM disks reside as SAN storage.
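Before changing anything, it's worth confirming the node really has lost its path to the SAN. A quick sanity check (assuming the disks come from Openfiler over iSCSI, as in my lab):

[root@rac02 ~]# ip addr show        # compare the current IPs with what Openfiler's network ACL allows
[root@rac02 ~]# iscsiadm -m session # no active session means the node cannot reach the SAN at all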


[oracle@rac02 ~]$ crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
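Before blaming storage, the lower-level stack can be inspected through the -init resources; when the ASM disks are gone, ora.asm will be offline and ora.crsd cannot start. Commands only here, since the exact output varies:

[root@rac02 ~]# crsctl check crs           # is OHAS itself up?
[root@rac02 ~]# crsctl stat res -t -init   # check the state of ora.asm and ora.crsd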


[root@rac02 ~]# oracleasm listdisks
[root@rac02 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@rac02 ~]# oracleasm listdisks
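Since ASMLib found nothing, the next question is whether the OS can see the shared block device at all. In my case it could not, which pointed at the network rather than at oracleasm (/dev/sdb below is just a placeholder for whatever your shared disk is named):

[root@rac02 ~]# lsblk               # the iSCSI disk should show up here; mine was missing
[root@rac02 ~]# fdisk -l /dev/sdb   # placeholder device; fails while the SAN is unreachable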

After setting the IPs back to static (see the sketch below for reference) and restarting the network, rescan the ASM disks on both nodes (yes, on both nodes; my second node was not seeing the shared disk either):
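For reference, this is roughly what the static setup looks like on Oracle Linux 7. The interface name and address below are placeholders; what matters is that IPADDR matches an address Openfiler has been told to allow:

[root@rac01 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=static
NAME=eth1
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.56.101
PREFIX=24
[root@rac01 ~]# systemctl restart network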

node1
[root@rac01 ~]# oracleasm listdisks
[root@rac01 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DISK01"
[root@rac01 ~]# oracleasm listdisks
DISK01
[root@rac01 ~]#

node2
[root@rac02 ~]# oracleasm listdisks
[root@rac02 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DISK01"
[root@rac02 ~]# oracleasm listdisks
DISK01
[root@rac02 ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
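To double-check that the label maps back to a real device, querydisk can be used (DISK01 is my label; no output shown since the device path will differ per setup):

[root@rac02 ~]# oracleasm querydisk -p DISK01   # should report a valid ASM disk and the backing device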
    

CRS on the second node needs to be started manually as well.

Use "crsctl start crs" or "crsctl start cluster -all" 
or
[root@rac02 ~]# /u01/app/19*/grid/bin/crsctl start res ora.crsd -init
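Whichever way you start it, give the stack a minute and then verify everything came back online:

[root@rac02 ~]# crsctl check crs    # OHAS, CRS, CSS and EVM should all report online
[root@rac02 ~]# crsctl stat res -t  # the resource table should now list the cluster resources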

