
Archive for the ‘Grid Infrastructure’ Category

ORA-00600: internal error code, arguments: [ORA_NPI_ERROR], [600], [ORA-00600: internal error code, arguments: [KFCHK_ERRRET], [kfa.c], [3025]

January 2nd, 2014 Matthias Pölzinger

 
Problem description:

You are trying to recursively delete an ASM folder in asmcmd on 12.1.0.1 and are receiving the following ORA-00600:

ASMCMD> rm -rf 2013_12_10
ORA-29261: bad argument
ORA-00600: internal error code, arguments: [ORA_NPI_ERROR], [600], [ORA-00600: internal error code, arguments: [KFCHK_ERRRET], [kfa.c], [3025], [], [], [], [], [], [], [], [], []
], [], [], [], [], [], [], [], [], []
ORA-06512: at line 4 (DBD ERROR: OCIStmtExecute)
ASMCMD>

 
Problem resolution:

This seems to be an unresolved issue in 12.1.0.1; even applying PSU1 does not fix it. Currently the only workaround is to avoid deleting folders recursively and instead delete the files inside them:

ASMCMD> rm -f 2013_12_10/*
ASMCMD> ls -l
Type  Redund  Striped  Time             Sys  Name
                                        Y    2013_12_11/
                                        Y    2013_12_12/
                                        Y    2013_12_13/
ASMCMD>

 
System-generated folders are deleted automatically by Oracle ASM once they no longer contain any files.
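
If several date folders have to be cleaned up, the per-folder deletion can also be scripted from the OS shell. The following is only a sketch: it assumes the Grid Infrastructure environment (ORACLE_HOME, ORACLE_SID=+ASM1) is already set and uses the hypothetical path +FRA/RACDB/ARCHIVELOG, so adjust both to your environment and double-check which folders you pass in:

#!/bin/bash
# Sketch: delete the files inside the given ASM folders instead of "rm -rf",
# to avoid the ORA-00600 on 12.1.0.1. ASM drops the empty folders afterwards.
# BASE is an assumption - adjust it to your diskgroup/database/folder layout.
BASE='+FRA/RACDB/ARCHIVELOG'
for d in "$@"; do                         # e.g.: ./asm_cleanup.sh 2013_12_10 2013_12_11
    asmcmd rm -f "${BASE}/${d}/*"         # delete files only, never "rm -rf"
done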


Oracle Clusterware / Grid Infrastructure: How to determine the configured name of your cluster?

Problem description:

You need to know the cluster name that was defined during the installation of Oracle Clusterware / Oracle Grid Infrastructure, for example because you are configuring Oracle Enterprise Manager Database Control for one of your RAC databases via emca:

[oracle@racn01 ~]$ emca -config dbcontrol db -repos create -cluster
 
STARTED EMCA at May 27, 2012 10:42:10 AM
EM Configuration Assistant, Version 11.2.0.3.0 Production
Copyright (c) 2003, 2011, Oracle.  All rights reserved.
 
Enter the following information:
Database unique name: RACDB
Service name: RACDB
Listener port number: 1521
Listener ORACLE_HOME [ /oracle/app/grid/11.2.0/grid ]:
Password for SYS user: 
Password for DBSNMP user: 
Password for SYSMAN user: 
Cluster name:

 
Problem resolution:

Oracle Clusterware’s “cemutlo” command can be used to determine the name defined for your cluster during the installation:

[root@racn01 ~]# /oracle/app/grid/11.2.0/grid/bin/cemutlo -n
playground
[root@racn01 ~]#
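
Alternatively, the -c option of olsnodes (11.2) should report the same cluster name when run from the same Grid Infrastructure home:

/oracle/app/grid/11.2.0/grid/bin/olsnodes -c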

 

AOUG Experts Forum 13.10.2011: Storage Technologies for Oracle Database Systems and Best Practices

October 14th, 2011 Matthias Pölzinger

InITSo was invited to give a lecture on Oracle ACFS / Oracle Cloud File System at the Austrian Oracle User Group’s Experts Forum on Storage Technologies for Oracle Database Systems – Best Practices (“AOUG Expertentreff: Storage Technologien Oracle Datenbanksystem – Best Practices“).

If you are interested in this topic, you can download an English or German version of the presentation via the following links:

 

Oracle ASM: $ORACLE_HOME/rdbms/audit keeps increasing in total size and number of files

Problem description:

The rdbms/audit directory of your Grid Infrastructure home keeps growing in number of files and total size:

[grid@rac01 ~]$ du -hs $ORACLE_HOME/rdbms/audit
1151M	/u01/app/grid/11.2.0/grid/rdbms/audit
[grid@rac01 ~]$ 
[grid@rac01 ~]$ cd $ORACLE_HOME/rdbms/audit
[grid@rac01 audit]$ 
[grid@rac01 audit]$ ls -l | wc -l
1112896
[grid@rac01 audit]$

 
This continuous growth in the number of files and directory size can cause the file system to run out of free space and may also impact the performance of your ASM instance.

 
Cause:

An audit file is created for every connection as user SYS. In a Real Application Clusters environment with Grid Control in place, this can quickly become a problem (although you may want to retain this information for a limited time for security compliance reasons).

Example of an ASM .aud-file:

[grid@rac01 audit]$ cat +asm1_ora_9981_2.aud
Audit file /u01/app/grid/11.2.0/grid/rdbms/audit/+asm1_ora_9981_2.aud
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
ORACLE_HOME = /u01/app/grid/11.2.0/grid
System name:	Linux
Node name:	rac01.initso.at
Release:	2.6.18-194.el5
Version:	#1 SMP Mon Mar 29 22:10:29 EDT 2010
Machine:	x86_64
Instance name: +ASM1
Redo thread mounted by this instance: 0 <none>
Oracle process number: 32
Unix process pid: 9981, image: oracle@rac01.initso.at (TNS V1-V3)
 
Tue Jul  5 12:12:06 2011 +02:00
LENGTH : '144'
ACTION :[7] 'CONNECT'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[6] 'oracle'
CLIENT TERMINAL:[0] ''
STATUS:[1] '0'
DBID:[0] ''
 
[grid@rac01 audit]$

 
Workaround:

Old files can be deleted without any impact: each SYS connection creates a new audit file, and old aud-files should not be held open by any Oracle ASM process.

To clean up old files, you can use one or both of the following methods (an archive-and-delete sketch follows after the list):

  • Manual cleanup

    You can clean up aud-files manually by running a find command similar to the following:

    [grid@rac01 audit]$ find /u01/app/grid/11.2.0/grid/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -delete -print
    /u01/app/grid/11.2.0/grid/rdbms/audit/+asm1_ora_24456_2.aud
    ...
    ...
    /u01/app/grid/11.2.0/grid/rdbms/audit/+asm1_ora_9006_1.aud
    [grid@rac01 audit]$

     

  • Automatic cleanup after 30 days

    Create a cron job similar to the following using “crontab -e” as the Grid Infrastructure user (e.g. grid):

    [grid@rac01 ~]$ crontab -l
    # Daily cleanup job for Oracle ASM aud-files not modified in the last 30 days
    30 0 * * * /usr/bin/find /u01/app/grid/11.2.0/grid/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -delete >/dev/null 2>&1
    [grid@rac01 ~]$
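
If compliance requires retaining the audit records for some time, the aud-files can also be archived into a compressed tarball before they are deleted. The following is only a sketch: the audit directory and the 30-day retention are taken from the examples above, while ARCHIVE_DIR is a hypothetical location; test it before scheduling it via cron:

#!/bin/bash
# Sketch: archive ASM aud-files older than 30 days into a tarball, then delete them.
# AUDIT_DIR matches the examples above; ARCHIVE_DIR is an assumption - adjust both.
AUDIT_DIR=/u01/app/grid/11.2.0/grid/rdbms/audit
ARCHIVE_DIR=/u01/app/grid/audit_archive
mkdir -p "$ARCHIVE_DIR"
# Collect the file names once so that exactly the archived files get deleted
FILELIST=$(mktemp)
find "$AUDIT_DIR" -maxdepth 1 -name '*.aud' -mtime +30 > "$FILELIST"
# Only delete the files if the archive was written successfully
tar -czf "$ARCHIVE_DIR/asm_audit_$(date +%Y%m%d).tar.gz" -T "$FILELIST" && \
    xargs -a "$FILELIST" -r rm -f
rm -f "$FILELIST"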

Oracle ACFS: How to check if ACFS is supported with the kernel currently in use or gather other ACFS driver related information

Description:

You want to check whether the ACFS driver can be used with the kernel currently in use. In addition, you may want to gather other ACFS driver-related information, such as whether it is installed or loaded, or which version is in use.

 
Commands:

The acfsdriverstate command can be used to gather ACFS driver-related information, especially whether ACFS is supported with the kernel in use. The binary is part of Oracle Grid Infrastructure and can be executed as the Grid Infrastructure user (e.g. grid) or as root.

The following information can be gathered:

  • If ACFS can be used with the kernel currently in use:
    [grid@rac01 bin]$ $ORACLE_HOME/bin/acfsdriverstate -orahome $ORACLE_HOME supported
    ACFS-9200: Supported
    [grid@rac01 bin]$

     

  • If the ACFS driver is installed:
    [grid@rac01 bin]$ $ORACLE_HOME/bin/acfsdriverstate -orahome $ORACLE_HOME installed
    ACFS-9203: true
    [grid@rac01 bin]$

     

  • If the ACFS driver is loaded:
    [grid@rac01 bin]$ $ORACLE_HOME/bin/acfsdriverstate -orahome $ORACLE_HOME loaded
    ACFS-9204: false
    [grid@rac01 bin]$

     

  • Version of ACFS driver:
    [grid@rac01 bin]$ $ORACLE_HOME/bin/acfsdriverstate -orahome $ORACLE_HOME version
    ACFS-9325:     Driver OS kernel version = 2.6.18-8.el5(x86_64).
    ACFS-9326:     Driver Oracle version = 100804.1.
    [grid@rac01 bin]$
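
If all of these checks are needed at once, the individual calls can be combined in a small wrapper script. This is only a sketch and assumes ORACLE_HOME points to the Grid Infrastructure home, as in the examples above:

#!/bin/bash
# Sketch: print all ACFS driver states in one go.
# Assumes ORACLE_HOME is set to the Grid Infrastructure home.
for state in supported installed loaded version; do
    echo "--- acfsdriverstate ${state} ---"
    "${ORACLE_HOME}/bin/acfsdriverstate" -orahome "${ORACLE_HOME}" "${state}"
done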

     

Oracle Grid Infrastructure: roothas.pl fails with “Oracle Restart stack is not active on this node” / How to forcefully deconfig all old Grid Infrastructure information

Problem description:

root.sh fails during an Oracle Grid Infrastructure 11.2 installation with the following message:

[root@ora01 ~]# /u01/app/grid/11.2.0/grid/root.sh
Running Oracle 11g root script...
 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/11.2.0/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
 
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0/grid/crs/install/crsconfig_params
Improper Oracle Clusterware configuration found on this host
Deconfigure the existing cluster configuration before starting
to configure a new Clusterware 
run '/u01/app/grid/11.2.0/grid/crs/install/roothas.pl -deconfig' 
to configure existing failed configuration and then rerun root.sh
/u01/app/grid/11.2.0/grid/perl/bin/perl -I/u01/app/grid/11.2.0/grid/perl/lib -I/u01/app/grid/11.2.0/grid/crs/install /u01/app/grid/11.2.0/grid/crs/install/roothas.pl execution failed
[root@ora01 ~]#

 
When executing roothas.pl as suggested by root.sh, an “Oracle Restart stack is not active” error is raised, and the output asks you to restart the SIHA stack (although you might not have any Oracle Clusterware stack configured):

[root@ora01 ~]# /u01/app/grid/11.2.0/grid/crs/install/roothas.pl -deconfig
Using configuration parameter file: /u01/app/grid/11.2.0/grid/crs/install/crsconfig_params
Oracle Restart stack is not active on this node
Restart the SIHA stack (use /u01/app/grid/11.2.0/grid/bin/crsctl start has) and retry
Failed to verify HA resources
[root@ora01 ~]#

 
Cause:

These messages can have various causes, such as a previously failed Grid Infrastructure installation or an old Oracle Clusterware installation.

 
Problem resolution:

To forcefully clean up old configuration information, execute roothas.pl with the -force option:

[root@ora01 grid]# /u01/app/grid/11.2.0/grid/crs/install/roothas.pl -deconfig -force -verbose
Using configuration parameter file: /u01/app/grid/11.2.0/grid/crs/install/crsconfig_params
Failure in execution (rc=-1, 256, No such file or directory) for command 1 /u01/app/grid/11.2.0/grid/bin/crsctl stop resource ora.cssd -f
Failure in execution (rc=-1, 256, No such file or directory) for command 1 /u01/app/grid/11.2.0/grid/bin/crsctl delete resource ora.cssd -f
Failure in execution (rc=-1, 256, No such file or directory) for command 1 /u01/app/grid/11.2.0/grid/bin/crsctl stop has -f
Failure in execution (rc=-1, 256, No such file or directory) for command 1 /u01/app/grid/11.2.0/grid/bin/crsctl check has
You must kill ohasd processes or reboot the system to properly 
cleanup the processes started by Oracle clusterware
/u01/app/grid/11.2.0/grid/bin/acfsdriverstate: line 51: /lib/acfstoolsdriver.sh: No such file or directory
/u01/app/grid/11.2.0/grid/bin/acfsdriverstate: line 51: exec: /lib/acfstoolsdriver.sh: cannot execute: No such file or directory
Successfully deconfigured Oracle Restart stack
[root@ora01 grid]#
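
As the deconfig output already points out, leftover ohasd processes may still have to be killed (or the node rebooted) before root.sh is rerun. A quick way to check for and clean up such processes (only a sketch; the PID is of course environment-specific):

# Check for leftover Oracle High Availability Services daemon processes
ps -ef | grep -i ohasd | grep -v grep
# If any are still running, kill them (or simply reboot the node)
kill -9 <pid>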

 
This cleaned up all stale configuration information and allowed root.sh to be rerun successfully.

Oracle: “ORA-12545: Connect failed because target host or object does not exist” when trying to connect through SCAN-Listeners

Problem description:

You are trying to connect to a database and are receiving the following error message:

[oracle@ls01 admin]$ sqlplus system@racdb
 
SQL*Plus: Release 10.2.0.5.0 - Production on Sat Jun 18 08:47:47 2011
 
Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.
 
Enter password: 
ERROR:
ORA-12545: Connect failed because target host or object does not exist
 
 
Enter user-name:

 
Cause:

Normally this error is caused by specifying a wrong hostname in the ADDRESS parameters or by specifying a hostname that cannot be resolved:

12545, 00000, "Connect failed because target host or object does not exist"
// *Cause: The address specified is not valid, or the program being 
// connected to does not exist.
// *Action: Ensure the ADDRESS parameters have been entered correctly; the
// most likely incorrect parameter is the node name.  Ensure that the 
// executable for the server exists (perhaps "oracle" is missing.)
// If the protocol is TCP/IP, edit the TNSNAMES.ORA file to change the
// host name to a numeric IP address and try again.

 
In this case, however, the tnsnames.ora entry works perfectly well on the database servers and even uses IP addresses:

RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.200.101)(PORT = 1521))
     (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.200.102)(PORT = 1521))
     (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.200.103)(PORT = 1521))
    )
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = app199.db.initso.at)
      (SERVER = DEDICATED)
      (FAILOVER_MODE=
        (TYPE=select)
        (METHOD=basic)
        (RETRIES=20)
        (DELAY=15)
      )
    )
  )

 
The tnsnames.ora entry above is used to connect to a remote 11g Release 2 RAC database via the newly introduced SCAN listeners (IP addresses are necessary for clients < 11.2). The entry works fine for clients in the local area network, but when connecting from another location you receive an ORA-12545 (even though no hostname is used).

This is caused by using the SCAN listeners from a remote site. A SCAN listener redirects the client to a “normal” VIP listener to establish the connection, and this redirect is based on hostnames, not IP addresses. If the client receives the hostname of the VIP listener it should connect to but is unable to resolve it, you will also see the ORA-12545.
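
If you want to verify which endpoint the SCAN listener hands back, you can inspect the SCAN listener on the cluster itself; the registered handlers typically reference the VIP hostnames rather than IP addresses. The commands below are only a sketch (run as the Grid Infrastructure user on the node currently hosting the SCAN listener; the listener name LISTENER_SCAN1 may differ in your setup):

srvctl status scan_listener
lsnrctl status LISTENER_SCAN1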

 
Problem resolution:

Enter all IP addresses and hostnames relevant for the Oracle 11g Release 2 RAC into your DNS server or your hosts file.

Example for hosts-file:

192.168.200.201		rac01.initso.at rac01
192.168.200.202		rac02.initso.at rac02
192.168.200.203		rac03.initso.at rac03
 
192.168.200.211		rac01-vip.initso.at rac01-vip
192.168.200.212		rac02-vip.initso.at rac02-vip
192.168.200.213		rac03-vip.initso.at rac03-vip
 
192.168.200.101		rac-scan.initso.at
192.168.200.102		rac-scan.initso.at
192.168.200.103		rac-scan.initso.at
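
A quick way to verify from the client that these names now resolve is getent (on Linux) or nslookup, using the hostnames from the example above:

getent hosts rac01-vip.initso.at
getent hosts rac-scan.initso.at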

 
After configuring all appropriate hosts-file entries, you should be able to connect without any issue:

[oracle@ls01 admin]$ sqlplus username/password@RACDB
 
SQL*Plus: Release 10.2.0.5.0 - Production on Sat Jun 18 09:01:38 2011
 
Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.
 
 
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
 
SQL> select instance_number, instance_name from v$instance;
 
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
	      1 rac1
 
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[oracle@ls01 admin]$

Oracle: HOWTO delete a service which is not configured by Oracle Clusterware

Problem description:

You are running a Real Application Clusters database and Grid Control reports that one of your database services is down, although all database services managed by Oracle Clusterware are up and running.

Cause:

A service that is no longer used may still have an entry in dba_services (one of the views that Grid Control checks). This can happen, for example, if you change the database domain parameter (db_domain) after installation.

Problem resolution:

Check all database service entries in dba_services:

SQL> SELECT service_id, name, creation_date, enabled FROM dba_services ORDER BY 1;
 
SERVICE_ID NAME                                                             CREATION_DATE   ENA
---------- ---------------------------------------------------------------- --------------- ---
         1 SYS$BACKGROUND                                                   20-MAY-11       NO
         2 SYS$USERS                                                        20-MAY-11       NO
         3 O11GXDB                                                          20-MAY-11       NO
         4 O11G                                                             20-MAY-11       NO
         5 O11G.oracle.initso.at                                            23-MAY-11       NO
         6 O11GFAIL                                                         25-MAY-11       NO
 
6 ROWS selected.
 
SQL>

In my case, O11G was the service Grid Control reported as down: after changing the database domain to “oracle.initso.at”, this service was no longer used.
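
Before dropping such a service, it is worth double-checking that it is not registered with Oracle Clusterware. A hedged example, assuming the database unique name is O11G (adjust to your environment):

srvctl config service -d O11G
srvctl status service -d O11G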

If you want to remove a service that is no longer used by Oracle Clusterware or by database connections, you can do so with the following command (if the service was configured using srvctl/Oracle Clusterware, please use srvctl to remove the service!):

SQL> EXEC dbms_service.delete_service('O11G');
 
PL/SQL PROCEDURE successfully completed.
 
SQL>

Grid Control will no longer report the service as down, because it is no longer known to the database:

SQL> SELECT service_id, name, creation_date, enabled FROM dba_services ORDER BY 1;
 
SERVICE_ID NAME                                                             CREATION_DATE   ENA
---------- ---------------------------------------------------------------- --------------- ---
         1 SYS$BACKGROUND                                                   20-MAY-11       NO
         2 SYS$USERS                                                        20-MAY-11       NO
         3 O11GXDB                                                          20-MAY-11       NO
         5 O11G.oracle.initso.at                                            23-MAY-11       NO
         6 O11GFAIL                                                         25-MAY-11       NO
 
5 ROWS selected.
 
SQL>

Oracle Grid Control: /secFarm_GCDomain/GCDomain/EMGC_ADMINSERVER/FMW Welcome Page Application(11.1.0.0.0) down

Problem description:

After installing Grid Control 11.1.0.1, you may see the target “/secFarm_GCDomain/GCDomain/EMGC_ADMINSERVER/FMW Welcome Page Application(11.1.0.0.0)” reported as DOWN.



Cause:

This issue is caused by unpublished bug “BROKEN ‘FMW WELCOME PAGE APPLICATION’ APPLICATION DISCOVERED OUT-OF-BOX”.

Problem resolution:

Patch 9431704 fixes this issue and is available via My Oracle Support.

Steps to fix this issue:

  • Stop all Oracle Management Server processes:

    oracle@gc01:/u01/app/middleware/oms11g/bin> ./emctl stop oms -all
    Oracle Enterprise Manager 11g Release 1 Grid Control
    Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
    Stopping WebTier...
    WebTier Successfully Stopped
    Stopping Oracle Management Server...
    Oracle Management Server Successfully Stopped
    Oracle Management Server is Down
    oracle@gc01:/u01/app/middleware/oms11g/bin>

  • Unzip patch 9431704:

    oracle@gc01:/u01/INSTALL> unzip p9431704_111120_Generic.zip
    Archive:  p9431704_111120_Generic.zip
       creating: 9431704/
       creating: 9431704/etc/
       creating: 9431704/etc/config/
      inflating: 9431704/etc/config/deploy.xml
      inflating: 9431704/etc/config/actions.xml
      inflating: 9431704/etc/config/inventory.xml
       creating: 9431704/etc/xml/
      inflating: 9431704/etc/xml/ShiphomeDirectoryStructure.xml
      inflating: 9431704/etc/xml/GenericActions.xml
      inflating: 9431704/README.txt
       creating: 9431704/files/
       creating: 9431704/files/modules/
       creating: 9431704/files/modules/oracle.wsm.common_11.1.1/
      inflating: 9431704/files/modules/oracle.wsm.common_11.1.1/wsm-dependencies.jar
    oracle@gc01:/u01/INSTALL>

  • Set ORACLE_HOME to the Middleware oracle_common directory and apply the patch:

    oracle@gc01:/u01/INSTALL/9431704> export ORACLE_HOME=/u01/app/middleware/oracle_common
    oracle@gc01:/u01/INSTALL/9431704> /u01/app/middleware/oracle_common/OPatch/opatch apply
    Invoking OPatch 11.1.0.8.4
     
    Oracle Interim Patch Installer version 11.1.0.8.4
    Copyright (c) 2011, Oracle Corporation.  All rights reserved.
     
     
    Oracle Home       : /u01/app/middleware/oracle_common
    Central Inventory : /u01/app/oraInventory
       from           : /etc/oraInst.loc
    OPatch version    : 11.1.0.8.4
    OUI version       : 11.1.0.7.0
    OUI location      : /u01/app/middleware/oracle_common/oui
    Log file location : /u01/app/middleware/oracle_common/cfgtoollogs/opatch/opatch2011-06-08_23-25-29PM.log
     
    Patch history file: /u01/app/middleware/oracle_common/cfgtoollogs/opatch/opatch_history.txt
     
     
    OPatch detects the Middleware Home as "/u01/app/middleware"
     
    ApplySession applying interim patch '9431704' to OH '/u01/app/middleware/oracle_common'
     
    Running prerequisite checks...
    Provide your email address to be informed of security issues, install and
    initiate Oracle Configuration Manager. Easier for you if you use your My
    Oracle Support Email address/User Name.
    Visit http://www.oracle.com/support/policies.html for details.
    Email address/User Name:
     
    You have not provided an email address for notification of security issues.
    Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:  y
     
    OPatch detected non-cluster Oracle Home from the inventory and will patch the local system only.
     
     
    Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
    (Oracle Home = '/u01/app/middleware/oracle_common')
     
     
    Is the local system ready for patching? [y|n]
    y
    User Responded with: Y
    Backing up files and inventory (not for auto-rollback) for the Oracle Home
    Backing up files affected by the patch '9431704' for restore. This might take a while...
    Backing up files affected by the patch '9431704' for rollback. This might take a while...
     
    Patching component oracle.jrf.j2ee, 11.1.1.2.0...
    Copying file to "/u01/app/middleware/oracle_common/modules/oracle.wsm.common_11.1.1/wsm-dependencies.jar"
    ApplySession adding interim patch '9431704' to inventory
     
    Verifying the update...
    Inventory check OK: Patch ID 9431704 is registered in Oracle Home inventory with proper meta-data.
    Files check OK: Files from Patch ID 9431704 are present in Oracle Home.
     
    The local system has been patched and can be restarted.
     
     
    OPatch succeeded.
    oracle@gc01:/u01/INSTALL/9431704>

  • Start OMS processes:

    oracle@gc01:/u01/app/middleware/oms11g/bin> ./emctl start oms
    Oracle Enterprise Manager 11g Release 1 Grid Control
    Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
    Starting WebTier...
    WebTier Successfully Started
    Starting Oracle Management Server...
    Oracle Management Server Successfully Started
    Oracle Management Server is Up
    oracle@gc01:/u01/app/middleware/oms11g/bin>

After applying this one-off patch, the FMW Welcome Page Application should no longer be reported as down.
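
To double-check that the one-off patch is registered in the inventory, it can be listed with OPatch (only a sketch; ORACLE_HOME set to the oracle_common home as in the steps above):

export ORACLE_HOME=/u01/app/middleware/oracle_common
$ORACLE_HOME/OPatch/opatch lsinventory | grep 9431704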



Oracle RAC on Linux: PRVF-5449 and PRVF-5431 when executing addNode.sh

Problem description:

Executing addNode.sh in 11.2 results in PRVF-5449 and PRVF-5431 if Voting Disks are located on Oracle ASM Disks:

Checking Oracle Cluster Voting Disk configuration...
 
ERROR:
PRVF-5449 : Check of Voting Disk location "ORCL:GRID01(ORCL:GRID01)" failed on the following nodes:
Check failed on nodes:
        racn02
 
        racn02:No such file or directory
 
ERROR:
PRVF-5449 : Check of Voting Disk location "ORCL:GRID02(ORCL:GRID02)" failed on the following nodes:
Check failed on nodes:
        racn02
 
        racn02:No such file or directory
 
ERROR:
PRVF-5449 : Check of Voting Disk location "ORCL:GRID03(ORCL:GRID03)" failed on the following nodes:
Check failed on nodes:
        racn02
 
        racn02:No such file or directory
 
PRVF-5431 : Oracle Cluster Voting Disk configuration check failed
Time zone consistency check passed
 
 
 
[grid@racn01 bin]$

This happens even though the Oracle ASM disks are available on the node to be added:

[root@racn02 ~]# service oracleasm listdisks | grep GRID
GRID01
GRID02
GRID03
[root@racn02 ~]#

Cause:

addNode.sh checks the Oracle ASM disks incorrectly and cancels the node addition when the voting disks are located on ASM disks.

Problem resolution:

Check manually whether the Oracle ASM disks are available on the nodes to be added:

[root@racn02 ~]# service oracleasm listdisks | grep GRID
GRID01
GRID02
GRID03
[root@racn02 ~]#

If the voting disk location check was the only one that failed, set the environment variable IGNORE_PREADDNODE_CHECKS and rerun addNode.sh. Otherwise, resolve the other errors first before continuing.
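
To confirm that the voting disk check really is the only failing one, the pre-nodeadd verification can also be rerun on its own with cluvfy before setting the variable (a hedged example; the node name is taken from the output above):

cluvfy stage -pre nodeadd -n racn02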

Example usage of IGNORE_PREADDNODE_CHECKS:

[grid@racn01 bin]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@racn01 bin]$ ./addNode.sh "CLUSTER_NEW_NODES={racn02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racn02-vip}"
...
...