Installing Oracle Database 19c RAC on OEL 8: Four Traps

I recently built a two-node Oracle RAC 19c cluster on Oracle Linux 8.10 with Pure Storage FlashArray backend. The hardware was solid, the network was configured, the storage was presented. What should have been a straightforward installation turned into a four-hour troubleshooting session because of issues that aren’t documented in Oracle’s installation guides.

This post covers the gotchas that got me. If you’re installing Oracle 19c RAC on OEL 8.x (or RHEL 8.x), read this first.

Environment

  • OS: Oracle Linux 8.10 (kernel 5.4.17-2136.350.3.1.el8uek)
  • Oracle Version: 19.3.0.0.0 (with 19.25 RU attempted)
  • Cluster: 2 nodes (rac-node-01, rac-node-02)
  • Storage: Pure Storage FlashArray via 4x 32Gb FC HBAs
  • Network: Bonded 100GbE public, dedicated private interconnect

The Gotchas

“Passwordless SSH Not Setup” Error (INS-44000)

The Symptom

Grid Infrastructure installer fails with:

[FATAL] [INS-44000] Passwordless SSH connectivity is not setup from the local node rac-node-01 to the following nodes: [rac-node-02]

But when you test SSH manually, it works perfectly:

su - grid -c "ssh rac-node-02 hostname"
# Returns: rac-node-02.rac.local

Why This Happens

Oracle 19c’s Cluster Verification Utility (CVU) copies verification files between nodes using scp. OpenSSH 8.0 introduced strict filename checking (CVE-2019-6111 mitigation) that rejects certain quoting patterns in remote paths.

You can confirm this is your issue by checking the CVU logs at $GRID_HOME/cv/log/. Look for scp errors containing “protocol error: filename does not match request”.

The installer never surfaces the scp error. It only sees that a verification file didn't arrive and reports it as an SSH connectivity problem, which sends you down the wrong troubleshooting path.
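You can reproduce the failure mode outside the installer. CVU wraps remote paths in quotes, and OpenSSH 8.x compares the filename the server sends against the literal requested string, quotes included. A sketch (the exact paths CVU uses vary; /tmp/cvurepro is just an example):

# Create a test file on node 2
ssh rac-node-02 "touch /tmp/cvurepro"

# Request it with the path wrapped in literal quotes, CVU-style.
# The remote shell strips the quotes and sends "cvurepro", which no
# longer matches the requested name, so OpenSSH 8.x rejects it:
su - grid -c "scp 'rac-node-02:\"/tmp/cvurepro\"' /tmp/"
# protocol error: filename does not match request

# The -T flag disables the client-side check and the copy succeeds:
su - grid -c "scp -T 'rac-node-02:\"/tmp/cvurepro\"' /tmp/"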

Verification

Check your OpenSSH version:

ssh -V
# OpenSSH_8.0p1, OpenSSL 1.1.1k  FIPS 25 Mar 2021

If you’re on OpenSSH 8.x or later, you need the fix.

The Fix

Create an scp wrapper that adds the -T flag to disable strict filename checking.

Step 1: Backup the original scp binary on rac-node-01 (as root):

cp -p /usr/bin/scp /usr/bin/scp.orig

Step 2: Create the wrapper script on rac-node-01:

cat > /usr/bin/scp << 'WRAPPER'
#!/bin/bash
/usr/bin/scp.orig -T "$@"
WRAPPER

Step 3: Set correct permissions on rac-node-01:

chmod 555 /usr/bin/scp

Step 4: Repeat on rac-node-02:

ssh rac-node-02 "cp -p /usr/bin/scp /usr/bin/scp.orig"

ssh rac-node-02 "cat > /usr/bin/scp << 'WRAPPER'
#!/bin/bash
/usr/bin/scp.orig -T \"\$@\"
WRAPPER"

ssh rac-node-02 "chmod 555 /usr/bin/scp"

Step 5: Verify the wrapper is in place on both nodes:

# On rac-node-01
cat /usr/bin/scp
# Should show:
# #!/bin/bash
# /usr/bin/scp.orig -T "$@"

# On rac-node-02
ssh rac-node-02 "cat /usr/bin/scp"
# Should show same output

Confirm It Works

Create a test file and verify scp works with the wrapper:

# Create a test file on node 2
ssh rac-node-02 "echo 'test' > /tmp/scptest"

# Test scp as grid user
su - grid -c "scp rac-node-02:/tmp/scptest /tmp/"
# Should succeed without "protocol error"

# Clean up
rm /tmp/scptest
ssh rac-node-02 "rm /tmp/scptest"

This was the critical fix that unblocked the installation. Without it, Grid Infrastructure installation will not complete on OEL 8.x.
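One caveat: /usr/bin/scp is owned by the openssh-clients RPM, so a later package update can silently replace the wrapper. A quick guard to run after patching (it assumes the wrapper references scp.orig, as above):

grep -q scp.orig /usr/bin/scp && echo "wrapper present" || echo "wrapper missing - reapply"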


“java: command not found” via SSH

The Symptom

CVU checks fail even after fixing the scp issue. The installer can’t execute helper scripts on remote nodes.

su - grid -c "ssh rac-node-02 'java -version'"
# bash: java: command not found

su - grid -c "ssh rac-node-02 'perl -v'"
# bash: perl: command not found

Why This Happens

The grid user’s .bash_profile sets ORACLE_HOME, and the Grid home ships its own Java ($ORACLE_HOME/jdk/bin) and Perl ($ORACLE_HOME/perl/bin). But non-interactive SSH sessions (like those the installer opens) don’t source .bash_profile, so neither binary is on the PATH. The CVU helper scripts need Java to run, and installer operations need Perl.
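You can see the difference directly: the same user gets two different PATHs depending on how the shell is started (illustrative; your PATH contents will differ):

# Login shell: .bash_profile runs, so $ORACLE_HOME/bin is present
su - grid -c "echo \$PATH"

# Non-interactive SSH: .bash_profile is skipped
su - grid -c "ssh rac-node-02 'echo \$PATH'"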

The Fix

On OEL 8.x with the default bash configuration, ~/bin is added to PATH by the stock ~/.bashrc, which bash sources even for non-interactive SSH sessions. So we create symlinks there.

Step 1: Extract Grid Infrastructure software first (symlinks need a target):

# On rac-node-01 only, as grid user
su - grid -c "cd /u01/app/19.0.0/grid && unzip -oq /u01/staging/LINUX.X64_193000_grid_home.zip"

Step 2: Copy the Grid home to node 2 (so symlink targets exist on both nodes):

# As root, copy extracted Grid home to node 2
rsync -avz /u01/app/19.0.0/grid/ rac-node-02:/u01/app/19.0.0/grid/
ssh rac-node-02 "chown -R grid:oinstall /u01/app/19.0.0/grid"

Step 3: Verify ~/bin is in PATH for non-interactive SSH:

su - grid -c "ssh rac-node-01 'echo \$PATH'" | tr ':' '\n' | grep -E "^/home/grid/bin$"
# Should return: /home/grid/bin

# If empty, ~/bin is NOT in your non-interactive PATH. 
# You'll need to add it via /etc/profile.d/ or use a different approach.

Step 4: Create the ~/bin directory for grid user on both nodes:

# On rac-node-01
su - grid -c "mkdir -p ~/bin"

# On rac-node-02
ssh rac-node-02 "su - grid -c 'mkdir -p ~/bin'"

Step 5: Create Java symlink for grid user on both nodes:

# On rac-node-01
su - grid -c "ln -sf /u01/app/19.0.0/grid/jdk/bin/java ~/bin/java"

# On rac-node-02
ssh rac-node-02 "su - grid -c 'ln -sf /u01/app/19.0.0/grid/jdk/bin/java ~/bin/java'"

Step 6: Create Perl symlink for grid user on both nodes:

# On rac-node-01
su - grid -c "ln -sf /u01/app/19.0.0/grid/perl/bin/perl ~/bin/perl"

# On rac-node-02
ssh rac-node-02 "su - grid -c 'ln -sf /u01/app/19.0.0/grid/perl/bin/perl ~/bin/perl'"

Step 7: After database software installation, repeat for the oracle user:

# On rac-node-01
su - oracle -c "mkdir -p ~/bin"
su - oracle -c "ln -sf /u01/app/oracle/product/19.0.0/dbhome_1/jdk/bin/java ~/bin/java"
su - oracle -c "ln -sf /u01/app/oracle/product/19.0.0/dbhome_1/perl/bin/perl ~/bin/perl"

# On rac-node-02
ssh rac-node-02 "su - oracle -c 'mkdir -p ~/bin'"
ssh rac-node-02 "su - oracle -c 'ln -sf /u01/app/oracle/product/19.0.0/dbhome_1/jdk/bin/java ~/bin/java'"
ssh rac-node-02 "su - oracle -c 'ln -sf /u01/app/oracle/product/19.0.0/dbhome_1/perl/bin/perl ~/bin/perl'"

Confirm It Works

# Test Java via non-interactive SSH
su - grid -c "ssh rac-node-02 'java -version 2>&1 | head -1'"
# java version "1.8.0_201"

# Test Perl via non-interactive SSH
su - grid -c "ssh rac-node-02 'perl -v | head -2'"
# This is perl 5, version 28...

OS Prerequisite Check Fails on OEL 8.10

The Symptom

Installer fails prerequisite checks claiming the OS is not certified.

Why This Happens

Oracle 19c’s base release (19.3) shipped before Oracle Linux 8 was a certified platform, so the installer’s compatibility list stops at Oracle Linux 7.x. OEL 8.10 isn’t on it, even though Oracle does support 19c on OL8.

The Fix

Set an environment variable before running the installer to make it treat OEL 8.10 as OEL 7.8.

Step 1: Export the variable in your current session before any installer command:

export CV_ASSUME_DISTID=OEL7.8

Step 2: Verify it’s set:

echo $CV_ASSUME_DISTID
# Should return: OEL7.8

Step 3: Include the export in every installer command (the variable doesn’t persist across su or ssh):

# Grid Infrastructure installation
su - grid -c "export CV_ASSUME_DISTID=OEL7.8 && /u01/app/19.0.0/grid/gridSetup.sh -silent -responseFile /path/to/grid_install.rsp"

# Database Software installation
su - oracle -c "export CV_ASSUME_DISTID=OEL7.8 && /u01/app/oracle/product/19.0.0/dbhome_1/runInstaller -silent -responseFile /path/to/db_install.rsp"

# Database creation with DBCA
su - oracle -c "export CV_ASSUME_DISTID=OEL7.8 && dbca -silent -createDatabase -responseFile /path/to/dbca.rsp"

Important

You must include export CV_ASSUME_DISTID=OEL7.8 && at the start of every command that invokes Oracle’s installer tools. If you forget it, the command will fail the OS prerequisite check.
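If retyping the export gets tedious, one option is to persist it system-wide so login shells (including su - sessions) inherit it automatically. This is a convenience sketch, not something Oracle requires, and commands run via plain ssh still won’t pick it up:

cat > /etc/profile.d/cv_assume_distid.sh << 'EOF'
export CV_ASSUME_DISTID=OEL7.8
EOF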


opatchauto 19.25 RU Bootstrap Fails (No Fix)

The Symptom

Attempting to apply the 19.25 Release Update (patch 36916690) fails during bootstrap:

OPATCHAUTO-72083: Performing bootstrap operations failed.
OPATCHAUTO-72083: The bootstrap execution failed because Failed to unzip files on path./u01/staging/36916690/36912597/files/perl.zipError::.

Note the malformed error message: path./u01/staging runs the word “path” straight into the path value with a stray period and no separator. The trailing Error:: with no actual error text is also suspicious.

Why This Happens

This appears to be a defect in the opatchauto Java code bundled with patch 36916690. The perl.zip file itself is valid:

unzip -t /u01/staging/36916690/36912597/files/perl.zip
# Archive:  /u01/staging/36916690/36912597/files/perl.zip
#     testing: perl/                    OK
#     testing: perl/bin/                OK
# ... (all files pass)
# No errors detected in compressed data

The malformed error string suggests a path concatenation bug where the code is missing a separator or has an off-by-one error in string building.

What I Tried

Each of these produced the same malformed error:

  • Different patch locations (/u01/staging, /tmp, /home/grid)
  • Pre-extracting perl.zip manually before running opatchauto
  • Running as grid user with wallet authentication
  • Analyze mode only (opatchauto apply -analyze)
  • Explicit -oh parameter pointing to Grid home

The Reality

No workaround exists for this specific patch bundle. Check My Oracle Support for an updated patch, or wait for 19.26+.

The good news: the system runs fine on 19.3.0.0.0 base release. This is a “nice to have” security update, not a functional requirement. I ran the cluster on base 19.3 and it worked correctly.
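You can confirm what the home actually contains with OPatch (output abbreviated; the exact list depends on what shipped with your base install media):

su - grid -c "/u01/app/19.0.0/grid/OPatch/opatch lspatches"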

This one stings, but it’s not a blocker.


If you’re starting fresh, the walkthrough below has these fixes already applied.

The Clean Walkthrough

With the gotchas addressed upfront, here’s the installation sequence that works.

Prerequisites

1. Apply the scp wrapper on both nodes (as root):

cp -p /usr/bin/scp /usr/bin/scp.orig
cat > /usr/bin/scp << 'WRAPPER'
#!/bin/bash
/usr/bin/scp.orig -T "$@"
WRAPPER
chmod 555 /usr/bin/scp

2. Create grid user (as root, both nodes):

useradd -u 54331 -g oinstall -G dba,asmdba,asmoper,asmadmin grid
echo "grid:<YourSecurePassword>" | chpasswd

3. Create directory structure (as root, both nodes):

mkdir -p /u01/app/19.0.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oraInventory
mkdir -p /u01/app/oracle/product/19.0.0/dbhome_1
mkdir -p /u01/staging

chown -R grid:oinstall /u01/app/19.0.0/grid
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/oraInventory
chown -R oracle:oinstall /u01/app/oracle
chown -R grid:oinstall /u01/staging
chmod -R 775 /u01

4. Configure passwordless SSH for grid user:

# On rac-node-01 as grid user
ssh-keygen -t rsa -b 3072 -N "" -f ~/.ssh/id_rsa

# Add host keys (discard keyscan's stderr chatter rather than
# appending it to known_hosts)
ssh-keyscan -H rac-node-01 rac-node-01.rac.local rac-node-01-priv rac-node-01-priv.rac.local \
              rac-node-02 rac-node-02.rac.local rac-node-02-priv rac-node-02-priv.rac.local \
              2>/dev/null >> ~/.ssh/known_hosts

# Exchange keys; each node must reach every node, itself included
ssh-copy-id grid@rac-node-01
ssh-copy-id grid@rac-node-02
# Repeat this step on rac-node-02
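Before moving on, a quick loop catches any missing key or host entry early. Run it as grid on each node; BatchMode makes a password prompt fail fast instead of hanging:

for node in rac-node-01 rac-node-02; do
  ssh -o BatchMode=yes "$node" hostname || echo "FAIL: cannot reach $node"
done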

5. Create the ~/bin directory for the grid user (both nodes):

su - grid -c "mkdir -p ~/bin"

Note: The Java/Perl symlinks themselves are created later (step 3 of the Grid Infrastructure installation), after the software is extracted; created now, they would point at files that don’t exist yet.

6. Configure environment files:

Grid user on rac-node-01 (/home/grid/.bash_profile):

export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.0.0/grid
export ORACLE_SID=+ASM1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

Grid user on rac-node-02 (/home/grid/.bash_profile):

export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.0.0/grid
export ORACLE_SID=+ASM2
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

Grid Infrastructure Installation

1. Extract software on first node only:

su - grid -c "cd /u01/app/19.0.0/grid && unzip -oq /u01/staging/LINUX.X64_193000_grid_home.zip"

The second node’s grid home must remain empty. The installer copies files there.
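A quick sanity check that node 2’s grid home really is empty (should print 0):

ssh rac-node-02 "ls -A /u01/app/19.0.0/grid | wc -l"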

2. Update OPatch:

su - grid -c "cd /u01/app/19.0.0/grid && mv OPatch OPatch.orig && unzip -oq /u01/staging/p6880880_190000_Linux-x86-64.zip"

3. Now create the Java/Perl symlinks. On node 1 they resolve immediately; on node 2 they dangle until the installer copies the Grid home across, which ln -sf tolerates:

# On both nodes
su - grid -c "ln -sf /u01/app/19.0.0/grid/jdk/bin/java ~/bin/java"
su - grid -c "ln -sf /u01/app/19.0.0/grid/perl/bin/perl ~/bin/perl"

4. Create response file (/u01/staging/grid_install.rsp):

oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.gpnp.scanName=scan.rac.local
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=rac-cluster
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.clusterNodes=rac-node-01.rac.local:rac-node-01-vip.rac.local:HUB,rac-node-02.rac.local:rac-node-02-vip.rac.local:HUB
oracle.install.crs.config.networkInterfaceList=bond0:192.0.2.0:1,bond0.227:198.51.100.0:5
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.useIPMI=false
oracle.install.asm.SYSASMPassword=<YourSecurePassword>
oracle.install.asm.diskGroup.name=GRID
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.disks=/dev/mapper/3624a93703b701e383bbe421c0001ab90,/dev/mapper/3624a93703b701e383bbe421c0001ab91,/dev/mapper/3624a93703b701e383bbe421c0001ab92
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/mapper/3624a937*
oracle.install.asm.monitorPassword=<YourSecurePassword>
oracle.install.crs.rootconfig.executeRootScript=false

Note: The disk paths use /dev/mapper/WWID format. Your WWIDs will differ. See Post 2 for why this matters.
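To find the WWIDs on your own system, list the multipath devices (as root; the 3624a937 prefix comes from Pure Storage’s IEEE OUI and will differ for other arrays):

ls -l /dev/mapper/3624a937*
multipath -ll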

5. Run installer:

su - grid -c "export CV_ASSUME_DISTID=OEL7.8 && /u01/app/19.0.0/grid/gridSetup.sh -silent -ignorePrereq -responseFile /u01/staging/grid_install.rsp"

6. Execute root scripts (as root):

On rac-node-01 first:

/u01/app/oraInventory/orainstRoot.sh
/u01/app/19.0.0/grid/root.sh

Then on rac-node-02:

/u01/app/oraInventory/orainstRoot.sh
/u01/app/19.0.0/grid/root.sh

7. Run config tools:

su - grid -c "/u01/app/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /u01/staging/grid_install.rsp -silent"

Database Software Installation

1. Extract on first node:

su - oracle -c "cd /u01/app/oracle/product/19.0.0/dbhome_1 && unzip -oq /u01/staging/LINUX.X64_193000_db_home.zip"

2. Update OPatch:

su - oracle -c "cd /u01/app/oracle/product/19.0.0/dbhome_1 && mv OPatch OPatch.orig && unzip -oq /u01/staging/p6880880_190000_Linux-x86-64.zip"

3. Create Java/Perl symlinks for oracle user (both nodes):

su - oracle -c "mkdir -p ~/bin"
su - oracle -c "ln -sf /u01/app/oracle/product/19.0.0/dbhome_1/jdk/bin/java ~/bin/java"
su - oracle -c "ln -sf /u01/app/oracle/product/19.0.0/dbhome_1/perl/bin/perl ~/bin/perl"

4. Clean second node’s ORACLE_HOME:

ssh rac-node-02 "rm -rf /u01/app/oracle/product/19.0.0/dbhome_1/* /u01/app/oracle/product/19.0.0/dbhome_1/.*"

5. Run installer:

su - oracle -c "export CV_ASSUME_DISTID=OEL7.8 && /u01/app/oracle/product/19.0.0/dbhome_1/runInstaller -silent -ignorePrereq -waitforcompletion \
  oracle.install.option=INSTALL_DB_SWONLY \
  UNIX_GROUP_NAME=oinstall \
  INVENTORY_LOCATION=/u01/app/oraInventory \
  ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1 \
  ORACLE_BASE=/u01/app/oracle \
  oracle.install.db.InstallEdition=EE \
  oracle.install.db.CLUSTER_NODES=rac-node-01.rac.local,rac-node-02.rac.local"

6. Execute root scripts (as root, both nodes):

/u01/app/oracle/product/19.0.0/dbhome_1/root.sh

Database Creation

su - oracle -c "export CV_ASSUME_DISTID=OEL7.8 && dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName orcl \
  -sid orcl \
  -createAsContainerDatabase true \
  -numberOfPDBs 1 \
  -pdbName pdb1 \
  -pdbAdminPassword <YourSecurePassword> \
  -sysPassword <YourSecurePassword> \
  -systemPassword <YourSecurePassword> \
  -storageType ASM \
  -datafileDestination +DATA \
  -recoveryAreaDestination +FRA \
  -redoLogFileSize 32768 \
  -characterSet AL32UTF8 \
  -nodelist rac-node-01,rac-node-02"

Verification

Check that the cluster is healthy:

# Cluster status
su - grid -c "crsctl check cluster -all"

Expected output:

**************************************************************
rac-node-01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac-node-02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Check database status:

su - oracle -c "srvctl status database -d orcl"

Expected output:

Instance orcl1 is running on node rac-node-01
Instance orcl2 is running on node rac-node-02

Check ASM disk groups:

su - grid -c "asmcmd lsdg"

Expected output (sizes will vary):

State    Type    Rebal  Sector  Logical_Sector  Block  AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4M    614400   591234                0          591234              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4M    307200   298872                0          298872              0             N  FRA/
MOUNTED  NORMAL  N         512             512   4096  4M     30720    28445            10240            9102              0             Y  GRID/

If all services are online and both instances are running, the installation is complete.
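For a more detailed view than the three checks above, the full resource listing shows every cluster resource and which node it is running on:

su - grid -c "crsctl stat res -t"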


Summary

Oracle 19c RAC on OEL 8.x requires fixes that aren’t in Oracle’s documentation:

Issue                        Root Cause                                    Fix
CVU NxN SSH check fails      OpenSSH 8.x strict filename checking          scp wrapper with -T flag
Java/Perl not found via SSH  Non-interactive sessions skip .bash_profile   Symlinks in ~/bin
OS not certified             Installer doesn't recognize OEL 8.x           CV_ASSUME_DISTID=OEL7.8
19.25 RU fails to apply      opatchauto bug in patch bundle                None (use base 19.3 or wait for fix)

The OpenSSH fix is the critical one. Without it, installation will not succeed on any OEL 8.x or RHEL 8.x system.
