Schrödinger’s dNFS: The Empty Row Mystery

You will end up here: v$dnfs_servers returns no rows. You’ve checked everything. The library is linked. The alert log shows dNFS loaded. Your oranfstab is syntactically perfect. The NFS mounts are there.

And you’re convinced it’s broken. I was too.

I spent three hours proving dNFS was configured correctly while simultaneously proving it wasn’t working. Turns out I was measuring the wrong thing the entire time.

The answer: v$dnfs_servers only populates when Oracle actually opens a file on an NFS mount. Not before.

Context

I was setting up RMAN backup benchmarks on Oracle 19c against Pure Storage FlashBlade to compare kernel NFS vs Direct NFS throughput. Standard Wednesday – establish baseline, configure dNFS, measure delta, quantify the value proposition.

The specific challenge: make a single database backup as fast as possible.

Architecture: Four Mount Points for Maximum Throughput

A single NFS mount has throughput limits – network bandwidth, client connection limits, protocol overhead. Pure Storage FlashBlade can deliver far more aggregate bandwidth than any single NFS client connection can saturate.

Solution: multiple independent NFS paths.

Setup:
4 separate FlashBlade data VIPs (10.21.221.25, 26, 27, 28)
4 NFS exports (/rman-bench-06a through /rman-bench-06d)
4 mount points (/u01/compass/s100a, s100b, s100c, s100d)
RMAN configured with multiple channels, each channel writing to a different mount point

Each mount point provides an independent I/O path with its own IP address, its own network connection, its own bandwidth allocation. RMAN parallelizes across these channels, achieving bandwidth aggregation at the application layer.

This is standard practice for high-performance RMAN backups to NFS when you need to maximize throughput for a single database. You’re not backing up multiple databases – you’re splitting one database’s backup across multiple parallel streams to different IP addresses.
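For concreteness, the channel layout described above looks something like this in RMAN. The channel names and the plain BACKUP DATABASE command are illustrative; the FORMAT paths are the four mount points listed earlier:

```
RUN {
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK FORMAT '/u01/compass/s100a/%U';
  ALLOCATE CHANNEL ch2 DEVICE TYPE DISK FORMAT '/u01/compass/s100b/%U';
  ALLOCATE CHANNEL ch3 DEVICE TYPE DISK FORMAT '/u01/compass/s100c/%U';
  ALLOCATE CHANNEL ch4 DEVICE TYPE DISK FORMAT '/u01/compass/s100d/%U';
  BACKUP DATABASE;
}
```

Each channel writes its backup pieces to a different mount point, so each stream rides a different VIP and network path.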

The benchmark question: does dNFS improve throughput when you already have this multi-mount parallelism configured? Does the dNFS optimization stack with the architectural parallelism, or does the multi-mount design already extract maximum performance from kernel NFS?

That’s what I was trying to measure when I got stuck on configuration verification.

Configuring dNFS should take 10 minutes. It took me three hours because I misunderstood what v$dnfs_servers actually shows.

The Diagnostic Path (And Where It Leads Nowhere)

Standard verification after configuring dNFS:

SQL> SELECT * FROM v$dnfs_servers;

no rows selected

This triggers the troubleshooting cascade. You start checking everything systematically because the configuration looks correct but the verification fails.

Library Verification

$ ls -l $ORACLE_HOME/lib/libodm19.so
lrwxrwxrwx. 1 oracle oinstall 14 Nov 26 13:45 libodm19.so -> libnfsodm19.so

The symlink points to libnfsodm19.so (dNFS), not libodmd19.so (the stub). Correct.

Runtime Verification

$ grep -i "direct nfs" $ORACLE_BASE/diag/rdbms/orcl/orcl/trace/alert_orcl.log
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 6.0

Oracle loaded the dNFS library at startup. Confirmed.

Configuration File

$ cat $ORACLE_HOME/dbs/oranfstab
server: 10.21.221.25
  export: /rman-bench-06a mount: /u01/compass/s100a
  nfs_version: nfsv4

Syntax is correct. Export and mount on the same line. Proper indentation. Valid version string.

Mount Point Verification

$ mount | grep compass
10.21.221.25:/rman-bench-06a on /u01/compass/s100a type nfs4 (rw,noatime,vers=4.1,...)
10.21.221.26:/rman-bench-06b on /u01/compass/s100b type nfs4 (rw,noatime,vers=4.1,...)
10.21.221.27:/rman-bench-06c on /u01/compass/s100c type nfs4 (rw,noatime,vers=4.1,...)
10.21.221.28:/rman-bench-06d on /u01/compass/s100d type nfs4 (rw,noatime,vers=4.1,...)

All four FlashBlade exports are mounted at the OS level. Accessible. Writable.

Every configuration check passes. But v$dnfs_servers is empty.

The Actual Answer

After exhausting the troubleshooting options, I tested whether dNFS would actually function:

SQL> CREATE TABLESPACE dnfs_test 
     DATAFILE '/u01/compass/s100a/test01.dbf' SIZE 100M;

Tablespace created.

SQL> SELECT svrname, dirname, nfsversion FROM v$dnfs_servers;

SVRNAME          DIRNAME              NFSVERSION
---------------- -------------------- ----------------
10.21.221.25     /rman-bench-06a      NFSv3.0
10.21.221.26     /rman-bench-06b      NFSv3.0
10.21.221.27     /rman-bench-06c      NFSv3.0
10.21.221.28     /rman-bench-06d      NFSv3.0

It was working the entire time.

What v$dnfs_servers Actually Shows

v$dnfs_servers only shows ACTIVE connections, not configured servers. The view stays empty until Oracle actually opens files on the configured NFS mounts. Oracle’s documentation confirms this behavior but doesn’t make it obvious – you have to read between the lines.

If Oracle hasn’t accessed any files on your configured NFS mounts, the view is empty. This is working as designed. The configuration is correct. The library is loaded. The system is ready. But until Oracle actually opens a file on one of those mounts, there’s nothing to display in the view.

The Schrödinger reference: the configuration exists in a superposition of states (working/broken) until you observe it by performing I/O. The observation collapses the wavefunction.

This cost me three hours because I was asking the wrong question.

Configuration Reference

If you’re actually setting this up (not just debugging a non-existent problem), here’s the complete process:

1. Enable dNFS Library

cd $ORACLE_HOME/lib
ls -l libodm19.so

If libodm19.so points to libodmd19.so (the stub), dNFS is disabled. Fix it:

mv libodm19.so libodm19.so.backup
ln -s libnfsodm19.so libodm19.so
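The swap can be scripted so it’s safe to re-run. A sketch – it uses a temp directory to simulate the lib layout so the logic is demonstrable anywhere; in real use, point LIBDIR at $ORACLE_HOME/lib:

```shell
# Simulated layout: in real use, LIBDIR="$ORACLE_HOME/lib"
LIBDIR=$(mktemp -d)
touch "$LIBDIR/libodmd19.so" "$LIBDIR/libnfsodm19.so"
ln -s libodmd19.so "$LIBDIR/libodm19.so"   # start from the stub, as shipped

# Idempotent swap: only relink if the symlink doesn't already point at dNFS
if [ "$(readlink "$LIBDIR/libodm19.so")" != "libnfsodm19.so" ]; then
    mv "$LIBDIR/libodm19.so" "$LIBDIR/libodm19.so.backup"
    ln -s libnfsodm19.so "$LIBDIR/libodm19.so"
fi
readlink "$LIBDIR/libodm19.so"
```

Running it twice leaves the symlink untouched on the second pass, so it won’t clobber the backup.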

2. Create oranfstab

Location: $ORACLE_HOME/dbs/oranfstab

Format requirements:
– export: and mount: on the same line, space-separated
– Two-space indentation for the lines under server:
– nfs_version: nfsv4, not NFSv4.1 or nfs4.1

Example:

server: 10.21.221.25
  export: /rman-bench-06a mount: /u01/compass/s100a
  nfs_version: nfsv4

server: 10.21.221.26
  export: /rman-bench-06b mount: /u01/compass/s100b
  nfs_version: nfsv4

The export path must match exactly what the NFS server exports. Verify with:

showmount -e 10.21.221.25

3. Verify NFS Mounts Exist

df -h | grep /u01/compass
mount | grep /u01/compass

dNFS requires OS-level NFS mounts. It doesn’t create them; it intercepts I/O to existing mounts.
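The mount checks above can be collapsed into one loop over all four paths. A sketch, Linux-specific since it reads /proc/mounts; on a box without these mounts every path reports MISSING:

```shell
# Field 2 of /proc/mounts is the mount point, field 3 the filesystem type.
# Paths are the four mount points from the setup above.
for mp in /u01/compass/s100a /u01/compass/s100b /u01/compass/s100c /u01/compass/s100d; do
    if awk -v mp="$mp" '$2 == mp && $3 ~ /^nfs/ {found=1} END {exit !found}' /proc/mounts; then
        echo "mounted: $mp"
    else
        echo "MISSING: $mp"
    fi
done
```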

4. Restart Database

Only required if you’re enabling dNFS for the first time (changing the library symlink). If you’re just modifying oranfstab, changes apply immediately when Oracle next accesses files on those mounts.

shutdown immediate
startup

5. Verification

Alert log confirms library load:

grep -i "direct nfs" $ORACLE_BASE/diag/rdbms/*/*/trace/alert_*.log | tail -1

Should show:

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 6.0

Actual functional test:

CREATE TABLESPACE dnfs_verify DATAFILE '/your/nfs/mount/test.dbf' SIZE 1M;
SELECT svrname, dirname, nfsversion FROM v$dnfs_servers;
DROP TABLESPACE dnfs_verify INCLUDING CONTENTS AND DATAFILES;

If v$dnfs_servers shows rows after creating the tablespace, dNFS is working.

Performance Context

The reason for configuring dNFS was RMAN backup benchmarking against Pure Storage FlashBlade. dNFS typically delivers 30-50% better throughput for large sequential I/O compared to kernel NFS because:

  1. Bypasses kernel NFS client (direct path from Oracle process to network stack)
  2. Better parallelism (can use multiple network channels simultaneously)
  3. Oracle-optimized I/O patterns

For RMAN workloads specifically – large, sequential writes to NFS – the performance delta is measurable and significant.

But only if you configure it correctly and don’t waste three hours thinking it’s broken when the verification methodology is wrong.

Monitoring Views

Once dNFS is actually in use:

-- Active NFS server connections
SELECT * FROM v$dnfs_servers;

-- Files currently open via dNFS
SELECT * FROM v$dnfs_files;

-- Network channels
SELECT * FROM v$dnfs_channels;

-- I/O statistics
SELECT * FROM v$dnfs_stats;
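One query I find useful ties the open files back to the server each is being accessed through. The join columns (v$dnfs_files.svr_id to v$dnfs_servers.id) are from memory of the 19c reference views – verify them against your release before relying on this:

```
-- Which dNFS server is serving each open file
SELECT s.svrname, s.nfsversion, f.filename, f.filesize
  FROM v$dnfs_files f
  JOIN v$dnfs_servers s ON s.id = f.svr_id;
```

During an RMAN backup across the four mounts, you’d expect to see backup pieces spread across all four svrname values.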

Summary

v$dnfs_servers showing “no rows selected” doesn’t mean dNFS is broken. It means Oracle hasn’t opened any files on your configured NFS mounts yet. The view shows active connections, not configuration state.

The alert log message confirming dNFS library load is the actual indicator. The rest is runtime behavior.

Ask the right question, get the right answer. The question wasn’t “why isn’t dNFS working?” – it was “what does v$dnfs_servers actually show?”

Three hours to learn that lesson. That’s all, folks.


Environment: Oracle 19c (19.22), Pure Storage FlashBlade, NFSv4.1