True Cache on Oracle 26ai

Most database shops have a caching layer. Redis, Memcached, something custom. Reads hit the cache, misses hit the database, and someone owns the invalidation logic, the TTLs, and the 2 AM incident when the cache goes stale.

Oracle 26ai introduces True Cache, and it works nothing like Redis. It’s a diskless, read-only database replica that stays current by applying redo from the primary, the same mechanism Active Data Guard uses. Unlike ADG, it’s included in Enterprise Edition at no extra cost; no ADG licence required. Not available on Standard Edition.

I deployed one on my lab cluster. A row inserted on the primary was visible on True Cache with no measurable delay. Writes were correctly blocked (ORA-16000). Vector similarity search with HNSW indexes worked through it. The setup had some sharp edges, and Oracle hasn’t published any latency benchmarks beyond “sub-second,” but the mechanism is sound.

True Cache vs Redis

These solve different problems, and it’s worth being clear about which problem you have.

Redis is a general-purpose in-memory data store. You explicitly put data in, manage expiry, choose data structures. It serves any application, any data source, any language.

True Cache is an Oracle-specific read replica. You don’t put data in. It replicates everything from the primary via redo automatically. It serves Oracle SQL queries only. No separate data model, no consistency bugs, no cache invalidation logic. But it only works with Oracle.

If your cache serves multiple data sources or non-Oracle applications, True Cache is irrelevant. If your cache exists solely to offload Oracle reads, True Cache eliminates the caching tier.

Redis moved away from a permissive open-source licence in 2024, with commercial options via Redis Ltd. True Cache ships with Oracle EE: if you’re already paying for Enterprise Edition, the read caching layer is included.

How True Cache works

True Cache receives redo from the primary via LGWR ASYNC and applies it continuously. It doesn’t store data files. It needs disk only for the Oracle software, standby redo logs, and config files. The data lives in the SGA buffer cache on the True Cache node.

SELECT NAME, OPEN_MODE, DATABASE_ROLE FROM V$DATABASE;

NAME      OPEN_MODE              DATABASE_ROLE
--------  ---------------------  -------------
ORCLCDB   READ ONLY WITH APPLY   TRUE CACHE

The consistency model is redo-lag-based, not TTL-based. True Cache always returns committed data. Freshness depends on how quickly redo arrives and gets applied. Oracle says sub-second. My lab showed near-instant for a single row, but I didn’t load-test it. No published p99 numbers exist.

If the lag exceeds a configurable threshold: ORA-61877. If the primary goes down, True Cache serves cached reads for up to 24 hours (configurable), then ORA-61860 shuts it down. Uncached data during an outage: ORA-61857.
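Those error codes suggest an application-side pattern: treat a True Cache-specific error as a signal to retry the same read on the primary. A minimal sketch of that fallback logic, with the actual database calls stubbed out as hypothetical query functions (a real version would pass cursor-executing callables from two python-oracledb connections):

```python
# Errors that mean "the cache can't answer, but the primary can":
# lag threshold exceeded, uncached data during an outage, shutdown
# after an extended primary outage. Numbers are from the list above.
TRUE_CACHE_ERRORS = ("ORA-61877", "ORA-61857", "ORA-61860")

def read_with_fallback(sql, cache_query, primary_query):
    """Run a read against True Cache; on a True Cache-specific
    error, retry the same read against the primary. Any other
    error propagates unchanged."""
    try:
        return cache_query(sql)
    except Exception as exc:
        if any(code in str(exc) for code in TRUE_CACHE_ERRORS):
            return primary_query(sql)
        raise
```

The point is how little there is: no TTL bookkeeping, no invalidation hooks, just one coarse fallback for the cases where the cache declines to answer.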

No invalidation logic. No TTLs. No cache warming. The redo stream is the single source of truth.

The lab

I deployed True Cache on bench-03 against the bench-01 primary (same 8-node 26ai cluster from the vector search and Private AI posts). SGA on the True Cache node: 25GB. PGA: 8GB.

Setup is a two-phase DBCA process. On the primary, generate a config blob:

dbca -configureDatabase -prepareTrueCacheConfigFile \
  -sourceDB ORCL \
  -trueCacheBlobLocation /tmp/truecache_config \
  -silent

Copy the blob to the True Cache node, then create the instance:

echo 'SysPassword' | dbca -createTrueCache \
  -gdbName TCDB -sid TCDB \
  -sourceDBConnectionString primary-host:1521/ORCLCDB \
  -trueCacheBlobFromSourceDB /tmp/ORCLCDB_*.tar.gz \
  -sgaTargetInMB 25000 -pgaAggregateTargetInMB 8000 \
  -silent

The True Cache node needs its own Oracle Home (same version as primary) and any existing database on that node has to be dropped first. DBCA won’t reuse an existing SID if there are residual files. I had to clean out initORCL.ora, spfileORCL.ora, hc_ORCL.dat, and lkORCLCDB before it would proceed.

DBCA creates a separate listener on a new port (1522 in my case). I registered the True Cache SID with the grid listener on 1521 via a static SID_LIST entry and set VALID_NODE_CHECKING_REGISTRATION_LISTENER = OFF on the primary’s grid listener.
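The static registration was the standard SID_LIST form in the grid listener’s listener.ora. A sketch with my lab’s SID; the ORACLE_HOME path is a placeholder for wherever your True Cache home actually lives:

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = TCDB)
      (ORACLE_HOME = /u01/app/oracle/product/26.0.0/dbhome_1)  # placeholder path
    )
  )
```

Reload the listener (lsnrctl reload) for the entry to take effect.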

Verification

SELECT * FROM V$TRUE_CACHE;

STATUS   TRUE_CACHE_NAME  PRIMARY_NAME  CURRENT_SCN
-------  ---------------  ------------  -----------
HEALTHY  TCDB             ORCLCDB       124375902

PDBs open as read-only automatically. Data from the primary is visible. 1000 rows from the vector test table, right there on True Cache.

Writes blocked correctly:

INSERT INTO vectest.vec_test_1536 (doc_text) VALUES ('test');

ORA-16000: Attempting to modify database or pluggable database
that is open for read-only access.

Consistency: inserted a row on the primary, committed, queried True Cache immediately. The row was there.

-- Primary (bench-01):
INSERT INTO vectest.vec_test_1536 (doc_text) VALUES ('TC consistency test');
COMMIT;
SELECT COUNT(*) FROM vectest.vec_test_1536;  -- 1001

-- True Cache (bench-03), immediately after:
SELECT COUNT(*) FROM vectest.vec_test_1536;  -- 1001

Vector search through True Cache

HNSW indexes live in the Vector Memory Pool on the primary. The question is whether approximate vector search works when you’re querying through a True Cache node that doesn’t have its own Vector Memory Pool configured.

-- Connected to True Cache on bench-03
SELECT id, doc_text,
  VECTOR_DISTANCE(embedding,
    (SELECT embedding FROM vec_test_1536 WHERE id = 5), COSINE) AS distance
FROM vec_test_1536
ORDER BY VECTOR_DISTANCE(embedding,
    (SELECT embedding FROM vec_test_1536 WHERE id = 5), COSINE)
FETCH APPROXIMATE FIRST 5 ROWS ONLY;

It works. Returns id=5 with distance ~0, then four nearest neighbours. Identical results to the primary. I don’t know whether True Cache builds its own HNSW graph from the replicated data or routes the approximate search to the primary. The results are the same either way, but the performance characteristics would differ. Oracle’s documentation doesn’t clarify the mechanism.

For read-heavy similarity search workloads where you need to scale without adding load to the primary, this is a viable path.

Application routing

Three connection models for routing reads to True Cache.

Simplest: connect to the True Cache service name for reads, primary service name for writes. Two connection strings, application chooses.

JDBC: connect to the primary service only, toggle connection.setReadOnly(true) for reads (routed to True Cache) and setReadOnly(false) for writes. The driver handles routing.

OCI: a new session pool mode (OCI_SPC_TRUECACHE) supports READ_ONLY, READ_PREFER (fallback to primary if True Cache unavailable), and default (primary only).

Service configuration links a primary service to a True Cache service via DBMS_SERVICE or DBCA. I didn’t complete the DBCA route because it requires an interactive SYS password with no --password-file option. The manual DBMS_SERVICE approach is the documented alternative.
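The DBMS_SERVICE route looks roughly like the following, run on the primary. CREATE_SERVICE and START_SERVICE are long-standing DBMS_SERVICE calls; the true_cache attribute and the service name here are my reading of the docs, not something I ran end to end, so verify both against your release’s PL/SQL reference:

```sql
-- On the primary: create a read service flagged for True Cache.
-- service name is hypothetical; true_cache attribute is assumed.
BEGIN
  DBMS_SERVICE.CREATE_SERVICE(
    service_name => 'sales_tc',
    network_name => 'sales_tc',
    true_cache   => TRUE);
  DBMS_SERVICE.START_SERVICE('sales_tc');
END;
/
```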

Where this lands

True Cache does one thing and does it well: offload Oracle reads without a separate caching tier. No invalidation. No TTLs. No consistency bugs. No separate data model. Included in EE.

There are several gaps you will need to explore: no published latency benchmarks; Data Guard failover reconnection is undocumented; DML redirection exists (ADG_REDIRECT_DML = TRUE), but Oracle advises against write-intensive workloads without quantifying the overhead; and Standard Edition is excluded entirely.

If you’re running Redis purely because your Oracle reads are too heavy for the primary, True Cache is worth evaluating. If your caching layer serves multiple backends or non-Oracle applications, True Cache doesn’t change anything for you.