TL;DR: DBMS_CLOUD fails with non-AWS S3 because it parses hostnames to select the signing algorithm and extract the AWS region. Fix: point s3.us-east-1.amazonaws.com at your endpoint via DNS, add it as a SAN on the endpoint’s TLS cert, and import the cert into an Oracle wallet. Full DBMS_CLOUD functionality, no file downloads.
In Part 1, we installed DBMS_CLOUD, created credentials, configured ACLs, and hit a wall: Oracle 26ai enforces HTTPS with strict hostname verification, and self-signed certificates on S3-compatible storage fail with ORA-24263. The workaround was downloading files locally. That’s not a real solution.
This post explains why DBMS_CLOUD fails with non-AWS S3 endpoints at a protocol level, and shows how to fix it properly. The fix involves DNS, TLS certificates, and an understanding of how DBMS_CLOUD’s SigV4 signing actually works.
We tested this against a Pure Storage FlashBlade. The same approach works for MinIO or any S3-compatible endpoint that doesn’t validate the AWS region in SigV4 signatures.
Why DBMS_CLOUD Fails with Non-AWS S3 Endpoints
DBMS_CLOUD identifies cloud providers by matching URL patterns in an internal table called sys.dbms_cloud_store$. This table is undocumented. We found it by inspecting the DBMS_CLOUD package and tracing its behaviour. The structure could change in future releases.
The CDB root has 37 rows covering AWS, OCI (including S3 Compatible, Swift, and Classic), Azure, GCS, GitHub, and basic auth. The relevant subset for S3:
| Pattern | Cloud Type |
|---|---|
| %amazonaws.com% | AMAZON_S3 |
| %oraclecloud%.com | ORACLE_BMC |
| %googleapis.com% | GOOGLE_CLOUD_STORAGE |
| %windows.net | MICROSOFT_AZURE_BLOB |
When DBMS_CLOUD sees a URL like `https://flashblade.local/bucket/key`, it doesn’t match any pattern, so it doesn’t know which signing algorithm to use. The request goes out unsigned, and the S3 endpoint returns HTTP 403.
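The provider-detection step can be illustrated with a short sketch. The patterns come from the table above; the matching function is our illustration of SQL LIKE-style matching, not Oracle's actual code:

```python
import re

# Subset of patterns from sys.dbms_cloud_store$; '%' is the SQL LIKE wildcard.
PATTERNS = {
    "%amazonaws.com%": "AMAZON_S3",
    "%oraclecloud%.com": "ORACLE_BMC",
    "%googleapis.com%": "GOOGLE_CLOUD_STORAGE",
    "%windows.net": "MICROSOFT_AZURE_BLOB",
}

def detect_cloud_type(url):
    """Return the cloud type for a URL, or None if no pattern matches."""
    host = re.sub(r"^https?://", "", url).split("/")[0]
    for pattern, cloud_type in PATTERNS.items():
        # Translate SQL LIKE to a regex: '%' becomes '.*', anchored both ends.
        regex = "^" + ".*".join(re.escape(p) for p in pattern.split("%")) + "$"
        if re.match(regex, host):
            return cloud_type
    return None

print(detect_cloud_type("https://s3.us-east-1.amazonaws.com/bucket/key"))  # AMAZON_S3
print(detect_cloud_type("https://flashblade.local/bucket/key"))            # None
```

A custom hostname falls through every pattern, which is exactly the unsigned-request failure described above.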
Why You Can’t Just Register a Custom Hostname
Your first instinct might be to register your endpoint’s hostname in dbms_cloud_store$:
-- Don't do this. Modifying undocumented sys tables is unsupported
-- and won't solve the problem anyway. Shown here to explain why.
INSERT INTO sys.dbms_cloud_store$ (cloud_type, base_uri_pattern, field1)
VALUES ('AMAZON_S3', '%flashblade.local%', 'us-east-1');
We tested this. The INSERT succeeds, and DBMS_CLOUD does recognise the endpoint as AMAZON_S3. But the SigV4 signing code also parses the hostname to extract the AWS region. It expects the format s3.<region>.amazonaws.com. With a custom hostname, DBMS_CLOUD can’t extract the region, and the signature is malformed:
ORA-20403: Authorization failed for URI - https://flashblade-data01.soln.local/lb-bronze/
The endpoint receives a signed request, but the signature is wrong because the region component is missing or garbage. The store$ entry got us past provider detection, but broke at signing.
The s3.us-east-1.amazonaws.com hostname solves both problems at once:
1. URL pattern match — %amazonaws.com% triggers SigV4 signing
2. Region extraction — DBMS_CLOUD parses us-east-1 from the hostname and uses it in the SigV4 signature
You could use a different region like s3.eu-west-1.amazonaws.com. It doesn’t matter as long as your S3-compatible endpoint doesn’t validate the region in the signature. FlashBlade and MinIO both accept any region.
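The region-extraction step can be sketched the same way. This regex is our reconstruction of the observed behaviour, not Oracle's code:

```python
import re

def extract_region(host):
    """Pull the region out of an s3.<region>.amazonaws.com hostname."""
    m = re.match(r"^(?:.+\.)?s3[.-]([a-z0-9-]+)\.amazonaws\.com$", host)
    return m.group(1) if m else None

print(extract_region("s3.us-east-1.amazonaws.com"))  # us-east-1
print(extract_region("flashblade.local"))            # None
```

With a hostname like `flashblade.local`, there is nothing to extract, which is why the store$ INSERT above still produced a malformed signature.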
The KUPC Module Complication
Oracle 26ai’s external table HTTP client (KUPC) enforces strict TLS hostname verification. Based on testing:
- It does not honour sqlnet.ora settings like SSL_SERVER_DN_MATCH=NO
- There is no configuration option to disable hostname verification
- The certificate SAN must match the hostname in the URL exactly
This is why the SSL_SERVER_DN_MATCH=NO workaround from Oracle 19c no longer works in 26ai, as documented in Part 1.
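Conceptually, the check KUPC enforces is the standard TLS hostname match: the URL's hostname must equal a SAN entry, with a wildcard covering exactly one leftmost DNS label. A minimal sketch of that matching rule (illustrative, not KUPC's actual implementation):

```python
def san_matches(hostname, san):
    """RFC 6125-style match: '*' covers exactly one leftmost DNS label."""
    host_labels = hostname.lower().split(".")
    san_labels = san.lower().split(".")
    if len(host_labels) != len(san_labels):
        return False
    return all(s == "*" or s == h for h, s in zip(host_labels, san_labels))

sans = ["flashblade-data01.soln.local", "s3.us-east-1.amazonaws.com",
        "*.s3.us-east-1.amazonaws.com"]

# The URL hostname must match at least one SAN, or the handshake is rejected.
print(any(san_matches("s3.us-east-1.amazonaws.com", s) for s in sans))           # True
print(any(san_matches("mybucket.s3.us-east-1.amazonaws.com", s) for s in sans))  # True
print(any(san_matches("flashblade.local", s) for s in sans))                     # False
```

This is why the certificate work in Step 1 is not optional: no configuration setting relaxes the check, so the SAN list has to cover the hostname Oracle actually uses.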
Solution
Make your S3 endpoint appear as s3.us-east-1.amazonaws.com to Oracle via DNS, and configure the endpoint’s TLS certificate to match. Three steps: certificate, DNS, wallet.
Step-by-Step
Step 1: Configure the S3 Endpoint’s TLS Certificate
Regenerate or update your S3 endpoint’s TLS certificate to include SANs (Subject Alternative Names) for both its original hostname and s3.us-east-1.amazonaws.com. The certificate must cover:
- s3.us-east-1.amazonaws.com — the hostname Oracle will use
- *.s3.us-east-1.amazonaws.com — defensive, for virtual-hosted bucket URLs (e.g. `https://mybucket.s3.us-east-1.amazonaws.com/`). We tested and confirmed virtual-hosted style works, but all examples in this series use path-style URLs.
- Your endpoint's original hostname — so existing clients aren't broken
- The endpoint's IP address — for direct-IP access if needed
How you regenerate the certificate depends on your storage platform. On MinIO, update the cert files in ~/.minio/certs/. The requirement is the same everywhere: the SANs must include the amazonaws.com hostname.
FlashBlade example: On Pure Storage FlashBlade, use the purecert CLI from the management interface:

purecert self-signed setattr global \
  --common-name "flashblade-data01.soln.local" \
  --san "flashblade-data01.soln.local,*.flashblade-data01.soln.local,s3.us-east-1.amazonaws.com,*.s3.us-east-1.amazonaws.com,10.21.227.93"
Verify the certificate from the Oracle server:
echo | openssl s_client -connect 10.21.227.93:443 2>/dev/null \
| openssl x509 -noout -ext subjectAltName
X509v3 Subject Alternative Name:
DNS:flashblade-data01.soln.local, DNS:*.flashblade-data01.soln.local,
DNS:s3.us-east-1.amazonaws.com, DNS:*.s3.us-east-1.amazonaws.com,
IP Address:10.21.227.93
Step 2: Configure DNS on the Oracle Server
Add an entry to /etc/hosts on the Oracle database server so that s3.us-east-1.amazonaws.com resolves to your S3 endpoint’s IP:
10.21.227.93 s3.us-east-1.amazonaws.com flashblade-data01.soln.local
This is the key insight: DBMS_CLOUD parses the URL to determine the cloud type and AWS region. By using s3.us-east-1.amazonaws.com, Oracle automatically:
1. Identifies the endpoint as AMAZON_S3
2. Extracts us-east-1 as the signing region
3. Signs requests with AWS SigV4
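The region matters because it is one of the inputs to the SigV4 signing key derivation. A condensed sketch of that derivation (this is the standard, documented AWS SigV4 key schedule, shown here to make concrete why a missing or wrong region breaks the signature; the secret key is a dummy value):

```python
import hashlib, hmac

def sigv4_signing_key(secret_key, date, region, service="s3"):
    """Derive the SigV4 signing key; the region is hashed into the key chain."""
    def h(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()
    k_date = h(("AWS4" + secret_key).encode(), date)
    k_region = h(k_date, region)      # <-- the region parsed from the hostname
    k_service = h(k_region, service)
    return h(k_service, "aws4_request")

# Two different regions produce two different signing keys, so a request
# signed with the wrong (or missing) region fails signature verification.
k1 = sigv4_signing_key("dummy-secret", "20260302", "us-east-1")
k2 = sigv4_signing_key("dummy-secret", "20260302", "eu-west-1")
print(k1 != k2)  # True
```

Endpoints like FlashBlade and MinIO accept whatever region string the client signed with, which is why any `s3.<region>.amazonaws.com` hostname works.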
Verify DNS resolution:
getent hosts s3.us-east-1.amazonaws.com
10.21.227.93 s3.us-east-1.amazonaws.com
Production note: /etc/hosts is fine for a lab or a single server. For RAC, multiple database nodes, or any managed environment, create a DNS A record in your internal DNS zone instead. Split-horizon DNS works if the same servers also need to reach actual AWS S3 for other workloads.

Security warning: This DNS entry redirects all traffic from the Oracle server destined for s3.us-east-1.amazonaws.com to your local endpoint. If anything else on that server legitimately connects to AWS S3 in us-east-1, it will route to your on-prem storage instead. Scope the DNS override to the database servers only, and use a dedicated internal DNS zone if other applications on the same network need real AWS access.
Step 3: Set Up the Oracle SSL Wallet
Import the S3 endpoint’s certificate into Oracle’s wallet:
# Download the certificate
echo | openssl s_client -connect s3.us-east-1.amazonaws.com:443 2>/dev/null \
| openssl x509 > /tmp/s3_cert.pem
# Create an auto-login wallet
orapki wallet create -wallet /u01/app/oracle/admin/ORCLCDB/wallet \
-pwd YourWalletPassword -auto_login
# Add the certificate as trusted
orapki wallet add -wallet /u01/app/oracle/admin/ORCLCDB/wallet \
-trusted_cert -cert /tmp/s3_cert.pem -pwd YourWalletPassword
Then set the database property so Oracle knows where the wallet lives:
-- Run as SYSDBA at CDB root
ALTER DATABASE PROPERTY SET SSL_WALLET='/u01/app/oracle/admin/ORCLCDB/wallet';
Step 4: Configure Network ACLs
If you followed Part 1, you already have ACLs configured for your original hostname. You need to add ACLs for the new s3.us-east-1.amazonaws.com hostname at both the CDB root and PDB level.
The $ characters in xs$ace_type and xs$name_list get interpreted as shell variables if the SQL passes through an unquoted heredoc or is pasted into some terminals over SSH. Write the SQL to a file first (using a quoted heredoc, as shown) and run it through sqlplus.
cat << 'EOF' > /tmp/acl_cdb.sql
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 's3.us-east-1.amazonaws.com',
lower_port => 443,
upper_port => 443,
ace => xs$ace_type(
privilege_list => xs$name_list('http'),
principal_name => 'SYS',
principal_type => xs_acl.ptype_db
)
);
END;
/
EOF
CDB root (as SYSDBA, not in any PDB):
sqlplus "/ as sysdba" @/tmp/acl_cdb.sql
PDB:
cat << 'EOF' > /tmp/acl_pdb.sql
ALTER SESSION SET CONTAINER = ORCLPDB;
BEGIN
DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
host => 's3.us-east-1.amazonaws.com',
lower_port => 443,
upper_port => 443,
ace => xs$ace_type(
privilege_list => xs$name_list('http'),
principal_name => 'SYS',
principal_type => xs_acl.ptype_db
)
);
END;
/
EOF
sqlplus "/ as sysdba" @/tmp/acl_pdb.sql
Note on principal: Part 1 used principal_name => 'SYSTEM'. Both SYS and SYSTEM work for DBMS_CLOUD operations since both are privileged users. We've standardised on SYS for Part 2 onward.
Step 5: Test Connectivity
List S3 bucket contents:
SELECT object_name, bytes, last_modified
FROM DBMS_CLOUD.LIST_OBJECTS(
credential_name => 'FLASHBLADE_CRED',
location_uri => 'https://s3.us-east-1.amazonaws.com/lb-bronze/customer/interactions/'
)
WHERE ROWNUM <= 5;
OBJECT_NAME BYTES LAST_MODIFIED
------------------------ ----------- ----------------------------
part-000000.parquet 552775111 02-MAR-26 10.11.44 PM +00:00
part-000001.parquet 552836156 02-MAR-26 10.11.44 PM +00:00
part-000002.parquet 552903890 02-MAR-26 10.12.00 PM +00:00
part-000003.parquet 552932410 02-MAR-26 10.11.58 PM +00:00
part-000004.parquet 552904057 02-MAR-26 10.12.15 PM +00:00
Send a raw HTTP request:
DECLARE
v_resp DBMS_CLOUD_TYPES.resp;
BEGIN
v_resp := DBMS_CLOUD.SEND_REQUEST(
credential_name => 'FLASHBLADE_CRED',
uri => 'https://s3.us-east-1.amazonaws.com/lb-bronze/',
method => DBMS_CLOUD.METHOD_GET
);
DBMS_OUTPUT.PUT_LINE('Status: ' || DBMS_CLOUD.GET_RESPONSE_STATUS_CODE(v_resp));
END;
/
Status: 200
DBMS_CLOUD signs the request with SigV4, and the S3 endpoint returns 200. The same credentials and endpoint that failed in Part 1 now work because Oracle thinks it’s talking to AWS S3.
Gotchas
Content-Encoding: aws-chunked
Some S3 clients upload objects with Content-Encoding: aws-chunked. We hit this with files written by Spark’s S3A connector using chunked transfer encoding, though other S3 clients may also set this header. The KUPC module cannot decode this encoding and fails with:
KUP-13015: unsupported algorithm
Fix by re-copying the affected objects without the encoding metadata:
aws s3 cp --recursive s3://bucket/path/ s3://bucket/path/ \
--endpoint-url https://s3.us-east-1.amazonaws.com \
--content-encoding "" --metadata-directive REPLACE
One-time fix per dataset. You’ll encounter this in Part 4 when reading Iceberg data written by Spark.
The AWS CLI uses its own certificate trust store, not Oracle’s wallet. If your CLI doesn’t trust the endpoint’s certificate after the SAN changes, either import the cert into the system trust store (/etc/pki/tls/certs/ on Oracle Linux) or add --no-verify-ssl.
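If you want to find affected objects before re-copying an entire dataset, the check is just a header inspection. A sketch of the filter logic (a pure function over header dicts; fetching the headers, e.g. via an S3 HEAD request, is left out):

```python
def needs_recopy(headers):
    """True if an object's metadata carries the aws-chunked content encoding."""
    encoding = headers.get("Content-Encoding", "")
    return "aws-chunked" in [e.strip() for e in encoding.split(",")]

print(needs_recopy({"Content-Encoding": "aws-chunked"}))        # True
print(needs_recopy({"Content-Encoding": "aws-chunked,gzip"}))   # True
print(needs_recopy({"Content-Length": "552775111"}))            # False
```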
Rollback
If you need to undo these changes:
DNS: Remove the s3.us-east-1.amazonaws.com line from /etc/hosts (or delete the DNS A record).
ACLs: There’s no single “delete” for APPEND_HOST_ACE. Use DBMS_NETWORK_ACL_ADMIN.REMOVE_HOST_ACE with the same parameters, or query DBA_HOST_ACES to find and remove the specific entries.
Wallet: Remove the certificate:
orapki wallet remove -wallet /u01/app/oracle/admin/ORCLCDB/wallet \
-trusted_cert_all -pwd YourWalletPassword
Certificate SANs: Revert your S3 endpoint’s certificate to its original SANs using your platform’s certificate management tools.
Summary
Oracle 26ai’s DBMS_CLOUD can connect to S3-compatible object storage if you make the endpoint appear as AWS S3 via DNS. The setup requires three things beyond what Part 1 covered: (1) a TLS certificate with s3.us-east-1.amazonaws.com as a SAN, (2) a DNS entry mapping that hostname to your endpoint’s IP, and (3) the certificate imported into an Oracle wallet.
The reason this works, and why simpler approaches don’t, is that DBMS_CLOUD’s SigV4 implementation both matches URL patterns to select the signing algorithm and parses the hostname to extract the AWS region. Only the s3.<region>.amazonaws.com format satisfies both requirements.
A note on shelf life: This approach exists because DBMS_CLOUD doesn’t yet support custom S3 endpoints natively. If Oracle adds that capability, the DNS workaround becomes unnecessary. Until then, this is the only way to get full DBMS_CLOUD functionality with S3-compatible storage on-prem.
