SQLcl Container — custom Alpine image

Custom-built Alpine-based Docker image bundling Oracle's SQLcl command-line client. Hosted on O1 alongside n8n; n8n pipelines call into it for database deployments. Two active operational risks are tracked in known-issues.md: no source/registry storage, and IP volatility breaking pipelines.

Field Value
Public URL none
Image source custom Alpine-based; Dockerfile not in Git, image not in any registry
Audience n8n CI/CD pipelines, possibly engineers via Coder
Criticality medium-high — load-bearing for DB deployments
Maturity hobby — image is a single artifact on O1
Owner Vishnu Kant [CONFIRM]
Last reviewed 2026-05-05

1. At a glance

A small Docker container that runs Oracle SQLcl on top of an Alpine Linux base. n8n CI/CD pipelines use it to run SQL deployments / migrations against the various Oracle ADBs (E3, E4, E5, O2, O3). Today the image only exists on O1 — there is no copy of the Dockerfile in source control and the built image is not pushed to a registry. If O1 is rebuilt or the image is pruned, the build is lost.

It also has a second live problem: when restarted, the container picks up a different dynamically assigned Docker bridge IP (172.17.0.x on the default bridge), and because the n8n pipelines hard-code the old IP, they break unpredictably.
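One way to remove the IP dependency is to address the container by name rather than by IP. This is a sketch only — the network name (`deploy-net`), container name (`sqlcl`), and image tag (`sqlcl:local`) are illustrative assumptions, not values taken from O1:

```shell
# Sketch only — names ("deploy-net", "sqlcl", "sqlcl:local") are illustrative.
# On a user-defined bridge network, Docker's embedded DNS resolves container
# names, so the dynamically assigned IP stops mattering.
docker network create deploy-net

# Run the SQLcl container with a fixed name on that network.
docker run -d --name sqlcl --network deploy-net sqlcl:local tail -f /dev/null

# Attach the existing n8n container to the same network so it can reach
# the SQLcl container as "sqlcl".
docker network connect deploy-net n8n
```

n8n pipelines would then target the hostname `sqlcl` instead of a 172.x address; the name survives restarts even though the IP does not.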

2. Business purpose

  • Enables n8n to run repeatable database deployments (DDL changes, data fixes, exports).
  • Containerized Oracle client without per-laptop install.

3. Audience

n8n pipelines (primary). Engineers via Coder workspaces (secondary, occasional).

4. Hosting & cloud infrastructure

  • Server: O1 ORA448Global VPS
  • Runtime: Docker, managed via Portainer
  • Network: default bridge [CONFIRM] — picks up a dynamic 172.17.0.x address
  • Trigger: invoked by n8n on the same host

Infrastructure map

Item Value Notes
Container image custom (Alpine + SQLcl) not in any registry
Image version pinning n/a — no versioned tags
Internal IP unstable 172.17.0.x breaks n8n pipelines on restart
Run pattern long-running on O1 re-IP'd on restart
Networks reachable the Oracle ADBs in both tenancies via mTLS wallet [CONFIRM]

Credentials in Vault

Secret type Vault path / link Last rotated
Oracle ADB connection strings (TNS / EZConnect) [INFO NEEDED]
Oracle wallet bundle (mTLS) [INFO NEEDED]
Oracle schema users used via SQLcl [INFO NEEDED]

5. Technology behind it

  • Base image: Alpine Linux
  • Bundled tool: Oracle SQLcl (Java)
  • Source: Dockerfile lives only on O1
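Since the Dockerfile exists only on O1, the first remediation step is to recover it and commit it to Git. If it cannot be recovered, a reconstruction along these lines is plausible — a sketch only; the SQLcl download URL, Alpine/JRE versions, and paths are assumptions, not recovered from the real image:

```dockerfile
# Sketch only — versions, URL, and paths are illustrative assumptions,
# not recovered from the real image on O1.
FROM alpine:3.19

# SQLcl is a Java tool; it needs a JRE (11+) plus bash, unzip, and curl.
RUN apk add --no-cache openjdk17-jre-headless bash unzip curl

# Download and unpack SQLcl (Oracle publishes a "latest" zip).
RUN curl -fsSL -o /tmp/sqlcl.zip \
      https://download.oracle.com/otn_software/java/sqldeveloper/sqlcl-latest.zip \
 && unzip -q /tmp/sqlcl.zip -d /opt \
 && rm /tmp/sqlcl.zip

ENV PATH="/opt/sqlcl/bin:${PATH}"
# TNS_ADMIN should point at the mounted wallet directory at runtime.
ENTRYPOINT ["sql"]
```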

6. Data it handles

  • Whatever query results / migration outputs are produced by SQL execution against the Oracle ADBs.
  • Connection credentials are the real assets of concern (wallet + schema passwords).

7. External dependencies

  • The Oracle ADBs in both tenancies (E3, E4, E5, O2, O3).
  • n8n on the same host.
  • Container registry (none yet — once one is in place, will pull from it).
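Until a registry is chosen, the single-copy risk (KI-003) can be reduced by pushing the current image anywhere durable, or at minimum exporting it. A sketch — the registry host, repository path, and tags are placeholders, not a decided destination:

```shell
# Sketch only — "registry.example.com" and the tags are placeholders.
# Tag the existing local image with a versioned name...
docker tag sqlcl:latest registry.example.com/infra/sqlcl:2026-05-05

# ...then authenticate and push so the artifact survives loss of O1.
docker login registry.example.com
docker push registry.example.com/infra/sqlcl:2026-05-05

# Stop-gap without any registry: export the image to a tarball and
# copy it off-host (restore later with `docker load`).
docker save sqlcl:latest | gzip > sqlcl-image.tar.gz
```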

8. Authentication & access

  • No interactive auth into the container itself.
  • DB auth via wallet + schema user.

9. Maturity assessment

Dimension Status Evidence
Image in source control no Dockerfile only on O1
Image in registry no not pushed anywhere
Versioned tags no rebuild = different image
Static internal IP no breaks n8n pipelines
Credential handling [INFO NEEDED] wallet path on host? injected at runtime?
Audit trail of queries [INFO NEEDED] DB-level auditing on the target ADBs?

10. Known risks & vulnerabilities

  • KI-002 — IP volatility breaks pipelines silently.
  • KI-003 — single-copy artifact; loss of O1 = loss of the image.
  • Privileged DB access concentrated in one container — owning O1 means owning the wallet and DB credentials.
  • Wallet on host — [INFO NEEDED]: confirm the Oracle wallet's location and access permissions on O1.
  • Credentials in shell history — if scripts run sql user/password@... instead of using a wallet, passwords end up in ~/.bash_history.
  • No connection / query logging — [INFO NEEDED]: whether DB-level audit is on.
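To keep passwords out of shell history and `ps` output, connections can go through the wallet. A sketch, assuming the wallet path, the TNS alias `adb_high`, and the schema credential are illustrative — and assuming credentials have been stored in the wallet (e.g. with `mkstore`, which ships with the full Oracle client, not with SQLcl):

```shell
# Sketch only — wallet path, alias "adb_high", and credential are illustrative.
# Point Oracle tools at the wallet directory (tnsnames.ora, sqlnet.ora, cwallet.sso).
export TNS_ADMIN=/opt/oracle/wallet

# One-time setup: store the schema credential inside the wallet so no
# password is ever typed on the command line.
# mkstore -wrl /opt/oracle/wallet -createCredential adb_high deploy_user 'secret'

# Connect via the stored credential; nothing sensitive reaches ~/.bash_history.
sql -L /@adb_high
```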

11. Impact if it goes down

  • n8n DB-deployment pipelines fail.
  • Engineers can install SQLcl locally as a stop-gap, but because the wallet and TNS config would also be lost if O1 disappears, recovery is multi-step: re-obtaining the wallet, reconfiguring connections, and re-pointing the pipelines.

12. Owner & on-call

  • Primary owner: Vishnu Kant [CONFIRM]