The audit log pipeline records authoritative, identity-bound events that operators and auditors need to reconstruct who did what, when, and with what outcome. It is part of SurrealDB Enterprise and is off by default — set `SURREAL_AUDIT_SINK=file` and `SURREAL_AUDIT_FILE_PATH` to enable it.
Records flow through two parallel paths:
Durable file sink. A bounded queue feeds a background worker that appends each record to an NDJSON file. Optional SHA-256 hash chaining provides tamper-evidence; size-based rotation and a tunable fsync cadence keep the file manageable. This is the primary path for compliance and SIEM ingestion.
OpenTelemetry logs. The same record can also be emitted as an OTel `LogRecord` on the SDK logger provider. Off by default; opt in per pipeline with `SURREAL_AUDIT_OTEL_EXPORT=true`. Compliance-sensitive deployments typically keep this off and rely on the file sink.
The observer hot path never blocks on I/O. Redaction runs synchronously on the executor thread before the record reaches the queue, so the worker can write raw bytes straight to the sink and the OTel emit cannot leak unredacted content.
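In outline, the hot path follows the sketch below (names are hypothetical; `redact` stands in for the redaction passes described under Redaction, and the queue capacity is configurable):

```python
import queue

audit_queue = queue.Queue(maxsize=1024)  # bounded handoff to the background worker

def observe(record: dict, redact) -> None:
    """Hot path: scrub synchronously, then hand off without touching I/O."""
    if "sql" in record:
        record["sql"] = redact(record["sql"])  # redaction runs before the queue
    try:
        audit_queue.put_nowait(record)         # never blocks on the sink
    except queue.Full:
        pass  # the configured overflow policy applies (see Overflow semantics)
```

Because the record is scrubbed before it is enqueued, everything downstream of the queue — the file sink and the optional OTel emit — only ever sees redacted text.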
The full set of configuration variables lives on the configuration reference.
Events captured
Each event surfaces as an OTel LogRecord with a specific event name and a severity that depends on the outcome:
| Event name | Severities | Captures |
|---|---|---|
| `surrealdb.audit.statement` | Info / Error | Completion of an individual statement. |
| `surrealdb.audit.query` | Info / Error | Completion of a multi-statement query. |
| `surrealdb.audit.transaction` | Info / Error | Commit or rollback of a transaction. |
| `surrealdb.audit.rpc` | Info / Error | Completion of an RPC call. |
| `surrealdb.audit.auth` | Info (success) / Warn (any non-success) / Error (`outcome=error`) | Sign-in, sign-up, and authentication outcomes. |
| `surrealdb.audit.session` | Info | Session connect and disconnect events. |
| `surrealdb.audit.http` | Info / Error (typically 5xx responses) | Completion of an HTTP request. |
The OTel `LogRecord` body is a short human-readable string. The structured fields live in attributes (`db.namespace`, `db.name`, `db.user`, `db.statement`, `session.id`, `client.address`, `surrealdb.statement_type`, `surrealdb.outcome`, `surrealdb.duration_ms`, `surrealdb.error_class`).
Record shape
Audit and slow-query files contain one JSON record per line, each terminated by a newline. Audit and slow-query records share a common envelope; audit records additionally carry the event_type that distinguishes the seven event variants above.
Audit `event_type` values: `statement`, `query`, `transaction`, `rpc`, `auth`, `session`, `http`.
Envelope fields (common to all records):

- `ts` — RFC 3339 timestamp.
- `event_type` — see above.
- `outcome` — `success`, `error`, `cancelled`, or (for auth events) `denied`/`failed`.
- `duration_ms` — wall-clock duration of the captured operation.
- Identity context — `namespace`, `database`, `user` resolved from the session.
- `sql` — captured statement text when `SURREAL_AUDIT_INCLUDE_SQL=true`; absent otherwise.
- `prev_hash`/`hash` — SHA-256 chain fields, present when hash chaining is enabled.
A captured statement event with hash chaining enabled looks like:
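All values in the record below are illustrative, and the hash values are truncated placeholders rather than real digests:

```json
{"ts":"2025-06-01T12:00:00Z","event_type":"statement","outcome":"success","duration_ms":3,"namespace":"app","database":"main","user":"admin","sql":"SELECT * FROM person WHERE email = '***'","prev_hash":"9c4f…","hash":"b21a…"}
```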
Rotation and durability
- File mode `0600` on Unix. The parent directory must exist; the server refuses to start otherwise.
- Size-based rotation at `SURREAL_AUDIT_FILE_ROTATE_BYTES` (default 256 MiB).
- The oldest rotation is dropped once `SURREAL_AUDIT_FILE_ROTATE_KEEP` (default `8`) generations are present.
- Mid-stream fsync cadence is governed by `SURREAL_AUDIT_FSYNC_EVERY`. Rotation and graceful shutdown always flush and `sync_data` regardless of cadence.
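Under the defaults above, a tamper-evident file-sink setup reduces to a handful of environment variables. The path below is illustrative; the configuration reference has the full list:

```shell
# Enable the file sink (SurrealDB Enterprise; off by default)
export SURREAL_AUDIT_SINK=file
export SURREAL_AUDIT_FILE_PATH=/var/log/surrealdb/audit.ndjson  # parent dir must exist

# Rotation: roll at 256 MiB, keep 8 generations (both are the defaults)
export SURREAL_AUDIT_FILE_ROTATE_BYTES=268435456
export SURREAL_AUDIT_FILE_ROTATE_KEEP=8

# Tamper evidence: hash chaining requires per-record fsync
export SURREAL_AUDIT_HASH_CHAIN=true
export SURREAL_AUDIT_FSYNC_EVERY=1
```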
Hash chaining
When `SURREAL_AUDIT_HASH_CHAIN=true` every record carries:

- `prev_hash` — SHA-256 of the previous record in the same file. Absent on the genesis record at the start of a new file.
- `hash` — SHA-256 of this record's canonical serialisation (including `prev_hash`).
Rotation closes a chain and starts a new one with a fresh genesis record. A detector verifies the chain by recomputing each hash sequentially and comparing against the stored hash.
Hash chaining requires `SURREAL_AUDIT_FSYNC_EVERY=1`. Without per-record fsync, the chain pointer could advance for records that are not durably on disk, leaving on-disk gaps the chain still references and silently weakening the guarantee. Startup fails loudly when the two knobs disagree.
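The verification pass can be sketched in Python. The canonical serialisation used here is an assumption — the record minus its own `hash` field, serialised with sorted keys — and the real format may differ:

```python
import hashlib
import json

def record_hash(rec: dict) -> str:
    # Assumed canonical serialisation: the record without its own `hash`
    # field, JSON-encoded with sorted keys and no whitespace.
    body = {k: v for k, v in rec.items() if k != "hash"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify_chain(ndjson_lines):
    """Recompute each hash sequentially; return (ok, first_bad_line_no)."""
    prev = None
    for n, line in enumerate(ndjson_lines, start=1):
        rec = json.loads(line)
        if n == 1:
            if "prev_hash" in rec:           # genesis record has no prev_hash
                return False, n
        elif rec.get("prev_hash") != prev:   # pointer must match predecessor
            return False, n
        if rec["hash"] != record_hash(rec):  # stored hash must recompute
            return False, n
        prev = rec["hash"]
    return True, None
```

Any edit to a record invalidates its own `hash`; any deletion or reordering breaks the `prev_hash` link of the record that follows it.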
Redaction
Redaction is applied synchronously on the executor thread before the record reaches the queue, so the worker, the file sink, and the OTel logger all see the same already-scrubbed text. Three layered passes run in order:
- Literal pass — when `SURREAL_AUDIT_REDACT_LITERALS=true`, every single- or double-quoted span in the SQL is replaced with `'***'`/`"***"`.
- Identifier-token pass — `SURREAL_AUDIT_REDACT_TABLES=secrets,pii` performs a case-insensitive replacement of each listed identifier token with `***`.
- Regex pass — `SURREAL_AUDIT_REDACT_REGEX="<pat1>;<pat2>"` (note: semicolon-separated) compiles each pattern at startup. An invalid pattern fails startup; valid patterns run in order against the SQL text.
The slow-query log pipeline supports the same three passes under `SURREAL_SLOW_QUERY_REDACT_LITERALS`, `SURREAL_SLOW_QUERY_REDACT_TABLES` and `SURREAL_SLOW_QUERY_REDACT_REGEX`.
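A minimal sketch of the three passes applied in order — the real implementation tokenises SQL properly, whereas the quoted-span regexes here ignore escape sequences:

```python
import re

def redact(sql: str, tables=(), patterns=()) -> str:
    """Apply literal, identifier-token, and regex passes, in that order."""
    # 1. Literal pass: replace quoted spans (no escape handling in this sketch).
    sql = re.sub(r"'[^']*'", "'***'", sql)
    sql = re.sub(r'"[^"]*"', '"***"', sql)
    # 2. Identifier-token pass: case-insensitive whole-token replacement.
    for t in tables:
        sql = re.sub(rf"\b{re.escape(t)}\b", "***", sql, flags=re.IGNORECASE)
    # 3. Regex pass: user-supplied patterns, run in order against the text.
    for p in patterns:
        sql = re.compile(p).sub("***", sql)
    return sql
```

For example, `redact("SELECT * FROM Secrets WHERE email = 'a@b.c'", tables=["secrets"])` yields `"SELECT * FROM *** WHERE email = '***'"`.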
Overflow semantics
Neither overflow policy offers a lossless guarantee:

- `drop` — single non-blocking `try_send`. On `Full` or `Closed` the record is dropped and the `surrealdb_audit_dropped` gauge increments.
- `block` — bounded busy-yield loop (200 retries with `std::thread::yield_now`). `yield_now` does not park the OS thread, so on a multi-threaded runtime the drain task can make progress between yields and short bursts may be absorbed without drops. On a `current_thread` runtime the producer occupies the only worker thread, the drain task cannot run, and the policy degrades to an immediate drop. There is no wall-clock time-bound on the loop — the budget caps iterations only.
The audit pipeline defaults to block because audit records are compliance-sensitive; the slow-query pipeline defaults to drop because slow-query records are triage data.
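Translated out of the Rust internals, the two policies can be sketched as follows; the `enqueue` helper is hypothetical, and `time.sleep(0)` stands in for `yield_now`:

```python
import queue
import time

AUDIT_BLOCK_RETRIES = 200  # iteration budget, not a wall-clock bound

def enqueue(q: queue.Queue, record, policy: str) -> bool:
    """Sketch of the overflow policies. Returns False when the record drops."""
    if policy == "drop":
        try:
            q.put_nowait(record)  # single non-blocking attempt
            return True
        except queue.Full:
            return False          # caller increments the dropped gauge
    # policy == "block": bounded busy-yield loop
    for _ in range(AUDIT_BLOCK_RETRIES):
        try:
            q.put_nowait(record)
            return True
        except queue.Full:
            time.sleep(0)         # yield the time slice, don't park
    return False                  # budget exhausted: drop after all
```

Note that when the queue stays full and nothing drains it, `block` still returns `False` — it only buys time for a drain that is actually making progress.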
Whichever policy is configured, alert on `rate(surrealdb_audit_dropped[5m]) > 0` and `rate(surrealdb_audit_append_errors[5m]) > 0`. Both indicate records were lost.
Pipeline self-metrics
Five observable gauges expose the live state of the pipeline. Each is read at scrape time from atomic counters on the worker, so the cost is zero when nothing consumes the metric.
| Metric | Notes |
|---|---|
| `surrealdb_audit_records` | Cumulative records successfully enqueued. |
| `surrealdb_audit_dropped` | Cumulative records dropped (overflow or queue closed). Alert on any non-zero rate. |
| `surrealdb_audit_queue_depth` | Records currently buffered between observer and worker. Sustained depth above ~50% of `SURREAL_AUDIT_QUEUE_CAPACITY` indicates a slow sink. |
| `surrealdb_audit_appended` | Cumulative records the worker wrote to the sink. The gap to `surrealdb_audit_records` is queue depth plus append errors. |
| `surrealdb_audit_append_errors` | Cumulative sink-write failures. Alert on any non-zero rate. |
The slow-query pipeline exposes the same shape under the surrealdb_slow_query_* prefix — see slow-query logging.
Related references
Configuration → Audit log knobs — every audit-log environment variable.
Configuration → Compliance checklist — the minimum tamper-evident configuration.
Slow-query logging — the sister pipeline for triage data.
Metrics reference → Audit log pipeline self-metrics — the five gauges in the metric catalogue.