AI agents need durable state, flexible querying, and fast retrieval. SurrealDB’s multi-model design lets you combine these in one engine instead of stitching together several stores. The same deployment can serve low-latency reads for live agents and batch-style jobs for backfills or evaluation.
Agent memory — Store conversation context, artefacts, and entity-centric records as documents, and model relationships (users, tasks, tools, outcomes) as a graph so agents can traverse connections and summarise what matters. You can version or partition memory per tenant or session while keeping a consistent query model.
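A minimal sketch of that pattern in SurrealQL, assuming illustrative table and field names (`memory`, `user:alice`, `recalls`, `session`) rather than any fixed schema:

```surql
-- Store a conversation turn as a document, scoped to a session
LET $m = (CREATE memory SET
    session    = 'sess-42',
    role       = 'assistant',
    content    = 'Booked the meeting for Tuesday.',
    created_at = time::now());

-- Connect the user to the new memory record with a graph edge
RELATE user:alice->recalls->$m;

-- Later, fetch this session's recent memories for the agent's context window
SELECT content, created_at FROM memory
WHERE session = 'sess-42'
ORDER BY created_at DESC LIMIT 20;
```

Because the session identifier is just a field, partitioning memory per tenant or session is a matter of how you filter, not a separate storage system.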
Tool use — Express business logic and data access in SurrealQL, including database functions for reusable server-side behaviour. Where external systems are required, HTTP functions let agents trigger APIs and webhooks from the database layer with clear, auditable definitions.
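As a hedged sketch, a database function wrapping an outbound webhook might look like the following; the function name, parameter, and URL are all illustrative:

```surql
-- A reusable server-side function; the webhook URL is a placeholder
DEFINE FUNCTION fn::notify_task_done($task_id: string) {
    RETURN http::post('https://hooks.example.com/agent-events', {
        event: 'task_done',
        task:  $task_id,
    });
};

-- An agent (or any other query) can then invoke it directly
RETURN fn::notify_task_done('task-17');
```

Keeping the call behind `fn::notify_task_done` gives you one auditable definition to review and update, rather than webhook logic scattered across application code.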
Knowledge graphs — Use graph patterns and traversals so agents can follow links between concepts, permissions, and resources without round-trips through application code. That supports explainable hops from a user question to supporting facts stored as connected records.
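A small sketch of such a traversal, with hypothetical `covers` and `can_read` edge tables:

```surql
-- Connect resources, permissions, and concepts as graph edges
RELATE user:alice->can_read->document:handbook;
RELATE document:handbook->covers->topic:onboarding;

-- Hop from a user to the topics they can reach, in a single query
SELECT ->can_read->document->covers->topic AS topics
FROM user:alice;
```

Each arrow in the `SELECT` is one explainable hop, so the path from a user question to its supporting records is visible in the query itself.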
Retrieval and RAG — Combine structured filters with vector indexes and similarity search to retrieve semantically similar chunks alongside exact matches, supporting grounded answers and hybrid retrieval strategies. Pair lexical filters with vector neighbours when you need both precision and recall.
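A sketch of hybrid retrieval under assumed names (`chunk` table, `tenant` field, 384-dimensional embeddings); adjust the dimension and distance metric to your embedding model:

```surql
-- Vector index over the embedding field (M-tree, cosine distance)
DEFINE INDEX chunk_embedding ON chunk FIELDS embedding
    MTREE DIMENSION 384 DIST COSINE;

-- Structured filter plus K-nearest-neighbour search in one statement:
-- <|8|> asks the index for the 8 nearest neighbours of $query_embedding
SELECT text, vector::similarity::cosine(embedding, $query_embedding) AS score
FROM chunk
WHERE tenant = 'acme'
  AND embedding <|8|> $query_embedding
ORDER BY score DESC;
```

The exact-match predicate narrows the candidate set while the KNN operator ranks by semantic similarity, which is the hybrid shape described above.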
You can evolve schemas and indexes as your agent workflows change, keeping operational overhead low. For ready-made connections to popular agent libraries, see the AI frameworks integrations overview.
In practice, that means conversation state and retrieved chunks can live beside canonical records (users, tickets, products) in a single store, queried with one language.