
Build agentic and GenAI workflows, directly inside SurrealDB

SurrealDB lets you run full agentic pipelines inside the database with ACID guarantees, sub-millisecond latencies, and the flexibility to store and reason over any data model you need.

The challenge

Existing AI-powered backends force developers to juggle a tangle of microservices, message queues, structured and unstructured data stores, graph databases, vector stores, and file buckets.

Each hand-off adds latency, drives up costs, and introduces new points of failure:

Fragmented state

Agents risk losing context as you shuttle blobs, embeddings, and contextual data from system to system.

Operational sprawl

Engineers end up babysitting half a dozen stacks, each with its own particularities and challenges.

Governance gaps

Sensitive data may end up moving through services without uniform access controls.

Cost and latency spikes

Agent swarms may generate thundering herd traffic that can sink a database.

The SurrealDB solution

SurrealDB collapses the stack into a single system so your agents can think, learn, and act in place. Centralising this logic in one location lets engineers spend their time refining agent behaviour instead of maintaining fragile connections between one service and another.

Multi-model querying for richer reasoning

SurrealDB speaks graph, relational, document, time-series, and vector with one language. Blend connections, facts, and semantics in a single round trip.

-- Find products similar to a user's last purchase

-- Quick function to generate vector arrays
DEFINE FUNCTION fn::rand_array() {
    [[],[],[],[],[]].map(|$i| rand::int(0, 10));
};

-- Create a user that purchased two products, and some
-- vector representing its location in semantic space
CREATE user:one;
CREATE product:one SET vector = [1,2,3,4,5];
CREATE product:two SET vector = [6,5,4,3,2];
RELATE user:one->purchased->product:one SET at = time::now();
RELATE user:one->purchased->product:two SET at = time::now();

-- Make some other random users and products
FOR $_ IN 0..10 {
    LET $user = UPSERT ONLY user;
    LET $product = UPSERT ONLY type::thing("product", rand::string(7))
        SET vector = fn::rand_array();
    RELATE $user->purchased->$product SET at = time::now();
};

-- Get the last user's purchase info by datetime
LET $last_purchase = user:one->purchased.at.last();

-- And the most recently purchased product from that
LET $last_product = (user:one->purchased[WHERE at = $last_purchase]->product)[0];

-- Then get the most similar products
-- $last_product will show up as most similar so
-- slice starting at index 1
(SELECT id, vector,
    vector::similarity::cosine($last_product.vector, vector) AS similarity
    FROM product ORDER BY similarity DESC LIMIT 3)[1..];
SELECT VALUE (
    SELECT *, vector::similarity::cosine(embedding, $parent.embedding) AS similarity
    FROM array::distinct(->tagged_with->$tag<-tagged_with<-document)
) AS siblings
FROM ONLY $record
FETCH siblings
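Under the hood, vector::similarity::cosine is ordinary cosine similarity. For intuition, here is a minimal pure-Python equivalent (illustrative only, not the database's implementation):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes,
    # the same measure SurrealQL's vector::similarity::cosine computes
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Identical vectors give a similarity of ~1.0; orthogonal vectors give 0.0
print(cosine_similarity([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))
print(cosine_similarity([1, 0], [0, 1]))
```

Results closer to 1.0 mean the product embeddings point in nearly the same direction in semantic space, which is why the query above sorts by similarity descending.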

RAG, GraphRAG, and Unified Memory

Retrieval-Augmented Generation (RAG) thrives on fresh, structured context. This context is usually held in memory, but it can also be persisted to disk. SDKs for languages including Rust, Python, and JavaScript make this easy.

As an example, let's see what a simple GraphRAG pipeline looks like using LangChain. We will use SurrealDB for the vector and graph stores, and Ollama to generate the embeddings.

# DB connection
conn = Surreal(url)
conn.signin({"username": user, "password": password})
conn.use(ns, db)

# Vector store
vector_store = SurrealDBVectorStore(
    OllamaEmbeddings(model="llama3.2"),
    conn
)

# Graph store
graph_store = SurrealDBGraph(conn)
documents.append(Document(page_content=chunk, metadata=...))

# This calculates the embeddings and inserts the documents into the DB
vector_store.add_documents(documents)
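The chunk values passed to Document above come from splitting the source text. LangChain's text splitters (for example RecursiveCharacterTextSplitter) are the usual choice; the following is a naive fixed-size sketch of the same idea, with hypothetical sizes:

```python
def split_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    # Naive fixed-size character splitter; consecutive chunks share
    # `overlap` characters so context is not lost at chunk boundaries
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# 500 characters with 200-char chunks and 40-char overlap -> 4 chunks
chunks = split_text("x" * 500)
print(len(chunks))
```

Smaller chunks retrieve more precisely but carry less context per hit; the overlap keeps sentences that straddle a boundary from being cut in half.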
# Find nodes and edges (Product -> tagged_with -> Tag)
tag_node = Node(id=tag.name, type="Tag", properties=asdict(tag))
product_node = Node(id=product.id, type="Product", properties=asdict(product))
nodes.extend([tag_node, product_node])

# Edges
relationships.append(
    Relationship(source=product_node, target=tag_node, type="tagged_with")
)
graph_documents.append(
    GraphDocument(nodes=nodes, relationships=relationships, source=doc)
)

# Store the graph
graph_store.add_graph_documents(graph_documents, include_source=True)
chat_model = ChatOllama(model="llama3.2", temperature=0)

# Find relevant docs
docs = vector_search(query, vector_store, k=3)

# Query the graph
chain = SurrealDBGraphQAChain.from_llm(chat_model, graph=graph_store)
response = chain.invoke({"query": docs_into_str(docs)})

# docs_into_str is a pseudo-function to concatenate document
# information into a single string to send to the LLM
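The snippet above leaves vector_search and docs_into_str as pseudo-functions. A minimal sketch of what they might look like (the names and the Document stand-in are illustrative; LangChain vector stores expose a similarity_search method):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    # Minimal stand-in for langchain_core.documents.Document
    page_content: str
    metadata: dict = field(default_factory=dict)

def vector_search(query, vector_store, k=3):
    # Delegate to the store's similarity search, a standard
    # LangChain vector store method
    return vector_store.similarity_search(query, k=k)

def docs_into_str(docs):
    # Concatenate retrieved document contents into one prompt string
    return "\n\n".join(doc.page_content for doc in docs)

docs = [Document("SurrealDB speaks graph and vector."),
        Document("RAG thrives on fresh, structured context.")]
print(docs_into_str(docs))
```

Joining the retrieved chunks with blank lines keeps them visually separated in the prompt without adding any formatting the LLM has to parse.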

Governance and security

Meet regulatory requirements without wrapping your database in yet another proxy.

RBAC & record-based access

On every part of your database schema.

Rate limiting

Disallow arbitrary queries and instead require low-access users to go through defined API endpoints with middleware such as timeouts.

Encrypted in flight and at rest

With TLS on the wire and encryption on disk.

Audit trails

Use environment variables to record logs and traces to file, with automatic rotation.

Business outcomes

Lower total cost of ownership

One binary to replace multiple databases (structured and unstructured, graph, vector), blob store, queue, and function compute.

Stronger governance

Centralised policies and audit across structured, unstructured, and AI data.

Scale as you need

Go from embedded, to single-node server, to a highly scalable distributed cluster.

Powering innovation across industries

AI assistant empowering 10,000 technicians

Verizon uses SurrealDB to power a generative AI assistant for 10,000 field technicians, delivering instant access to documentation, outage updates, and workflows.

Accelerating innovation and reducing costs with SurrealDB

SiteForge, an emerging AI Content Copilot, reduced its development cycle by 40%, queries by 20%, and scaled back backend APIs by 75% by migrating to SurrealDB.

Learn more

The state of Agentic AI and the need for Agentic Memory

company

Jun 27, 2025

Building real-time AI pipelines in SurrealDB

tutorials

Jun 24, 2025

Beyond black boxes - building customisable and secure RAG systems for financial services

engineering

Mar 26, 2025

Ready to build for Agentic and GenAI?

Start building intelligent agentic workflows with high scalability and millisecond performance.

Start for free
Learn more