SurrealDB lets you store, enrich, and retrieve a living knowledge graph + vector index in one system. Forget shuffling documents between OLTP data stores, a separate graph DB, and another vector search service. With SurrealDB, GraphRAG is a single query.
Large language models hallucinate when context is thin, returning results that are little more than educated guesses.
Classic RAG treats relevance as nearest-neighbour similarity, missing deeper relationships and connected reasoning.
Conventional stacks force you to stitch together multiple services: relational, document, graph, vector stores, full-text search, ETL jobs, and message queues.
That complexity slows iteration and complicates security reviews, making the stack hard to maintain and scale.
SurrealDB collapses that stack: the knowledge graph and vector index live side by side in one engine, so GraphRAG becomes a single query.
```surql
SELECT
    sample,
    content,
    <-composed_of<-element,
    ->soluble_with->solid,
    vector::similarity::cosine(embedding, $lead_harmful) AS dist
FROM liquid
WHERE embedding <|2|> $lead_harmful;
```
- Search embeddings using the `<|N|>` syntax with HNSW and M-Tree indexes for similarity search.
- Traverse related tables via graph edges using the `->` and `<-` arrow syntax.
- Output is immediately ready for the LLM prompt, in either SurrealQL or JSON format.
- No ETL, no cross-service joins, no consistency gaps - everything in one system.
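The query above assumes a vector index on `liquid.embedding` and pre-existing graph edges. A minimal setup sketch - the dimension, distance metric, and record IDs are illustrative assumptions, not part of the example above:

```surql
-- Illustrative: vector index backing the <|2|> nearest-neighbour search
DEFINE INDEX liquid_embedding_idx ON liquid FIELDS embedding HNSW DIMENSION 1536 DIST COSINE;

-- Illustrative: graph edges matched by <-composed_of<- and ->soluble_with->
RELATE element:lead->composed_of->liquid:sample_42;
RELATE liquid:sample_42->soluble_with->solid:limestone;
```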
| Capability | Inside SurrealDB |
|---|---|
| Graph + vector in one engine | Graph traversal and vector similarity evaluated together using `->` graph and `<\|N\|>` syntax. Removes network hops; lowers latency |
| Dynamic, mutable knowledge graph | Insert new facts while using `LIVE SELECT` to see changes in real time. Agents can write back new facts mid-conversation |
| Multi-model querying | Blend graph, relational, and document filters. Augment reasoning with prices, counts, time windows |
| Time-travel consistency | Read consistent, historical data snapshots. Auditable, repeatable LLM prompts |
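The "dynamic, mutable knowledge graph" row can be seen end to end with a subscription plus a write; the table and record names below are illustrative:

```surql
-- Subscribe to changes on the liquid table; each new fact is pushed as it lands
LIVE SELECT * FROM liquid;

-- From another connection: a fact written mid-conversation reaches the live query immediately
CREATE liquid:sample_43 SET content = "groundwater sample";
```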
Many tools, such as Rig, have been developed for SurrealDB, making it possible to get started with RAG in just a few lines of code.
```rust
#[tokio::main]
async fn main() -> Result<(), Error> {
    // In-memory SurrealDB instance for the demo
    let surreal = Surreal::new::<Mem>(()).await?;
    surreal.use_ns("ns").use_db("db").await?;

    let client = openai::Client::from_env();
    let model = client.embedding_model(openai::TEXT_EMBEDDING_3_SMALL);
    let vector_store = SurrealVectorStore::with_defaults(model.clone(), surreal.clone());

    let words = vec![
        WordDefinition {
            word: "flurbo".to_string(),
            definition: "A fictional currency from Rick and Morty.".to_string(),
        },
        WordDefinition {
            word: "glarb-glarb".to_string(),
            definition: "A creature from the marshlands of Glibbo.".to_string(),
        },
        WordDefinition {
            word: "wubba-lubba".to_string(),
            definition: "A catchphrase popularized by Rick Sanchez.".to_string(),
        },
        WordDefinition {
            word: "schmeckle".to_string(),
            definition: "A small unit of currency in some fictional universes.".to_string(),
        },
        WordDefinition {
            word: "plumbus".to_string(),
            definition: "A common household device with an unclear purpose.".to_string(),
        },
        WordDefinition {
            word: "zorp".to_string(),
            definition: "A term used to describe an alien greeting.".to_string(),
        },
    ];

    // Embed the definitions and store them alongside their vectors
    let documents = EmbeddingsBuilder::new(model)
        .documents(words)
        .unwrap()
        .build()
        .await?;
    vector_store.insert_documents(documents).await?;
    // ...
}
```
```rust
    // ...
    let linguist_agent = client
        .agent(openai::GPT_4_1_NANO)
        .preamble("You are a linguist. If you don't know don't make up an answer.")
        .dynamic_context(3, vector_store)
        .build();

    let prompts = vec![
        "What is a zorp?",
        "What's the word that corresponds to a small unit of currency?",
        "What is a gloubi-boulga?",
    ];

    for prompt in prompts {
        let response = linguist_agent.prompt(prompt).await?;
        println!("{}", response);
    }

    Ok(())
}
```
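To compile an example like this, the Cargo manifest needs the Tokio runtime, the SurrealDB client with its in-memory engine, and the Rig crates. A sketch, assuming the published crate names `rig-core` and `rig-surrealdb`; the version numbers are illustrative:

```toml
[dependencies]
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
# "kv-mem" enables the embedded in-memory engine used via Surreal::new::<Mem>
surrealdb = { version = "2", features = ["kv-mem"] }
rig-core = "0.1"       # illustrative version
rig-surrealdb = "0.1"  # illustrative version
```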
| Concern | SurrealDB answer |
|---|---|
| Fewer moving parts | Single binary handles OLTP, graph, vector, full-text search, file storage, and data streaming |
| Predictable spend | Scale out with horizontal clustering for fault tolerance and high availability |
| Governance | Audit trails simplify regulatory reviews and compliance |
| Return on investment | Cut ETL jobs, message brokers, and custom glue code. Iterate faster with a lower total cost of ownership |
Saks Fifth Avenue uses SurrealDB to power AI-driven, real-time product recommendations, boosting engagement and conversions in luxury e-commerce.
GameScript accelerates time-to-market with SurrealDB's multi-model, AI, and real-time capabilities.
Start building intelligent applications with graph and vector search in one database.