Llama Index
Llama Index is a framework for building retrieval-augmented generation (RAG) pipelines. This guide shows how to use SurrealDB as the vector store behind a Llama Index pipeline, from installation to querying.
TL;DR
- Install `surrealdb` + `llama-index`.
- Create a table with an `HNSW` index.
- Drop in the 20-line `SurrealVectorStore`.
- Use `VectorStoreIndex.from_documents()` and you're done.
Install the libraries
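Both pieces come from pip; the package names below are the assumed published names for the SurrealDB Python SDK and Llama Index:

```shell
pip install surrealdb llama-index
```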
Connect to SurrealDB
Create the table and HNSW index
SurrealQL’s DEFINE INDEX … HNSW activates a high-speed, cosine-distance ANN index.
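The statements might look like the following (the table name, field name, and dimension are assumptions; 1536 matches OpenAI's `text-embedding-ada-002`, so adjust it to your embedding model):

```python
# SurrealQL to create the table and an HNSW index over the `embedding` field.
DEFINE_SCHEMA = """
DEFINE TABLE documents SCHEMALESS;
DEFINE INDEX hnsw_embedding ON documents
    FIELDS embedding
    HNSW DIMENSION 1536 DIST COSINE;
"""
# With a connected client `db`, apply it with: await db.query(DEFINE_SCHEMA)
```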
A tiny SurrealDB adapter for Llama Index
The SurrealVectorStore class is a minimal adapter, modeled on the official "build a vector store from scratch" example in the Llama Index documentation.
Index documents with Llama Index
Query
The query flow is:

- Llama Index embeds the question,
- `SurrealVectorStore.query()` runs a KNN search through the HNSW index,
- the top-k nodes are handed back to the query engine for response synthesis.
What about metadata filters, deletes, hybrid search?
SurrealDB supports metadata-rich JSON payloads, additional filter clauses, and full-text search; extend SurrealVectorStore.query() to:
- add `WHERE` predicates before the `<|K,EF|>` operator,
- combine with `vector::distance::knn()` for re-ranking,
- or fall back to `vector::distance::cosine()` for exact search.
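For example, a metadata-filtered KNN query might look like the following SurrealQL; the `meta.lang` field and the `K`/`EF` values of 10/40 are hypothetical:

```python
# Filtered KNN sketch: a WHERE predicate narrows candidates by metadata
# before the <|K,EF|> operator, and results are ordered by distance.
FILTERED_KNN = """
SELECT node_id, text, vector::distance::knn() AS dist
FROM documents
WHERE meta.lang = 'en' AND embedding <|10,40|> $vec
ORDER BY dist;
"""
# Run with: await db.query(FILTERED_KNN, {"vec": query_embedding})
```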
All other Llama Index abstractions (retrievers, query engines, agents) work unchanged, because they talk to the vector store through the same tiny interface you just implemented.
Enjoy Llama Index + SurrealDB!