Mar 6, 2025
Alexander Fridriksson
Personalised experiences can make or break the conversion of a visitor into a customer, which is why recommendation engines are no longer a luxury — they’re essential.
You’ve seen them everywhere:
“Gift ideas inspired by your shopping history.”
“Customers who viewed this item also bought…”
“Just arrived for you.”
However, behind these helpful nudges are complex systems. Traditionally, building them meant assembling a team of experts and a stack of disconnected tools that then had to be glued together: relational, document and graph databases, plus advanced machine learning pipelines and vector stores.
Systems like these trend toward ever increasing complexity, but is that really necessary?
What if complex systems don’t need to be complicated?
What if you could do it all in one stack?
SurrealDB gives you the power of 10+ products in one unified multi-model database.
It’s natively real-time, relational, document, graph, vector and more. Built from the ground up using cutting-edge research and the Rust language, it delivers runtime safety along with the performance and low latency you need from a high-performance system.
For recommendation systems, this enables you to build simpler systems and achieve better personalisation. Instead of juggling multiple systems to stitch together context, relationships, and meaning, SurrealDB lets you keep everything in one tightly integrated platform, without the need to install or interact with multiple databases, libraries or frameworks.
Traditional recommendation systems rely on techniques like collaborative filtering (“customers who bought this also bought…”), content-based filtering (“similar to items you viewed”), and popularity-based ranking.
These work, but if you could increase conversion rates by a few percent, why wouldn’t you?
The current cutting edge is moving towards real-time, more context-aware systems, making use of LLM-driven recommendations powered by centralised knowledge graphs.
This is where SurrealDB shines. Let’s explore a high-level recipe for building such a recommendation system using only SurrealDB and an LLM provider. No need for Python glue code, frameworks, separate vector stores and more.
This makes it easy to provide context that can be used to answer questions such as: “What did customers who bought the same products as this person also buy?”
```surql
SELECT array::distinct(
    ->order->product<-order<-person->order->product.{id, name}
) AS recommended_products
FROM person:01FVRH055G93BAZDEVNAJ9ZG3D;
```
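The query above assumes a graph where `person` records are connected to `product` records through `order` edges. A minimal, hypothetical setup for that graph might look like this (record names and fields are placeholders, not from the original):

```surql
-- Illustrative records; ids and fields are placeholders
CREATE person:jaime SET name = "Jaime";
CREATE product:sneakers SET name = "Casual sneakers";

-- RELATE creates an `order` edge from a person to a product,
-- which the graph traversal above can then follow in both directions
RELATE person:jaime->order->product:sneakers SET
    quantity = 1,
    created_at = time::now();
```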
Create vector embeddings from product descriptions or user behaviour.
```surql
DEFINE FUNCTION fn::create_embeddings($input: string) {
    RETURN http::post("https://example.com/api/embeddings", {
        "model": "embedding model",
        "input": $input
    });
};
```
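Assuming the endpoint returns a plain vector, you might populate embeddings for existing products and index them for fast similarity search. The field name and dimension below are illustrative; the dimension must match your embedding model’s output size:

```surql
-- Store an embedding per product (assumes fn::create_embeddings
-- returns an array<float> and products have a `details` field)
UPDATE product SET details_embedding = fn::create_embeddings(details);

-- Optional: an HNSW index to accelerate vector search
DEFINE INDEX product_embedding_idx ON product
    FIELDS details_embedding HNSW DIMENSION 768 DIST COSINE;
```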
These embeddings unlock semantic search, so you can say:
“Show me products similar to a ‘casual sweatshirt’”
And get results based on meaning, not just keywords.
```surql
-- Alias the similarity score so ORDER BY can reference it
SELECT id, name,
    vector::similarity::cosine(details_embedding, $prompt) AS similarity
FROM product
ORDER BY similarity DESC;
```
Combine keyword relevance and vector similarity for the best of both worlds.
```surql
DEFINE FUNCTION fn::hybrid_search($search_term: string) {
    LET $prompt = fn::create_embeddings($search_term);
    LET $semantic_search = (…);
    LET $full_text_search = (…);
    RETURN {
        semantic_results: $semantic_search,
        full_text_results: $full_text_search
    };
};
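The two subqueries are left elided in the original. As a sketch of what could fill them: the full-text side can use SurrealDB’s search index with BM25 scoring, and the semantic side the KNN operator. The analyzer, index names and fields below are assumptions for illustration:

```surql
-- Hypothetical full-text setup for product names
DEFINE ANALYZER product_analyzer TOKENIZERS class
    FILTERS lowercase, snowball(english);
DEFINE INDEX product_name_search ON product
    FIELDS name SEARCH ANALYZER product_analyzer BM25;

-- Full-text side: match the raw search term, rank by BM25 score
SELECT id, name, search::score(1) AS score
FROM product
WHERE name @1@ $search_term
ORDER BY score DESC;

-- Semantic side: 10 nearest neighbours by cosine distance
SELECT id, name
FROM product
WHERE details_embedding <|10,COSINE|> $prompt;
```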
Retrieve personalised context: purchase history, preferences, popular items, and more.
```surql
-- Take a record id rather than a string so FROM resolves correctly
DEFINE FUNCTION fn::person_context($record_id: record<person>) {
    LET $context = (
        SELECT id, name, address, ->order.* AS order_history
        FROM $record_id
    );
    RETURN $context;
};
```
Send all this rich context into an LLM for hyper-personalised recommendations.
```surql
DEFINE FUNCTION fn::get_recs($template: any) {
    RETURN http::post('https://example.com/api/chat', {
        "model": 'reasoning model',
        "input": $template
    });
};
```
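Putting the pieces together, one way to wire the earlier functions into a single recommendation call is sketched below. The wrapper function, the prompt wording, and the assumption that `fn::person_context` accepts a person record are all illustrative, not from the original:

```surql
-- Hypothetical end-to-end pipeline: search + context -> LLM
DEFINE FUNCTION fn::recommend($person: record<person>, $search_term: string) {
    LET $results = fn::hybrid_search($search_term);
    LET $context = fn::person_context($person);
    -- Build a simple text prompt from the structured results
    LET $template = string::concat(
        "You are a personal shopper. Customer context: ",
        <string> $context,
        " Candidate products: ",
        <string> $results,
        " Recommend the best matches for this customer."
    );
    RETURN fn::get_recs($template);
};
```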
A luxury fashion retailer was struggling with low website conversion rates. They had personal shoppers in-store, but scaling that experience online was a massive challenge.
Their solution? Build a virtual personal shopper!
They built a real-time recommendation pipeline powered end-to-end by SurrealDB. Want to see it in action? Check out the full case study on YouTube.
Want help making your recommendations smarter? Reach out to us