In the rapidly evolving field of AI, most of the time goes into optimising: you are either maximising accuracy or minimising latency. Join our live SurrealDB webinar, where we’ll demo LangChain components, test prompt engineering tricks, and dig into the challenges of specific use cases.
We’ll walk through an experiment: a chatbot answering questions over chat-style conversations, showing when vector retrieval wins, when lightweight graphs help, and how to handle tricky bits like time awareness.
Solutions Engineer at SurrealDB
Set up SurrealDB as both a graph and vector store: one connection, one system (first sketch below)
Use LangChain to ingest documents (second sketch below)
Use LLMs to infer keywords (also shown in the second sketch)
Tune retrieval (k, thresholds) and compare vector-only, graph-only, and intersected results (third sketch below)
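To give a flavour of the first item, here is a minimal sketch of one SurrealDB connection doing double duty as a vector index and a graph store, using the official Python SDK. The namespace, credentials, table and field names (conversation, mentions, keyword), and the index settings are illustrative assumptions, not the exact schema from the session.

```python
# Sketch only: one connection, one system. Names and settings are placeholders.
from surrealdb import Surreal  # official SurrealDB Python SDK

with Surreal("ws://localhost:8000/rpc") as db:
    db.signin({"username": "root", "password": "root"})  # key names vary by SDK version
    db.use("demo", "demo")

    # Vector side: an index over chat-chunk embeddings. DIMENSION must match
    # your embedding model (1536 fits OpenAI's text-embedding-3-small).
    db.query("""
        DEFINE INDEX conversation_embedding ON conversation
            FIELDS embedding MTREE DIMENSION 1536 DIST COSINE;
    """)

    # Graph side: the same records link to keyword nodes via RELATE, so graph
    # traversal and vector search share one storage engine and one connection.
    db.query(
        """
        CREATE conversation:demo SET content = $content, embedding = $vec;
        CREATE keyword:retrieval SET name = "retrieval";
        RELATE conversation:demo -> mentions -> keyword:retrieval;
        """,
        {"content": "How do I tune k for retrieval?", "vec": [0.1] * 1536},
    )
```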
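The ingestion and keyword items might look roughly like this with LangChain: split the chat transcript, embed each chunk, and ask an LLM for a handful of keywords per chunk, ready to CREATE and RELATE into the schema above. Model names, chunk sizes, and the prompt wording are placeholder assumptions rather than the webinar's actual components.

```python
# Sketch of the ingestion pipeline: LangChain splits the transcript, an
# embedding model vectorises each chunk, and an LLM infers keywords.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
embedder = OpenAIEmbeddings(model="text-embedding-3-small")  # 1536-dim vectors
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

def prepare_chunks(transcript: str) -> list[dict]:
    """Split, embed, and keyword-tag a transcript, ready to be written to
    SurrealDB (CREATE the chunk, RELATE it to its keyword nodes)."""
    chunks = splitter.split_text(transcript)
    vectors = embedder.embed_documents(chunks)

    rows = []
    for chunk, vec in zip(chunks, vectors):
        # Keyword inference with a deliberately simple prompt; the session
        # looks at which prompt engineering tricks help for chat-style text.
        reply = llm.invoke(
            "List three comma-separated keywords for this chat excerpt:\n" + chunk
        )
        keywords = [kw.strip().lower() for kw in reply.content.split(",") if kw.strip()]
        rows.append({"content": chunk, "embedding": vec, "keywords": keywords})
    return rows
```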
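And a rough sketch of the final item: compare a vector-only query, a graph-only keyword hop, and their intersection, with k and a cosine-similarity threshold as the tuning knobs. The values, field names, and result handling below are placeholders, not the settings the experiment lands on.

```python
# Sketch of the comparison: vector-only, graph-only, and intersected retrieval.
def _rows(result):
    # The SDK's result envelope differs between versions; unwrap defensively.
    if isinstance(result, list) and result and isinstance(result[0], dict) and "result" in result[0]:
        return result[0]["result"]
    return result or []

def compare_retrieval(db, query_vec, keyword: str, k: int = 4, threshold: float = 0.7):
    # Vector-only: k nearest chunks by cosine similarity (uses the MTREE index
    # from the first sketch; the KNN operator takes a literal count).
    vector_rows = _rows(db.query(
        f"""
        SELECT id, content,
               vector::similarity::cosine(embedding, $vec) AS score
        FROM conversation
        WHERE embedding <|{int(k)}|> $vec
        ORDER BY score DESC;
        """,
        {"vec": query_vec},
    ))
    vector_rows = [r for r in vector_rows if (r.get("score") or 0.0) >= threshold]

    # Graph-only: hop from the keyword node back to the chunks that mention it.
    graph_rows = _rows(db.query(
        "SELECT <-mentions<-conversation AS docs FROM keyword WHERE name = $kw;",
        {"kw": keyword},
    ))
    graph_ids = {str(doc) for row in graph_rows for doc in (row.get("docs") or [])}

    # Intersected: vector hits that the keyword graph also points at.
    both = [r for r in vector_rows if str(r["id"]) in graph_ids]
    return {"vector": vector_rows, "graph": sorted(graph_ids), "both": both}
```

The intuition we'll test live is that vector search is strong on fuzzy, semantic questions, the keyword graph is strong on precise entities and relationships, and the intersection trades recall for precision; sweeping k and the threshold shows where each one wins.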