About this webinar
Language model agents reason from scratch on every query, then discard everything they learned: same question, different answer, no improvement over time. Reasoning graphs change this by persisting each chain of thought as structured edges tied to the evidence items it evaluated. On the next run, the system surfaces how each piece of evidence has been judged before, giving the model a memory that compounds with use. A companion retrieval graph tightens candidate selection over successive runs, and together the two structures form a self-learning feedback loop with no retraining required.

In this session, Matthew Penaroza walks through the architecture, the results (half the errors, an 11-point accuracy gain on hard questions, perfect consistency, 47% lower cost, 46% lower latency), and how to apply reasoning graphs to your own RAG pipelines. Based on the paper: https://arxiv.org/abs/2604.07595
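The core idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class name, method names, and edge layout below are all hypothetical, standing in for whatever schema the real system uses.

```python
# Hypothetical sketch of a reasoning graph: persist each run's
# evidence judgments as structured edges, then surface that history
# on later runs so the model need not reason from scratch.
from collections import defaultdict

class ReasoningGraph:
    def __init__(self):
        # evidence_id -> list of (question, verdict) edges (illustrative layout)
        self.edges = defaultdict(list)

    def record_judgment(self, question: str, evidence_id: str, verdict: str) -> None:
        """Persist one chain-of-thought step as an edge tied to its evidence."""
        self.edges[evidence_id].append((question, verdict))

    def prior_judgments(self, evidence_id: str) -> list:
        """Surface how this evidence item has been judged on earlier runs."""
        return self.edges[evidence_id]

# First run: the model judges two evidence items; the graph remembers.
graph = ReasoningGraph()
graph.record_judgment("Who founded X?", "doc-17", "supports")
graph.record_judgment("Who founded X?", "doc-23", "irrelevant")

# Second run: the same evidence arrives with its judgment history attached.
print(graph.prior_judgments("doc-17"))  # [('Who founded X?', 'supports')]
```

In the full architecture this memory is paired with a retrieval graph that narrows which candidates are fetched at all; the sketch covers only the judgment-persistence half.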
Speakers
Matthew Penaroza
Head of Solution Architecture at SurrealDB