
Agent memory that persists, structures, and connects

Working memory, semantic memory, episodic memory, procedural memory, and preference memory - all structured, all queryable, all in one transactional system.

MEMORY TYPES

Five types of memory for agents

Human cognition uses multiple memory systems. Agents benefit from the same separation. Spectron maps each memory type to structured storage with graph relationships and temporal awareness.

Working memory

Current session context - the conversation history, tool outputs, and intermediate results the agent is actively reasoning over.

Semantic memory

Facts and knowledge - entities, relationships, and properties stored as a knowledge graph. 'Paris is the capital of France.'

Episodic memory

Past interactions and experiences - what happened, when, with whom, and what the outcome was. Agents learn from history.

Procedural memory

Learned patterns and workflows - successful strategies, tool usage patterns, and decision heuristics accumulated over time.

Preference memory

User preferences, feedback, and interaction patterns. Agents personalise without re-learning from scratch.

Shared memory

Any of the five memory types, made accessible across multiple agents. Coordination happens through shared context with ACID guarantees.
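The separation above can be sketched as a single queryable store where every record carries its memory type. This is a toy illustration in plain Python, not Spectron's actual schema or API; all names here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryRecord:
    """One entry in an agent's memory, tagged by memory type."""
    # "working" | "semantic" | "episodic" | "procedural" | "preference"
    memory_type: str
    content: dict
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AgentMemory:
    """Toy in-process store: one collection, queryable by memory type."""
    def __init__(self):
        self.records: list[MemoryRecord] = []

    def remember(self, memory_type: str, content: dict) -> MemoryRecord:
        rec = MemoryRecord(memory_type, content)
        self.records.append(rec)
        return rec

    def recall(self, memory_type: str) -> list[MemoryRecord]:
        return [r for r in self.records if r.memory_type == memory_type]

memory = AgentMemory()
memory.remember("semantic", {"subject": "Paris", "predicate": "capital_of", "object": "France"})
memory.remember("preference", {"user": "alice", "likes": "concise answers"})
assert len(memory.recall("semantic")) == 1
assert memory.recall("preference")[0].content["user"] == "alice"
```

Keeping every type in one store is what lets a single query (or a single transaction) span working context, facts, and preferences at once.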

AUTONOMOUS UNDERSTANDING

Memory that keeps thinking

Most memory systems are passive. They store what agents tell them and retrieve what agents ask for. Spectron's memory is active - background processes autonomously discover connections between entities, consolidate fragmented knowledge, and infer relationships that no single conversation could reveal.

Connection discovery

Relationships between entities mentioned in separate conversations are discovered automatically as context accumulates.

Knowledge consolidation

Fragmented facts from dozens of interactions merge into coherent entity profiles. Understanding becomes structured over time.

Implicit inference

New facts are derived from the graph without being explicitly stated. The knowledge graph grows richer than the sum of its inputs.
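One simple form of implicit inference is transitive closure over graph edges: facts that were never stated directly fall out of facts that were. A minimal sketch, assuming a plain triple set rather than Spectron's internal representation:

```python
def infer_transitive(edges, relation):
    """Derive implied edges for a transitive relation until a fixed point."""
    known = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(known):
            for (b2, r2, c) in list(known):
                if r1 == r2 == relation and b == b2:
                    new = (a, relation, c)
                    if new not in known:
                        known.add(new)   # a new fact, derived, not stated
                        changed = True
    return known

# Two facts from two separate conversations...
edges = {
    ("Louvre", "located_in", "Paris"),
    ("Paris", "located_in", "France"),
}
inferred = infer_transitive(edges, "located_in")
# ...yield a third fact no conversation ever stated.
assert ("Louvre", "located_in", "France") in inferred
```

This is the sense in which the graph grows "richer than the sum of its inputs": derived edges compound with stated ones on the next pass.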

MEMORY VS. RAG

Memory is not RAG

RAG retrieves relevant text chunks using vector similarity. Memory understands entities, tracks changes over time, resolves references, and accumulates knowledge. RAG answers “what text is relevant?” Memory answers “what does the agent know?”

RAG: retrieval

Vector similarity search over document chunks. Stateless - every query starts fresh. No entity awareness, no temporal reasoning, no knowledge accumulation.

Memory: understanding

Structured knowledge graph with entity disambiguation, temporal facts, preference accumulation, and episodic recall. Stateful - the agent builds knowledge over time.
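The contrast can be made concrete with two toy components (illustrative only; neither reflects Spectron's or any RAG library's real API): a stateless retriever that ranks chunks per query, and a stateful entity memory where the newest fact supersedes older ones.

```python
# Stateless retrieval: every query ranks chunks independently.
# No entity awareness -- stale and current text score alike.
def rag_retrieve(chunks, query_terms, k=2):
    scored = sorted(chunks, key=lambda c: -len(query_terms & set(c.lower().split())))
    return scored[:k]

# Stateful memory: facts about an entity are merged; newest value wins.
class EntityMemory:
    def __init__(self):
        self.facts = {}   # (entity, attribute) -> (value, version)

    def assert_fact(self, entity, attribute, value, version):
        key = (entity, attribute)
        if key not in self.facts or version > self.facts[key][1]:
            self.facts[key] = (value, version)

    def current(self, entity, attribute):
        return self.facts[(entity, attribute)][0]

chunks = ["alice joined Acme in 2019", "alice moved to Globex in 2023"]
hits = rag_retrieve(chunks, {"alice", "employer"})
# RAG returns both chunks -- it cannot say which is true now.

mem = EntityMemory()
mem.assert_fact("alice", "employer", "Acme", version=1)
mem.assert_fact("alice", "employer", "Globex", version=2)
assert mem.current("alice", "employer") == "Globex"
```

The retriever hands back text and leaves reconciliation to the agent; the memory resolves the conflict itself, which is the "what does the agent know?" distinction.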

THE PROBLEM

The fragmented memory tax

Memory middleware like Mem0 layers a memory API on top of external databases - a vector store here, a key-value store there, maybe a graph database for relationships. Every layer adds latency, failure modes, and consistency gaps.

Consistency gaps

Memory in one store, state in another. No unified transactions means no guarantees that memory and data stay in sync.

Latency compounds

Each system adds a network hop. Memory retrieval that should take milliseconds takes tens of milliseconds across systems.

Operational burden

Three to five systems to deploy, monitor, back up, and secure. Each one is a potential failure point.

No unified permissions

Different permission models for memory, vectors, and structured data. Security policies cannot be applied consistently.

THE SOLUTION

Atomic context with Spectron

Spectron runs on SurrealDB. Memory, knowledge graphs, vectors, and structured data share the same ACID transaction boundary. No middleware, no glue code, no consistency gaps.
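What a shared transaction boundary buys is easiest to see in miniature. The sketch below uses Python's built-in sqlite3 purely as a stand-in for "one ACID database" (Spectron's actual storage is SurrealDB): a memory write and a state update either both commit or both roll back, so the consistency gap described above cannot open.

```python
import sqlite3

# One database, one transaction boundary for memory and state alike.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memory (fact TEXT)")
conn.execute("CREATE TABLE state (key TEXT, value TEXT)")

# The connection context manager wraps both writes in a single
# transaction: commit on success, rollback on any exception.
with conn:
    conn.execute("INSERT INTO memory VALUES ('user prefers dark mode')")
    conn.execute("INSERT INTO state VALUES ('theme', 'dark')")

memory_rows = conn.execute("SELECT COUNT(*) FROM memory").fetchone()[0]
state_rows = conn.execute("SELECT COUNT(*) FROM state").fetchone()[0]
assert memory_rows == 1 and state_rows == 1
```

With memory and state in separate systems, the same guarantee would require distributed-transaction machinery or reconciliation code; inside one boundary it is free.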

GET STARTED

Give your agents memory

Structured memory for AI agents - working, semantic, episodic, procedural, and preference memory types.