EXECUTIVE SUMMARY
Enterprise AI initiatives in 2026 share a common failure pattern. The cause is not the model. It is the architecture beneath the model. As the initial hype surrounding Large Language Models matures into the deployment of autonomous agents, enterprises are hitting a “Context Wall.” While LLMs provide the reasoning engine, they lack a persistent, structured, and real-time memory layer. This paper introduces the Enterprise Semantic Foundation (ESF) — a unified, multi-model substrate that merges the stability of Knowledge Graphs with the dynamism of Context Graphs. It argues that the next leap in AI utility will not come from larger models, but from a shift in data architecture.
INTRODUCTION
From static graphs to active context
Enterprise infrastructure has evolved through three distinct generations: the semantic layer, which governs business definitions; the knowledge graph, which records canonical facts; and the context graph, which holds live, dynamic state. Each generation subsumes the previous one rather than replacing it. A well-designed context graph anchors itself to the canonical facts of a knowledge graph and the governed definitions of a semantic layer.
The architectural challenge is building a foundation that can hold all three — static definitions, stable facts, and dynamic state — in a single, unified, transactional system.
THE 2026 ENTERPRISE REALITY
Hitting the wall
Nearly every organisation with a data team attempted to deploy some form of agentic workflow in 2024 and 2025: a customer support agent, a data analyst, an internal knowledge assistant. Most of those efforts failed.
THE ARCHITECTURE OF AGENCY
Why AI needs a context layer
The shift from human-driven analytics to autonomous agency introduces a fundamentally different set of requirements. Humans query data to find insights. Agents query data to decide what to do next. That distinction changes everything about how a data architecture must behave.
The read-think-write loop
An AI agent operates through a continuous cycle: perceive the environment, reason over memory, commit an action or observation back to storage. This loop runs in milliseconds, and it never stops.
Modern data platforms are optimised for analytical throughput. They are designed to answer questions about the past at scale. Fast ingest makes data queryable quickly; it does not provide the ability to lock a record, mutate it, and guarantee that the next agent in the loop reads the updated value within the same millisecond window. That requires read-after-write transactional consistency — a property that streaming ingest architectures are not designed to deliver.
The state problem
Agents are not stateless. They track goals, monitor sub-tasks, and maintain a working memory of their progress. When an agent is coordinating a complex workflow, it must be able to lock a resource, update a belief, and trigger a downstream action — all within a single consistent operation. This requires ACID compliance and granular transactional control. Analytical platforms are not built for this. They are built to tell you what happened. Agents need infrastructure that shapes what happens next.
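To make the requirement concrete, here is a sketch of a lock-update-trigger cycle as a single SurrealQL transaction. All record, field, and parameter names here are illustrative assumptions, not taken from any real deployment:

```surql
BEGIN TRANSACTION;

-- Lock the resource: the WHERE clause ensures only one agent wins the claim.
UPDATE resource:deploy_queue SET held_by = $agent_id
    WHERE held_by = NONE;

-- Update the agent's working-memory belief about that resource.
UPDATE belief:deploy_queue_state SET
    value = "locked",
    observed_at = time::now();

-- Record the downstream action so the next step of the workflow fires.
CREATE action SET
    kind = "notify_reviewer",
    resource = resource:deploy_queue,
    created_at = time::now();

COMMIT TRANSACTION;
```

If any statement fails, the whole transaction rolls back: the lock, the belief, and the pending action are never visible to other agents in a partial state.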
The missing layer: context
What is missing from every existing data architecture — whether a traditional warehouse, a modern lakehouse, or a distributed data mesh — is a dedicated context layer. A context layer is an active, low-latency, transactional substrate that holds the live state of an agent's understanding: what it knows right now, what it is currently doing, and how that relates to everything else in the system.
The context layer must live at the database layer, not above it, because only at the database layer can context be kept consistent, governed, secured, and transactionally unified with the canonical knowledge that grounds it.
How context gets built
Context enters the system through multiple channels: automated pipelines that ingest canonical data from existing warehouses and semantic layers, agent-generated observations written back during the read-think-write loop, human-curated business definitions and rules contributed by domain experts, and self-updating flows where agent corrections are incorporated back into the context graph.
A multi-model foundation supports all of these ingestion patterns natively. Canonical facts arrive as structured documents. Relationships between entities are expressed as graph edges. Semantic embeddings are stored alongside the data they describe. Because all three models exist in the same transactional system, an update to a business definition, its relationships, and its embedding can be committed atomically — eliminating the synchronisation gaps that plague fragmented architectures.
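As an illustration of this atomicity, the following SurrealQL transaction (schema and parameter names are hypothetical) commits a definition change, its graph relationship, and its refreshed embedding as one unit:

```surql
BEGIN TRANSACTION;

-- Document model: update the governed business definition.
UPDATE definition:churn_rate SET
    description = "Customers lost over a rolling 30-day window",
    updated_at = time::now();

-- Graph model: re-link the definition to the metric it governs.
RELATE definition:churn_rate->governs->metric:monthly_churn;

-- Vector model: store the refreshed embedding beside the record it describes.
UPDATE definition:churn_rate SET embedding = $new_embedding;

COMMIT TRANSACTION;
```

No reader of the context graph can observe the new definition with the old embedding, or the new relationship without the new definition.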
THE CONTEXT LAYER
Why multi-model wins
The transition from Large Language Models to Autonomous Agents represents a shift from “stateless inference” to “stateful reasoning.” In many current architectures, memory is fragmented across a “frankenstack” of specialised databases — a vector store for similarity, a graph database for relationships, and a relational or document store for metadata. This fragmentation imposes a significant “Cognitive Tax” on the agent and a “Semantic Tax” on the developer.
The fragmented memory tax
When an agent's memory is split across multiple systems, the unification of that data happens at the application layer. This creates several architectural bottlenecks: every retrieval fans out across multiple network hops, every write must be synchronised across stores, and any lag between them leaves the agent reasoning over stale or contradictory state.
Why memory middleware is not enough
Products such as Mem0 provide a useful abstraction layer, but are built on top of the same fragmented infrastructure: a vector store for semantic retrieval, optionally a graph layer for relationship tracking, and a relational store for metadata. The abstraction hides the complexity from the developer, but it does not eliminate it. When context changes in one underlying system, synchronisation to others is not guaranteed. The semantic drift problem persists underneath the abstraction.
Memory middleware treats context as something to be retrieved and injected into a prompt. It does not treat context as a live, transactional substrate that the agent reasons within. For autonomous agents coordinating complex workflows with enterprise governance requirements, the foundation must be deeper.
The multi-model advantage: atomic context
A native multi-model foundation — where the database engine treats documents, graphs, and vectors as first-class citizens of the same storage layer — solves the memory problem by providing atomic context. A single “memory” is not a collection of pointers across different systems; it is a unified record.
CONTEXT GRAPH SUPPORT
Knowledge vs. experience: the static and the fluid
Most enterprises have spent the last decade building Knowledge Graphs as “dictionaries of truth.” They are excellent at defining what a company knows, but they are remarkably poor at helping an agent decide what to do now.
Why native graph support matters for agency
Modern vector databases such as Pinecone, Chroma, and Weaviate now support metadata filtering, hybrid search, and namespace scoping. These are genuine improvements. However, they remain fundamentally flat: they can tell an agent that two things are semantically similar, but they cannot model the structured relationships between entities, enforce transactional consistency across updates, or traverse impact paths through a connected graph. Similarity is not context.
Consider a concrete example. An agent monitoring a production system receives an alert about a failing API endpoint. With a vector database, the agent can search for documentation semantically similar to the error message. Useful, but shallow. With a graph-capable multi-model database, the agent can traverse from the failing endpoint to the upstream service that depends on it, to the team that owns that service, to the on-call engineer currently assigned, and simultaneously retrieve the vector-embedded runbook most relevant to this specific failure mode — all in a single query.
The structural traversal and the semantic search are not two separate operations stitched together in application code; they are one atomic operation.
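A sketch of what such a query could look like in SurrealQL, with all table, edge, and field names invented for illustration:

```surql
-- One statement: walk the dependency graph outward from the failing endpoint,
-- and fetch the most relevant runbook by vector similarity at the same time.
SELECT
    <-depends_on<-service AS affected_services,
    <-depends_on<-service->owned_by->team AS owning_teams,
    <-depends_on<-service->owned_by->team->on_call->engineer AS on_call,
    (
        SELECT id, title,
            vector::similarity::cosine(embedding, $error_embedding) AS score
        FROM runbook
        ORDER BY score DESC
        LIMIT 1
    ) AS best_runbook
FROM endpoint:payments_api;
```

The graph arrows express the structural traversal, the subquery expresses the semantic ranking, and both execute inside the same engine against the same transactionally consistent data.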
MULTI-AGENT SHARED STATE
Solving the shared state problem
As AI moves from single-task assistants to Multi-Agent Systems, the architectural challenge shifts from individual memory to shared state. When multiple agents — each with specific roles like “Researcher,” “Coder,” or “Reviewer” — work together, they require a common area where the current state of a task is visible, verifiable, and actionable by all participants.
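One pattern for making shared state actionable is an atomic claim, sketched here in SurrealQL with hypothetical names. Because the update is transactional, two agents cannot both satisfy the guard clause and claim the same sub-task:

```surql
-- The WHERE clause acts as a guard: the claim only succeeds if the task is
-- still unowned when the transaction commits.
UPDATE task:review_pr SET
    claimed_by = $agent_id,
    status = "in_progress"
WHERE status = "pending" AND claimed_by = NONE;
```

The losing agent simply sees an empty result set and moves on to the next pending task, with no distributed lock service required.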
THE COMPETITIVE LANDSCAPE
What else is being built
The market has recognised the context problem. Several architectures are competing to answer the question of where the context layer should live. Understanding the distinctions matters for any organisation making infrastructure decisions.
SurrealDB occupies a distinct position. The context layer does not live above the database as a middleware abstraction, nor inside an analytical platform as a bolt-on feature. It lives in the database itself, as a native property of a storage model that treats graph, vector, and document as first-class citizens of the same transactional system.
THE ARCHITECTURE
Towards an Enterprise Semantic Foundation
The Enterprise Semantic Foundation (ESF) is the architectural response to semantic drift. It is not a single database or a rigid schema; it is a unified, multi-model substrate that serves as the common language for the entire organisation.
Solving the security and governance paradox
In a traditional “frankenstack” — vector store + graph store + document store — enforcing security is an operational nightmare. If a user's permissions change, those changes must be propagated and synchronised across three or four different systems simultaneously. If the synchronisation lags, an AI agent might inadvertently “remember” or “reason” over sensitive data it is no longer authorised to see.
An Enterprise Semantic Foundation solves this through record-level and field-level permissions native to the database.
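A sketch of what this looks like in SurrealQL (table, field, and role names are assumptions): one permission definition governs the documents, their edges, and their embeddings alike, because they all live in the same storage layer.

```surql
-- Record-level rule: an agent acting on behalf of a user can only read or
-- modify that user's memories.
DEFINE TABLE memory SCHEMAFULL
    PERMISSIONS
        FOR select, create, update WHERE owner = $auth.id
        FOR delete NONE;

-- Field-level rule: the raw embedding is visible only to privileged sessions.
DEFINE FIELD embedding ON memory
    PERMISSIONS FOR select WHERE $auth.role = "admin";
```

When a user's permissions change, the change takes effect on the next query; there is no second or third system to synchronise.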
The end of semantic drift
By moving semantics from the application code (where it is hidden and fragmented) into the database (where it is transparent and shared), organisations create a “self-describing” data environment. The database does not just store bytes; it stores intent. When a new agent is deployed, it does not need to be “trained” on how to interpret the data; it simply queries the Enterprise Semantic Foundation to understand the relationships, constraints, and context inherent in the system.
Integration, not replacement
A production context layer cannot exist in isolation. Enterprise data lives in Snowflake, Databricks, S3, and dozens of operational systems. SurrealDB is designed to complement existing data platforms. Canonical data from warehouses and lakehouses flows into the context graph through standard ingestion pipelines. Agents query the context layer for real-time state and relationships, while analytical workloads remain on the platforms best suited for them.
This is not a rip-and-replace proposition. It is an architectural addition: a transactional, multi-model layer purpose-built for the operational requirements that analytical platforms were never designed to meet. The data platform answers “what happened.” The context layer answers “what should happen next.”
Developer experience and time-to-value
CONCLUSION
The architecture of the agentic era
The first generation of graph databases solved the problem of interconnected data for human analysts. The next generation must solve the problem of interconnected reasoning for AI agents.
The leap from simple RAG to true autonomous agency requires a move away from static data warehouses, fragmented specialty stores, and analytical platforms that were never designed for real-time agentic state. It demands a multi-model Enterprise Semantic Foundation — a system where the graph provides the structure, the document provides the substance, and the vector provides the intuition.
The market is beginning to understand this. Specialist databases are addressing parts of the problem. Memory middleware is abstracting over the fragmentation. Data platforms are building context features on top of existing products. Each of these efforts is moving the industry forward. But none of them solve the problem at the foundation level, where graph, vector, and document exist as a single unified transactional model, governed by a single permission system, queryable in a single language, and consistent within a single transaction.
SurrealDB was built specifically for this transition. The context layer will live in the database. The question is which database is ready for it.
TECHNICAL APPENDIX
Implementing semantic context with SurrealDB
To illustrate how a multi-model foundation simplifies agentic workflows, the following examples show how SurrealQL collapses complex operations into single, atomic queries.
A. The “Rich Edge” (Graph + Document)
Instead of just a pointer, an edge in SurrealDB can store the full context of an interaction, including the vector embedding of the sentiment.
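A minimal sketch, with record names and fields invented for illustration: the RELATE statement creates an edge that is itself a full document, carrying the interaction's context and the sentiment embedding alongside the connection.

```surql
RELATE agent:support_01->discussed->customer:acme CONTENT {
    channel: "chat",
    sentiment: "frustrated",
    summary: "Customer blocked by a failed invoice export",
    embedding: $sentiment_embedding, -- vector embedding of the sentiment
    at: time::now()
};
```

Querying the relationship later returns the full context of the interaction, not just the fact that two records are connected.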
B. The unified search (Vector + Graph)
An agent can find “Similar Memories” (Vector) but immediately filter them by “Established Relationships” (Graph).
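For example (all names are hypothetical), the query below ranks memories by similarity to the current query embedding while keeping only those connected to the active project through the graph:

```surql
SELECT *,
    vector::similarity::cosine(embedding, $query_embedding) AS score
FROM memory
WHERE ->relates_to->project CONTAINS project:apollo -- graph filter
ORDER BY score DESC
LIMIT 5;
```

The vector ranking and the graph filter run in the same statement, so no candidate set ever has to be shipped to application code for a second-pass join.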
C. Live queries for multi-agent sync
Instead of polling a warehouse, agents subscribe to changes in the shared state.
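A minimal sketch: each agent issues one LIVE SELECT subscription, and SurrealDB pushes every committed change to the shared task state, removing the need to poll.

```surql
-- Delivered to the subscribing agent whenever a matching record is created,
-- updated, or deleted.
LIVE SELECT * FROM task WHERE status = "pending";
```

Because notifications fire on commit, every agent in the system observes the same sequence of state changes to the shared task board.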

