
AI-native architecture

Why you shouldn’t settle for AI as the sixth bullet point.

The problem

Building intelligent applications means battling brittle systems. Your AI ends up starved of real-time context, trapped in a maze of APIs and exports.

Data silos

Models end up trained on stale, sampled data exported to warehouses - not on live, holistic datasets.

Latency death spiral

Features are shipped to a database, then to Python for inference, then back again - adding critical delays at every hop.

The solution

SurrealDB’s AI-native architecture embeds machine learning into the database core - store vectors, run models, and serve predictions directly alongside your data with millisecond latency.

How SurrealDB powers next-gen AI


Embed ML models

Run ONNX models inside the database. Predict equipment failures from factory sensor readings using the data already stored locally - no export step required.
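As a rough sketch of what this looks like in SurrealQL (the model name, version, table, and feature fields below are hypothetical; SurrealML models are uploaded to the database and then invoked with the `ml::` prefix):

```surql
-- Hypothetical sensor-failure model, previously uploaded to SurrealDB.
-- The prediction runs inside the database, next to the sensor readings,
-- so no data leaves the store for inference.
SELECT
    id,
    temperature,
    vibration,
    ml::sensor-failure<0.1.0>({
        temperature: temperature,
        vibration: vibration
    }) AS failure_risk
FROM sensor_reading;
```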


Built-in search

SurrealDB offers vector and full-text search. Store and query vectors natively for AI use cases.
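A minimal sketch of both capabilities, assuming a hypothetical `document` table with an `embedding` field and a `body` field (the index names, dimension, and `$query_embedding` parameter are illustrative):

```surql
-- Vector search: index embeddings, then find the 5 nearest neighbours
-- with the KNN operator.
DEFINE INDEX embedding_idx ON TABLE document
    FIELDS embedding MTREE DIMENSION 384 DIST COSINE;

SELECT id, title FROM document
    WHERE embedding <|5|> $query_embedding;

-- Full-text search over the same table, with no external search engine.
DEFINE ANALYZER simple TOKENIZERS blank,class FILTERS lowercase;
DEFINE INDEX body_idx ON TABLE document
    FIELDS body SEARCH ANALYZER simple BM25;

SELECT id, title FROM document WHERE body @@ 'vector databases';
```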


Multi-model support

Unify relational, document, and graph data. Combine vector results with structured business data.
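For example, one query can mix vector similarity with graph traversal and structured filters. This sketch assumes a hypothetical schema where products carry embeddings and customers are linked to products through a `purchased` graph edge:

```surql
-- Vector results, graph edges, and structured business data in one query:
-- find similar products under a price cap, and who has bought them.
SELECT
    id,
    name,
    price,
    <-purchased<-customer AS buyers
FROM product
WHERE embedding <|10|> $query_embedding
    AND price < 100;
```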

Benefits of SurrealDB’s AI-native architecture


Faster inference

Run models where the data lives. No unnecessary network hops.


No pipeline sprawl

Iterate on AI workloads without switching tools - everything is expressed in SurrealQL.


Future-proof

Adapt to new use cases without costly changes. Built-in flexibility.

Ready to build with
AI-native architecture?

Get started with SurrealDB: the multi-model database for knowledge-intensive applications.