About Index Network
Traditional discovery methods are fragmented, limiting personalisation and forcing trade-offs between relevance and privacy. While public data is widely accessible, personal context remains siloed in separate apps, inboxes, and chat histories. Index Network bridges these gaps by allowing autonomous agents to synthesise both public and private data securely, preserving user privacy. Instead of relying on centralised platforms, users can define their own context and delegate discovery to trusted agents, whether for tracking trends, finding collaborators, or making sense of fragmented knowledge. This approach makes discovery more open, personal, and intelligent.
Index Network is a user-centric, distributed discovery protocol. A discovery protocol is a set of rules and technologies that connect users with relevant information, ideas, and people. Through decentralised semantic indices, Index Network lets AI agents securely query private data using compute powered by Trusted Execution Environments (TEEs), while maintaining composability with public sources.
The problem we are solving
We are tackling the challenge of privacy-preserving, user-centric discovery. Our goal is to enable autonomous agents to retrieve and synthesise insights across fragmented data sources without requiring centralised control. Previously, user data could not be securely connected to algorithms while maintaining privacy, so custom-built solutions were necessary for every new context. However, with Web3 eliminating intermediaries, TEE-based decentralised compute enabling public-private data synthesis, and large language models (LLMs) processing human language data, autonomous agents can now dynamically compose information for any context.
The challenges we faced
Unpredictable data structures: User data can exist in many formats and schemas, requiring flexible querying: sometimes as a graph, sometimes relational, and often semantically structured.
Uncertain access patterns: A decentralised environment needs a secure, verifiable, and distributed authorisation mechanism. Compute nodes must access data only during processing, minimising data leakage risks.
Scalability constraints: The unpredictable nature of data access forces compute and storage to scale together, leading to inefficient resource allocation and performance bottlenecks. Proximity between compute and storage is crucial for efficiency, but decentralisation introduces network latency challenges.
Our solution
We explored traditional relational databases with dynamic schema management (such as PostgreSQL with pgvector for embeddings), document databases for schemaless design, and graph databases. Specifically, we needed to combine vector, graph, relational, and document databases in a unified platform. Trying different combinations of tools, we ran into several hurdles:
The performance hit of module-based approaches
Redundant data transformations when combining multiple products
Indexing headaches when combining unstructured and structured data
Managing multiple binaries, each with its own versioning concerns
This led us to conclude that to deliver a platform solution for our decentralised product, our data also needed to live on a single platform, without compromise.
We chose SurrealDB because it is reliable and lets us access data in many different ways on a single data platform, without replicating data, juggling multiple query languages, or managing multiple versions of different technologies.
Why SurrealDB?
Our use of Rust, widely used in decentralised compute networks, led us to SurrealDB. Its support for both graph and relational models aligned perfectly with our needs. Additionally, its architecture separates compute from storage, making it a natural fit for decentralised deployment.
In our setup, we leverage TEEs to ensure compute nodes access data only during execution, preventing unauthorised retention. Since SurrealDB's query layer is decoupled from storage, we can run verifiable computations within TEEs, maintaining privacy and security without sacrificing performance. Think of it like this: SurrealDB separates the financial advisors (compute) from the vault (storage). This separation allows us to bring the advisors into a secure, private room (TEE) to work on your portfolio (process data) without giving them full access to all your assets in the vault.
Code snippet: querying data with SurrealDB
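As a sketch of what this looks like in practice, the SurrealQL queries below run against a hypothetical schema of our own invention: `person` records connected by `knows` graph edges, and `document` records carrying an `embedding` vector field (with an MTREE index defined on it for the KNN operator). The record IDs, field names, and `$query_embedding` parameter are illustrative, not part of our production schema.

```surql
-- Graph traversal: the people a given user knows,
-- addressed directly by record ID.
SELECT ->knows->person AS contacts FROM person:alice;

-- Relational-style selection combined with semantic search:
-- the four documents nearest to a query embedding,
-- scored by cosine similarity.
SELECT id, title,
    vector::similarity::cosine(embedding, $query_embedding) AS score
FROM document
WHERE embedding <|4|> $query_embedding
ORDER BY score DESC;
```

Being able to express graph, relational, and vector access in one query language, against one store, is what removes the redundant transformations we struggled with across separate databases.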
Code snippet: secure query execution in a TEE
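To illustrate the vault-and-advisor pattern described above, here is a minimal Rust sketch using only the standard library. The names (`Vault`, `with_record`, `average_in_enclave`) are illustrative inventions, not SurrealDB or TEE SDK APIs; the point is the shape of the access pattern: compute borrows data only for the duration of a single call and returns only a derived result.

```rust
use std::collections::HashMap;

/// The "vault": storage that lends records out only for the duration
/// of a single computation.
struct Vault {
    records: HashMap<String, Vec<f64>>,
}

impl Vault {
    /// Lend a record to a compute closure. The borrowed slice cannot
    /// outlive the call, so the compute side cannot retain raw data.
    fn with_record<R>(&self, key: &str, compute: impl FnOnce(&[f64]) -> R) -> Option<R> {
        self.records.get(key).map(|data| compute(data))
    }
}

/// The "advisor in the private room": runs against borrowed data and
/// returns only an aggregate, never the raw records.
fn average_in_enclave(vault: &Vault, key: &str) -> Option<f64> {
    vault.with_record(key, |data| data.iter().sum::<f64>() / data.len() as f64)
}

fn main() {
    let mut records = HashMap::new();
    records.insert("portfolio:alice".to_string(), vec![100.0, 200.0, 300.0]);
    let vault = Vault { records };

    // Only the aggregate leaves the "enclave"; the raw vector stays put.
    let avg = average_in_enclave(&vault, "portfolio:alice");
    println!("{:?}", avg); // prints "Some(200.0)"
}
```

In the real system the boundary is enforced by TEE attestation rather than by Rust's borrow checker, but the contract is the same: compute sees data only while it is executing, and only results leave the enclave.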
The benefits
Predictable and minimal architecture: Autonomous agents can retrieve data across graph, relational, key-value, and semantic layers without redundant transformations.
Privacy management: Users can now control data flow without compromising security.
Scalability for multi-modality: A standardised approach to querying different data modalities ensures long-term growth.
We’re excited about the road ahead with SurrealDB and look forward to unlocking new possibilities in decentralised discovery.
