Beginning our benchmarking journey


Feb 11, 2025 · 30 min read

Alexander Fridriksson


What is SurrealDB?

That might be a strange question to start off a blog post about benchmarking, but it’s a very important one.

Why is it important?

Because that, to a large extent, determines what kind of benchmarks make sense to run against SurrealDB.

Let's therefore address that question first: What exactly is SurrealDB?

The challenge of multi-model benchmarking

SurrealDB is a multi-model database that natively handles all types of data: relational, document, graph, time-series, key-value, vector search, full-text search and more, all in one place.

SurrealDB also handles all types of deployment environments, from embedded devices to distributed clusters. This is made possible by one of the fundamental architectural designs of SurrealDB: separating the storage layer from the computation layer.
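
To illustrate that separation, the same client code can target an embedded, in-process engine or a remote server just by swapping the connection engine. Below is a minimal sketch assuming a recent surrealdb Rust SDK (2.x) with the kv-mem feature and tokio; the namespace, database, and record are purely illustrative:

```rust
use serde::{Deserialize, Serialize};
use surrealdb::engine::local::Mem;
use surrealdb::Surreal;

#[derive(Debug, Serialize, Deserialize)]
struct Person {
    name: String,
}

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    // Embedded, in-memory engine: storage runs inside this process.
    let db = Surreal::new::<Mem>(()).await?;
    // The same code could instead connect to a remote server or cluster, e.g.:
    // let db = Surreal::new::<surrealdb::engine::remote::ws::Ws>("127.0.0.1:8000").await?;
    db.use_ns("bench").use_db("bench").await?;

    // The query layer is identical regardless of the storage engine underneath.
    let created: Option<Person> = db
        .create(("person", "tobie"))
        .content(Person { name: "Tobie".into() })
        .await?;
    println!("{created:?}");
    Ok(())
}
```

Because only the engine type changes, the same query layer can be exercised on top of embedded, in-memory, and remote storage configurations.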

SurrealDB also has a built-in security layer, along with API layers for REST, RPC, and GraphQL. This blurs the lines between a traditional database and a backend as a service (BaaS).

Users like you giving SurrealDB a try for the first time come from all kinds of previous databases:

  • SQL and NoSQL

  • Embedded and distributed

  • Traditional database and backend as a service

Naturally, you want to know how it compares to your previous database. Possibly even multiple previous databases if you’re replacing multiple databases with SurrealDB.

This is also something we are very interested in. But considering the versatility of SurrealDB, it can get complicated very fast.

For example, how immediately and reliably data is flushed to disk is not the same across databases. There are many configuration options which can result in dramatically different performance metrics. In-memory databases, as the name suggests, don't even write to disk.

It's important to keep this in mind, as we're comparing SurrealDB with various configurations against various kinds of databases. We've done our best to configure each database in the most fair way possible and added code comments, such as in the code for MongoDB, to help explain some of our reasoning. If you think any of our configurations can be improved, we'd love to hear about it, as we want to be as fair as possible.

The need for robust internal tooling

Over the last year, we have been looking into the various benchmarking possibilities out there, and have run a range of tools and tests to help us improve our performance and make sure there are no performance regressions between versions.

We have a vision for what SurrealDB is and can be, but as a young project, we are also heavily influenced by what you want SurrealDB to be and, therefore, the contributions you make really matter. That is why we are now opening up all our benchmarking tooling and asking for your feedback in helping us test and optimise for the things that matter to you!

Our internal benchmarking tool

What we have found most useful, and therefore put the most effort into, is developing our own benchmarking tool. Built in Rust, it can be easily extended to cover everything that SurrealDB does and to compare against any database or platform you use.

crud-bench is an open-source benchmarking tool for testing and comparing the performance of a number of different workloads on embedded, networked, and remote databases. It can be used to compare both SQL and NoSQL platforms, including key-value and embedded databases. Importantly, crud-bench focuses on testing additional features which are available in SurrealDB but not present in other benchmarking tools.

The primary purpose of crud-bench is to continually test and monitor the performance of features and functionality built into SurrealDB, enabling developers working on features in SurrealDB to assess the impact of their changes on database queries and performance.

The crud-bench benchmarking tool is being actively developed, with new features and functionality added regularly. If you have ideas for how to improve our code or want to add a new database to compare against, we’d be more than happy to receive your feedback and contributions!

How does it work?

When running simple, automated tests, the crud-bench benchmarking tool will automatically start a Docker container for the datastore or database which is being benchmarked (when the datastore or database is networked). This configuration can be modified so that an optimised, remote environment can be connected to, instead of running a Docker container locally. This allows for running crud-bench against remote datastores, and distributed datastores on a local network or remotely in the cloud.

Against a single table, the benchmark performs 5 main tasks (a simplified sketch of this loop follows the list):

  • Create: insert N unique records, with the specified concurrency.

  • Read: read N unique records, with the specified concurrency.

  • Update: update N unique records, with the specified concurrency.

  • Scan: perform a number of range and table scans, with the specified concurrency.

  • Delete: delete N unique records, with the specified concurrency.
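
To make the shape of these tasks concrete, here is a heavily simplified sketch of such a timed, concurrent pass written with tokio. This is illustrative code, not the actual crud-bench implementation; the worker layout, the no-op operation, and the numbers are made up for the example:

```rust
use std::time::Instant;

/// Run one benchmark task ("create", "read", ...) by splitting `samples`
/// operations across `clients` concurrent workers and timing the whole pass.
async fn run_task<F, Fut>(name: &str, samples: u64, clients: u64, op: F)
where
    F: Fn(u64) -> Fut + Clone + Send + 'static,
    Fut: std::future::Future<Output = ()> + Send + 'static,
{
    let start = Instant::now();
    let mut workers = Vec::new();
    for client in 0..clients {
        let op = op.clone();
        workers.push(tokio::spawn(async move {
            // Each worker operates on its own slice of the key space.
            let mut key = client;
            while key < samples {
                op(key).await;
                key += clients;
            }
        }));
    }
    for worker in workers {
        worker.await.expect("worker panicked");
    }
    let wall = start.elapsed();
    println!(
        "[{name}] wall time: {wall:?}, throughput: {:.2} ops/s",
        samples as f64 / wall.as_secs_f64()
    );
}

#[tokio::main]
async fn main() {
    // A no-op stands in for the real database call; a real run would issue
    // CREATE / SELECT / UPDATE / DELETE statements against the datastore.
    run_task("create", 1_000_000, 128, |_key| async {}).await;
    run_task("read", 1_000_000, 128, |_key| async {}).await;
    run_task("delete", 1_000_000, 128, |_key| async {}).await;
}
```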

With crud-bench, almost all aspects of the benchmark engine are configurable:

  • The number of rows or records (samples).

  • The number of concurrent clients or connections.

  • The number of concurrent threads (concurrent messages per client).

  • Whether rows or records are modified sequentially or randomly.

  • The primary id or key type for the records.

  • Total control over the record structure: columnar or object (JSON-like).

  • Fine-grained control over field types: strings, booleans, numbers, arrays.

  • The scan specifications for range or table queries.

Which workloads can it run?

As crud-bench is in active development, some benchmarking workloads are already implemented, while others will be implemented in future releases. The list below details which benchmarks are implemented for the supported datastores and lists those which are planned for the future.

CRUD

  • Creating single records in individual transactions

  • Reading single records in individual transactions

  • Updating single records in individual transactions

  • Deleting single records in individual transactions

  • Batch creating multiple records in a transaction

  • Batch reading multiple records in a transaction

  • Batch updating multiple records in a transaction

  • Batch deleting multiple records in a transaction
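
To illustrate the difference between the individual and batch variants, here is a hedged sketch of the two create workloads expressed as SurrealQL issued through the Rust SDK (assuming the surrealdb crate with the kv-mem feature; the table and field names are made up, and this is not the crud-bench implementation):

```rust
use surrealdb::engine::local::Mem;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    let db = Surreal::new::<Mem>(()).await?;
    db.use_ns("bench").use_db("bench").await?;

    // Creating single records in individual transactions:
    // each statement runs as its own implicit transaction.
    for i in 0..3 {
        db.query("CREATE type::thing('person', $id) SET counter = $id")
            .bind(("id", i))
            .await?;
    }

    // Batch creating multiple records in a single explicit transaction.
    db.query(
        "BEGIN TRANSACTION;
         CREATE person:100 SET counter = 100;
         CREATE person:101 SET counter = 101;
         CREATE person:102 SET counter = 102;
         COMMIT TRANSACTION;",
    )
    .await?;

    Ok(())
}
```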

Scans

  • Full table scans, projecting all fields

  • Full table scans, projecting id field

  • Full table count queries

  • Scans with a limit, projecting all fields

  • Scans with a limit, projecting id field

  • Scans with a limit, counting results

  • Scans with a limit and offset, projecting all fields

  • Scans with a limit and offset, projecting id field

  • Scans with a limit and offset, counting results

One thing to note about the scans with a limit and offset ([S]can::limit_start_all (100) and its variants) is that our implementation is not yet fully optimised and will therefore not be as fast as in other databases.
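
For reference, the scan variants above correspond roughly to the following SurrealQL shapes, sketched here through the Rust SDK (the person table and the limit and offset values are illustrative assumptions, not the exact queries crud-bench issues):

```rust
use surrealdb::engine::local::Mem;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    let db = Surreal::new::<Mem>(()).await?;
    db.use_ns("bench").use_db("bench").await?;

    // Full table scan, projecting all fields.
    db.query("SELECT * FROM person").await?;
    // Full table scan, projecting only the id field.
    db.query("SELECT id FROM person").await?;
    // Full table count query.
    db.query("SELECT count() FROM person GROUP ALL").await?;
    // Scan with a limit, projecting all fields.
    db.query("SELECT * FROM person LIMIT 100").await?;
    // Scan with a limit and offset (the limit_start_* variants).
    db.query("SELECT * FROM person LIMIT 100 START 5000").await?;

    Ok(())
}
```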

Filters

  • Full table query, using filter condition, projecting all fields

  • Full table query, using filter condition, projecting id field

  • Full table query, using filter condition, counting rows

Indexes

  • Indexed table query, using filter condition, projecting all fields

  • Indexed table query, using filter condition, projecting id field

  • Indexed table query, using filter condition, counting rows
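
The difference between the filter and index workloads is essentially whether a matching index exists for the filtered field. A hedged SurrealQL sketch via the Rust SDK (the table, field, and index names are illustrative):

```rust
use surrealdb::engine::local::Mem;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    let db = Surreal::new::<Mem>(()).await?;
    db.use_ns("bench").use_db("bench").await?;

    // Without an index, this filter forces a full table scan.
    db.query("SELECT * FROM person WHERE counter = 12345").await?;

    // Define a standard index on the filtered field...
    db.query("DEFINE INDEX person_counter ON TABLE person FIELDS counter").await?;
    // ...after which the same filter can be answered via the index.
    db.query("SELECT id FROM person WHERE counter = 12345").await?;

    Ok(())
}
```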

Relationships

  • Fetching or traversing 1-level, one-to-one relationships or joins

  • Fetching or traversing 1-level, one-to-many relationships or joins

  • Fetching or traversing 1-level, many-to-many relationships or joins

  • Fetching or traversing n-level, one-to-one relationships or joins

  • Fetching or traversing n-level, one-to-many relationships or joins

  • Fetching or traversing n-level, many-to-many relationships or joins
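
As a rough illustration of what these relationship workloads exercise, here is a hedged SurrealQL sketch of a graph edge with 1-level and n-level traversals, issued through the Rust SDK (the record names and the knows edge are made up for the example):

```rust
use surrealdb::engine::local::Mem;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    let db = Surreal::new::<Mem>(()).await?;
    db.use_ns("bench").use_db("bench").await?;

    // Create two records and a graph edge between them.
    db.query(
        "CREATE person:tobie SET name = 'Tobie';
         CREATE person:jaime SET name = 'Jaime';
         RELATE person:tobie->knows->person:jaime;",
    )
    .await?;

    // 1-level traversal: who does tobie know?
    db.query("SELECT ->knows->person.name AS knows FROM person:tobie").await?;
    // n-level traversal: friends of friends.
    db.query("SELECT ->knows->person->knows->person.name AS fof FROM person:tobie").await?;

    Ok(())
}
```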

Workloads

  • Workload support for creating, updating, and reading records concurrently

As you can see from the above list, there is still a lot to do, and we are committed to providing a very comprehensive benchmarking environment. If you think there is something we are missing or something we can improve, let us know!

crud-bench is open for contributions, so you can raise issues and submit PRs for the things that matter most to you.

Details of this benchmarking run


Crud-bench was run with the following configuration:

  • Number of records: 5 000 000

  • Number of clients: 128

  • Number of threads: 64

  • Primary key type: string26

  • Cooldown between runs: 15 min (cooldown prevents CPU throttling)

  • Record contents sample:

{
"integer": -649603394,
"nested": {
"array": [
"oICD6WTWrrPgHxDsSPSBoOSDF5fOw63orRmaieWlC59Mnbtx9S",
"3679FpWwclzTXEICDe8Qyqxf7XWwiDNhP9SFIDNszaLsQxg316",
"UrZt46kMNd60oCftGYtd0ZcEAMAReuBiwCdlcvIDqZgEkww9bg",
"CbwLLVw8OX0ymvgcBJ8AldhXMAlk3DmvIJvFQzAZLSOsubfhL4",
"pTiBvzTomwOyCkY3xv9CAfRU7klrmDAvbfQcASe66UNEGf89Wz"
],
"text": "cFw L3div76qg OIP3I mKMU3l vX395uDd 16jMHx 7zPM39 yG Cj L7Y8C8D nZZzc pUE8 qMz4 VPmkUH N7Yh2Xwg S00I 2hJLQC F5S2o IDadxYiaU wJ6s0I Dq KkOjxDC2 Zuj NZx28LU EG WJXG9v hKBWyX7 GiKpIL HtSwDANp3 y16Thb 08kYxhPWB u7bU TWaFZ t7nfoe4CU wKrq6HhB nFmR WIR9H Sb3BpPk rO Zk bWWLNHa IALWXX ajOCI NwO zl cN vMYZZ 4hkiWn Lh6A XR1 UkHZyiuw tiF o3JF1TNi v4f ICWpD 8JCWJ LP0h ywfLy do NPNt3q x6sfOn b9DDWfR Y4WqYJE S0T TC Iy uyr9W8i muj1 1N50bSQyL fnU 5QJaNSNOD 7Biav64 ez U5Wid1vk KsN CAyqJwG It as RJP KO 6q gJnE 6aljDtes DurAHei qIOFjC DS AbXvrmUX1 qz4 8Dq14i MqxAnt CHo u6kSff53t ng fSLgs PG 8UHhQA0A ei aX1ou 0V17xl 8Yc0T eUURFG0 oydm JYI VcJdFAd dI fm w7o mhDYTaY4A Y0xmtucTZ 7ZnM1M Z8h06h AGx6aI4 3Xi aFrb g65D0 ixSYe ZHA0 Ag KwTasnW C7A1pSzvg G3Hn3Gtw eGvxZfzQQ 4RMXFJgWM Ozj7oF llFwyn9R lvrYOJfv Y8 FrhfWUMIL pU5KdGZd taueohT LxFaieQ 4Bpebv J0t5bHtsz l3VVN aPH5 EGZ CsDT8 5J SKF 9k7FMR7X JzNtI qfF2vu T2MZx iLtv llmEl CxKnhhF6 N6bULGU fQxIOo5n M6Umh rjK 0y KIWLf 9CVj 9G36Vwji 7vSMt7GmH 1uk b23htGq stB CvWNZuFxG"
},
"text": "04Pn 8jBDDE ATemPG79l jkh1 u8zHq KP E6tZytaI dOT4 NDNT"
}
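
If it helps to see that record shape as code, a matching Rust/serde type might look like the following sketch (the field names mirror the sample above; this is illustrative, not the generator crud-bench actually uses):

```rust
use serde::{Deserialize, Serialize};

/// Mirrors the sample record above: a top-level integer and text field,
/// plus a nested object containing an array of strings and a longer text field.
#[derive(Debug, Serialize, Deserialize)]
struct Record {
    integer: i64,
    nested: Nested,
    text: String,
}

#[derive(Debug, Serialize, Deserialize)]
struct Nested {
    array: Vec<String>,
    text: String,
}
```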

Below you'll find summary tables of the crud-bench results, grouped into the different data models.

For more detailed results, you can check out the crud-bench GitHub repository.

We run all the benchmarks daily on GitHub Actions, so you can also keep track of our performance journey there.

Relational (SQL) database comparison

While SurrealDB is a multi-model database, at its core it stores data as documents on transactional key-value stores. SurrealDB also uses record links and graph connections to establish relationships instead of joins.
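
As a quick illustration of the record-link approach, a field can store the id of another record directly, so a lookup that would require a join in SQL becomes a path expression or a FETCH clause. A hedged sketch with made-up table names, via the Rust SDK:

```rust
use surrealdb::engine::local::Mem;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    let db = Surreal::new::<Mem>(()).await?;
    db.use_ns("bench").use_db("bench").await?;

    // A record link: the author field stores the id of another record.
    db.query(
        "CREATE person:tobie SET name = 'Tobie';
         CREATE article:one SET title = 'Benchmarking', author = person:tobie;",
    )
    .await?;

    // No join needed: follow the link with a path, or expand it with FETCH.
    db.query("SELECT title, author.name FROM article").await?;
    db.query("SELECT * FROM article FETCH author").await?;

    Ok(())
}
```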

This means that the most established relational benchmarks, such as TPC-C for transactional databases, cannot currently be run without modifications that the TPC-C benchmark would explicitly prohibit. This also applies to TPC-DS for analytical databases. Therefore, while we’ve looked into this quite a bit to determine the feasibility, we’ve decided not to implement these benchmarks yet. If you are interested in seeing this benchmark for SurrealDB, let us know. We would also be happy for contributions if you are familiar with implementing this benchmark.

For now, you can see below the crud-bench summary results comparing SurrealDB (with the RocksDB and SurrealKV storage engines) against PostgreSQL and MySQL.

Total time

Wall time - Lower is better - Time from start to finish as measured by a clock

| Benchmark | SurrealDB (RocksDB) | PostgreSQL | MySQL | SurrealDB (SurrealKV) |
|---|---|---|---|---|
| [C]reate | 32s 237ms | 24s 399ms | 1m 57s | 4m 8s |
| [R]ead | 9s 827ms | 17s 624ms | 20s 821ms | 1m 10s |
| [U]pdate | 34s 181ms | 30s 458ms | 2m 28s | 4m 37s |
| [S]can::count_all (100) | 4s 19ms | 6s 236ms | 4s 268ms | 6s 985ms |
| [S]can::limit_id (100) | 81ms 632µs | 39ms 770µs | 29ms 81µs | 58ms 287µs |
| [S]can::limit_all (100) | 64ms 714µs | 32ms 778µs | 20ms 586µs | 67ms 195µs |
| [S]can::limit_count (100) | 63ms 966µs | 32ms 240µs | 21ms 360µs | 62ms 69µs |
| [S]can::limit_start_id (100) | 804ms 91µs | 42ms 602µs | 53ms 621µs | 929ms 129µs |
| [S]can::limit_start_all (100) | 779ms 538µs | 25ms 749µs | 103ms 82µs | 738ms 549µs |
| [S]can::limit_start_count (100) | 701ms 106µs | 28ms 848µs | 56ms 275µs | 700ms 844µs |
| [D]elete | 57s 793ms | 25s 158ms | 2m 15s | 4m 30s |


Throughput

Operations per second (OPS) - Higher is better

| Benchmark | SurrealDB (RocksDB) | PostgreSQL | MySQL | SurrealDB (SurrealKV) |
|---|---|---|---|---|
| [C]reate | 155,096.92 | 204,923.10 | 42,409.99 | 20,123.13 |
| [R]ead | 508,757.45 | 283,699.12 | 240,133.35 | 71,195.44 |
| [U]pdate | 146,277.98 | 164,156.17 | 33,688.41 | 18,043.96 |
| [S]can::count_all (100) | 24.88 | 16.03 | 23.43 | 14.32 |
| [S]can::limit_id (100) | 1,225.00 | 2,514.40 | 3,438.61 | 1,715.65 |
| [S]can::limit_all (100) | 1,545.25 | 3,050.82 | 4,857.47 | 1,488.18 |
| [S]can::limit_count (100) | 1,563.31 | 3,101.73 | 4,681.62 | 1,611.11 |
| [S]can::limit_start_id (100) | 124.36 | 2,347.28 | 1,864.92 | 107.63 |
| [S]can::limit_start_all (100) | 128.28 | 3,883.56 | 970.09 | 135.40 |
| [S]can::limit_start_count (100) | 142.63 | 3,466.34 | 1,776.97 | 142.68 |
| [D]elete | 86,514.94 | 198,739.46 | 36,780.72 | 18,478.25 |


Latency

99th percentile - Lower is better - Top 1% slowest operations, 99% of operations are faster than this

| Benchmark | SurrealDB (RocksDB) | PostgreSQL | MySQL | SurrealDB (SurrealKV) |
|---|---|---|---|---|
| [C]reate | 79.04 ms | 57.57 ms | 484.61 ms | 360.19 ms |
| [R]ead | 15.36 ms | 130.94 ms | 31.10 ms | 129.53 ms |
| [U]pdate | 82.30 ms | 62.59 ms | 598.01 ms | 388.61 ms |
| [S]can::count_all (100) | 4009.98 ms | 6221.82 ms | 4169.73 ms | 6975.49 ms |
| [S]can::limit_id (100) | 76.29 ms | 5.03 ms | 19.12 ms | 53.79 ms |
| [S]can::limit_all (100) | 60.67 ms | 7.92 ms | 16.80 ms | 62.72 ms |
| [S]can::limit_count (100) | 61.38 ms | 7.44 ms | 13.91 ms | 58.27 ms |
| [S]can::limit_start_id (100) | 799.23 ms | 39.74 ms | 50.17 ms | 926.21 ms |
| [S]can::limit_start_all (100) | 774.65 ms | 22.93 ms | 98.11 ms | 733.18 ms |
| [S]can::limit_start_count (100) | 696.32 ms | 25.66 ms | 51.87 ms | 695.81 ms |
| [D]elete | 226.43 ms | 47.62 ms | 552.45 ms | 376.83 ms |

Click here to see the full results

Relational embedded database comparison

See below the crud-bench summary results comparing SurrealDB embedded (with the RocksDB, SurrealKV and in-memory storage engines) against SQLite.

Total time

Wall time - Lower is better - Time from start to finish as measured by a clock

| Benchmark | SurrealDB embedded (RocksDB) | SurrealDB embedded (SurrealKV) | SurrealDB embedded (in-memory) | SQLite |
|---|---|---|---|---|
| [C]reate | 19s 753ms | 1m 57s | 2m 29s | 1m 23s |
| [R]ead | 9s 985ms | 1m 5s | 9s 381ms | 38s 300ms |
| [U]pdate | 27s 938ms | 2m 11s | 2m 51s | 44s 964ms |
| [S]can::count_all (100) | 5s 385ms | 7s 72ms | 11s 744ms | 6s 9ms |
| [S]can::limit_id (100) | 34ms 537µs | 72ms 30µs | 47ms 990µs | 42ms 374µs |
| [S]can::limit_all (100) | 21ms 992µs | 22ms 617µs | 35ms 50µs | 29ms 69µs |
| [S]can::limit_count (100) | 21ms 767µs | 22ms 921µs | 27ms 384µs | 24ms 718µs |
| [S]can::limit_start_id (100) | 774ms 886µs | 787ms 202µs | 788ms 488µs | 23ms 416µs |
| [S]can::limit_start_all (100) | 610ms 580µs | 618ms 336µs | 610ms 489µs | 27ms 203µs |
| [S]can::limit_start_count (100) | 253ms 788µs | 258ms 285µs | 263ms 95µs | 23ms 758µs |
| [D]elete | 1m 8s | 2m 24s | 2m 27s | 1m 9s |


Throughput

Operations per second (OPS) - Higher is better

| Benchmark | SurrealDB embedded (RocksDB) | SurrealDB embedded (SurrealKV) | SurrealDB embedded (in-memory) | SQLite |
|---|---|---|---|---|
| [C]reate | 253,119.34 | 42,471.82 | 33,503.79 | 59,666.43 |
| [R]ead | 500,710.20 | 76,006.62 | 532,942.01 | 130,545.35 |
| [U]pdate | 178,967.41 | 37,946.57 | 29,140.42 | 111,198.12 |
| [S]can::count_all (100) | 18.57 | 14.14 | 8.51 | 16.64 |
| [S]can::limit_id (100) | 2,895.43 | 1,388.30 | 2,083.73 | 2,359.92 |
| [S]can::limit_all (100) | 4,547.03 | 4,421.45 | 2,853.02 | 3,440.05 |
| [S]can::limit_count (100) | 4,593.97 | 4,362.64 | 3,651.72 | 4,045.58 |
| [S]can::limit_start_id (100) | 129.05 | 127.03 | 126.82 | 4,270.52 |
| [S]can::limit_start_all (100) | 163.78 | 161.72 | 163.80 | 3,676.01 |
| [S]can::limit_start_count (100) | 394.03 | 387.17 | 380.09 | 4,208.96 |
| [D]elete | 73,129.86 | 34,535.40 | 33,998.67 | 72,338.41 |


Latency

99th percentile - Lower is better - Top 1% slowest operations, 99% of operations are faster than this

| Benchmark | SurrealDB embedded (RocksDB) | SurrealDB embedded (SurrealKV) | SurrealDB embedded (in-memory) | SQLite |
|---|---|---|---|---|
| [C]reate | 50.69 ms | 185.98 ms | 252.54 ms | 111.87 ms |
| [R]ead | 22.72 ms | 262.91 ms | 22.94 ms | 49.63 ms |
| [U]pdate | 77.50 ms | 204.80 ms | 271.62 ms | 57.31 ms |
| [S]can::count_all (100) | 5357.57 ms | 6975.49 ms | 8384.51 ms | 5947.39 ms |
| [S]can::limit_id (100) | 28.99 ms | 62.81 ms | 44.67 ms | 11.65 ms |
| [S]can::limit_all (100) | 19.87 ms | 15.03 ms | 20.35 ms | 14.84 ms |
| [S]can::limit_count (100) | 19.41 ms | 19.04 ms | 20.45 ms | 1.30 ms |
| [S]can::limit_start_id (100) | 770.05 ms | 783.36 ms | 784.89 ms | 18.56 ms |
| [S]can::limit_start_all (100) | 605.70 ms | 611.33 ms | 603.65 ms | 24.89 ms |
| [S]can::limit_start_count (100) | 249.22 ms | 250.50 ms | 257.41 ms | 11.50 ms |
| [D]elete | 183.17 ms | 237.95 ms | 233.22 ms | 91.14 ms |

Click here to see the full results

Document database comparison

The closest thing the NoSQL community has to a standard benchmark is the Yahoo! Cloud Serving Benchmark (YCSB), which has 6 workloads simulating various database use cases.

In our benchmarking repository, you’ll find an implementation of this benchmark in the Go programming language. This implementation was ported to Go from Java by PingCAP.

You’ll also find a fork of NoSQLBench, which is developed by DataStax. The SurrealDB changes to this benchmarking tool have not yet been released, but it’s something we are actively looking into.

We are working on running the YCSB benchmark in a multi-node configuration, which will come after this single-node crud-bench benchmark.

For now, you can see below the crud-bench summary results comparing SurrealDB (with the RocksDB and SurrealKV storage engines) against MongoDB and ArangoDB.

Total time

Wall time - Lower is better - Time from start to finish as measured by a clock

| Benchmark | SurrealDB (RocksDB) | MongoDB | ArangoDB | SurrealDB (SurrealKV) |
|---|---|---|---|---|
| [C]reate | 32s 237ms | 54s 211ms | 3m 2s | 4m 8s |
| [R]ead | 9s 827ms | 55s 0ms | 27s 615ms | 1m 10s |
| [U]pdate | 34s 181ms | 57s 53ms | 3m 18s | 4m 37s |
| [S]can::count_all (100) | 4s 19ms | 8s 285ms | 22s 85ms | 6s 985ms |
| [S]can::limit_id (100) | 81ms 632µs | 43ms 280µs | 57ms 753µs | 58ms 287µs |
| [S]can::limit_all (100) | 64ms 714µs | 31ms 805µs | 10s 167ms | 67ms 195µs |
| [S]can::limit_count (100) | 63ms 966µs | 29ms 773µs | 43ms 949µs | 62ms 69µs |
| [S]can::limit_start_id (100) | 804ms 91µs | 29ms 73µs | 86ms 470µs | 929ms 129µs |
| [S]can::limit_start_all (100) | 779ms 538µs | 23ms 178µs | 10s 311ms | 738ms 549µs |
| [S]can::limit_start_count (100) | 701ms 106µs | 28ms 340µs | 65ms 24µs | 700ms 844µs |
| [D]elete | 57s 793ms | 53s 553ms | 3m 5s | 4m 30s |


Throughput

Operations per second (OPS) - Higher is better

| Benchmark | SurrealDB (RocksDB) | MongoDB | ArangoDB | SurrealDB (SurrealKV) |
|---|---|---|---|---|
| [C]reate | 155,096.92 | 92,230.93 | 27,395.80 | 20,123.13 |
| [R]ead | 508,757.45 | 90,907.45 | 181,056.51 | 71,195.44 |
| [U]pdate | 146,277.98 | 87,636.30 | 25,171.61 | 18,043.96 |
| [S]can::count_all (100) | 24.88 | 12.07 | 4.53 | 14.32 |
| [S]can::limit_id (100) | 1,225.00 | 2,310.52 | 1,731.51 | 1,715.65 |
| [S]can::limit_all (100) | 1,545.25 | 3,144.14 | 9.84 | 1,488.18 |
| [S]can::limit_count (100) | 1,563.31 | 3,358.69 | 2,275.33 | 1,611.11 |
| [S]can::limit_start_id (100) | 124.36 | 3,439.61 | 1,156.46 | 107.63 |
| [S]can::limit_start_all (100) | 128.28 | 4,314.37 | 9.70 | 135.40 |
| [S]can::limit_start_count (100) | 142.63 | 3,528.48 | 1,537.88 | 142.68 |
| [D]elete | 86,514.94 | 93,365.28 | 26,939.30 | 18,478.25 |


Latency

99th percentile - Lower is better - Top 1% slowest operations, 99% of operations are faster than this

| Benchmark | SurrealDB (RocksDB) | MongoDB | ArangoDB | SurrealDB (SurrealKV) |
|---|---|---|---|---|
| [C]reate | 79.04 ms | 105.22 ms | 285.18 ms | 360.19 ms |
| [R]ead | 15.36 ms | 70.78 ms | 38.75 ms | 129.53 ms |
| [U]pdate | 82.30 ms | 88.64 ms | 323.58 ms | 388.61 ms |
| [S]can::count_all (100) | 4009.98 ms | 8232.96 ms | 21708.80 ms | 6975.49 ms |
| [S]can::limit_id (100) | 76.29 ms | 14.74 ms | 53.53 ms | 53.79 ms |
| [S]can::limit_all (100) | 60.67 ms | 7.00 ms | 9953.28 ms | 62.72 ms |
| [S]can::limit_count (100) | 61.38 ms | 3.67 ms | 30.27 ms | 58.27 ms |
| [S]can::limit_start_id (100) | 799.23 ms | 17.57 ms | 82.11 ms | 926.21 ms |
| [S]can::limit_start_all (100) | 774.65 ms | 16.89 ms | 10100.74 ms | 733.18 ms |
| [S]can::limit_start_count (100) | 696.32 ms | 12.67 ms | 60.80 ms | 695.81 ms |
| [D]elete | 226.43 ms | 68.67 ms | 284.16 ms | 376.83 ms |

Click here to see the full results

Graph database comparison

As we continue to make improvements to our graph features, we are looking into implementing the benchmarks from the Linked Data Benchmark Council (LDBC). If this is something you are interested in, please reach out to us!

For now, you can see below the crud-bench summary results comparing SurrealDB (with the RocksDB and SurrealKV storage engines) against Neo4j.

One thing to note is that this comparison covers only CRUD operations, not graph relationships, as we have not yet implemented relationship workloads in crud-bench.

Total time

Wall time - Lower is better - Time from start to finish as measured by a clock

| Benchmark | SurrealDB (RocksDB) | Neo4j | SurrealDB (SurrealKV) |
|---|---|---|---|
| [C]reate | 32s 237ms | 6m 36s | 4m 8s |
| [R]ead | 9s 827ms | 1m 3s | 1m 10s |
| [U]pdate | 34s 181ms | 8m 37s | 4m 37s |
| [S]can::count_all (100) | 4s 19ms | 20s 226ms | 6s 985ms |
| [S]can::limit_id (100) | 81ms 632µs | 237ms 2µs | 58ms 287µs |
| [S]can::limit_all (100) | 64ms 714µs | 42ms 530µs | 67ms 195µs |
| [S]can::limit_count (100) | 63ms 966µs | 43ms 153µs | 62ms 69µs |
| [S]can::limit_start_id (100) | 804ms 91µs | 84ms 143µs | 929ms 129µs |
| [S]can::limit_start_all (100) | 779ms 538µs | 30ms 818µs | 738ms 549µs |
| [S]can::limit_start_count (100) | 701ms 106µs | 44ms 6µs | 700ms 844µs |
| [D]elete | 57s 793ms | 2m 20s | 4m 30s |


Throughput

Operations per second (OPS) - Higher is better

| Benchmark | SurrealDB (RocksDB) | Neo4j | SurrealDB (SurrealKV) |
|---|---|---|---|
| [C]reate | 155,096.92 | 12,614.50 | 20,123.13 |
| [R]ead | 508,757.45 | 78,444.68 | 71,195.44 |
| [U]pdate | 146,277.98 | 9,657.01 | 18,043.96 |
| [S]can::count_all (100) | 24.88 | 4.94 | 14.32 |
| [S]can::limit_id (100) | 1,225.00 | 421.94 | 1,715.65 |
| [S]can::limit_all (100) | 1,545.25 | 2,351.27 | 1,488.18 |
| [S]can::limit_count (100) | 1,563.31 | 2,317.29 | 1,611.11 |
| [S]can::limit_start_id (100) | 124.36 | 1,188.44 | 107.63 |
| [S]can::limit_start_all (100) | 128.28 | 3,244.85 | 135.40 |
| [S]can::limit_start_count (100) | 142.63 | 2,272.42 | 142.68 |
| [D]elete | 86,514.94 | 35,556.12 | 18,478.25 |


Latency

99th percentile - Lower is better - Top 1% slowest operations, 99% of operations are faster than this

| Benchmark | SurrealDB (RocksDB) | Neo4j | SurrealDB (SurrealKV) |
|---|---|---|---|
| [C]reate | 79.04 ms | 774.65 ms | 360.19 ms |
| [R]ead | 15.36 ms | 98.75 ms | 129.53 ms |
| [U]pdate | 82.30 ms | 694.27 ms | 388.61 ms |
| [S]can::count_all (100) | 4009.98 ms | 20201.47 ms | 6975.49 ms |
| [S]can::limit_id (100) | 76.29 ms | 234.50 ms | 53.79 ms |
| [S]can::limit_all (100) | 60.67 ms | 40.38 ms | 62.72 ms |
| [S]can::limit_count (100) | 61.38 ms | 28.77 ms | 58.27 ms |
| [S]can::limit_start_id (100) | 799.23 ms | 81.73 ms | 926.21 ms |
| [S]can::limit_start_all (100) | 774.65 ms | 21.45 ms | 733.18 ms |
| [S]can::limit_start_count (100) | 696.32 ms | 19.21 ms | 695.81 ms |
| [D]elete | 226.43 ms | 473.34 ms | 376.83 ms |

Click here to see the full results

In-memory database comparison

See below the crud-bench summary results comparing SurrealDB (with the in-memory storage engine) against Redis, Dragonfly and KeyDB.

If you would like to see more comparisons or have ideas of how we can improve these benchmarks, let us know.

Total time

Wall time - Lower is better - Time from start to finish as measured by a clock

| Benchmark | SurrealDB in-memory | Redis | Dragonfly | KeyDB |
|---|---|---|---|---|
| [C]reate | 4m 36s | 57s 502ms | 14s 734ms | 54s 278ms |
| [R]ead | 8s 689ms | 54s 481ms | 14s 171ms | 52s 8ms |
| [U]pdate | 5m 2s | 53s 799ms | 14s 682ms | 52s 607ms |
| [S]can::count_all (100) | 1m 29s | 4m 39s | 26m 11s | 4m 58s |
| [S]can::limit_id (100) | 97ms 340µs | 139ms 608µs | 1s 295ms | 171ms 813µs |
| [S]can::limit_all (100) | 101ms 131µs | 599ms 524µs | 1s 975ms | 679ms 912µs |
| [S]can::limit_count (100) | 58ms 410µs | 145ms 503µs | 1s 387ms | 167ms 701µs |
| [S]can::limit_start_id (100) | 944ms 173µs | 284ms 803µs | 2s 583ms | 351ms 872µs |
| [S]can::limit_start_all (100) | 898ms 607µs | 918ms 956µs | 3s 302ms | 927ms 417µs |
| [S]can::limit_start_count (100) | 818ms 139µs | 314ms 147µs | 3s 48ms | 340ms 127µs |
| [D]elete | 4m 32s | 54s 114ms | 14s 171ms | 51s 101ms |


Throughput

Operations per second (OPS) - Higher is better

| Benchmark | SurrealDB in-memory | Redis | Dragonfly | KeyDB |
|---|---|---|---|---|
| [C]reate | 18,092.85 | 86,952.42 | 339,345.39 | 92,117.50 |
| [R]ead | 575,416.45 | 91,774.61 | 352,816.53 | 96,137.26 |
| [U]pdate | 16,510.91 | 92,937.11 | 340,535.51 | 95,042.82 |
| [S]can::count_all (100) | 1.11 | 0.36 | 0.06 | 0.33 |
| [S]can::limit_id (100) | 1,027.32 | 716.29 | 77.20 | 582.03 |
| [S]can::limit_all (100) | 988.81 | 166.80 | 50.63 | 147.08 |
| [S]can::limit_count (100) | 1,712.01 | 687.27 | 72.06 | 596.30 |
| [S]can::limit_start_id (100) | 105.91 | 351.12 | 38.71 | 284.19 |
| [S]can::limit_start_all (100) | 111.28 | 108.82 | 30.28 | 107.83 |
| [S]can::limit_start_count (100) | 122.23 | 318.32 | 32.80 | 294.01 |
| [D]elete | 18,325.80 | 92,397.10 | 352,832.63 | 97,844.68 |


Latency

99th percentile - Lower is better - Top 1% slowest operations, 99% of operations are faster than this

| Benchmark | SurrealDB in-memory | Redis | Dragonfly | KeyDB |
|---|---|---|---|---|
| [C]reate | 403.71 ms | 83.20 ms | 24.67 ms | 95.94 ms |
| [R]ead | 13.65 ms | 75.33 ms | 23.09 ms | 92.67 ms |
| [U]pdate | 431.10 ms | 73.73 ms | 24.50 ms | 92.67 ms |
| [S]can::count_all (100) | 89915.39 ms | 274726.91 ms | 1535115.26 ms | 293339.14 ms |
| [S]can::limit_id (100) | 93.50 ms | 134.66 ms | 1258.49 ms | 164.09 ms |
| [S]can::limit_all (100) | 97.66 ms | 587.77 ms | 1953.79 ms | 659.46 ms |
| [S]can::limit_count (100) | 54.78 ms | 137.73 ms | 1355.78 ms | 161.79 ms |
| [S]can::limit_start_id (100) | 939.01 ms | 280.57 ms | 2512.89 ms | 340.48 ms |
| [S]can::limit_start_all (100) | 893.44 ms | 891.39 ms | 3219.45 ms | 906.75 ms |
| [S]can::limit_start_count (100) | 812.54 ms | 304.13 ms | 3012.61 ms | 330.50 ms |
| [D]elete | 391.94 ms | 74.30 ms | 23.12 ms | 90.81 ms |

Click here to see the full results

Key-Value store comparison

See below the crud-bench summary results comparing the SurrealKV key-value storage engine against RocksDB, LMDB and Fjall.

A few things to note regarding SurrealKV:

  • It's still in beta and under active development.

  • Its primary purpose is not to replace RocksDB, but to enable new use cases such as versioning and versioned queries.

  • RocksDB is still our primary key-value storage engine.

If you would like to see more comparisons or have ideas of how we can improve these benchmarks, let us know.

Total time

Wall time - Lower is better - Time from start to finish as measured by a clock

| Benchmark | RocksDB | SurrealKV | LMDB | Fjall |
|---|---|---|---|---|
| [C]reate | 49s 354ms | 1m 27s | 2m 21s | 1m 38s |
| [R]ead | 8s 458ms | 35s 749ms | 4s 170ms | 5s 106ms |
| [U]pdate | 48s 842ms | 1m 20s | 2m 14s | 1m 41s |
| [S]can::count_all (100) | 11s 571ms | 9s 739ms | 868ms 104µs | 14s 423ms |
| [S]can::limit_id (100) | 36ms 686µs | 29ms 516µs | 22ms 885µs | 83ms 776µs |
| [S]can::limit_all (100) | 30ms 455µs | 22ms 489µs | 22ms 309µs | 33ms 984µs |
| [S]can::limit_count (100) | 24ms 901µs | 23ms 685µs | 21ms 121µs | 18ms 819µs |
| [S]can::limit_start_id (100) | 21ms 526µs | 16ms 100µs | 22ms 362µs | 21ms 706µs |
| [S]can::limit_start_all (100) | 17ms 847µs | 75ms 414µs | 22ms 424µs | 20ms 943µs |
| [S]can::limit_start_count (100) | 24ms 22µs | 27ms 197µs | 24ms 15µs | 15ms 994µs |


Throughput

Operations per second (OPS) - Higher is better

| Benchmark | RocksDB | SurrealKV | LMDB | Fjall |
|---|---|---|---|---|
| [C]reate | 101,307.73 | 56,953.28 | 35,284.94 | 50,890.18 |
| [R]ead | 591,113.32 | 139,862.88 | 1,198,773.26 | 979,056.07 |
| [U]pdate | 102,370.10 | 62,131.25 | 37,195.57 | 49,277.46 |
| [S]can::count_all (100) | 8.64 | 10.27 | 115.19 | 6.93 |
| [S]can::limit_id (100) | 2,725.80 | 3,387.93 | 4,369.66 | 1,193.65 |
| [S]can::limit_all (100) | 3,283.52 | 4,446.44 | 4,482.44 | 2,942.52 |
| [S]can::limit_count (100) | 4,015.79 | 4,221.92 | 4,734.49 | 5,313.53 |
| [S]can::limit_start_id (100) | 4,645.47 | 6,210.92 | 4,471.77 | 4,606.96 |
| [S]can::limit_start_all (100) | 5,602.94 | 1,326.01 | 4,459.32 | 4,774.72 |
| [S]can::limit_start_count (100) | 4,162.77 | 3,676.86 | 4,164.06 | 6,252.01 |
| [D]elete | 106,131.13 | 78,687.82 | 23,215.52 | 55,013.69 |


Latency

99th percentile - Lower is better - Top 1% slowest operations, 99% of operations are faster than this

| Benchmark | RocksDB | SurrealKV | LMDB | Fjall |
|---|---|---|---|---|
| [C]reate | 0.75 ms | 138.24 ms | 32.58 ms | 11.08 ms |
| [R]ead | 0.79 ms | 2.14 ms | 0.39 ms | 0.29 ms |
| [U]pdate | 0.77 ms | 126.27 ms | 34.94 ms | 11.46 ms |
| [S]can::count_all (100) | 4476.93 ms | 3731.45 ms | 326.65 ms | 5844.99 ms |
| [S]can::limit_id (100) | 0.30 ms | 0.12 ms | 0.01 ms | 70.91 ms |
| [S]can::limit_all (100) | 0.35 ms | 0.70 ms | 0.01 ms | 0.27 ms |
| [S]can::limit_count (100) | 0.30 ms | 0.09 ms | 0.04 ms | 0.41 ms |
| [S]can::limit_start_id (100) | 6.41 ms | 10.22 ms | 0.55 ms | 10.11 ms |
| [S]can::limit_start_all (100) | 9.10 ms | 64.06 ms | 0.37 ms | 10.59 ms |
| [S]can::limit_start_count (100) | 7.96 ms | 3.58 ms | 0.43 ms | 8.08 ms |
| [D]elete | 0.69 ms | 107.14 ms | 50.59 ms | 10.90 ms |

Click here to see the full results

Vector search benchmarks

To help us improve our vector search performance, we started by forking the ANN benchmarks developed by Erik Bernhardsson, which are among the most popular vector search benchmarks at the moment. We have since expanded on that with tests for all flavours of vector search in SurrealDB (MTREE, brute force and HNSW). There is still more work to be done, but feel free to look at what we have done so far.
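
To give a flavour of what those tests exercise, here is a hedged SurrealQL sketch (issued through the Rust SDK) of defining an HNSW vector index and running a nearest-neighbour query. The table name, dimension, and operator parameters are illustrative assumptions and should be checked against the current SurrealDB documentation:

```rust
use surrealdb::engine::local::Mem;
use surrealdb::Surreal;

#[tokio::main]
async fn main() -> surrealdb::Result<()> {
    let db = Surreal::new::<Mem>(()).await?;
    db.use_ns("bench").use_db("bench").await?;

    // Illustrative HNSW vector index on a 4-dimensional embedding field.
    db.query("DEFINE INDEX doc_embedding ON TABLE document FIELDS embedding HNSW DIMENSION 4")
        .await?;
    db.query("CREATE document:one SET embedding = [0.1, 0.2, 0.3, 0.4]").await?;

    // K-nearest-neighbour query using the knn operator.
    db.query("SELECT id FROM document WHERE embedding <|2|> [0.15, 0.25, 0.35, 0.45]")
        .await?;

    Ok(())
}
```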

How you can use our benchmarking tooling

We have made our benchmarking repository on GitHub public and you can find all the code and detailed instructions on how to run and contribute to each benchmark there.

https://github.com/surrealdb/benchmarking

SurrealDB can do a lot and this is only the start of our performance optimisation journey!

There is still a lot to do and we are committed to providing a very comprehensive benchmarking environment. As always, we really appreciate any feedback and contributions. SurrealDB wouldn’t be what it is today without you!

You can reach out to us here
