Setting up an embedded SurrealDB database only takes a few lines of code. After creating a new Cargo project with cargo new project_name and going into the project folder, we will then add the following dependencies inside Cargo.toml:
anyhow = "1.0.98"
async-openai = "0.28.3"
serde = "1.0.219"
surrealdb = { version = "2.3", features = ["kv-mem"] }
tokio = "1.45.0"
They can also be added on the command line using this command:
cargo add anyhow async-openai serde tokio surrealdb --features surrealdb/kv-mem
Inside main(), we can call the connect function with "memory" to instantiate an embedded database in memory. Since errors can come from a number of different sources, anyhow is the easiest way to handle them when getting started.
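A minimal sketch of that setup, assuming the any engine from the surrealdb crate (the namespace and database names here are arbitrary placeholders):

```rust
use anyhow::Result;
use surrealdb::engine::any::connect;

#[tokio::main]
async fn main() -> Result<()> {
    // "memory" starts an embedded, in-memory instance; no server process needed
    let db = connect("memory").await?;
    // Pick a namespace and database to work in
    db.use_ns("ns").use_db("db").await?;
    Ok(())
}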
Inside the SDK we can put all four of these statements inside a single .query() call, and then add a loop to print any errors that come back from them.
let mut res = db
    .query(
        "DEFINE TABLE document;
        DEFINE FIELD text ON document TYPE string;
        DEFINE FIELD embedding ON document TYPE array<float>;
        DEFINE INDEX hnsw_embed ON document FIELDS embedding HNSW DIMENSION 1536;",
    )
    .await?;

for (index, error) in res.take_errors() {
    println!("Error in query {index}: {error}");
}
The important piece to understand is the relationship between the embedding field, a simple array of floats, and the hnsw_embed index. The size of the vector (1536 here) represents the number of dimensions in the embedding. Since OpenAI's text-embedding-3-small model in this example uses 1536 as its default length, we set the vector size to 1536.
The HNSW index is not strictly necessary to use the KNN operator (<||>) to find an embedding's closest neighbours, and for our small sample code we will use the simple brute-force method, which takes a distance metric such as Euclidean, Hamming, and so on. The query we will use finds the four closest neighbours by cosine distance.
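That brute-force query might look like the following sketch, where $embeds stands for the query embedding bound in from the Rust side:

```surql
-- Brute-force KNN: the 4 nearest neighbours by cosine distance
SELECT text FROM document WHERE embedding <|4,COSINE|> $embeds;
```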
As the dataset grows, if some loss of accuracy is acceptable, the syntax can be changed to use the HNSW index by replacing the distance metric with a number that represents the size of the dynamic candidate list.
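For example, keeping the same query shape but going through the HNSW index instead (the 40 here is an arbitrary candidate-list size, not a distance metric):

```surql
SELECT text FROM document WHERE embedding <|4,40|> $embeds;
```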
At this point, you will need an OpenAI API key to interact with the OpenAI API. If you don't have one, you can still check that the code compiles and runs; you will get as far as this error message.
Error: invalid_request_error: Incorrect API key provided: blah. You can find your API key at https://platform.openai.com/account/api-keys. (code: invalid_api_key)
The best way to set the key is as an environment variable, OPENAI_API_KEY in this case. Wrapping it in a LazyLock lets us read it via the std::env::var() function the first time it is accessed. You can of course simply put it into a const for simplicity when first testing, but remember to never hard-code API keys in production code.
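A minimal sketch of that pattern (the fallback text is just for illustration):

```rust
use std::sync::LazyLock;

// Read OPENAI_API_KEY once, on first access; later reads reuse the cached value.
static API_KEY: LazyLock<String> = LazyLock::new(|| {
    std::env::var("OPENAI_API_KEY").unwrap_or_else(|_| String::from("<missing key>"))
});

fn main() {
    // The first dereference runs the closure above; the variable is never re-read.
    println!("key length: {}", API_KEY.len());
}
```

In real code you would likely return an error instead of a placeholder when the variable is missing.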
The output in your console should show a massive number of floats, 1536 of them to be precise. That's the embedding for this input!
Store embeddings in database
Now that we have the embedding returned from the OpenAI client, we can store it in the database. The response returned from the async-openai crate looks like this, with a Vec of Embedding structs that hold a Vec<f32>.
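Trimmed to the parts we use, the shape is roughly this (field layout as described in the async-openai docs; treat the exact names as an approximation):

```rust
pub struct CreateEmbeddingResponse {
    pub object: String,
    pub model: String,
    // One Embedding per input; our request only sent one input
    pub data: Vec<Embedding>,
    pub usage: EmbeddingUsage,
}

pub struct Embedding {
    pub index: u32,
    pub object: String,
    // The 1536 floats we want to store
    pub embedding: Vec<f32>,
}
```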
We know that our simple request only returned a single embedding, so .remove(0) will do the job. In a more complex codebase you would probably opt for a match on .get(0) to handle the case where no embedding was returned.
let embeds = result.data.remove(0).embedding;
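As a sketch of the safer route, with mock stand-ins for the async-openai types (names assumed):

```rust
// Mock stand-ins for the async-openai response types
struct Embedding { embedding: Vec<f32> }
struct CreateEmbeddingResponse { data: Vec<Embedding> }

// into_iter().next() returns None on an empty Vec instead of panicking like remove(0)
fn first_embedding(result: CreateEmbeddingResponse) -> Option<Vec<f32>> {
    result.data.into_iter().next().map(|e| e.embedding)
}

fn main() {
    let result = CreateEmbeddingResponse {
        data: vec![Embedding { embedding: vec![0.1, 0.2] }],
    };
    assert_eq!(first_embedding(result), Some(vec![0.1, 0.2]));

    let empty = CreateEmbeddingResponse { data: vec![] };
    assert_eq!(first_embedding(empty), None);
    println!("ok");
}
```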
There are a number of ways to work with or avoid structs when using the Rust SDK, but we'll just go with two basic structs: one to represent the input into a .create() statement, which will implement Serialize, and another that implements Deserialize to show the result.
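A sketch of those two structs and the create_embed() function built around them (the async-openai request-builder calls follow that crate's docs, but treat the details as assumptions):

```rust
use anyhow::Result;
use async_openai::{config::OpenAIConfig, types::CreateEmbeddingRequestArgs, Client};
use serde::{Deserialize, Serialize};
use surrealdb::{engine::any::Any, Surreal};

// Input into .create(): serialized into a new document record
#[derive(Serialize)]
struct DocumentInput {
    text: String,
    embedding: Vec<f32>,
}

// Output of .create(): deserialized to show the result
#[derive(Debug, Deserialize)]
struct Document {
    text: String,
}

async fn create_embed(input: &str, db: &Surreal<Any>, client: &Client<OpenAIConfig>) -> Result<()> {
    let request = CreateEmbeddingRequestArgs::default()
        .model("text-embedding-3-small")
        .input(input)
        .build()?;
    let mut result = client.embeddings().create(request).await?;
    let embeds = result.data.remove(0).embedding;
    let created: Option<Document> = db
        .create("document")
        .content(DocumentInput {
            text: input.to_string(),
            embedding: embeds,
        })
        .await?;
    println!("{created:?}");
    Ok(())
}
```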
And then call it a few times inside main(). See if you can guess the answers yourself!
for input in [
    "What does the cat chase?",
    "What do Fraggles love to eat?",
    "Which planet rotates slowly on its axis?",
    "Which Greek general helped Cyrus the Younger?",
    "What is the largest inland sea?",
] {
    create_embed(input, &db, &client).await?;
}
Semantic search
Finally let's perform semantic search over the embeddings in our database.
With that done, it's time to test the database out. We'll go with this query that uses the KNN operator to return the closest two matches to an embedding.
We will then put this into a separate function called test_embed(), which looks similar to create_embed(), except that it uses the embedding retrieved from OpenAI to query the database against existing documents instead of creating a new one.
let mut response = db
    .query("SELECT text, vector::distance::knn() AS distance FROM document WHERE embedding <|2,COSINE|> $embeds ORDER BY distance;")
    .bind(("embeds", embeds))
    .await?;
let as_val: Value = response.take(0)?;
println!("{as_val}\n");
Ok(())
}
Finally, we will call this function a few times inside main(), printing the results of each call so that we can eyeball them and confirm that they are what we expect.
println!("Venus is closest to:");
test_embed("Venus", &db, &client).await?;

println!("Xenophon is closest to:");
test_embed("Xenophon", &db, &client).await?;

println!("Mice are closest to:");
test_embed("mouse", &db, &client).await?;

println!("Radishes are closest to:");
test_embed("radish", &db, &client).await?;

println!("The Caspian Sea is closest to:");
test_embed("Caspian Sea", &db, &client).await?;
The output shows that in each case the closest document is returned first:
"Venus" to "Which planet rotates slowly on its axis?"
"Xenophon" to "Which Greek general helped Cyrus the Younger?"
"mouse" to "What does the cat chase?"
"radish" to "What do Fraggles love to eat?", and
"Caspian Sea" to "What is the largest inland sea?"
Success!
Venus is closest to:
[{ distance: 0.6495068000978139f, text: 'Which planet rotates slowly on its axis?' }, { distance: 0.8388033444017572f, text: 'Which Greek general helped Cyrus the Younger?' }]

Xenophon is closest to:
[{ distance: 0.4421917772479055f, text: 'Which Greek general helped Cyrus the Younger?' }, { distance: 0.873354690471173f, text: 'What does the cat chase?' }]

Mice are closest to:
[{ distance: 0.6945913095506092f, text: 'What does the cat chase?' }, { distance: 0.8249335430462937f, text: 'Which planet rotates slowly on its axis?' }]

Radishes are closest to:
[{ distance: 0.7256996315669555f, text: 'What do Fraggles love to eat?' }, { distance: 0.8812784798259233f, text: 'What does the cat chase?' }]

The Caspian Sea is closest to:
[{ distance: 0.49966454922547254f, text: 'What is the largest inland sea?' }, { distance: 0.8096568276647603f, text: 'Which Greek general helped Cyrus the Younger?' }]
At this point, you could give the HNSW index a try by changing the <|2,COSINE|> in the query to something like <|2,40|>. The distance numbers will end up looking quite different, but the ordering of the closest neighbours will probably be the same in this small example.
Finally, here is all of the code for you to run and modify as you wish. Any questions or thoughts about this or semantic search using SurrealDB? Feel free to drop by our Discord to get in touch.
let mut response = db
    .query("SELECT text, vector::distance::knn() AS distance FROM document WHERE embedding <|2,COSINE|> $embeds ORDER BY distance;")
    .bind(("embeds", embeds))
    .await?;
let as_val: Value = response.take(0)?;
println!("{as_val}\n");
Ok(())
}

let mut res = db
    .query(
        "DEFINE TABLE document;
        DEFINE FIELD text ON document TYPE string;
        DEFINE FIELD embedding ON document TYPE array<float>;
        DEFINE INDEX hnsw_embed ON document FIELDS embedding HNSW DIMENSION 1536 DIST COSINE;",
    )
    .await?;
for (index, error) in res.take_errors() {
    println!("Error in query {index}: {error}");
}

for input in [
    "What does the cat chase?",
    "What do Fraggles love to eat?",
    "Which planet rotates slowly on its axis?",
    "Which Greek general helped Cyrus the Younger?",
    "What is the largest inland sea?",
] {
    create_embed(input, &db, &client).await?;
}

println!("Venus is closest to:");
test_embed("Venus", &db, &client).await?;

println!("Xenophon is closest to:");
test_embed("Xenophon", &db, &client).await?;

println!("Mice are closest to:");
test_embed("mouse", &db, &client).await?;

println!("Radishes are closest to:");
test_embed("radish", &db, &client).await?;

println!("The Caspian Sea is closest to:");
test_embed("Caspian Sea", &db, &client).await?;