TiKV is a cloud-native transactional key/value store built by PingCAP that integrates well with Kubernetes thanks to their tidb-operator.
To complete this tutorial you'll need:

- `kubectl`, with gcloud integration for accessing the GKE cluster
- `helm`, to install the SurrealDB server and TiKV
- the Surreal CLI, to interact with the SurrealDB server
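If you want to confirm the tooling is in place before starting, each CLI can report its version (a quick sanity check, not part of the original steps):

```bash
gcloud version
kubectl version --client
helm version
surreal version
```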
List projects and regions:

```bash
gcloud projects list
gcloud compute regions list --project PROJECT_ID
```
Create a new GKE Autopilot cluster:

```bash
gcloud container clusters create-auto surrealdb-guide --region REGION --project PROJECT_ID
```
Configure kubectl:

```bash
gcloud container clusters get-credentials surrealdb-guide --region REGION --project PROJECT_ID
```
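To confirm kubectl is now pointing at the new cluster, you can check the active context and list the nodes (a quick sanity check; with an Autopilot cluster the node list may initially be small, since nodes are provisioned on demand):

```bash
kubectl config current-context
kubectl get nodes
```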
Now that we have a Kubernetes cluster, we can deploy the TiDB Operator: a Kubernetes operator that manages the lifecycle of TiDB clusters deployed to Kubernetes.
You can deploy it following these steps:
Install the CRDs:

```bash
kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.0/manifests/crd.yaml
```
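Optionally, verify that the custom resource definitions were registered; the `tidbclusters.pingcap.com` CRD is the one this guide relies on:

```bash
kubectl get crd tidbclusters.pingcap.com
```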
Install the TiDB Operator:

```bash
$ helm repo add pingcap https://charts.pingcap.org
$ helm repo update
$ helm install \
    -n tidb-operator \
    --create-namespace \
    tidb-operator \
    pingcap/tidb-operator \
    --version v1.5.0
```
Verify the Pods:

```bash
kubectl get pods --namespace tidb-operator -l app.kubernetes.io/instance=tidb-operator
NAME                                       READY   STATUS    RESTARTS   AGE
tidb-controller-manager-56f49794d7-hnfz7   1/1     Running   0          20s
tidb-scheduler-8655bcbc86-66h2d            2/2     Running   0          20s
```
Now that we have the TiDB Operator running, it’s time to define a TiDB Cluster and let the Operator do the rest.
Create a local file named tikv-cluster.yaml with this content:
```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: sdb-datastore
spec:
  version: v6.5.0
  timezone: UTC
  configUpdateStrategy: RollingUpdate
  pvReclaimPolicy: Delete
  enableDynamicConfiguration: true
  schedulerName: default-scheduler
  topologySpreadConstraints:
    - topologyKey: topology.kubernetes.io/zone
  helper:
    image: alpine:3.16.0
  pd:
    baseImage: pingcap/pd
    maxFailoverCount: 0
    replicas: 3
    storageClassName: premium-rwo
    requests:
      cpu: 500m
      storage: 10Gi
      memory: 1Gi
    config: |
      [dashboard]
        internal-proxy = true
      [replication]
        location-labels = ["topology.kubernetes.io/zone", "kubernetes.io/hostname"]
        max-replicas = 3
    nodeSelector:
      dedicated: pd
    tolerations:
      - effect: NoSchedule
        key: dedicated
        operator: Equal
        value: pd
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/component
                  operator: In
                  values:
                    - pd
            topologyKey: kubernetes.io/hostname
  tikv:
    baseImage: pingcap/tikv
    maxFailoverCount: 0
    replicas: 3
    storageClassName: premium-rwo
    requests:
      cpu: 1
      storage: 10Gi
      memory: 2Gi
    config: {}
    nodeSelector:
      dedicated: tikv
    tolerations:
      - effect: NoSchedule
        key: dedicated
        operator: Equal
        value: tikv
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/component
                  operator: In
                  values:
                    - tikv
            topologyKey: kubernetes.io/hostname
  tidb:
    replicas: 0
```
Create the TiDB cluster:

```bash
kubectl apply -f tikv-cluster.yaml
```
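While the operator provisions PD and TiKV, you can watch the pods come up (the selector below assumes the standard `app.kubernetes.io/instance` label that the TiDB Operator sets to the cluster name):

```bash
kubectl get pods -l app.kubernetes.io/instance=sdb-datastore --watch
```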
Check the cluster status:

```bash
kubectl get tidbcluster
NAME            READY   PD                  STORAGE   READY   DESIRE   TIKV                  STORAGE   READY   DESIRE   TIDB                  READY   DESIRE   AGE
sdb-datastore   True    pingcap/pd:v6.5.0   10Gi      3       3        pingcap/tikv:v6.5.0   10Gi      3       3        pingcap/tidb:v6.5.0           0        5m
```
Now that we have a TiDB cluster running, we can deploy SurrealDB using the official Helm chart. The deployment will use the latest SurrealDB Docker image and make it accessible on the internet.
Get the TiKV PD service URL:

```bash
kubectl get svc/sdb-datastore-pd
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
sdb-datastore-pd   ClusterIP   10.96.208.25   <none>        2379/TCP   10h

export TIKV_URL=tikv://sdb-datastore-pd:2379
```
Install the SurrealDB Helm chart:

```bash
$ helm repo add surrealdb https://helm.surrealdb.com
$ helm repo update
$ helm install \
    --set surrealdb.path=$TIKV_URL \
    --set surrealdb.auth=false \
    --set ingress.enabled=true \
    --set image.tag=latest \
    surrealdb-tikv surrealdb/surrealdb
```
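Before moving on, you can check that the SurrealDB pod started and look at its logs (this assumes the chart's standard `app.kubernetes.io/instance` label and a Deployment named after the `surrealdb-tikv` release; adjust if your chart version names things differently):

```bash
kubectl get pods -l app.kubernetes.io/instance=surrealdb-tikv
kubectl logs deploy/surrealdb-tikv
```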
Wait until the Ingress resource has an ADDRESS assigned:

```bash
kubectl get ingress surrealdb-tikv
NAME             CLASS    HOSTS   ADDRESS         PORTS   AGE
surrealdb-tikv   <none>   *       34.160.82.177   80      5m
```
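As a quick connectivity check before defining credentials, you can hit the server's health endpoint through the Ingress (this assumes SurrealDB's `/health` route, which returns a 200 response when the server can reach the underlying datastore):

```bash
curl -i http://$(kubectl get ingress surrealdb-tikv -o json | jq -r '.status.loadBalancer.ingress[0].ip')/health
```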
Define the initial credentials:

```bash
$ export SURREALDB_URL=http://$(kubectl get ingress surrealdb-tikv -o json | jq -r .status.loadBalancer.ingress[0].ip)
$ surreal sql -e $SURREALDB_URL
> DEFINE USER root ON ROOT PASSWORD 'StrongSecretPassword!' ROLES OWNER;
```

Verify that you can connect to the database with the new credentials:

```bash
$ surreal sql -u root -p 'StrongSecretPassword!' -e $SURREALDB_URL
> INFO FOR ROOT
[{ namespaces: {  }, users: { root: "DEFINE USER root ON ROOT PASSHASH '...' ROLES OWNER" } }]
```
Update the SurrealDB Helm chart, this time with authentication enabled:

```bash
helm upgrade \
    --set surrealdb.path=$TIKV_URL \
    --set surrealdb.auth=true \
    --set ingress.enabled=true \
    --set image.tag=latest \
    surrealdb-tikv surrealdb/surrealdb
```
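After the upgrade rolls out, it's worth re-running the earlier check to confirm that the root credentials still work now that authentication is enforced (a minimal sanity check, not part of the original steps):

```bash
$ surreal sql -u root -p 'StrongSecretPassword!' -e $SURREALDB_URL
> INFO FOR ROOT
```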
Run the following commands to delete the Kubernetes resources and the GKE cluster:
Cleanup commands:

```bash
kubectl delete tidbcluster sdb-datastore
helm uninstall surrealdb-tikv
helm -n tidb-operator uninstall tidb-operator
gcloud container clusters delete surrealdb-guide --region REGION --project PROJECT_ID
```