This tutorial shows how to deploy SurrealDB backed by TiKV on an Azure Kubernetes Service cluster. TiKV is a cloud-native transactional key/value store built by PingCAP that integrates well with Kubernetes thanks to their tidb-operator.
To complete this tutorial you'll need the following tools, which you can verify are installed using the version checks below:

- kubectl, to manage the Kubernetes cluster
- helm, to install the SurrealDB server and TiKV
- the Surreal CLI, to interact with the SurrealDB server

Note: Provisioning the environment in your Azure account will create resources, and there will be costs associated with them. The cleanup section provides a guide to remove them, preventing further charges.
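As a quick check that everything is installed and available on your PATH, each tool can report its version:

$ kubectl version --client
$ helm version
$ surreal version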
Note: This guide was tested in the westeurope region and follows TiKV best practices for scalability and high availability. This tutorial is intended for production workloads using the standard tier. If you want to create a dev/test environment, you should opt for the free tier and change the cluster and node pool configuration (no zones, fewer nodes), as in the sketch below.
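As a rough sketch of what that dev/test variant could look like, the cluster creation command from the next section might be reduced as follows. The cluster name surrealdb-aks-dev is only a placeholder, and you should size the node count to your needs:

# dev/test only: single node, no --zones, free control-plane tier
$ az aks create \
    --resource-group rg-surrealdb-aks \
    --location westeurope \
    --name surrealdb-aks-dev \
    --generate-ssh-keys \
    --load-balancer-sku standard \
    --node-count 1 \
    --tier free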
Log in to Azure and list Azure subscriptions

$ az login
$ az account list
Create a new resource group

$ az group create --name rg-surrealdb-aks --location westeurope
Create a new AKS cluster

$ az aks create \
    --resource-group rg-surrealdb-aks \
    --location westeurope \
    --name surrealdb-aks-cluster \
    --generate-ssh-keys \
    --load-balancer-sku standard \
    --node-count 3 \
    --zones 1 2 3 \
    --enable-addons monitoring \
    --tier standard
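Cluster creation takes several minutes. If you prefer to poll for completion rather than wait for the command to return, you can query the provisioning state (the --query expression below is just one way to extract it):

$ az aks show \
    --resource-group rg-surrealdb-aks \
    --name surrealdb-aks-cluster \
    --query provisioningState \
    --output tsv
Succeeded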
Once the cluster is ready, configure kubectl to connect to the new cluster:

Get AKS cluster credentials

$ az aks get-credentials --resource-group rg-surrealdb-aks --name surrealdb-aks-cluster
Display cluster nodes

$ kubectl get nodes
NAME                                STATUS   ROLES   AGE     VERSION
aks-nodepool1-33674805-vmss000000   Ready    agent   2m35s   v1.26.6
aks-nodepool1-33674805-vmss000001   Ready    agent   2m36s   v1.26.6
aks-nodepool1-33674805-vmss000002   Ready    agent   2m34s   v1.26.6
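Because the cluster was created with --zones 1 2 3, you can optionally confirm that the nodes are spread across availability zones by printing the standard topology label as an extra column (on AKS the values typically look like westeurope-1):

$ kubectl get nodes -L topology.kubernetes.io/zone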
Note: In order to speed things up, the following commands can be executed in parallel.
Create a TiDB operator and monitor node pool

$ az aks nodepool add --name admin \
    --resource-group rg-surrealdb-aks \
    --cluster-name surrealdb-aks-cluster \
    --zones 1 2 3 \
    --node-count 1 \
    --labels dedicated=admin
Create a PD node pool

$ az aks nodepool add --name pd \
    --resource-group rg-surrealdb-aks \
    --cluster-name surrealdb-aks-cluster \
    --node-vm-size Standard_F4s_v2 \
    --zones 1 2 3 \
    --node-count 3 \
    --labels dedicated=pd \
    --node-taints dedicated=pd:NoSchedule
Create a TiKV node pool

$ az aks nodepool add --name tikv \
    --resource-group rg-surrealdb-aks \
    --cluster-name surrealdb-aks-cluster \
    --node-vm-size Standard_E8s_v4 \
    --zones 1 2 3 \
    --node-count 3 \
    --labels dedicated=tikv \
    --node-taints dedicated=tikv:NoSchedule \
    --enable-ultra-ssd
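Once all three commands return, you can verify that every node pool finished provisioning by listing them in table form (the --query projection is just for readability):

$ az aks nodepool list \
    --resource-group rg-surrealdb-aks \
    --cluster-name surrealdb-aks-cluster \
    --query "[].{Name:name, Count:count, State:provisioningState}" \
    --output table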
Now that we have a Kubernetes cluster, we can deploy the TiDB operator. The TiDB operator is a Kubernetes operator that manages the lifecycle of TiDB clusters deployed to Kubernetes. You can deploy it following these steps:
Install CRDs

$ kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.5.0/manifests/crd.yaml
Install TiDB operator

$ helm repo add pingcap https://charts.pingcap.org
$ helm repo update
$ helm install \
    -n tidb-operator \
    --create-namespace \
    tidb-operator \
    pingcap/tidb-operator \
    --version v1.5.0
Verify TiDB operator

$ kubectl get pods --namespace tidb-operator -l app.kubernetes.io/instance=tidb-operator
NAME                                       READY   STATUS    RESTARTS   AGE
tidb-controller-manager-67d678dc64-qf6p2   1/1     Running   0          60s
tidb-scheduler-68555ffd4-l2ssf             2/2     Running   0          60s
Now that we have the TiDB operator running, it's time to define a TiDB cluster and let the operator do the rest. Create a file called tikv-cluster.yaml with this content:
TiDB cluster definition

apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: sdb-datastore
spec:
  version: v6.5.0
  timezone: UTC
  configUpdateStrategy: RollingUpdate
  pvReclaimPolicy: Delete
  enableDynamicConfiguration: true
  schedulerName: default-scheduler
  topologySpreadConstraints:
    - topologyKey: topology.kubernetes.io/zone
  helper:
    image: alpine:3.16.0
  pd:
    baseImage: pingcap/pd
    maxFailoverCount: 0
    replicas: 3
    storageClassName: managed-csi-premium
    requests:
      cpu: 500m
      storage: 10Gi
      memory: 1Gi
    config: |
      [dashboard]
        internal-proxy = true
      [replication]
        location-labels = ["topology.kubernetes.io/zone", "kubernetes.io/hostname"]
        max-replicas = 3
    nodeSelector:
      dedicated: pd
    tolerations:
      - effect: NoSchedule
        key: dedicated
        operator: Equal
        value: pd
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/component
                  operator: In
                  values:
                    - pd
            topologyKey: kubernetes.io/hostname
  tikv:
    baseImage: pingcap/tikv
    maxFailoverCount: 0
    replicas: 3
    storageClassName: managed-csi-premium
    requests:
      cpu: 1
      storage: 10Gi
      memory: 2Gi
    config: {}
    nodeSelector:
      dedicated: tikv
    tolerations:
      - effect: NoSchedule
        key: dedicated
        operator: Equal
        value: tikv
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/component
                  operator: In
                  values:
                    - tikv
            topologyKey: kubernetes.io/hostname
  tidb:
    replicas: 0
Create TiDB cluster

$ kubectl apply -f tikv-cluster.yaml
Verify TiDB cluster

$ kubectl get tidbcluster
$ kubectl get pods
NAME                                       READY   STATUS    RESTARTS   AGE
sdb-datastore-discovery-7d7d684d88-4v4ws   1/1     Running   0          4m2s
sdb-datastore-pd-0                         1/1     Running   0          4m2s
sdb-datastore-pd-1                         1/1     Running   0          4m2s
sdb-datastore-pd-2                         1/1     Running   0          4m2s
sdb-datastore-tikv-0                       1/1     Running   0          3m12s
sdb-datastore-tikv-1                       1/1     Running   0          3m12s
sdb-datastore-tikv-2                       1/1     Running   0          3m12s
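The pods take a little while to reach the Running state, so it can be handy to watch them until the cluster settles. This sketch relies on the app.kubernetes.io/instance label that tidb-operator sets from the cluster name:

# watch the sdb-datastore pods until all are Running (Ctrl-C to stop)
$ kubectl get pods -l app.kubernetes.io/instance=sdb-datastore -w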
Now that we have a TiDB cluster running, we can deploy SurrealDB using the official Helm chart. The deployment will use the latest SurrealDB Docker image and make it accessible on the internet.
Get TiKV PD service URL

$ kubectl get service sdb-datastore-pd
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
sdb-datastore-pd   ClusterIP   10.0.161.101   <none>        2379/TCP   5m27s
$ export TIKV_URL=tikv://sdb-datastore-pd:2379
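As an optional sanity check before pointing SurrealDB at it, you can port-forward the PD service locally and query PD's HTTP API; the members endpoint below is part of PD's standard v1 API, served on the same client port:

$ kubectl port-forward service/sdb-datastore-pd 2379:2379 &
$ curl http://127.0.0.1:2379/pd/api/v1/members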
Install SurrealDB Helm chart

$ helm repo add surrealdb https://helm.surrealdb.com
$ helm repo update
$ helm install \
    --set surrealdb.path=$TIKV_URL \
    --set surrealdb.auth=false \
    --set ingress.enabled=false \
    --set service.type=LoadBalancer \
    --set service.port=80 \
    --set service.targetPort=8000 \
    --set image.tag=latest \
    surrealdb-tikv surrealdb/surrealdb
Wait until the LoadBalancer service has an EXTERNAL-IP assigned:

Wait for LoadBalancer address

$ kubectl get service surrealdb-tikv
NAME             TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
surrealdb-tikv   LoadBalancer   10.0.38.191   20.13.45.154   80:30378/TCP   6m34s
Define initial credentials

$ export SURREALDB_URL=http://$(kubectl get service surrealdb-tikv -o json | jq -r .status.loadBalancer.ingress[0].ip)
$ surreal sql -e $SURREALDB_URL
> DEFINE USER root ON ROOT PASSWORD 'StrongSecretPassword!' ROLES OWNER;

# Verify you can connect to the database with the new credentials:
$ surreal sql -u root -p 'StrongSecretPassword!' -e $SURREALDB_URL
> INFO FOR ROOT
[{ namespaces: {  }, users: { root: "DEFINE USER root ON ROOT PASSHASH '...' ROLES OWNER" } }]
Now that the initial credentials have been defined, upgrade the Helm release to enable authentication:

Update SurrealDB Helm chart

$ helm upgrade \
    --set surrealdb.path=$TIKV_URL \
    --set surrealdb.auth=true \
    --set ingress.enabled=false \
    --set service.type=LoadBalancer \
    --set service.port=80 \
    --set service.targetPort=8000 \
    --set image.tag=latest \
    surrealdb-tikv surrealdb/surrealdb
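With authentication enabled, a quick end-to-end smoke test confirms that authenticated writes reach the TiKV datastore. The test namespace and database names below are arbitrary examples:

$ surreal sql -u root -p 'StrongSecretPassword!' -e $SURREALDB_URL --ns test --db test
> CREATE person:tobie SET name = 'Tobie';
> SELECT * FROM person;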
Run the following commands to delete the Kubernetes resources and the AKS cluster:
Cleanup commands

$ helm uninstall surrealdb-tikv
$ helm -n tidb-operator uninstall tidb-operator
$ az aks delete --name surrealdb-aks-cluster --resource-group rg-surrealdb-aks
$ az group delete --resource-group rg-surrealdb-aks
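Deleting the resource group can take a while. To confirm that everything is gone, you can check whether the resource group still exists:

$ az group exists --name rg-surrealdb-aks
false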