
Cloud-native benchmarking with Kubestone

  • By Gcore
  • April 10, 2023
  • 4 min read

Intro

Organizations are increasingly looking to containers and distributed applications to provide the agility and scalability needed to satisfy their clients. While doing so, modern enterprises also need the ability to benchmark their application and be aware of certain metrics in relation to their infrastructure.

In this post, we introduce Kubestone, a cloud-native benchmarking tool designed to help your development teams gather performance metrics from your Kubernetes clusters.

How does Kubestone work?

At its core, Kubestone is implemented as a Kubernetes Operator written in Go with the help of Kubebuilder. You can find more information on the Operator Framework in this blog post.
Kubestone leverages Open Source benchmarks to measure Core Kubernetes and Application performance. As benchmarks are executed in Kubernetes, they must be containerized to work on the cluster. A certified set of benchmark containers is provided via xridge’s DockerHub space. Here is a list of currently supported benchmarks:

Type                     Benchmark Name   Status
Core/CPU                 sysbench         Supported
Core/Disk                fio              Supported
Core/Disk                ioping           Supported
Core/Memory              sysbench         Supported
Core/Network             iperf3           Supported
Core/Network             qperf            Supported
HTTP Load Tester         drill            Supported
Application/Etcd         etcd             Planned
Application/K8S          kubeperf         Planned
Application/PostgreSQL   pgbench          Supported
Application/Spark        sparkbench       Planned

Let’s try installing Kubestone and running a benchmark ourselves and see how it works.

Installing Kubestone

Requirements

You will need kubectl and kustomize installed locally, plus admin access to a Kubernetes cluster.

Deploy Kubestone to the kubestone-system namespace with the following command:

$ kustomize build github.com/xridge/kubestone/config/default | kubectl create -f -

Once deployed, Kubestone will listen for Custom Resources created under the perf.kubestone.xridge.io API group.

Benchmarking

Benchmarks can be executed via Kubestone by creating Custom Resources in your cluster.

Namespace

It is recommended to create a dedicated namespace for benchmarking.

$ kubectl create namespace kubestone

After the namespace is created, you can use it to post a benchmark request to the cluster.

The resulting benchmark executions will reside in this namespace.

Custom Resource rendering

We will be using kustomize to render the Custom Resource from the GitHub repository.

Kustomize takes a base YAML file and patches it with an overlay to render the final YAML that describes the benchmark.
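To make the overlay mechanism concrete, here is a minimal sketch of what such an overlay could look like. The file layout and patch contents are illustrative assumptions, not files taken from the Kubestone repository:

```yaml
# kustomization.yaml (hypothetical overlay directory)
resources:
- ../../base                # the base Fio Custom Resource
patchesStrategicMerge:
- fio-patch.yaml            # our local patch file

---
# fio-patch.yaml: override only the fields we care about; kustomize
# merges this with the base by matching apiVersion, kind, and name
apiVersion: perf.kubestone.xridge.io/v1alpha1
kind: Fio
metadata:
  name: fio-sample
spec:
  cmdLineArgs: --name=randwrite --iodepth=1 --rw=randwrite --bs=1m --size=128M
```

Running `kustomize build` on such an overlay directory emits the merged Custom Resource, ready to be piped to kubectl.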

$ kustomize build github.com/xridge/kubestone/config/samples/fio/overlays/pvc

The Custom Resource (the rendered YAML) looks as follows:

apiVersion: perf.kubestone.xridge.io/v1alpha1
kind: Fio
metadata:
  name: fio-sample
spec:
  cmdLineArgs: --name=randwrite --iodepth=1 --rw=randwrite --bs=4m --size=256M
  image:
    name: xridge/fio:3.13
  volume:
    persistentVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    volumeSource:
      persistentVolumeClaim:
        claimName: GENERATED

When we create this resource in Kubernetes, the operator interprets it and creates the associated benchmark. The fields of the Custom Resource control how the benchmark is executed:

  • metadata.name: Identifies the Custom Resource. Later, this can be used to query or delete the benchmark in the cluster.
  • cmdLineArgs: Arguments passed to the benchmark. In this case we are providing the arguments to fio (a filesystem benchmark), instructing it to execute a random write test with a 4 MB block size and an overall transfer size of 256 MB.
  • image.name: Describes the Docker Image of the benchmark. In case of Fio, we are using xridge’s fio Docker Image, which is built from this repository.
  • volume.persistentVolumeClaimSpec: Given that Fio is a disk benchmark, we can set a PersistentVolumeClaim for the benchmark to be executed. The above setup instructs Kubernetes to take 1GB of space from the default StorageClass and use it for the benchmark.
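Putting these fields together, it is easy to sketch a variant of the sample resource. The following hypothetical Custom Resource (the name and fio arguments are ours; the schema follows the fio-sample above) runs a random read test instead:

```yaml
apiVersion: perf.kubestone.xridge.io/v1alpha1
kind: Fio
metadata:
  name: fio-randread            # hypothetical name; must be unique in the namespace
spec:
  # random *read* test with a small 4 KB block size
  cmdLineArgs: --name=randread --iodepth=4 --rw=randread --bs=4k --size=256M
  image:
    name: xridge/fio:3.13
  volume:
    persistentVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    volumeSource:
      persistentVolumeClaim:
        claimName: GENERATED    # replaced by the operator, as in the sample
```

Saving this to a file and running `kubectl create -f` on it would schedule the read benchmark the same way the sample schedules the write benchmark.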

Running the benchmark

Now that we understand the definition of the benchmark, we can try to execute it.

Note: Make sure you have installed the Kubestone operator and that it is running before executing this step.

$ kustomize build github.com/xridge/kubestone/config/samples/fio/overlays/pvc | kubectl create --namespace kubestone -f -

Since we pipe the output of the kustomize build command into kubectl create, it will create the object in our Kubernetes cluster.

The resulting object can be queried using the object’s type (fio) and its name (fio-sample):

$ kubectl describe --namespace kubestone fio fio-sample
Name:         fio-sample
Namespace:    kubestone
Labels:       <none>
Annotations:  <none>
API Version:  perf.kubestone.xridge.io/v1alpha1
Kind:         Fio
Metadata:
  Creation Timestamp:  2019-09-14T11:31:02Z
  Generation:          1
  Resource Version:    31488293
  Self Link:           /apis/perf.kubestone.xridge.io/v1alpha1/namespaces/kubestone/fios/fio-sample
  UID:                 21cdbe92-d6e3-11e9-ba70-4439c4920abc
Spec:
  Cmd Line Args:  --name=randwrite --iodepth=1 --rw=randwrite --bs=4m --size=256M
  Image:
    Name:  xridge/fio:3.13
  Volume:
    Persistent Volume Claim Spec:
      Access Modes:
        ReadWriteOnce
      Resources:
        Requests:
          Storage:  1Gi
    Volume Source:
      Persistent Volume Claim:
        Claim Name:  GENERATED
Status:
  Completed:  true
  Running:    false
Events:
  Type    Reason   Age   From       Message
  ----    ------   ----  ----       -------
  Normal  Created  11s   kubestone  Created /api/v1/namespaces/kubestone/configmaps/fio-sample
  Normal  Created  11s   kubestone  Created /api/v1/namespaces/kubestone/persistentvolumeclaims/fio-sample
  Normal  Created  11s   kubestone  Created /apis/batch/v1/namespaces/kubestone/jobs/fio-sample

As the Events section shows, Kubestone has created a ConfigMap, a PersistentVolumeClaim, and a Job for the provided Custom Resource. The Status field tells us that the benchmark has completed.

Inspecting the benchmark

The objects related to the benchmark can be listed using the kubectl command:

$ kubectl get pods,jobs,configmaps,pvc --namespace kubestone
NAME                   READY   STATUS      RESTARTS   AGE
pod/fio-sample-bqqmm   0/1     Completed   0          54s

NAME                   COMPLETIONS   DURATION   AGE
job.batch/fio-sample   1/1           15s        54s

NAME                   DATA   AGE
configmap/fio-sample   0      54s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
persistentvolumeclaim/fio-sample   Bound    pvc-b3898236-c698-11e9-8071-4439c4920abc   1Gi        RWO            rook-ceph-block   54s

As shown above, the Fio controller has created a PersistentVolumeClaim and a ConfigMap, which are used by the Fio Job during benchmark execution. The Fio Job has an associated Pod which contains our test execution. The results of the run can be shown with the kubectl logs command:

$ kubectl logs --namespace kubestone fio-sample-bqqmm
randwrite: (g=0): rw=randwrite, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync, iodepth=1
fio-3.13
Starting 1 process
randwrite: Laying out IO file (1 file / 256MiB)
randwrite: (groupid=0, jobs=1): err= 0: pid=47: Sat Aug 24 17:58:10 2019
  write: IOPS=470, BW=1882MiB/s (1974MB/s)(256MiB/136msec); 0 zone resets
    clat (usec): min=1887, max=2595, avg=2042.76, stdev=136.56
     lat (usec): min=1953, max=2688, avg=2107.35, stdev=142.94
    clat percentiles (usec):
     |  1.00th=[ 1893],  5.00th=[ 1926], 10.00th=[ 1926], 20.00th=[ 1958],
     | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2040],
     | 70.00th=[ 2057], 80.00th=[ 2073], 90.00th=[ 2114], 95.00th=[ 2409],
     | 99.00th=[ 2606], 99.50th=[ 2606], 99.90th=[ 2606], 99.95th=[ 2606],
     | 99.99th=[ 2606]
  lat (msec)   : 2=34.38%, 4=65.62%
  cpu          : usr=2.22%, sys=97.78%, ctx=1, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,64,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=1882MiB/s (1974MB/s), 1882MiB/s-1882MiB/s (1974MB/s-1974MB/s), io=256MiB (268MB), run=136-136msec

Disk stats (read/write):
  rbd7: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%

Listing benchmarks

We have learned that Kubestone uses Custom Resources to define benchmarks. We can list the installed custom resources using the kubectl get crds command:

$ kubectl get crds | grep kubestone
drills.perf.kubestone.xridge.io         2019-09-08T05:51:26Z
fios.perf.kubestone.xridge.io           2019-09-08T05:51:26Z
iopings.perf.kubestone.xridge.io        2019-09-08T05:51:26Z
iperf3s.perf.kubestone.xridge.io        2019-09-08T05:51:26Z
pgbenches.perf.kubestone.xridge.io      2019-09-08T05:51:26Z
sysbenches.perf.kubestone.xridge.io     2019-09-08T05:51:26Z

Using the CRD names above, we can list the executed benchmarks in the system.

Kubernetes provides a convenience feature regarding CRDs: one can use the shortened name of the CRD, which is the singular part of the fully qualified CRD name. In our case, fios.perf.kubestone.xridge.io can be shortened to fio. Hence, we can list the executed fio benchmark using the following command:

$ kubectl get --namespace kubestone fios.perf.kubestone.xridge.io
NAME         RUNNING   COMPLETED
fio-sample   false     true

Cleaning up

After a successful benchmark run, the resulting objects remain stored in the Kubernetes cluster. Given that Kubernetes can hold only a limited number of Pods in the system, it is advised to clean up benchmark runs from time to time. This can be achieved by deleting the Custom Resource that initiated the benchmark:

$ kubectl delete --namespace kubestone fio fio-sample

Since the Custom Resource owns the created resources, the underlying Pods, Jobs, ConfigMaps, PVCs, etc. are also removed by this operation.
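This cascade works through standard Kubernetes garbage collection: the operator stamps each child object with an ownerReferences entry pointing back at the Custom Resource, so deleting the owner removes its dependents. The snippet below sketches what that metadata could look like on the generated Job (an illustrative reconstruction; the UID is the one from the describe output earlier):

```yaml
# metadata of the Job created for fio-sample (illustrative)
metadata:
  name: fio-sample
  namespace: kubestone
  ownerReferences:
  - apiVersion: perf.kubestone.xridge.io/v1alpha1
    kind: Fio
    name: fio-sample
    uid: 21cdbe92-d6e3-11e9-ba70-4439c4920abc   # UID of the owning Fio resource
    controller: true
```

You can inspect the actual owner references on any of the created objects with `kubectl get job fio-sample --namespace kubestone -o yaml`.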

Next steps

Now that you are familiar with the key concepts of Kubestone, it is time to explore and benchmark. You can experiment with the Fio benchmark via its cmdLineArgs, Persistent Volume, and scheduling-related settings. You can find more information on Fio’s benchmark page. Hopefully you gained some valuable knowledge from this post!
