Operators are one of the most powerful tools for working with Kubernetes, especially in scenarios with stateful requirements such as ordered and automated rolling updates, graceful deployment and deletion, or controlled scaling and termination. The problem, however, is that writing and building Operators requires deep knowledge of Kubernetes internals and many lines of code.
Using Kubebuilder means writing thousands of lines of controller code in Go, and existing implementations often don’t cover the entire lifecycle. You can also use the Operator Framework with Ansible or Helm charts, but both approaches have limitations.
KUDO (Kubernetes Universal Declarative Operator) is an open-source toolkit that makes it easy to build Operators using YAML. It also provides a set of pre-built Operators that you can use out of the box or easily customize, helping you standardize operations.
Some of the reasons to try KUDO are:
- Provides abstractions for sequencing lifecycle operations using Kubernetes objects and plans (a kind of runbook).
- You can reuse and extend existing base Operators to build custom Operators.
- Provides a kubectl plugin, so you can use ‘kubectl kudo’ to manage, deploy and debug all your workloads.
- Workloads are managed as CRDs, which helps you keep everything versioned in your repository.
- Existing Operators can be managed by KUDO.
How does KUDO work?
KUDO uses different objects to handle workloads: Operator, OperatorVersion and Instance.
- Operator is represented by a CRD and is the high-level description of a deployable service.
- OperatorVersion represents the implementation of a specific version of an Operator’s deployable application. It contains objects, plans and parameters.
- Instance is an instantiation of an OperatorVersion. Once created, it renders all parameters into templates such as Services, Pods or StatefulSets. You can create multiple Instances of an OperatorVersion on your Kubernetes cluster.
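To make this concrete, here is a minimal sketch of what an Instance manifest might look like. The Operator name, instance name and the PASSWORD parameter are illustrative assumptions, not taken from this article; check the KUDO documentation for the exact schema of your KUDO version.

```yaml
# Hypothetical Instance referencing a MySQL OperatorVersion
# (all names and parameters below are illustrative).
apiVersion: kudo.dev/v1beta1
kind: Instance
metadata:
  name: mysql-instance
spec:
  operatorVersion:
    name: mysql-1.0.0      # the OperatorVersion to instantiate
  parameters:
    PASSWORD: "changeme"   # rendered into the OperatorVersion's templates
```

Creating several Instances from the same OperatorVersion gives you multiple independently configured deployments of the same application.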
How does KUDO orchestrate ordered tasks?
KUDO uses plans to orchestrate tasks through phases and steps, forming a structured runbook. Phases and steps can run serially or in parallel, depending on the needs of your application. Typical plans include deploy, backup, restore and upgrade.
Let us take an example of a couple of plans from a MySQL Operator. Here is an extract of the YAML definition file:
plans:
  deploy:
    strategy: serial
    phases:
      - name: deploy
        strategy: serial
        steps:
          - name: deploy
            tasks:
              - deploy
          - name: init
            tasks:
              - init
          - name: cleanup
            tasks:
              - cleanup
  backup:
    strategy: serial
    phases:
      - name: backup
        strategy: serial
        steps:
          - name: pv
            tasks:
              - pv
          - name: backup
            tasks:
              - backup
          - name: cleanup
            tasks:
              - backup-cleanup
As you can see above, two different plans are defined: deploy and backup. Both use a serial strategy and execute the tasks listed for each step. The plan named ‘backup’ first creates a PVC for backup purposes, then runs the backup job and finally performs a cleanup task.
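Each task name referenced by a step maps to a task definition elsewhere in the operator package. A minimal sketch of what those definitions might look like is below; the resource file names are illustrative assumptions, and the exact schema depends on your KUDO version.

```yaml
# Hypothetical task definitions backing the steps above
# (resource file names are illustrative).
tasks:
  - name: deploy
    kind: Apply              # applies the listed resource templates
    spec:
      resources:
        - mysql.yaml
  - name: backup
    kind: Apply
    spec:
      resources:
        - backup-job.yaml
```

Separating plans (the runbook) from tasks (the resources to apply) is what lets you reuse the same task in several plans.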
One of the main advantages of KUDO is the ability to deploy pre-built Operators from official maintainers. You can find them on GitHub, for example:
- Apache Cassandra
- Apache Zookeeper
- Apache Spark
Overview of the Architecture
The architecture diagram below helps to understand the concepts explained above. The CLI fetches YAML-based Operators from a repository and manages our workloads on the Kubernetes cluster. The KUDO controller takes care of all KUDO CRDs and the objects tied to them. We can also see the relationship and inheritance between Operators, OperatorVersions and Instances.
There are some cool features on the roadmap, such as Dynamic CRDs, Operator Dependencies and Pipe Tasks, which will make KUDO an even more powerful and useful tool for Operator workloads.
Stay tuned: in a future hands-on article we will explain how to install KUDO and how to deploy and manage KUDO workloads.