Managed Kubernetes—a brand-new cloud service from Gcore

Kubernetes (K8s) is an open-source platform used to deploy, scale, and manage containerized applications automatically. It simplifies the orchestration of Docker containers, extends their functionality, and helps our clients make their entire infrastructure more stable and scalable.

We’ve recently launched a new Gcore Cloud service called Managed Kubernetes. It allows you to use K8s within our Cloud and manage containers effortlessly.

In this article, we cover what the service offers and how to start using it.

What is Gcore Managed Kubernetes?

Managed Kubernetes is a new Gcore Cloud service that lets you use K8s within our Cloud infrastructure and greatly simplifies your work with clusters.

The service makes it possible to create clusters, manage the nodes through an all-in-one Gcore panel, and automate processes even more efficiently.

Thus, you get all the capabilities of Kubernetes, including a flexible infrastructure, while we take care of such routine tasks as deploying clusters and managing master nodes.

Service specifics:

  • You have access only to the worker nodes, while the master node is controlled by our administrators. You don’t have to waste your time on routine tasks and can focus on development.
  • You can create and configure a cluster in accordance with your tasks in the control panel. You can determine the number of worker nodes as well as configure such functions as autoscaling and autohealing.
  • Our virtual machines are currently used as worker nodes. In the future, we are going to make it possible to add bare metal servers to clusters*.
  • We currently support Kubernetes version 1.20.6*. When a new version is released, you will be able to update to it in just a few clicks in the control panel without losing data.

* Updated (June 1, 2023): We now support Managed Kubernetes on bare metal nodes, as well as Kubernetes v1.24-1.27. See our announcement for more details.

Managed Kubernetes architecture in Gcore Cloud

For now, you can deploy your cluster within one data center only, but in the future, we are going to make it possible to connect the nodes located in different data centers.

Managed Kubernetes offers an autoscaling option: the system automatically increases and decreases the number of nodes in a pool. If resources are insufficient, the service adds more virtual machines, and if nodes remain unused for over 20 minutes, they are removed.

You can define the minimum and the maximum number of nodes in the pool on your own. Autoscaling can be turned off if necessary.
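
Conceptually, the decision the autoscaler makes for a pool looks like the sketch below. This is a simplified illustration of the behavior described above, not Gcore’s actual implementation; all names in it are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Pool:
    # Hypothetical pool model used only for this illustration.
    current_nodes: int
    min_nodes: int
    max_nodes: int

IDLE_LIMIT_MINUTES = 20  # nodes unused longer than this become candidates for removal

def desired_node_count(pool: Pool, resources_insufficient: bool, idle_minutes_per_node: list) -> int:
    """Return the node count the autoscaler would aim for, following the rules above."""
    if resources_insufficient:
        # Not enough resources: add a virtual machine, but never exceed the pool maximum.
        return min(pool.current_nodes + 1, pool.max_nodes)
    idle_nodes = sum(1 for minutes in idle_minutes_per_node if minutes > IDLE_LIMIT_MINUTES)
    # Idle nodes are removed, but the pool never shrinks below the minimum.
    return max(pool.current_nodes - idle_nodes, pool.min_nodes)

# Example: a pool of 3 nodes (min 2, max 5) with one node idle for 25 minutes.
print(desired_node_count(Pool(3, 2, 5), False, [0, 5, 25]))  # -> 2
```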

We also support the autohealing function: the system constantly monitors node status and replaces non-working nodes. This feature increases the fault tolerance of our clients’ infrastructure. It can also be turned off if necessary.
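
Autohealing runs on our side, but you can perform a similar health check yourself with the official Kubernetes Python client. A minimal sketch, assuming the `kubernetes` package is installed and your cluster’s kubeconfig is in the default location:

```python
from kubernetes import client, config

config.load_kube_config()  # reads the cluster's kubeconfig from the default location
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # A node is healthy when its "Ready" condition reports "True".
    ready = next(
        (cond.status for cond in node.status.conditions if cond.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: Ready={ready}")
```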

You can manage this service via the control panel or API (see the sketch after this list). You can:

  • create clusters;
  • create pools and nodes within them and change the number of nodes in the pool;
  • scale the cluster;
  • set up autoscaling and autohealing within the pool;
  • assign a floating IP and connect to the nodes via SSH;
  • track the node load.
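
For API access, the snippet below shows the general shape of such a request in Python. The URL path, payload fields, and authorization scheme are placeholders for illustration only; take the real ones from the Gcore API reference:

```python
import requests

API_TOKEN = "YOUR_PERMANENT_API_TOKEN"   # created in the Gcore control panel
BASE_URL = "https://api.gcore.com"
# Placeholder path: look up the actual Kubernetes endpoints in the API reference.
CLUSTERS_PATH = "/cloud/v1/<project_id>/<region_id>/k8s/clusters"

headers = {"Authorization": f"APIKey {API_TOKEN}"}  # check the required auth scheme in the docs

# List existing clusters (illustrative request shape only).
response = requests.get(f"{BASE_URL}{CLUSTERS_PATH}", headers=headers, timeout=30)
response.raise_for_status()
for cluster in response.json().get("results", []):
    print(cluster.get("name"), cluster.get("status"))
```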

How to enable the new service

If you are already connected to the Gcore Cloud, Managed Kubernetes is already available in your control panel. There is no need to enable any additional features.

The service is currently in beta testing, which is why it’s free of charge.

How to use Managed Kubernetes

1. Create a cluster

Open the cloud control panel, head to the Kubernetes section, and click on Create Cluster.

How to create a cluster with Managed Kubernetes

Select the region where the cluster’s data center is located. The cluster will be deployed using the resources of this data center.

Choosing a region when creating a cluster with Managed Kubernetes

Create pools within the cluster.

Adding a pool when creating a cluster with Managed Kubernetes

Enter the pool name (it can be any name of your choice) and specify the initial number of nodes: this is how many nodes the pool will contain once the cluster is launched.

Next, specify the minimum and the maximum number of nodes to configure autoscaling correctly. The system won’t allow the number of nodes to fall below the minimum or exceed the maximum.

Setting the initial number of nodes and configuring autoscaling when creating a cluster with Managed Kubernetes

If you don’t want to use the autoscaling function, just set the maximum number of nodes to be the same as the minimum one. This value must match the initial number of nodes in the pool.

Next, select the type of virtual machines that will be launched in the pool. Since a pool is a group of nodes with identical technical characteristics, only one virtual machine type can be chosen per pool.

Choosing the type of virtual machine in the pool when creating a cluster with Managed Kubernetes

You can choose any of the five types of virtual machines available:

  • Standard virtual machines—2–4 GB of RAM per vCPU.
  • CPU virtual machines—1 GB of RAM per vCPU.
  • Memory virtual machines—machines with a lot of memory: 8 GB of RAM per vCPU.
  • High-frequency virtual machines—a processor clock speed of 3.37 GHz in the basic configuration.
  • SGX virtual machines—machines with support for the Intel SGX Technology.

Next, select the size and type of the disk where the pool data will be stored.

Volume settings in a pool when creating a cluster with Managed Kubernetes

There are four disk types available. They differ in drive type (SSD or HDD), maximum IOPS, and maximum bandwidth.

As soon as you’ve specified all the settings mentioned, the pool will be created.
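
To recap, a pool boils down to a handful of values you have just filled in. The field names below are only an illustrative summary, not the exact Gcore API schema:

```python
# Illustrative summary of the pool settings described above (not the real API payload).
pool = {
    "name": "pool-1",              # any name of your choice
    "node_count": 3,               # initial number of nodes
    "min_node_count": 2,           # autoscaling lower bound
    "max_node_count": 5,           # autoscaling upper bound
    "flavor": "standard-2-4",      # virtual machine type, e.g. Standard with 2 vCPU / 4 GB RAM
    "volume_size": 50,             # disk size in GB for the pool data
    "volume_type": "ssd",          # one of the four disk types (SSD/HDD, IOPS, bandwidth)
}
```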

You can create as many pools as you need. To add one more pool to the cluster, just click on Add pool and configure all the settings as described above.

Adding a pool when creating a cluster with Managed Kubernetes

Then you can enable or disable the autohealing function.

Configuring the autohealing function when creating a cluster with Managed Kubernetes

Next, add the cluster nodes to the private network and the subnet. You can either select an existing network or create a new one by clicking on Add a new network.

Network settings when creating a cluster with Managed Kubernetes

Next, you need to add an SSH key to connect to the cluster nodes. You can either choose one of the keys that have already been added to your account, or generate a new one.

Adding an SSH key when creating a cluster with Managed Kubernetes
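
Later, once a node has a floating IP assigned, you can connect to it with this key. Below is a minimal sketch using the paramiko library; the IP address and username are placeholders (check the knowledge base for the user configured on the node images):

```python
import os
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # acceptable for a quick test
ssh.connect(
    hostname="203.0.113.10",                           # the node's floating IP (placeholder)
    username="ubuntu",                                 # placeholder: use the actual node user
    key_filename=os.path.expanduser("~/.ssh/id_rsa"),  # private part of the key added above
)
_, stdout, _ = ssh.exec_command("uptime")
print(stdout.read().decode())
ssh.close()
```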

Finally, you will need to specify the cluster name (it can be any name of your choice)…

How to specify the cluster name in Managed Kubernetes

…and double-check all the cluster settings on the right side of the screen.

Cluster settings in Managed Kubernetes

Click on Create Cluster. Ready! The cluster will be launched in a few minutes.
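
Once the cluster is running, you can work with it through standard Kubernetes tooling. A quick check with the official Python client, assuming you have saved the cluster’s kubeconfig locally:

```python
from kubernetes import client, config

# Path to the kubeconfig of your new cluster (placeholder).
config.load_kube_config(config_file="./my-cluster-kubeconfig.yaml")

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    # Print each worker node and the Kubernetes (kubelet) version it runs.
    print(node.metadata.name, node.status.node_info.kubelet_version)
```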

2. Edit pools

Now that the cluster has been created, it appears in the Kubernetes section of the control panel.

Launched clusters in Managed Kubernetes of Gcore Cloud

You can edit it by clicking on the cluster name.

You will be taken to the section with overall information about the cluster, which shows its current status as well as the number of pools and nodes. The Pools tab displays a list of all pools with their main information. You can edit any of them, e.g.:

  • rename them;
  • change the current number of nodes (as long as the autoscaling function allows it);
  • edit autoscaling limits;
  • delete the pool.

Editing pools in Managed Kubernetes

You can also add one more pool to the cluster. At the end of the list on the Pools tab, there is an Add pool button. Click on it. A new pool is configured in the same way as during cluster creation.

Adding pools in Managed Kubernetes

3. Check node load

You can check the load on every node on your own.

To do this, select the necessary pool on the Pools tab and click on the arrow next to it. A list of nodes will expand. Click on the node you need.

How to check the load on nodes in a cluster with Managed Kubernetes—Step 1

Head to the Monitoring tab.

How to check the load on nodes in a cluster in Managed Kubernetes—Step 2

You will see charts with two buttons above them. The left button sets the period for which data is displayed, and the right one sets how often the information on the screen is updated.

Setting up the display of node load data in Managed Kubernetes

The statistics are displayed for 10 metrics:

  • CPU Utilization—processor load, %.
  • RAM Utilization—the percentage of RAM used by the node.
  • Network BPS ingress—how fast the incoming traffic is received (bytes per second).
  • Network BPS egress—how fast the outgoing traffic is sent (bytes per second).
  • Network PPS ingress—how fast the incoming traffic is received (packets per second).
  • Network PPS egress—how fast the outgoing traffic is sent (packets per second).
  • sda/Disk IOPS read—how fast data is read from the disk (operations per second).
  • sda/Disk IOPS write—how fast data is written to the disk (operations per second).
  • sda/Disk BPS read and sda/Disk BPS write—the same as the two previous metrics but measured in bytes transferred per second.

Example chart:

Example node load data chart in Managed Kubernetes
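
The same kind of figures can also be pulled programmatically from inside the cluster. A minimal sketch using the Kubernetes metrics API via the official Python client; it assumes the metrics-server add-on is available in the cluster:

```python
from kubernetes import client, config

config.load_kube_config()  # kubeconfig of your Managed Kubernetes cluster
metrics_api = client.CustomObjectsApi()

# Query the metrics.k8s.io API for current per-node CPU and memory usage.
node_metrics = metrics_api.list_cluster_custom_object(
    group="metrics.k8s.io", version="v1beta1", plural="nodes"
)
for item in node_metrics["items"]:
    usage = item["usage"]
    print(item["metadata"]["name"], usage["cpu"], usage["memory"])
```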

Read more about working with Managed Kubernetes in the Kubernetes section of our knowledge base.

Let’s sum it up

  1. Managed Kubernetes is a new Gcore Cloud service that allows you to use Kubernetes within our Cloud infrastructure and simplifies your work with it.
  2. Our new service allows you to focus on development. We handle all routine tasks connected with master nodes and cluster deployment for you.
  3. You can create a cluster, customize it for your tasks, and manage it using a simple and convenient control panel.
  4. For now, the service is in beta testing, which is why you can use it for free.

We are constantly improving our cloud services to help our clients grow their businesses faster and at lower cost. Our convenient and technologically advanced cloud will allow you to achieve your business goals without extra costs or effort.

More about Gcore Cloud
