PATCH /cloud/v2/k8s/clusters/{project_id}/{region_id}/{cluster_name}
Python
from gcore import Gcore

client = Gcore(
    api_key="My API Key",
)
# Trigger the cluster update; the call returns the IDs of the asynchronous tasks it creates.
task_id_list = client.cloud.k8s.clusters.update(
    cluster_name="cluster_name",
    project_id=0,
    region_id=0,
)
print(task_id_list.tasks)
Example response:
{
  "tasks": [
    "d478ae29-dedc-4869-82f0-96104425f565"
  ]
}

Authorizations

Authorization
string
header
required

API key for authentication. Make sure to include the word apikey, followed by a single space and then your token. Example: apikey 1234$abcdef
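As a sketch of what this header looks like on the wire, the raw HTTP call below uses the documented PATCH path; the base URL, identifiers, and payload are illustrative assumptions, not values from this page.

Python
import requests

# All concrete values here are illustrative assumptions.
url = "https://api.gcore.com/cloud/v2/k8s/clusters/1/1/my-cluster"
headers = {
    # The word "apikey", a single space, then your token, as described above.
    "Authorization": "apikey 1234$abcdef",
    "Content-Type": "application/json",
}
response = requests.patch(url, headers=headers, json={"logging": {"enabled": False}})
print(response.json()["tasks"])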

Path Parameters

project_id
integer
required

Project identifier

region_id
integer
required

Region identifier

cluster_name
string
required

Cluster name

Body

application/json
add_ons
object

Cluster add-ons configuration

authentication
object | null

Authentication settings

Examples:
{
  "oidc": {
    "client_id": "kubernetes",
    "groups_claim": "groups",
    "groups_prefix": "oidc:",
    "issuer_url": "https://accounts.provider.example",
    "required_claims": { "claim": "value" },
    "signing_algs": ["RS256", "RS512"],
    "username_claim": "sub",
    "username_prefix": "oidc:"
  }
}
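A minimal sketch of sending these authentication settings through the Python client shown above, assuming the SDK accepts documented body fields as keyword arguments of the same name; the OIDC values are copied from the example:

Python
from gcore import Gcore

client = Gcore(api_key="My API Key")
task_id_list = client.cloud.k8s.clusters.update(
    cluster_name="cluster_name",
    project_id=0,
    region_id=0,
    # Assumed keyword: mirrors the `authentication` body field documented above.
    authentication={
        "oidc": {
            "client_id": "kubernetes",
            "issuer_url": "https://accounts.provider.example",
            "username_claim": "sub",
            "username_prefix": "oidc:",
        }
    },
)
print(task_id_list.tasks)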
autoscaler_config
object | null

Cluster autoscaler configuration.

It allows you to override the default cluster-autoscaler parameters provided by the platform with your preferred values.

Supported parameters (in alphabetical order):

  • balance-similar-node-groups (boolean: true/false) - Detect similar node groups and balance the number of nodes between them.
  • expander (string: random, most-pods, least-waste, price, priority, grpc) - Type of node group expander to be used in scale up. Specifying multiple values separated by commas will call the expanders in succession until there is only one option remaining.
  • expendable-pods-priority-cutoff (float) - Pods with priority below the cutoff will be expendable: they can be killed without any consideration during scale down and they don't cause scale up. Pods with null priority (PodPriority disabled) are non-expendable.
  • ignore-daemonsets-utilization (boolean: true/false) - Should CA ignore DaemonSet pods when calculating resource utilization for scaling down.
  • max-empty-bulk-delete (integer) - Maximum number of empty nodes that can be deleted at the same time.
  • max-graceful-termination-sec (integer) - Maximum number of seconds CA waits for pod termination when trying to scale down a node.
  • max-node-provision-time (duration: e.g., '15m') - The default maximum time CA waits for a node to be provisioned; the value can be overridden per node group.
  • max-total-unready-percentage (float) - Maximum percentage of unready nodes in the cluster. After this is exceeded, CA halts operations.
  • new-pod-scale-up-delay (duration: e.g., '10s') - Pods less than this old will not be considered for scale-up. Can be increased for individual pods through annotation.
  • ok-total-unready-count (integer) - Number of allowed unready nodes, irrespective of max-total-unready-percentage.
  • scale-down-delay-after-add (duration: e.g., '10m') - How long after scale up that scale down evaluation resumes.
  • scale-down-delay-after-delete (duration: e.g., '10s') - How long after node deletion that scale down evaluation resumes.
  • scale-down-delay-after-failure (duration: e.g., '3m') - How long after scale down failure that scale down evaluation resumes.
  • scale-down-enabled (boolean: true/false) - Should CA scale down the cluster.
  • scale-down-unneeded-time (duration: e.g., '10m') - How long a node should be unneeded before it is eligible for scale down.
  • scale-down-unready-time (duration: e.g., '20m') - How long an unready node should be unneeded before it is eligible for scale down.
  • scale-down-utilization-threshold (float) - The utilization level below which a node can be considered for scale down, where utilization is the larger of the sum of CPU requests and the sum of memory requests of all pods running on the node, each divided by the node's corresponding allocatable resource.
  • scan-interval (duration: e.g., '10s') - How often the cluster is reevaluated for scale up or down.
  • skip-nodes-with-custom-controller-pods (boolean: true/false) - If true, cluster autoscaler will never delete nodes with pods owned by custom controllers.
  • skip-nodes-with-local-storage (boolean: true/false) - If true, cluster autoscaler will never delete nodes with pods with local storage, e.g. EmptyDir or HostPath.
  • skip-nodes-with-system-pods (boolean: true/false) - If true, cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods).
Examples:
{ "scale-down-unneeded-time": "5m" }
cni
object | null

Cluster CNI settings

Examples:
{
  "cilium": {
    "encryption": true,
    "hubble_relay": true,
    "hubble_ui": true,
    "lb_acceleration": true,
    "lb_mode": "snat",
    "tunnel": "geneve"
  },
  "provider": "cilium"
}
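A hedged sketch of the same update call carrying CNI settings, assuming the `cni` body field maps to a keyword argument of the same name; the values are taken from the example above:

Python
from gcore import Gcore

client = Gcore(api_key="My API Key")
task_id_list = client.cloud.k8s.clusters.update(
    cluster_name="cluster_name",
    project_id=0,
    region_id=0,
    cni={
        "provider": "cilium",
        "cilium": {
            "encryption": True,
            "tunnel": "geneve",
        },
    },
)
print(task_id_list.tasks)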
ddos_profile
object | null

Advanced DDoS Protection profile

Examples:
{
  "enabled": true,
  "fields": [
    {
      "base_field": 10,
      "field_value": [45046, 45047]
    }
  ],
  "profile_template": 29
}
{ "enabled": false }
logging
object | null

Logging configuration

Examples:
{
  "destination_region_id": 1,
  "enabled": true,
  "retention_policy": { "period": 45 },
  "topic_name": "my-log-name"
}
{ "enabled": false }
null

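A hedged sketch of enabling logging via the Python client, assuming the `logging` body field is accepted as a keyword argument; the values mirror the first example above:

Python
from gcore import Gcore

client = Gcore(api_key="My API Key")
task_id_list = client.cloud.k8s.clusters.update(
    cluster_name="cluster_name",
    project_id=0,
    region_id=0,
    logging={
        "enabled": True,
        "destination_region_id": 1,
        "topic_name": "my-log-name",
        "retention_policy": {"period": 45},
    },
)
print(task_id_list.tasks)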
Response

Task IDs for cluster update

tasks
string[]
required

List of task IDs representing asynchronous operations. Use these IDs to monitor operation progress:

  • GET /v1/tasks/{task_id} - Check individual task status and details.

Poll task status until completion (FINISHED/ERROR) before proceeding with dependent operations.

Examples:
["d478ae29-dedc-4869-82f0-96104425f565"]