POST /cloud/v2/k8s/clusters/{project_id}/{region_id}
Python
from gcore import Gcore

client = Gcore(
    api_key="My API Key",
)
# Cluster creation is asynchronous; the call returns a list of task IDs.
task_id_list = client.cloud.k8s.clusters.create(
    project_id=0,
    region_id=0,
    keypair="some_keypair",
    name="string",
    pools=[{
        "flavor_id": "g1-standard-1-2",
        "min_node_count": 3,
        "name": "my-pool",
    }],
    version="1.28.1",
)
print(task_id_list.tasks)
{
  "tasks": [
    "d478ae29-dedc-4869-82f0-96104425f565"
  ]
}

Authorizations

Authorization
string
header
required

API key for authentication. Make sure to include the word apikey, followed by a single space and then your token. Example: apikey 1234$abcdef
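For calls made without the SDK, the header can be assembled by hand. A minimal sketch (the token value below is a placeholder, not a real key):

```python
# Build the Authorization header the API expects: the literal word
# "apikey", a single space, then your token.
def auth_headers(token: str) -> dict:
    return {"Authorization": f"apikey {token}"}

headers = auth_headers("1234$abcdef")
print(headers["Authorization"])  # apikey 1234$abcdef
```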

Path Parameters

project_id
integer
required

Project identifier

region_id
integer
required

Region identifier

Body

application/json
keypair
string
required

The keypair of the cluster

Required string length: 1 - 255
Examples:

"some_keypair"

name
string
required

The name of the cluster

Required string length: 1 - 20
Examples:

"string"

pools
K8sClusterPoolCreateV2Serializer · object[]
required

The pools of the cluster

Minimum length: 1
version
string
required

The version of the k8s cluster

Examples:

"1.28.1"

add_ons
object

Cluster add-ons configuration

authentication
object | null

Authentication settings

Examples:
{
  "oidc": {
    "client_id": "kubernetes",
    "groups_claim": "groups",
    "groups_prefix": "oidc:",
    "issuer_url": "https://accounts.provider.example",
    "required_claims": { "claim": "value" },
    "signing_algs": ["RS256", "RS512"],
    "username_claim": "sub",
    "username_prefix": "oidc:"
  }
}
autoscaler_config
object | null

Cluster autoscaler configuration.

It allows you to override the default cluster-autoscaler parameters provided by the platform with your preferred values.

Supported parameters (in alphabetical order):

  • balance-similar-node-groups (boolean: true/false) - Detect similar node groups and balance the number of nodes between them.
  • expander (string: random, most-pods, least-waste, price, priority, grpc) - Type of node group expander to be used in scale up. Specifying multiple values separated by commas will call the expanders in succession until there is only one option remaining.
  • expendable-pods-priority-cutoff (float) - Pods with priority below the cutoff are expendable: they can be killed without any consideration during scale down and they don't cause scale up. Pods with null priority (PodPriority disabled) are non-expendable.
  • ignore-daemonsets-utilization (boolean: true/false) - Should CA ignore DaemonSet pods when calculating resource utilization for scaling down.
  • max-empty-bulk-delete (integer) - Maximum number of empty nodes that can be deleted at the same time.
  • max-graceful-termination-sec (integer) - Maximum number of seconds CA waits for pod termination when trying to scale down a node.
  • max-node-provision-time (duration: e.g., '15m') - The default maximum time CA waits for node to be provisioned - the value can be overridden per node group.
  • max-total-unready-percentage (float) - Maximum percentage of unready nodes in the cluster. After this is exceeded, CA halts operations.
  • new-pod-scale-up-delay (duration: e.g., '10s') - Pods less than this old will not be considered for scale-up. Can be increased for individual pods through annotation.
  • ok-total-unready-count (integer) - Number of allowed unready nodes, irrespective of max-total-unready-percentage.
  • scale-down-delay-after-add (duration: e.g., '10m') - How long after scale up that scale down evaluation resumes.
  • scale-down-delay-after-delete (duration: e.g., '10s') - How long after node deletion that scale down evaluation resumes.
  • scale-down-delay-after-failure (duration: e.g., '3m') - How long after scale down failure that scale down evaluation resumes.
  • scale-down-enabled (boolean: true/false) - Should CA scale down the cluster.
  • scale-down-unneeded-time (duration: e.g., '10m') - How long a node should be unneeded before it is eligible for scale down.
  • scale-down-unready-time (duration: e.g., '20m') - How long an unready node should be unneeded before it is eligible for scale down.
  • scale-down-utilization-threshold (float) - Node utilization level, defined as the maximum of (sum of CPU requests)/(allocatable CPU) and (sum of memory requests)/(allocatable memory) across all pods running on the node, below which the node can be considered for scale down.
  • scan-interval (duration: e.g., '10s') - How often cluster is reevaluated for scale up or down.
  • skip-nodes-with-custom-controller-pods (boolean: true/false) - If true cluster autoscaler will never delete nodes with pods owned by custom controllers.
  • skip-nodes-with-local-storage (boolean: true/false) - If true cluster autoscaler will never delete nodes with pods with local storage, e.g. EmptyDir or HostPath.
  • skip-nodes-with-system-pods (boolean: true/false) - If true cluster autoscaler will never delete nodes with pods from kube-system (except for DaemonSet or mirror pods).
Examples:
{ "scale-down-unneeded-time": "5m" }
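The duration-typed parameters above use Go-style strings such as '10s' or '5m'. A small sketch of assembling an override payload and sanity-checking the durations before sending the request (the helper is illustrative, not part of the API):

```python
import re

def is_duration(value: str) -> bool:
    # Simplified Go-style duration: digits followed by s, m, or h
    # (e.g. "10s", "5m"); compound forms like "1h30m" are not covered.
    return re.fullmatch(r"\d+(?:s|m|h)", value) is not None

# An autoscaler_config override mixing the documented value types.
autoscaler_config = {
    "scale-down-unneeded-time": "5m",     # duration
    "scale-down-delay-after-add": "10m",  # duration
    "scale-down-enabled": True,           # boolean
    "expander": "least-waste",            # enum string
}

for key in ("scale-down-unneeded-time", "scale-down-delay-after-add"):
    assert is_duration(autoscaler_config[key]), f"bad duration for {key}"
```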
cni
object | null

Cluster CNI settings

Examples:
{
  "cilium": {
    "encryption": true,
    "hubble_relay": true,
    "hubble_ui": true,
    "lb_acceleration": true,
    "lb_mode": "snat",
    "tunnel": "geneve"
  },
  "provider": "cilium"
}
csi
object

Container Storage Interface (CSI) driver settings

ddos_profile
object | null

Advanced DDoS Protection profile

Examples:
{
  "enabled": true,
  "fields": [
    {
      "base_field": 10,
      "field_value": [45046, 45047]
    }
  ],
  "profile_template": 29
}
fixed_network
string | null
default:""

The network of the cluster

Examples:

"3fa85f64-5717-4562-b3fc-2c963f66afa6"

fixed_subnet
string | null
default:""

The subnet of the cluster

Examples:

"3fa85f64-5717-4562-b3fc-2c963f66afa6"

is_ipv6
boolean | null
default:false

Enable public v6 address

Examples:

true

false

logging
object | null

Logging configuration

Examples:
{
  "destination_region_id": 1,
  "enabled": true,
  "retention_policy": { "period": 45 },
  "topic_name": "my-log-name"
}

null

pods_ip_pool
string | null

The IP pool for the pods

Examples:

"172.16.0.0/18"

pods_ipv6_pool
string | null

The IPv6 pool for the pods

Examples:

"2a03:90c0:88:393::/64"

services_ip_pool
string | null

The IP pool for the services

Examples:

"172.24.0.0/18"

services_ipv6_pool
string | null

The IPv6 pool for the services

Examples:

"2a03:90c0:88:381::/108"
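The four pool parameters above are CIDR strings. As a precaution, they can be checked locally with the standard library before sending the request; the non-overlap check reflects the usual Kubernetes constraint that pod and service ranges must not collide, which is an assumption here rather than something this page states:

```python
import ipaddress

# The documented example ranges.
pods_ip_pool = ipaddress.ip_network("172.16.0.0/18")
services_ip_pool = ipaddress.ip_network("172.24.0.0/18")
pods_ipv6_pool = ipaddress.ip_network("2a03:90c0:88:393::/64")
services_ipv6_pool = ipaddress.ip_network("2a03:90c0:88:381::/108")

# Pod and service ranges of the same address family must not overlap.
assert not pods_ip_pool.overlaps(services_ip_pool)
assert not pods_ipv6_pool.overlaps(services_ipv6_pool)
```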

Response

Task IDs for cluster creation

tasks
string[]
required

List of task IDs representing asynchronous operations. Use these IDs to monitor operation progress:

  • GET /v1/tasks/{task_id} - Check individual task status and details

Poll task status until completion (FINISHED/ERROR) before proceeding with dependent operations.

Examples:
["d478ae29-dedc-4869-82f0-96104425f565"]
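The polling described above can be kept generic: pass a callable that fetches the task's current state (for example a lambda wrapping the SDK's task lookup; the exact accessor is not shown on this page, so treat it as an assumption):

```python
import time

def wait_for_task(task_id: str, fetch_state, interval: float = 5.0,
                  timeout: float = 600.0) -> str:
    """Poll until the task reaches a terminal state (FINISHED/ERROR).

    fetch_state(task_id) must return the task's current state string,
    e.g. by calling GET /v1/tasks/{task_id}; the accessor is left to
    the caller.
    """
    deadline = time.monotonic() + timeout
    while True:
        state = fetch_state(task_id)
        if state in ("FINISHED", "ERROR"):
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(f"task {task_id} still {state} after {timeout}s")
        time.sleep(interval)
```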