Deploy and run your containerized application in the Cloud, without managing any underlying infrastructure.
If you don’t have sufficient resources to create a container, request a quota increase.
1. In the Gcore Customer Portal, navigate to Cloud > Container as a Service.
2. Click Create container.
3. In the Container image section, select the image type: public or private. The difference between them is that a private image is secured with credentials.
4. Enter your image URL. For example, nginx:latest. For a private image, enter the URL in the format https://registry.mysite.com.
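As a rough illustration of how these two kinds of references differ, the sketch below applies the Docker-style naming heuristic to tell a short public image name apart from a reference that carries its own registry hostname. This function is an illustrative sketch, not part of the Gcore platform:

```python
def has_registry_host(image_ref: str) -> bool:
    """Return True if the image reference names its own registry host.

    Illustrative sketch of the Docker naming convention: the component
    before the first "/" is treated as a registry host only if it
    contains a dot, a port separator, or equals "localhost".
    """
    if "/" not in image_ref:
        # Short names like "nginx:latest" resolve to the default public registry.
        return False
    first_component = image_ref.split("/", 1)[0]
    return (
        "." in first_component
        or ":" in first_component
        or first_component == "localhost"
    )
```

For example, nginx:latest has no registry host, while registry.mysite.com/myapp:1.0 does.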
5. (Optional) Specify registry credentials. If you selected a private image in the previous step, enter the credentials for accessing that image. If you’ve already added credentials to the Customer Portal, choose them from the Credentials dropdown.
If you have no saved credentials, add new ones and click Create credentials to save them.
(Optional) Set a startup command. Enable the toggle if you want to execute a specific command when your container is initiated.
In the Port section, specify the port for connection to your container.
Currently, we support only one port per container. If you need to expose additional ports, create a separate container.
Select the required amount of memory (up to 4096 MB) and mCPU (up to 2260 mCPU). This configuration will be used for the deployed Kubernetes pod where your container will be placed after creation.
In the Limits of autoscaling section, enter the range for the number of pods you want to maintain under Minimum pods and Maximum pods.
In the Cooldown period field, set the interval (in seconds) between the trigger executions. This helps to prevent frequent and unnecessary scaling changes. You can enter a value between 1 and 3600 seconds.
To ensure more efficient use of computational resources and consistent model performance, define scaling thresholds for CPU, RAM, and HTTP requests resource utilization. You can combine any triggers or use a single one.
In the Autoscaling triggers section, click Add trigger to view and modify the current thresholds:
The minimum setting is 1% of the resource capacity. Only HTTP requests trigger can scale pods to and from 0.
The maximum setting is 100% of the resource capacity.
By default, the autoscaling parameters are set to 80%, but you can enter any percentage within the specified range.
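To illustrate how a utilization threshold drives scaling, here is a sketch of the standard utilization-based formula used by the Kubernetes Horizontal Pod Autoscaler. The platform’s exact algorithm is not documented here, so treat this only as an approximation:

```python
import math

def desired_pods(current_pods: int, current_util: float,
                 target_util: float, min_pods: int, max_pods: int) -> int:
    """HPA-style scaling formula (illustrative sketch).

    current_util and target_util are resource utilization percentages,
    e.g. the default 80% threshold described above.
    """
    # Scale the pod count proportionally to how far utilization is from target.
    desired = math.ceil(current_pods * current_util / target_util)
    # Clamp to the configured Minimum pods / Maximum pods range.
    return max(min_pods, min(desired, max_pods))
```

For example, with 2 pods running at 90% utilization against the default 80% target, this sketch yields 3 pods.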
Note that the waiting times specified for the Cooldown period and HTTP requests trigger are combined, and they determine the total time the system waits before initiating another scaling action.
When setting up the cooldown period or HTTP scaling, use small values with caution because they may affect the scale window.
For the cooldown period, small values can lead to random unexpected scale triggers. For HTTP scaling, small values might affect the time it takes to aggregate HTTP request rates for making scaling decisions.
Enter the number of seconds after which a pod will be deleted when there are no requests to your pod. For example, if you enter 600, the pod will be deleted in 600 seconds (ten minutes).
If you specify 0, the container will take approximately one minute to scale down.
(Optional) If you want to add metadata to your container, create variables in the form of key-value pairs. These variables will only be used in the environment of the created container.
For example, if your container supports it, you can configure where an application should write its log files within the container by setting an environment variable for the log file path:
Key: LOG_FILE_PATH
Value: /var/log/myapp.log
This variable directs the application to write logs to the specified path inside the container.
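For instance, a containerized application could read this variable as follows. This is a minimal Python sketch; the variable name LOG_FILE_PATH and the default path come from the example above:

```python
import os

def log_file_path(environ=os.environ) -> str:
    """Return the log file path configured via the container environment.

    Falls back to the example default when LOG_FILE_PATH is not set.
    """
    return environ.get("LOG_FILE_PATH", "/var/log/myapp.log")
```

The application then opens its log file at `log_file_path()` instead of a hard-coded location, so the path can be changed per container without rebuilding the image.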
To protect your container endpoints from unauthorized access, enable the API Key authentication feature. Either select an existing API key or create a new one.
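Once API key authentication is enabled, clients must present the key with each request. The sketch below attaches it as a bearer token using only the Python standard library; the endpoint URL and the exact header name are assumptions, so check the Customer Portal for the scheme your container expects:

```python
import urllib.request

def build_authenticated_request(url: str, api_key: str) -> urllib.request.Request:
    """Build a request carrying the container's API key.

    The "Authorization: Bearer <key>" header used here is an assumption;
    verify the expected header name in the Customer Portal.
    """
    request = urllib.request.Request(url)
    request.add_header("Authorization", f"Bearer {api_key}")
    return request

# Hypothetical endpoint and key, for illustration only.
req = build_authenticated_request("https://my-container.example.com/", "your-api-key")
```

Sending the request with `urllib.request.urlopen(req)` would then reach the protected endpoint, while requests without the header are rejected.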
Specify the container name (this will be displayed in the Customer Portal) and additional information if needed.
In the top-right corner of the screen, click Create container. After a few minutes, the initial setup will be finished and the container will be launched.