
Trapping Hackers with Containerized Deception

  • By Gcore
  • March 14, 2023
  • 11 min read

TL;DR

This article explores modern honeypots that leverage containerization, walking through the design of a high-interaction honeypot that can use arbitrary Docker containers to lure attackers.

 

Introduction

While honeypots have been around for a very long time, this article attempts a fresh look at how containerization has affected the way we use them today. Admittedly, I hadn't explored this topic since 2005. So, while researching something to implement that would be equally valuable and interesting, I ran into at least half a dozen false starts. I assumed that, like every other area of computing, advanced honeypot systems would abound in the open-source community. But I suppose I underestimated both the esoteric nature of the subject and its well-guarded commercial viability.

A lot has changed since 2005, but a lot has remained the same. A honeypot is not a complicated concept; it's a system or service that intentionally exposes itself so that attackers can be detected as they try to break in. Unlike an intrusion detection system, a honeypot can be as simple as a few lines of code that disguise themselves as a vulnerable open port, or as advanced as a full-blown operating system with a hidden logging system that analyzes patterns of behavior.
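To make the simple end of that spectrum concrete, here is a minimal sketch of the "few lines of code" idea, assuming the OpenBSD variant of netcat (the netcat-openbsd package installed later in this guide); the port and log path are arbitrary examples:

#!/bin/bash
# Trivial low-interaction lure: listen on an unused port, log every
# connection attempt, then listen again.
PORT=2323
while true; do
    # -l listens for a single connection; -v writes connection details
    # to stderr, which we append to a log file
    nc -lv "$PORT" </dev/null >/dev/null 2>>/tmp/fakeport.log
done

Anything that touches the port gets logged, which is all an early warning system needs to do.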

However, as developers and systems experts incorporate containerization into their designs, many of the traditional approaches to using honeypots become far less effective. In 2005, a honeypot was often deployed once and placed somewhere easily accessible to the rest of the network. But because containerized systems are isolated from networks and other services, deploying a honeypot in the same way is useless.

While developers are technically savvy, they often aren't security literate. Even when they are, it's common to prioritize convenience. This makes securing Docker an even more critical task for the security-minded DevOps professional. The same could be said for overburdened systems administrators who deprioritize security in an effort to get the job done quickly.

Much of the problem stems from the fact that developers often have a very high level of access to computing resources within the company. Systems administrators, by necessity, have an even higher level of access. Not only do they have access to all the source code in the system, along with test databases, but in many cases, especially in DevOps environments, they may even have access to production systems. In addition, developers need to perform many tasks for testing purposes that look suspicious to security software, giving them cause to disable that software. This hands malicious software and attackers an even easier target.

When running Docker in a local Windows environment, as many developers do for development purposes, you can't trust that every development system adheres to a proper security configuration. In my own case, I have often enabled the Docker remote API on a host for testing purposes and left it enabled, either out of forgetfulness or for convenience.

Overview of Deception Systems

 

Honeypots

Honeypots can be deployed alongside different types of systems in your network. They are decoys designed to lure attackers and malicious software so that the source can be detected, logged, and tracked. There are various types of honeypot. High-interaction honeypots, like the one designed in this article, run as a service complex enough to fool an attacker into believing it is a full-featured operating system or device. Mid-interaction honeypots emulate certain aspects of an application layer without being that complex, making them more difficult to fully compromise. Low-interaction honeypots are easy to deploy and maintain, serving as a simple early warning system that protects more critical systems in the environment.

Honeynets

A honeynet is a collection of honeypots designed to strategically track the methods and techniques of malicious software and attackers. This approach allows administrators to watch hackers and malicious code exploit the various vulnerabilities of the system, and can be used either in production or for research purposes to discover new vulnerabilities and attacks.

Basic Use Case

Honeypots have the advantage of not requiring detailed knowledge of network attack methodology. This is especially true of low-interaction honeypots, which are relatively simple applications that sit on a port and listen, often imitating very little of the original service. They log access attempts and do little else. Such data can be invaluable for gathering certain kinds of access information, or for serving as an early warning of a compromised service before there are any serious problems.

Background

Vulnerabilities

Using containers as honeypots has been the source of some debate, since containerization technology is comparatively immature next to full virtualization. It can also be easy for administrators to overlook potential configuration issues, such as the ones that follow.

Docker Engine API

An API is a programming interface for applications: a type of protocol, a ruleset, sometimes with abstraction, but regardless of implementation, a standard method for programs of different types to talk to one another. REST stands for “Representational State Transfer”. It is a standard developers can use to get and exchange information from other applications, sending a request in the form of a specific URL and receiving data in the body of the response.

The Docker Engine API is used by the Docker CLI to manage objects. A UNIX socket (unix:///var/run/docker.sock) is enabled by default on Linux systems, while a TCP socket (tcp://127.0.0.1:2376) is enabled by default on Windows systems. On Linux, for development and automation purposes, the same API can also be accessed directly by remote applications as a REST API by binding it to a TCP socket.
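For example, assuming a stock Linux installation, the Engine API can be queried directly over the default UNIX socket; the TCP example assumes the remote API has been deliberately exposed on the conventional plaintext port 2375, and 192.0.2.10 is a placeholder address:

# List running containers over the local UNIX socket (default on Linux)
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# The same query against a host that has exposed the remote TCP API
curl http://192.0.2.10:2375/containers/json

A host answering the second request unauthenticated is exactly the kind of misconfiguration attackers scan for, and the kind that the apitrap.sh honeypot later in this article imitates.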

Base Images

Most of the vulnerabilities found today are in the base images themselves. In the case of Docker, the base image is composed of an operating system, often customized from another popular image found on Docker Hub. Customizations, malicious or accidental, along with the original base image, create plenty of opportunities for attack, sometimes even by the most novice of hackers.

Docker Hub

While Docker Hub has removed malicious Docker containers in the past, as a community repository it is wide open to abuse and attack. Base images pulled from Docker Hub should be used with caution. But given that many administrators decide the convenience outweighs the risk, prudent security measures to mitigate the risk of attack are warranted.

Notable Honeypot Systems

There are a number of honeypot systems in popular use; however, very few specifically target containerized environments or benefit uniquely from containerization. The two following open-source packages are notable exceptions.

Modern Honey Network

MHN has a lot of potential for container integration, yet running it under Docker is not officially supported. As a comprehensive honeynet management system, its honeypots could be deployed as part of a K8s configuration with little effort. The honeypot data collected from internal network sources, in particular, could be invaluable for securing a distributed container environment.

Oncyberblog’s Whaler: A Docker API Honeypot

Over a dozen GitHub repositories bear the name Whaler, yet this one by Oncyberblog was a serious contender for this article. The project is unique because it attempts to lure attackers using an exposed Docker Engine API. It does have its limitations, however. Because it runs an embedded Docker container, Docker must run in privileged mode on its host. This is an enormous security risk that could jeopardize the host, so precautions should be taken by installing it on an isolated and secured host. The system requirements aren't demanding, so the host doesn't need to be of any substantial size. However, since you would need to monitor individual containers on the primary host, you would need a way of linking the segregated honeypot host to the application host where the monitored application containers are located.

System Overview

This project makes quick use of old-school Linux internals to handle container orchestration, load balancing, and security. For more information, source code, and updates, see DockerTrap, the companion GitHub repository created for this article.

System configuration

Change the default SSH port on the host

Before doing anything else, change the SSH daemon's default port from 22 to something else, like 2222. The system will lure attackers on port 22, so that port must be freed up.
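A minimal sketch of that change, assuming a Debian/Ubuntu host managed by systemd:

# In /etc/ssh/sshd_config, replace the default port:
Port 2222

# Restart the daemon (the unit is named ssh on Ubuntu, sshd on some other distributions)
sudo systemctl restart ssh

Keep your current session open and verify that ssh -p 2222 works before disconnecting, or you may lock yourself out.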

Install Docker

sudo apt -y install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
sudo apt update
apt-cache policy docker-ce
sudo apt -y install docker-ce

Install supporting system tools

sudo apt update
sudo apt -y install socat xinetd auditd netcat-openbsd

Configure xinetd

This system uses the following bash script, managed by xinetd, to spin up containers whenever an incoming connection arrives on port 22.

The following bash script should be made available to xinetd as /usr/bin/honeypot with 755 permissions and root ownership.

The EXT_IFACE variable should be changed to the network interface on which you wish to receive incoming SSH connections on port 22.

#!/bin/bash
EXT_IFACE=ens4
MEM_LIMIT=128M
SERVICE=22
QUOTA_IN=5242880
QUOTA_OUT=1310720
REMOTE_HOST=`echo ${REMOTE_HOST} | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'`

{
    CONTAINER_NAME="honeypot-${REMOTE_HOST}"
    HOSTNAME=$(/bin/hostname)
    # check if the container exists
    if ! /usr/bin/docker inspect "${CONTAINER_NAME}" &> /dev/null; then
        # create new container
        CONTAINER_ID=$(/usr/bin/docker run --name ${CONTAINER_NAME} -h ${HOSTNAME} -e "REMOTE_HOST=${REMOTE_HOST}" -m ${MEM_LIMIT} -d -i honeypot ) ##/sbin/init)
        CONTAINER_IP=$(/usr/bin/docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${CONTAINER_ID})
        PROCESS_ID=$(/usr/bin/docker inspect --format '{{ .State.Pid }}' ${CONTAINER_ID})
        # drop all inbound and outbound traffic by default
        /usr/bin/nsenter --target ${PROCESS_ID} -n /sbin/iptables -P INPUT DROP
        /usr/bin/nsenter --target ${PROCESS_ID} -n /sbin/iptables -P OUTPUT DROP
        # allow access to the service regardless of the quota
        /usr/bin/nsenter --target ${PROCESS_ID} -n /sbin/iptables -A INPUT -p tcp -m tcp --dport ${SERVICE} -j ACCEPT
        /usr/bin/nsenter --target ${PROCESS_ID} -n /sbin/iptables -A INPUT -m quota --quota ${QUOTA_IN} -j ACCEPT
        # allow related outbound access limited by the quota
        /usr/bin/nsenter --target ${PROCESS_ID} -n /sbin/iptables -A OUTPUT -p tcp --sport ${SERVICE} -m state --state ESTABLISHED,RELATED -m quota --quota ${QUOTA_OUT} -j ACCEPT
        # enable the container to connect to rsyslog on the host
        /usr/bin/nsenter --target ${PROCESS_ID} -n /sbin/iptables -A OUTPUT -p tcp -m tcp --dst 172.17.0.1 --dport 514 -j ACCEPT
        # add iptables redirection rule
        /sbin/iptables -t nat -A PREROUTING -i ${EXT_IFACE} -s ${REMOTE_HOST} -p tcp -m tcp --dport ${SERVICE} -j DNAT --to-destination ${CONTAINER_IP}
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE
    else
        # start container if exited and grab the cid
        /usr/bin/docker start "${CONTAINER_NAME}" &> /dev/null
        CONTAINER_ID=$(/usr/bin/docker inspect --format '{{ .Id }}' "${CONTAINER_NAME}")
        CONTAINER_IP=$(/usr/bin/docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${CONTAINER_ID})
        # add iptables redirection rule
        /sbin/iptables -t nat -A PREROUTING -i ${EXT_IFACE} -s ${REMOTE_HOST} -p tcp -m tcp --dport ${SERVICE} -j DNAT --to-destination ${CONTAINER_IP}
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE
    fi
    echo ${CONTAINER_IP}
} &> /dev/null

# forward traffic to the container
exec /usr/bin/socat stdin tcp:${CONTAINER_IP}:22,retry=60

The following service file should be created as /etc/xinetd.d/honeypot:

# Container launcher for an SSH honeypot
service honeypot
{
        disable         = no
        instances       = UNLIMITED
        server          = /usr/bin/honeypot
        socket_type     = stream
        protocol        = tcp
        port            = 22
        user            = root
        wait            = no
        log_type        = SYSLOG authpriv info
        log_on_success  = HOST PID
        log_on_failure  = HOST
}

Then, /etc/services should be updated to include the following, reflecting the new SSH port and the honeypot on port 22:

ssh          2222/tcp
honeypot     22/tcp

Configure crond

To handle stopping and cleaning up old containers, the following bash script should be deployed to /usr/bin/honeypot.clean with 755 permissions and root ownership.

#!/bin/bash
EXT_IFACE=ens4
SERVICE=22
HOSTNAME=$(/bin/hostname)
LIFETIME=$((3600 * 6)) # Six hours

datediff () {
    d1=$(/bin/date -d "$1" +%s)
    d2=$(/bin/date -d "$2" +%s)
    echo $((d1 - d2))
}

for CONTAINER_ID in $(/usr/bin/docker ps -a --no-trunc | grep "honeypot-" | cut -f1 -d" "); do
    STARTED=$(/usr/bin/docker inspect --format '{{ .State.StartedAt }}' ${CONTAINER_ID})
    RUNTIME=$(datediff now "${STARTED}")
    if [[ "${RUNTIME}" -gt "${LIFETIME}" ]]; then
        logger -p local3.info "Stopping honeypot container ${CONTAINER_ID}"
        /usr/bin/docker stop $CONTAINER_ID
    fi
    RUNNING=$(/usr/bin/docker inspect --format '{{ .State.Running }}' ${CONTAINER_ID})
    if [[ "$RUNNING" != "true" ]]; then
        # delete iptables rule
        CONTAINER_IP=$(/usr/bin/docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${CONTAINER_ID})
        REMOTE_HOST=$(/usr/bin/docker inspect --format '{{ .Name }}' ${CONTAINER_ID} | cut -f2 -d-)
        /sbin/iptables -t nat -D PREROUTING -i ${EXT_IFACE} -s ${REMOTE_HOST} -p tcp --dport ${SERVICE} -j DNAT --to-destination ${CONTAINER_IP}
        logger -p local3.info "Removing honeypot container ${CONTAINER_ID}"
        /usr/bin/docker rm $CONTAINER_ID
    fi
done

By default, the above script runs every 5 minutes via the following line appended to /etc/crontab (which, unlike a user crontab, requires a user field).

*/5 * * * * root /usr/bin/honeypot.clean

Configure auditd

Enable logging of the execve system call in auditd by adding the following audit rules:

auditctl -a exit,always -F arch=b64 -S execve
auditctl -a exit,always -F arch=b32 -S execve
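These rules record every command executed on the host and inside the containers. As a quick sketch of reviewing what attackers ran, using the standard audit userspace tools:

# Print recent execve events with numeric fields decoded
ausearch -sc execve -i --start recent

# Summarize which executables were run most often
ausearch --raw -sc execve | aureport -x --summary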

Deploy apitrap.sh

The apitrap.sh script is an optional component that attempts to simulate a Docker API on the host. Since it is a bash script, it is recommended to run it from a Docker container as an unprivileged user, redirected to port 2375 or 2376, in order to avoid potential exploits.

#!/bin/bash
## Docker API heading
H1="HTTP/1.1 404 Not Found\n"
H2="Content-Type: application/json\n"
H3="Date: "`date '+%a, %d %b %Y %T %Z'`"\n"
H4="Content-Length: 29\n\n"
## API error message
B1="{\"message\":\"page not found\"}\n"
HEADERS+=$H1$H2$H3$H4

## Default to port 2376 if no port is given
if ! test -z "$1"; then
  PORT=$1;
else
  PORT=2376;
fi

QUEUE_FILE=/tmp/apitrap
test -p $QUEUE_FILE && rm $QUEUE_FILE
mkfifo $QUEUE_FILE

while true; do
  cat "$QUEUE_FILE" | nc -l "$PORT" | while read -r line || [[ -n "$line" ]]; do
    if echo $line | grep -q 'GET \|HEAD \|POST \|PUT \|DELETE \|CONNECT \|OPTIONS \|TRACE'; then
      echo ">>> ["$(date)"] <<<"
      echo : $line
      echo -e $HEADERS$B1 > $QUEUE_FILE
    fi
  done
done
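A hypothetical smoke test, assuming the script above is saved as apitrap.sh and made executable:

# Terminal 1: run the trap on the conventional Docker TLS port
./apitrap.sh 2376 &

# Terminal 2: probe it the way a scanner would
curl -s http://127.0.0.1:2376/v1.24/containers/json

# The trap logs the request and should answer with the Docker-style error:
# {"message":"page not found"}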

You would need to make appropriate changes to the system, but from inside any honeypot you decide to deploy, it should be possible to obtain the host IP. For example:

root@dockerhost:~# hostname -I | awk '{ print $2 }'
172.17.0.1
root@dockerhost:~# docker run -it honeypot-test /sbin/ip route | awk '/default/ { print $3 }'
172.17.0.1
root@dockerhost:~#

This shows that the host's address on the Docker bridge is 172.17.0.1, and retrieving the container's default gateway reveals the same 172.17.0.1. The host IP is therefore readily discoverable, and attackable, from inside the container. We exploit this for our project by making sure the DockerTrap configuration is redirected and available on the same host. Those details will not be discussed here; look for updates in the DockerTrap GitHub repository.

Build honeypot image from Dockerfile

The main features of this image are that sshd is enabled, root login is permitted, and the root password is set to root. You should modify it to include other user accounts with trivial passwords that will help lure more attacks.

FROM alpine:3.9
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 22

## Root is gloriously unsecure!
RUN apk add --no-cache openssh \
  && sed -i s/#PermitRootLogin.*/PermitRootLogin\ yes/ /etc/ssh/sshd_config \
  && echo "root:root" | chpasswd
RUN echo -e '#!/bin/ash\n\nssh-keygen -A\n/usr/sbin/sshd -D -e "$@"' > /entrypoint.sh
RUN chmod 555 /entrypoint.sh

Commit final honeypot image

The system looks for a base image committed as honeypot:latest. When an SSH connection is made, the system automatically creates a unique instance of this image with the honeypot- prefix. Modify this image as needed.
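For example, to build the Dockerfile above and tag it under the name the scripts expect:

# Build from the directory containing the Dockerfile
docker build -t honeypot:latest .

# Or, after customizing a running container by hand, commit it over the tag
# (the container ID here is a placeholder)
docker commit <container-id> honeypot:latest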

Testing

SSH into honeypot

Each dynamically created honeypot adopts the hostname of the host. At this point, any socket connection established on port 22 of localhost (or any other network adapter on the host configured for DockerTrap) is redirected to the sshd daemon inside one of the honeypot containers. The bash script /usr/bin/honeypot, triggered by xinetd, makes sure that each IP address is directed to its corresponding container, so if an attacker logs in from the same IP address a second, third, or fourth time, they land in the same container every time.

dockertrap:~# ssh root@localhost -p 22
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:dY6EIpV1nBw5143TkgPQU5SRWIkrxkZCiLWd+ktiNKE.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
root@localhost's password:

Welcome to Alpine!

The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org/>.

You can setup the system with the command: setup-alpine

You may change this message by editing /etc/motd.

dockertrap:~# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:124 errors:0 dropped:0 overruns:0 frame:0
          TX packets:96 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:13954 (13.6 KiB)  TX bytes:14026 (13.6 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

dockertrap:~# exit
Connection to localhost closed.
root@dockertrap:~#

Note that the ethernet device eth0 above is configured with an IP address of 172.17.0.2 and a MAC address of 02:42:AC:11:00:02. When logging in from an IP address other than the host's localhost, such as my home PC, DockerTrap spawns a new, nearly identical container to log into (except for the IP and MAC address, of course).

C:\Users\user>ssh root@35.238.100.5 -p 22
The authenticity of host '35.238.100.5 (35.238.100.5)' can't be established.
ECDSA key fingerprint is SHA256:VKG+5VhB0WL5ncPomfmb+XW484LtjS8oAs+BDM07sJQ.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '35.238.100.5' (ECDSA) to the list of known hosts.
root@35.238.100.5's password:

Welcome to Alpine!

The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <http://wiki.alpinelinux.org/>.

You can setup the system with the command: setup-alpine

You may change this message by editing /etc/motd.

honeypot2:~# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:09
          inet addr:172.17.0.9  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:72 errors:0 dropped:0 overruns:0 frame:0
          TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:7593 (7.4 KiB)  TX bytes:7041 (6.8 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

honeypot2:~# exit
Connection to 35.238.100.5 closed.
C:\Users\user>

The IP address and MAC address are different, yet the hostname is the same. After a while, we notice an increasing number of containers spinning up on the host as random bots connect to port 22. Note that to prevent resource exhaustion, you will want to edit the /etc/xinetd.d/honeypot file so that xinetd limits the number of instances, as shown after the listing below.

root@dockertrap:/usr/bin# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
4971d1c8272a        honeypot            "/entrypoint.sh"    17 minutes ago      Up 17 minutes       22/tcp              honeypot-175.111.182.186
bf4f9b94ad03        honeypot            "/entrypoint.sh"    27 minutes ago      Up 27 minutes       22/tcp              honeypot-58.96.198.15
e69231243915        honeypot            "/entrypoint.sh"    29 minutes ago      Up 29 minutes       22/tcp              honeypot-
906c4e2be5c7        honeypot            "/entrypoint.sh"    30 minutes ago      Up 30 minutes       22/tcp              honeypot-10.128.0.46
root@dockertrap:/usr/bin#
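To cap the number of simultaneous honeypot containers, adjust the xinetd service definition; the values here are arbitrary illustrations:

# In /etc/xinetd.d/honeypot, replace the unlimited setting
instances       = 50
per_source      = 5

instances bounds the total concurrent honeypot processes, while per_source limits how many a single source IP can hold open.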

Future Work

Much of the design of DockerTrap can be applied to Kubernetes. Just as Docker enables the resource restrictions used here, K8s supports advanced security features, including network policies, its counterpart to iptables rules. The API honeypot apitrap.sh could also be replaced by a more robust system like Whaler, which would help identify compromised systems that specifically seek out misconfigured Docker hosts.
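As an illustrative sketch of that counterpart (not part of DockerTrap), a default-deny egress NetworkPolicy plays roughly the same role as the iptables OUTPUT DROP policy applied to each container above; the app: honeypot label is a hypothetical selector:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: honeypot-default-deny-egress
spec:
  podSelector:
    matchLabels:
      app: honeypot
  policyTypes:
    - Egress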
