
Podman for Docker Users

  • By Gcore
  • March 27, 2023
  • 14 min read

Podman is a command-line tool for interacting with Libpod, a library for running and managing OCI-based containers. Unlike Docker, Podman doesn't depend on a daemon, and it doesn't require root privileges.
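
Because Podman's command-line interface is largely compatible with Docker's, most docker commands in this tutorial can be run with podman unchanged. On many systems you can even alias one to the other, a convenience many Podman guides suggest rather than a requirement for this tutorial:

alias docker=podman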

The first part of this tutorial focuses on similarities between Podman and Docker, and we’ll show how you can do the following:

  • Move a Docker image to Podman.
  • Create a bare-bones Nuxt.js project and build a container image for it.
  • Push your container image to Quay.io.
  • Pull the image from Quay.io and run it with Docker.

In the second part of this tutorial, we’ll walk you through two of the most important features that differentiate Podman from Docker. In this section, you will do the following:

  • Create a Pod with Podman.
  • Generate a Kubernetes Pod spec with Podman, and deploy it to a Kubernetes cluster.

Prerequisites

  1. This tutorial is intended for readers who have prior exposure to Docker. In the next sections, you will use commands such as run, build, push, commit, and tag. It is beyond the scope of this tutorial to explain how these commands work.
  2. A running Linux system with Podman and Docker installed.

You can enter the following command to check that Podman is installed on your system:

podman version
Version:            1.6.4
RemoteAPI Version:  1
Go Version:         go1.12.12
OS/Arch:            linux/amd64

Refer to the Podman Installation Instructions for details on how to install Podman.

Use the following command to verify that Docker is installed:

docker --version
Docker version 18.06.3-ce, build d7080c1

See the Get Docker page for details on how to install Docker.

  3. Git. To check if Git is installed on your system, type the following command:
git version
git version 2.18.2

Refer to Getting Started – Installing Git for details on installing Git.

  4. Node.js 10 or higher. To check if Node.js is installed on your computer, type the following command:
node --version
v10.16.3

If Node.js is not installed, you can download the installer from the Downloads page.

  5. A Kubernetes cluster. If you don't have a running Kubernetes cluster, refer to the "Create a Kubernetes Cluster with Kind" section.
  6. A Quay.io account.

Moving Images from Docker to Podman

If you’ve just installed Podman on a system on which you’ve already used Docker to pull one or more images, you’ll notice that running the podman images command doesn’t show your Docker images:

docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
cassandra           latest              b571e0906e1b        10 days ago         324MB
podman images
REPOSITORY   TAG   IMAGE ID   CREATED   SIZE

You don't see your Docker images because Podman runs without root privileges, so its image store lives in your home directory, under ~/.local/share/containers. However, Podman can import an image directly from the Docker daemon running on your machine through the docker-daemon transport.
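
If you prefer not to go through the Docker daemon, an alternative (not used in the steps below) is to export the image to a tar archive with docker save and import it with podman load. A minimal sketch, using the cassandra image from the listing above:

sudo docker save cassandra:latest -o cassandra.tar
podman load -i cassandra.tar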

In this section, you’ll use Docker to pull the hello-world image. Then, you’ll import it into Podman. Lastly, you’ll run the hello-world image with Podman.

  1. Download and run the hello-world image by executing the following command:
sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:9572f7cdcee8591948c2963463447a53466950b3fc15a247fcad1917ca215a2f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
  2. The following docker images command lists the Docker images on your system, using the --format flag to print just the repository and tag:
sudo docker images --format '{{.Repository}}:{{.Tag}}'
hello-world:latest
  3. Enter the podman pull command, specifying the transport (docker-daemon) and the name of the image, separated by a colon:
podman pull docker-daemon:hello-world:latest
Getting image source signatures
Copying blob af0b15c8625b done
Copying config fce289e99e done
Writing manifest to image destination
Storing signatures
fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e
  1. Once you’ve imported the image, running the podman images command will display the hello-world image:
podman images
REPOSITORY                      TAG      IMAGE ID       CREATED         SIZE
docker.io/library/hello-world   latest   fce289e99eb9   13 months ago   5.94 kB
  5. To run the image, enter the following podman run command:
podman run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Creating a Basic Nuxt.js Project

For this tutorial, we'll create a simple web application using Nuxt.js, a progressive Vue-based framework that aims to provide a great developer experience. Then, in the next sections, you'll use Podman to create a container image for your project and push it to Quay.io. Lastly, you'll use Docker to run the container image.

  1. Nuxt.js is distributed as an npm package. To install it, fire up a terminal window and execute the following command:
npm install nuxt
+ nuxt@2.11.0
added 1067 packages from 490 contributors and audited 9750 packages in 75.666s
found 0 vulnerabilities

Note that the above output was truncated for brevity.

  2. With Nuxt.js installed on your computer, you can create a new bare-bones project:
npx create-nuxt-app podman-nuxtjs-demo

You will be prompted to answer a few questions:

create-nuxt-app v2.14.0
✨  Generating Nuxt.js project in podman-nuxtjs-demo
? Project name podman-nuxtjs-demo
? Project description Podman Nuxt.JS demo
? Author name Gcore
? Choose the package manager Npm
? Choose UI framework Bootstrap Vue
? Choose custom server framework None (Recommended)
? Choose Nuxt.js modules (Press <space> to select, <a> to toggle all, <i> to invert selection)
? Choose linting tools ESLint
? Choose test framework None
? Choose rendering mode Universal (SSR)
? Choose development tools jsconfig.json (Recommended for VS Code)

Once you answer these questions, npm will install the required dependencies:

🎉  Successfully created project podman-nuxtjs-demo

  To get started:

    cd podman-nuxtjs-demo
    npm run dev

  To build & start for production:

    cd podman-nuxtjs-demo
    npm run build
    npm run start

Note that the above output was truncated for brevity.

  3. Enter the following commands to start your new application:
cd podman-nuxtjs-demo/ && npm run dev
> podman-nuxtjs-demo@1.0.0 dev /home/vagrant/podman-nuxtjs-demo
> nuxt

   ╭─────────────────────────────────────────────╮
   │                                             │
   │   Nuxt.js v2.11.0                           │
   │   Running in development mode (universal)   │
   │                                             │
   │   Listening on: http://localhost:3000/      │
   │                                             │
   ╰─────────────────────────────────────────────╯
ℹ Preparing project for development                  14:39:30
ℹ Initial build may take a while                     14:39:30
✔ Builder initialized                                14:39:30
✔ Nuxt files generated                               14:39:30
✔ Client  Compiled successfully in 23.53s
✔ Server  Compiled successfully in 17.82s
ℹ Waiting for file changes                           14:39:56
ℹ Memory usage: 209 MB (RSS: 346 MB)                 14:39:56
  4. Point your browser to http://localhost:3000, and you should see something similar to the screenshot below:

Building a Container Image for Your Nuxt.js Project

In this section, we’ll look at how you can use Podman to build a container image for thepodman-nextjs-demo project.

  1. Create a file called Dockerfile and place the following content into it:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "run", "dev" ]

For a quick refresher on the above Dockerfile commands, refer to the Create a Docker Image section of the Debug a Node.js Application Running in a Docker Container tutorial.

  2. To avoid sending large files to the build context and to speed up the build, create a file called .dockerignore with the following content:
node_modules
npm-debug.log
.nuxt

As you can see, this is just a plain-text file containing the names of files and directories that Podman should exclude from the build.

  3. Build the image. Execute the following podman build command, specifying the -t flag with the tag Podman will apply to the built image:
podman build -t podman-nuxtjs-demo:podman .
STEP 1: FROM node:10
STEP 2: RUN mkdir -p /usr/src/nuxt-app
--> Using cache c7198c4f08b90ecb5575bbce23fc095e5c65fe5dc4b4f77b23192e2eae094d6f
STEP 3: WORKDIR /usr/src/nuxt-app
--> Using cache f1cc5aba3f36e122513c5ff0410f862d6099bcee886453f7fb30859f66e0ac78
STEP 4: COPY . /usr/src/nuxt-app/
--> Using cache fb4c322c98b41d446f5cceb88b3f9c451751d0cfe8ed9d0e6eb153919b498da3
STEP 5: RUN npm install
--> Using cache bb5324e79782b4522048dcc5f0f02c41b56e12198438aa59a7588a6824a435e1
STEP 6: RUN npm run build

> podman-nuxtjs-demo@1.0.0 build /usr/src/nuxt-app
> nuxt build

ℹ Production build
✔ Builder initialized
✔ Nuxt files generated
✔ Client  Compiled successfully in 2.95m
✔ Server  Compiled successfully in 10.91s

Hash: 7c4493c4d1c7b235dd8e
Version: webpack 4.41.6
Time: 177257ms
Built at: 02/11/2020 4:48:17 PM
                          Asset      Size  Chunks                         Chunk Names
../server/client.manifest.json  16.1 KiB          [emitted]
       7d497fe85470995d6e29.js  2.99 KiB       2  [emitted] [immutable]         pages/index
       848739217655a36af267.js   671 KiB       4  [emitted] [immutable]  [big]  vendors.app
       90036491716edfc3e86d.js   159 KiB       1  [emitted] [immutable]         commons.app
                      LICENSES  1.95 KiB          [emitted]
       b625f5fc00e8ff962762.js  2.31 KiB       3  [emitted] [immutable]         runtime
       eac7116f7d28455b0958.js    36 KiB       0  [emitted] [immutable]         app
 + 2 hidden assets
Entrypoint app = b625f5fc00e8ff962762.js 90036491716edfc3e86d.js 848739217655a36af267.js eac7116f7d28455b0958.js

WARNING in asset size limit: The following asset(s) exceed the recommended size limit (244 KiB).
This can impact web performance.
Assets:
  848739217655a36af267.js (671 KiB)

Hash: e3d9cfd644a086dc9c5b
Version: webpack 4.41.6
Time: 10916ms
Built at: 02/11/2020 4:48:29 PM
                  Asset       Size  Chunks             Chunk Names
d1d703b09adf296a453d.js   3.07 KiB       1  [emitted] [immutable]  pages/index
              server.js    222 KiB       0  [emitted]              app
   server.manifest.json  145 bytes          [emitted]
Entrypoint app = server.js
0d239b0083a60482b4b5fa60a99b96dd22045822e50fbd83b8a369d8179bf307
STEP 7: EXPOSE 3000
1d037e041dd4a8d6c94a9f6fb8fe6578f5e00d27aab9168bad83e7ab260bbeae
STEP 8: ENV NUXT_HOST=0.0.0.0
40d684a5441a8da38ed5198be722719f393be13a855a9e85cbc49e5c7155f7cc
STEP 9: ENV NUXT_PORT=3000
7d07961e058d66e172f4b9e01d50fb355c16060a990252c5bc7cd35d960f5f72
STEP 10: CMD ["npm", "run", "dev"]
STEP 11: COMMIT podman-nuxtjs-demo:podman
54c55a8a44f30105371652bc2c25e0fbba200ad6c945654077151194aa0a66fe
  4. At this point, you can check that everything went well with:
podman images
REPOSITORY                     TAG      IMAGE ID       CREATED              SIZE
localhost/podman-nuxtjs-demo   podman   54c55a8a44f3   About a minute ago   1.09 GB
docker.io/library/node         10       bb78c02ca3bf   4 days ago           937 MB
  5. To run the podman-nuxtjs-demo:podman container, enter the podman run command and pass it the following arguments:
  • -dt to specify that the container should be run in the background and that Podman should allocate a pseudo-TTY
  • -p with the port on the host (3000) that'll be forwarded to the container port (3000), separated by a colon.
  • The name of your image (podman-nuxtjs-demo:podman)
podman run -dt -p 3000:3000/tcp podman-nuxtjs-demo:podman

This prints the container ID to the console:

4de08084dd1d33fcdae96cd493b3eb20406ea89ce2a3e8dbc833b38c2243ce43
  6. You can list your running containers with:
podman ps
CONTAINER ID  IMAGE                                COMMAND      CREATED        STATUS            PORTS                   NAMES
4de08084dd1d  localhost/podman-nuxtjs-demo:podman  npm run dev  4 seconds ago  Up 4 seconds ago  0.0.0.0:3000->3000/tcp  objective_neumann
  7. To retrieve detailed information about your running container, enter the podman inspect command, specifying the container ID:
podman inspect 4de08084dd1d33fcdae96cd493b3eb20406ea89ce2a3e8dbc833b38c2243ce43
[
    {
        "Id": "4de08084dd1d33fcdae96cd493b3eb20406ea89ce2a3e8dbc833b38c2243ce43",
        "Created": "2020-02-11T17:00:06.819669549Z",
        "Path": "docker-entrypoint.sh",
        "Args": [
            "npm",
            "run",
            "dev"
        ],
        "State": {
            "OciVersion": "1.0.1-dev",
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 10637,
            "ConmonPid": 10628,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2020-02-11T17:00:07.317812139Z",
            "FinishedAt": "0001-01-01T00:00:00Z",
            "Healthcheck": {
                "Status": "",
                "FailingStreak": 0,
                "Log": null
            }
        },
        "Image": "54c55a8a44f30105371652bc2c25e0fbba200ad6c945654077151194aa0a66fe",
        "ImageName": "localhost/podman-nuxtjs-demo:podman",
        "Rootfs": "",
        "Pod": "",

Note that the above output was truncated for brevity.

  8. To retrieve the logs from your container, run the podman logs command, specifying the container ID or the --latest flag:
podman logs --latest
> podman-nuxtjs-demo@1.0.0 dev /usr/src/nuxt-app
> nuxt

   ╭─────────────────────────────────────────────╮
   │                                             │
   │   Nuxt.js v2.11.0                           │
   │   Running in development mode (universal)   │
   │                                             │
   │   Listening on: http://10.0.2.100:3000/     │
   │                                             │
   ╰─────────────────────────────────────────────╯
ℹ Preparing project for development
ℹ Initial build may take a while
✔ Builder initialized
✔ Nuxt files generated
✔ Client  Compiled successfully in 25.36s
✔ Server  Compiled successfully in 19.21s
ℹ Waiting for file changes
ℹ Memory usage: 254 MB (RSS: 342 MB)
  9. Display the list of running processes inside your container:
podman top 4de08084dd1d
USER   PID   PPID   %CPU     ELAPSED           TTY     TIME   COMMAND
root   1     0      0.000    3m52.098907307s   pts/0   0s     npm
root   17    1      0.000    3m51.099829437s   pts/0   0s     sh -c nuxt
root   18    17     11.683   3m51.099997015s   pts/0   27s    node /usr/src/nuxt-app/node_modules/.bin/nuxt

Push Your Podman Image to Quay.io

  1. First, you must generate an encrypted password. Point your browser to http://quay.io, and then navigate to the Account Settings page:
  2. On the Account Settings page, select Generate Encrypted Password:
  3. When prompted, enter your Quay.io password:
  4. From the sidebar on the left, select Docker Login. Then, copy your encrypted password:
  5. You can now log in to Quay.io. Enter the podman login command specifying:
  • The registry server (quay.io)
  • The -u flag with your username
  • The -p flag with the encrypted password you retrieved earlier
podman login quay.io -u <YOUR_USER_NAME> -p="<YOUR_ENCRYPTED_PASSWORD>"
Login Succeeded!
  6. To push the podman-nuxtjs-demo image to Quay.io, enter the following podman push command:
podman push podman-nuxtjs-demo:podman quay.io/andreipope/podman-nuxtjs-demo:podman
Getting image source signatures
Copying blob 69dfa7bd7a92 done
Copying blob 4d1ab3827f6b done
Copying blob 7948c3e5790c done
Copying blob 01727b1a72df done
Copying blob 03dc1830d2d5 done
Copying blob 1d7382716a27 done
Copying blob 062fc3317d1a done
Copying blob 3d36b8a4efb1 done
Copying blob 1708ebc408a9 done
Copying blob 0aacf878561f done
Copying blob c49b91e9cfd0 done
Copying blob 4294ef3571b7 done
Copying blob 1da55789948c done
Copying config 54c55a8a44 done
Writing manifest to image destination
Storing signatures

In the above command, do not forget to replace our username (andreipope) with yours.

  7. Point your browser to https://quay.io/, navigate to the podman-nuxtjs-demo repository, and make sure the repository is public:

Run Your Podman Image with Docker

Container images are compatible between Podman and Docker. In this section, you’ll use Docker to pull the podman-nuxtjs-demo image from Quay.io and run it. Ideally, you would want to run this on a different machine.

  1. You can log in to Quay.io by entering the docker login command and passing it the following parameters:
  • The -u flag with your username
  • The -p flag with your encrypted password (you retrieved it from Quay.io in the previous section)
  • The name of the registry (quay.io)
docker login -u="<YOUR_USER_NAME>" -p="<YOUR_ENCRYPTED_PASSWORD>" quay.io
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
  2. To run the podman-nuxtjs-demo image, you can use the following command:
docker run -dt -p 3000:3000/tcp  quay.io/andreipope/podman-nuxtjs-demo:podman
Unable to find image 'quay.io/andreipope/podman-nuxtjs-demo:podman' locally
podman: Pulling from andreipope/podman-nuxtjs-demo
03644a8453bd: Pull complete
e2c9fbbb35b2: Pull complete
0c33fe27c91c: Pull complete
957ac2567af6: Pull complete
934d2e09d84d: Pull complete
50c60e376f59: Pull complete
3c43a52a3ecc: Pull complete
e74942a3267a: Pull complete
af1466e8bc5b: Pull complete
3f24948a552e: Pull complete
df2fea35a007: Pull complete
7045f2526057: Pull complete
5090c2f6d806: Pull complete
Digest: sha256:fcf90cfc3fe1d0f7e975db8a271003cdd51d6f177e490eb39ec1e44d3659b815
Status: Downloaded newer image for quay.io/andreipope/podman-nuxtjs-demo:podman
1c0981690d66f2cd8cb77e9573f1dd4e9d7700869e08797b42fc33590d8baabf
  3. Wait a bit until Docker pulls the image and creates the container. Then, issue the docker ps command to display the list of running containers:
docker ps
CONTAINER ID        IMAGE                                          COMMAND                  CREATED             STATUS              PORTS                    NAMES
1c0981690d66        quay.io/andreipope/podman-nuxtjs-demo:podman   "docker-entrypoint.s…"   25 seconds ago      Up 20 seconds       0.0.0.0:3000->3000/tcp   practical_bose
  4. To make sure everything works as expected, point your browser to http://localhost:3000. You should see something similar to the screenshot below:

Creating Pods

Until now, you’ve used Podman similarly to how Docker is used. However, Podman brings a couple of new features such as the ability to create pods. A Pod is a group of tightly-coupled containers that share their storage and network resources. In a nutshell, you can use a Pod to model a logical host. In this section, we’ll walk you through the process of creating a Pod comprised of the podman-nuxtjs-demo container and a PostgreSQL database. Note that it is beyond the scope of this tutorial to show how you can configure the storage and network for your Pod.

  1. Create a Pod with the podman-nuxtjs-demo container. Enter the podman run command with the following arguments:
  • -dt to specify that the container should be run in the background and that Podman should allocate a pseudo-TTY
  • --pod with the name of your new Pod. The new: prefix tells Podman to create a new Pod; without it, Podman tries to attach the container to an existing Pod.
  • -p with the port on the host (3000) that'll be forwarded to the container port (3000), separated by a colon.
  • The name of your image (podman-nuxtjs-demo:podman)
podman run -dt --pod new:podman_demo -p 3000:3000/tcp quay.io/andreipope/podman-nuxtjs-demo:podman

This will print the identifier of your new Pod:

972c7c1db0c31a42ba4b41025078dfc6abb046f503aa413d6cca313068042041
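
As an aside, --pod new:podman_demo is shorthand for creating the Pod and attaching the container in one go. A rough equivalent, sketched as two separate steps (note that the port mapping belongs to the Pod, so -p moves to podman pod create):

podman pod create --name podman_demo -p 3000:3000/tcp
podman run -dt --pod podman_demo quay.io/andreipope/podman-nuxtjs-demo:podman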
  2. You can display the list of running Pods with the podman pod list command:
podman pod list
POD ID         NAME          STATUS    CREATED          # OF CONTAINERS   INFRA ID
d15a2abd9d5b   podman_demo   Running   32 seconds ago   2                 6a5bc0360ae2

In the output above, the number of containers is 2. This is because every Podman Pod includes a so-called Infra container, which does nothing except sleep. Its job is to hold the namespaces associated with the Pod so that Podman can attach other containers to it.
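
If you want to see the Infra container's ID and the Pod's shared configuration for yourself, you can inspect the Pod (a quick sketch; the exact fields vary by Podman version):

podman pod inspect podman_demo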

  3. Print the list of running containers by entering the podman ps command followed by the -a and -p flags. This lists all containers and prints the identifiers and names of the Pods your containers are associated with:
podman ps -ap
CONTAINER ID  IMAGE                                         COMMAND      CREATED         STATUS             PORTS                   NAMES               POD
972c7c1db0c3  quay.io/andreipope/podman-nuxtjs-demo:podman  npm run dev  56 seconds ago  Up 55 seconds ago  0.0.0.0:3000->3000/tcp  festive_yonath      d15a2abd9d5b
6a5bc0360ae2  k8s.gcr.io/pause:3.1                                       56 seconds ago  Up 55 seconds ago  0.0.0.0:3000->3000/tcp  d15a2abd9d5b-infra  d15a2abd9d5b

As you can see, the Infra container uses the k8s.gcr.io/pause image.

  4. Run the postgres:11-alpine image and associate it with the podman_demo Pod:
podman run -dt --pod podman_demo postgres:11-alpine
d395bed40988a953257b9501497c66b886b2fb6e81f48aa0ac89d7cfe2639b75
  5. This takes a bit of time to complete. Once everything is ready, you should see that the number of containers has increased to 3:
podman pod list
POD ID         NAME          STATUS    CREATED         # OF CONTAINERS   INFRA ID
d15a2abd9d5b   podman_demo   Running   8 minutes ago   3                 6a5bc0360ae2
  6. You can display the list of your running containers with the following podman ps command:
podman ps -ap
CONTAINER ID  IMAGE                                         COMMAND      CREATED        STATUS            PORTS                   NAMES               POD
ab5bd4810494  docker.io/library/postgres:11-alpine          postgres     5 minutes ago  Up 3 minutes ago  0.0.0.0:3000->3000/tcp  dreamy_jackson      d15a2abd9d5b
972c7c1db0c3  quay.io/andreipope/podman-nuxtjs-demo:podman  npm run dev  9 minutes ago  Up 9 minutes ago  0.0.0.0:3000->3000/tcp  festive_yonath      d15a2abd9d5b
6a5bc0360ae2  k8s.gcr.io/pause:3.1                                       9 minutes ago  Up 9 minutes ago  0.0.0.0:3000->3000/tcp  d15a2abd9d5b-infra  d15a2abd9d5b
  7. As an example, you can stop the podman-nuxtjs-demo container. The other containers in the Pod won't be affected, and the status of the Pod will still show as Running:
podman stop 972c7c1db0c3
972c7c1db0c31a42ba4b41025078dfc6abb046f503aa413d6cca313068042041
podman pod ps
POD ID         NAME          STATUS    CREATED          # OF CONTAINERS   INFRA ID
d15a2abd9d5b   podman_demo   Running   12 minutes ago   3                 6a5bc0360ae2
  8. To start the container again, enter the podman start command followed by the identifier of the container you want to start:
podman start 972c7c1db0c3
972c7c1db0c31a42ba4b41025078dfc6abb046f503aa413d6cca313068042041
  9. At this point, if you run the podman ps -ap command, you should see that the status of the podman-nuxtjs-demo container is now Up:
podman ps -ap
CONTAINER ID  IMAGE                                         COMMAND      CREATED         STATUS             PORTS                   NAMES               POD
ab5bd4810494  docker.io/library/postgres:11-alpine          postgres     7 minutes ago   Up 5 minutes ago   0.0.0.0:3000->3000/tcp  dreamy_jackson      d15a2abd9d5b
972c7c1db0c3  quay.io/andreipope/podman-nuxtjs-demo:podman  npm run dev  14 minutes ago  Up 54 seconds ago  0.0.0.0:3000->3000/tcp  festive_yonath      d15a2abd9d5b
6a5bc0360ae2  k8s.gcr.io/pause:3.1                                       14 minutes ago  Up 14 minutes ago  0.0.0.0:3000->3000/tcp  d15a2abd9d5b-infra  d15a2abd9d5b
  1. Lastly, let’s top the podman_demo pod:
podman pod stop podman_demo
d15a2abd9d5bcb6f403515c0ed4dd4cb7df252a87591a88975b5573eb7f20900
  11. Enter the following command to make sure your Pod is stopped:
podman pod ps
POD ID         NAME          STATUS    CREATED          # OF CONTAINERS   INFRA ID
d15a2abd9d5b   podman_demo   Stopped   17 minutes ago   3                 6a5bc0360ae2
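
Optionally, when you're done experimenting, you can remove the Pod and all of its containers in one step. The -f flag forcibly removes running containers, so use it with care:

podman pod rm -f podman_demo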

Generate a Kubernetes Pod Spec with Podman

Podman can take a snapshot of your container or Pod and generate a Kubernetes spec from it, making it easier to orchestrate your containers with Kubernetes. In this section, we'll illustrate how to use Podman to generate a Kubernetes spec and deploy your Pod to Kubernetes.

  1. To create a Kubernetes spec for a container and save it into a file called podman-nuxtjs-demo.yaml, run the following podman generate kube command:
podman generate kube <CONTAINER_ID> > podman-nuxtjs-demo.yaml
  1. Let’s take a look at what’s inside the podman-nuxtjs-demo.yaml file:
cat podman-nuxtjs-demo.yaml
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.6.4
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-02-12T05:21:44Z"
  labels:
    app: objectiveneumann
  name: objectiveneumann
spec:
  containers:
  - command:
    - npm
    - run
    - dev
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: HOSTNAME
    - name: container
      value: podman
    - name: NODE_VERSION
      value: 10.19.0
    - name: YARN_VERSION
      value: 1.21.1
    - name: NUXT_HOST
      value: 0.0.0.0
    - name: NUXT_PORT
      value: "3000"
    image: localhost/podman-nuxtjs-demo:podman
    name: objectiveneumann
    ports:
    - containerPort: 3000
      hostPort: 3000
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
    tty: true
    workingDir: /usr/src/nuxt-app
status: {}

There is a lot of output here, but the parts we’re interested in are:

  • metadata.labels.app and metadata.name. You’ll have to give them more meaningful names
  • spec.containers.image. Since in real life you’ll have to pull the images from a registry, you must replace localhost/podman-nuxtjs-demo:podman with the address of your Quay.io container image.
  3. Edit the content of the podman-nuxtjs-demo.yaml file to the following:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.6.4
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-02-12T05:24:44Z"
  labels:
    app: podman-nuxtjs-demo
  name: podman-nuxtjs-demo
spec:
  containers:
  - command:
    - npm
    - run
    - dev
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: HOSTNAME
    - name: container
      value: podman
    - name: NODE_VERSION
      value: 10.19.0
    - name: YARN_VERSION
      value: 1.21.1
    - name: NUXT_HOST
      value: 0.0.0.0
    - name: NUXT_PORT
      value: "3000"
    image: quay.io/andreipope/podman-nuxtjs-demo:podman
    name: objectiveneumann
    ports:
    - containerPort: 3000
      hostPort: 3000
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
    tty: true
    workingDir: /usr/src/nuxt-app
status: {}

The above spec uses the address of our container image – quay.io/andreipope/podman-nuxtjs-demo:podman. Make sure you replace this with your address.

  4. Now, if your Quay.io repository is private, Kubernetes must authenticate with the registry to pull the image. Point your browser to http://quay.io, and then navigate to the Settings section of your repository. Select Generate Encrypted Password, and you'll be asked to type your password. From the sidebar on the left, select Kubernetes Secret to download your Kubernetes secrets file:
  5. Next, you must reference this Kubernetes secret from podman-nuxtjs-demo.yaml. You can do this by adding a field similar to the one below:
imagePullSecrets:
  - name: andreipope-pull-secret

Note that the name of our Kubernetes secret is andreipope-pull-secret, but yours will be different.

At this point, your podman-nuxtjs-demo.yaml file should look something like the following:

# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.6.4
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-02-12T05:24:44Z"
  labels:
    app: podman-nuxtjs-demo
  name: podman-nuxtjs-demo
spec:
  containers:
  - command:
    - npm
    - run
    - dev
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: HOSTNAME
    - name: container
      value: podman
    - name: NODE_VERSION
      value: 10.19.0
    - name: YARN_VERSION
      value: 1.21.1
    - name: NUXT_HOST
      value: 0.0.0.0
    - name: NUXT_PORT
      value: "3000"
    image: quay.io/andreipope/podman-nuxtjs-demo:podman
    name: objectiveneumann
    ports:
    - containerPort: 3000
      hostPort: 3000
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
    tty: true
    workingDir: /usr/src/nuxt-app
  imagePullSecrets:
    - name: andreipope-pull-secret
status: {}
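
As an aside, the generated spec isn't only useful for Kubernetes: Podman itself can recreate the Pod from it on another machine with podman play kube. A minimal sketch, assuming the file has been copied over:

podman play kube podman-nuxtjs-demo.yaml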

Create a Kubernetes Cluster with Kind (Optional)

Kind is a tool for running local Kubernetes clusters using Docker container “nodes”. Follow the steps in this section if you don’t have a running Kubernetes cluster:

  1. Create a file called cluster.yaml with the following content:
# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
  2. Apply the spec:
kind create cluster --config cluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.16.3) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

This creates a Kubernetes cluster with a control plane and two worker nodes.
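
Before deploying, you can optionally verify that all three nodes are up and Ready (a quick sanity check; node names follow the kind-* pattern):

kubectl get nodes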

Deploying to Kubernetes

  1. Apply your Kubernetes pull secrets spec. Enter the kubectl create command specifying:
  • The -f flag with the name of the file (our example uses a file named andreipope-secret.yml)
  • The --namespace flag with the name of your namespace (we’re using the default namespace)
kubectl create -f andreipope-secret.yml --namespace=default
secret/andreipope-pull-secret created
  1. Now you’re ready to apply the podman-nuxt-js-demo spec:
kubectl apply -f podman-nuxtjs-demo.yaml
pod/podman-nuxtjs-demo created
  3. Monitor the status of your installation with:
kubectl get pods
NAME                 READY   STATUS              RESTARTS   AGE
podman-nuxtjs-demo   0/1     ContainerCreating   0          85s
  4. You can retrieve more details about the status of your installation by entering the kubectl describe pod command followed by the name of your Pod:
kubectl describe pod podman-nuxtjs-demo
Name:         podman-nuxtjs-demo
Namespace:    default
Priority:     0
Node:         kind-worker2/172.17.0.3
Start Time:   Wed, 12 Feb 2020 19:36:37 +0200
Labels:       app=podman-nuxtjs-demo
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"creationTimestamp":"2020-02-12T05:24:44Z","labels":{"app":"podman-nuxtjs-dem...
Status:       Pending
IP:
IPs:          <none>
Containers:
  objectiveneumann:
    Container ID:
    Image:         quay.io/andreipope/podman-nuxtjs-demo:podman
    Image ID:
    Port:          3000/TCP
    Host Port:     3000/TCP
    Command:
      npm
      run
      dev
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      memory:  1Gi
    Environment:
      PATH:          /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      TERM:          xterm
      HOSTNAME:
      container:     podman
      NODE_VERSION:  10.19.0
      YARN_VERSION:  1.21.1
      NUXT_HOST:     0.0.0.0
      NUXT_PORT:     3000
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rp6n5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-rp6n5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rp6n5
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                   Message
  ----    ------     ----  ----                   -------
  Normal  Scheduled  57s   default-scheduler      Successfully assigned default/podman-nuxtjs-demo to kind-worker2
  Normal  Pulling    55s   kubelet, kind-worker2  Pulling image "quay.io/andreipope/podman-nuxtjs-demo:podman"

As an alternative, you can list events with the following command:

kubectl get events
LAST SEEN   TYPE     REASON                    OBJECT                    MESSAGE
4m55s       Normal   RegisteredNode            node/kind-control-plane   Node kind-control-plane event: Registered Node kind-control-plane in Controller
4m37s       Normal   Starting                  node/kind-control-plane   Starting kube-proxy.
4m36s       Normal   NodeHasSufficientMemory   node/kind-worker          Node kind-worker status is now: NodeHasSufficientMemory
4m36s       Normal   NodeHasNoDiskPressure     node/kind-worker          Node kind-worker status is now: NodeHasNoDiskPressure
4m36s       Normal   NodeHasSufficientPID      node/kind-worker          Node kind-worker status is now: NodeHasSufficientPID
4m35s       Normal   RegisteredNode            node/kind-worker          Node kind-worker event: Registered Node kind-worker in Controller
4m15s       Normal   Starting                  node/kind-worker          Starting kube-proxy.
3m36s       Normal   NodeReady                 node/kind-worker          Node kind-worker status is now: NodeReady
4m34s       Normal   NodeHasSufficientMemory   node/kind-worker2         Node kind-worker2 status is now: NodeHasSufficientMemory
4m34s       Normal   NodeHasNoDiskPressure     node/kind-worker2         Node kind-worker2 status is now: NodeHasNoDiskPressure
4m34s       Normal   NodeHasSufficientPID      node/kind-worker2         Node kind-worker2 status is now: NodeHasSufficientPID
4m30s       Normal   RegisteredNode            node/kind-worker2         Node kind-worker2 event: Registered Node kind-worker2 in Controller
4m15s       Normal   Starting                  node/kind-worker2         Starting kube-proxy.
3m34s       Normal   NodeReady                 node/kind-worker2         Node kind-worker2 status is now: NodeReady
3m29s       Normal   Scheduled                 pod/podman-nuxtjs-demo    Successfully assigned default/podman-nuxtjs-demo to kind-worker2
3m27s       Normal   Pulling                   pod/podman-nuxtjs-demo    Pulling image "quay.io/andreipope/podman-nuxtjs-demo:podman"
  5. Wait a bit until the Pod is created. Then, you can list all Pods with:
kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
podman-nuxtjs-demo   1/1     Running   0          7m34s
  1. Now let’s forward all requests made to
    http://localhost:3000
    to port 3000 on the podman-nuxtjs-demo Pod:
kubectl port-forward pod/podman-nuxtjs-demo 3000:3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
Handling connection for 3000
Handling connection for 3000
Handling connection for 3000
Handling connection for 3000
Handling connection for 3000
Handling connection for 3000
Handling connection for 3000
  7. Point your browser to http://localhost:3000. If everything works well, you should see something like the following:

Congratulations on completing this tutorial! You now know enough to use Podman as a replacement for Docker. Stay tuned for our next tutorials, where, among many other things, you'll learn how to use Buildah.
Thanks for reading!

Explore Gcore Container as a Service

Related articles

Pre-configure your dev environment with Gcore VM init scripts

Provisioning new cloud instances can be repetitive and time-consuming if you’re doing everything manually: installing packages, configuring environments, copying SSH keys, and more. With cloud-init, you can automate these tasks and launch development-ready instances from the start.Gcore Edge Cloud VMs support cloud-init out of the box. With a simple YAML script, you can automatically set up a development-ready instance at boot, whether you’re launching a single machine or spinning up a fleet.In this guide, we’ll walk through how to use cloud-init on Gcore Edge Cloud to:Set a passwordInstall packages and system updatesAdd users and SSH keysMount disks and write filesRegister services or install tooling like Docker or Node.jsLet’s get started.What is cloud-init?cloud-init is a widely used tool for customizing cloud instances during the first boot. It reads user-provided configuration data—usually YAML—and uses it to run commands, install packages, and configure the system. In this article, we will focus on Linux-based virtual machines.How to use cloud-init on GcoreFor Gcore Cloud VMs, cloud-init scripts are added during instance creation using the User data field in the UI or API.Step 1: Create a basic scriptStart with a simple YAML script. Here’s one that updates packages and installs htop:#cloud-config package_update: true packages: - htop Step 2: Launch a new VM with your scriptGo to the Gcore Customer Portal, navigate to VMs, and start creating a new instance (or just click here). When you reach the Additional options section, enable the User data option. Then, paste in your YAML cloud-init script.Once the VM boots, it will automatically run the script. This works the same way for all supported Linux distributions available through Gcore.3 real-world examplesLet’s look at three examples of how you can use this.Example 1: Add a password for a specific userThe below script sets the for the default user of the selected operating system:#cloud-config password: <password> chpasswd: {expire: False} ssh_pwauth: True Example 2: Dev environment with Docker and GitThe following script does the following:Installs Docker and GitAdds a new user devuser with sudo privilegesAuthorizes an SSH keyStarts Docker at boot#cloud-config package_update: true packages: - docker.io - git users: - default - name: devuser sudo: ALL=(ALL) NOPASSWD:ALL groups: docker shell: /bin/bash ssh-authorized-keys: - ssh-rsa AAAAB3Nza...your-key-here runcmd: - systemctl enable docker - systemctl start docker Example 3: Install Node.js and clone a repoThis script installs Node.js and clones a GitHub repo to your Gcore VM at launch:#cloud-config packages: - curl runcmd: - curl -fsSL https://deb.nodesource.com/setup_18.x | bash - - apt-get install -y nodejs - git clone https://github.com/example-user/dev-project.git /home/devuser/project Reusing and versioning your scriptsTo avoid reinventing the wheel, keep your cloud-init scripts:In version control (e.g., Git)Templated for different environments (e.g., dev vs staging)Modular so you can reuse base blocks across projectsYou can also use tools like Ansible or Terraform with cloud-init blocks to standardize provisioning across your team or multiple Gcore VM environments.Debugging cloud-initIf your script doesn’t behave as expected, SSH into the instance and check the cloud-init logs:sudo cat /var/log/cloud-init-output.log This file shows each command as it ran and any errors that occurred.Other helpful logs:/var/log/cloud-init.log /var/lib/cloud/instance/user-data.txt Pro tip: Echo 
commands or write log files in your script to help debug tricky setups—especially useful if you’re automating multi-node workflows across Gcore Cloud.Tips and best practicesIndentation matters! YAML is picky. Use spaces, not tabs.Always start the file with #cloud-config.runcmd is for commands that run at the end of boot.Use write_files to write configs, env variables, or secrets.Cloud-init scripts only run on the first boot. To re-run, you’ll need to manually trigger cloud-init or re-create the VM.Automate it all with GcoreIf you're provisioning manually, you're doing it wrong. Cloud-init lets you treat your VM setup as code: portable, repeatable, and testable. Whether you’re spinning up ephemeral dev boxes or preparing staging environments, Gcore’s support for cloud-init means you can automate it all.For more on managing virtual machines with Gcore, check out our product documentation.Explore Gcore VM product docs

How to cut egress costs and speed up delivery using Gcore CDN and Object Storage

If you’re serving static assets (images, videos, scripts, downloads) from object storage, you’re probably paying more than you need to, and your users may be waiting longer than they should.In this guide, we explain how to front your bucket with Gcore CDN to cache static assets, cut egress bandwidth costs, and get faster TTFB globally. We’ll walk through setup (public or private buckets), signed URL support, cache control best practices, debugging tips, and automation with the Gcore API or Terraform.Why bother?Serving directly from object storage hits your origin for every request and racks up egress charges. With a CDN in front, cached files are served from edge—faster for users, and cheaper for you.Lower TTFB, better UXWhen content is cached at the edge, it doesn’t have to travel across the planet to get to your user. Gcore CDN caches your assets at PoPs close to end users, so requests don’t hit origin unless necessary. Once cached, assets are delivered in a few milliseconds.Lower billsMost object storage providers charge $80–$120 per TB in egress fees. By fronting your storage with a CDN, you only pay egress once per edge location—then it’s all cache hits after that. If you’re using Gcore Storage and Gcore CDN, there’s zero egress fee between the two.Caching isn’t the only way you save. Gcore CDN can also compress eligible file types (like HTML, CSS, JavaScript, and JSON) on the fly, further shrinking bandwidth usage and speeding up file delivery—all without any changes to your storage setup.Less origin traffic and less data to transfer means smaller bills. And your storage bucket doesn’t get slammed under load during traffic spikes.Simple scaling, globallyThe CDN takes the hit, not your bucket. That means fewer rate-limit issues, smoother traffic spikes, and more reliable performance globally. Gcore CDN spans the globe, so you’re good whether your users are in Tokyo, Toronto, or Tel Aviv.Setup guide: Gcore CDN + Gcore Object StorageLet’s walk through configuring Gcore CDN to cache content from a storage bucket. This works with Gcore Object Storage and other S3-compatible services.Step 1: Prep your bucketPublic? Check files are publicly readable (via ACL or bucket policy).Private? Use Gcore’s AWS Signature V4 support—have your access key, secret, region, and bucket name ready.Gcore Object Storage URL format: https://<bucket-name>.<region>.cloud.gcore.lu/<object> Step 2: Create CDN resource (UI or API)In the Gcore Customer Portal:Go to CDN > Create CDN ResourceChoose "Accelerate and protect static assets"Set a CNAME (e.g. cdn.yoursite.com) if you want to use your domainConfigure origin:Public bucket: Choose None for authPrivate bucket: Choose AWS Signature V4, and enter credentialsChoose HTTPS as the origin protocolGcore will assign a *.gcdn.co domain. If you’re using a custom domain, add a CNAME: cdn.yoursite.com CNAME .gcdn.co Here’s how it works via Terraform: resource "gcore_cdn_resource" "cdn" { cname = "cdn.yoursite.com" origin_group_id = gcore_cdn_origingroup.origin.id origin_protocol = "HTTPS" } resource "gcore_cdn_origingroup" "origin" { name = "my-origin-group" origin { source = "mybucket.eu-west.cloud.gcore.lu" enabled = true } } Step 3: Set caching behaviorSet Cache-Control headers in your object metadata: Cache-Control: public, max-age=2592000 Too messy to handle in storage? 
Override cache logic in Gcore:Force TTLs by path or extensionIgnore or forward query strings in cache keyStrip cookies (if unnecessary for cache decisions)Pro tip: Use versioned file paths (/img/logo.v3.png) to bust cache safely.Secure access with signed URLsWant your assets to be private, but still edge-cacheable? Use Gcore’s Secure Token feature:Enable Secure Token in CDN settingsSet a secret keyGenerate time-limited tokens in your appPython example: import base64, hashlib, time secret = 'your_secret' path = '/videos/demo.mp4' expires = int(time.time()) + 3600 string = f"{expires}{path} {secret}" token = base64.urlsafe_b64encode(hashlib.md5(string.encode()).digest()).decode().strip('=') url = f"https://cdn.yoursite.com{path}?md5={token}&expires={expires}" Signed URLs are verified at the CDN edge. Invalid or expired? Blocked before origin is touched.Optional: Bind the token to an IP to prevent link sharing.Debug and cache tuneUse curl or browser devtools: curl -I https://cdn.yoursite.com/img/logo.png Look for:Cache: HIT or MISSCache-ControlX-Cached-SinceCache not working? Check for the following errors:Origin doesn’t return Cache-ControlCDN override TTL not appliedCache key includes query strings unintentionallyYou can trigger purges from the Gcore Customer Portal or automate them via the API using POST /cdn/purge. Choose one of three ways:Purge all: Clear the entire domain’s cache at once.Purge by URL: Target a specific full path (e.g., /images/logo.png).Purge by pattern: Target a set of files using a wildcard at the end of the pattern (e.g., /videos/*).Monitor and optimize at scaleAfter rollout:Watch origin bandwidth dropCheck hit ratio (aim for >90%)Audit latency (TTFB on HIT vs MISS)Consider logging using Gcore’s CDN logs uploader to analyze cache behavior, top requested paths, or cache churn rates.For maximum savings, combine Gcore Object Storage with Gcore CDN: egress traffic between them is 100% free. That means you can serve cached assets globally without paying a cent in bandwidth fees.Using external storage? You’ll still slash egress costs by caching at the edge and cutting direct origin traffic—but you’ll unlock the biggest savings when you stay inside the Gcore ecosystem.Save money and boost performance with GcoreStill serving assets direct from storage? You’re probably wasting money and compromising performance on the table. Front your bucket with Gcore CDN. Set smart cache headers or use overrides. Enable signed URLs if you need control. Monitor cache HITs and purge when needed. Automate the setup with Terraform. Done.Next steps:Create your CDN resourceUse private object storage with Signature V4Secure your CDN with signed URLsCreate a free CDN resource now

Bare metal vs. virtual machines: performance, cost, and use case comparison

Choosing the right type of server infrastructure is critical to how your application performs, scales, and fits your budget. For most workloads, the decision comes down to two core options: bare metal servers and virtual machines (VMs). Both can be deployed in the cloud, but they differ significantly in terms of performance, control, scalability, and cost.In this article, we break down the core differences between bare metal and virtual servers, highlight when to choose each, and explain how Gcore can help you deploy the right infrastructure for your needs. If you want to learn about either BM or VMs in detail, we’ve got articles for those: here’s the one for bare metal, and here’s a deep dive into virtual machines.Bare metal vs. virtual machines at a glanceWhen evaluating whether bare metal or virtual machines are right for your company, consider your specific workload requirements, performance priorities, and business objectives. Here’s a quick breakdown to help you decide what works best for you.FactorBare metal serversVirtual machinesPerformanceDedicated resources; ideal for high-performance workloadsShared resources; suitable for moderate or variable workloadsScalabilityOften requires manual scaling; less flexibleHighly elastic; easy to scale up or downCustomizationFull control over hardware, OS, and configurationLimited by hypervisor and provider’s environmentSecurityIsolated by default; no hypervisor layerShared environment with strong isolation protocolsCostHigher upfront cost; dedicated hardwarePay-as-you-go pricing; cost-effective for flexible workloadsBest forHPC, AI/ML, compliance-heavy workloadsStartups, dev/test, fast-scaling applicationsAll about bare metal serversA bare metal server is a single-tenant physical server rented from a cloud provider. Unlike virtual servers, the hardware is not shared with other users, giving you full access to all resources and deeper control over configurations. You get exclusive access and control over the hardware via the cloud provider, which offers the stability and security needed for high-demand applications.The benefits of bare metal serversHere are some of the business advantages of opting for a bare metal server:Maximized performance: Because they are dedicated resources, bare metal servers provide top-tier performance without sharing processing power, memory, or storage with other users. This makes them ideal for resource-intensive applications like high-performance computing (HPC), big data processing, and game hosting.Greater control: Since you have direct access to the hardware, you can customize the server to meet your specific requirements. This is especially important for businesses with complex, specialized needs that require fine-tuned configurations.High security: Bare metal servers offer a higher level of security than their alternatives due to the absence of virtualization. 
With no shared resources or hypervisor layer, there’s less risk of vulnerabilities that come with multi-tenant environments.Dedicated resources: Because you aren’t sharing the server with other users, all server resources are dedicated to your application so that you consistently get the performance you need.Who should use bare metal servers?Here are examples of instances where bare metal servers are the best option for a business:High-performance computing (HPC)Big data processing and analyticsResource-intensive applications, such as AI/ML workloadsGame and video streaming serversBusinesses requiring enhanced security and complianceAll about virtual machinesA virtual server (or virtual machine) runs on top of a physical server that’s been partitioned by a cloud provider using a hypervisor. This allows multiple VMs to share the same hardware while remaining isolated from each other.Unlike bare metal servers, virtual machines share the underlying hardware with other cloud provider customers. That means you’re using (and paying for) part of one server, providing cost efficiency and flexibility.The benefits of virtual machinesHere are some advantages of using a shared virtual machine:Scalability: Virtual machines are ideal for businesses that need to scale quickly and are starting at a small scale. With cloud-based virtualization, you can adjust your server resources (CPU, memory, storage) on demand to match changing workloads.Cost efficiency: You pay only for the resources you use with VMs, making them cost-effective for companies with fluctuating resource needs, as there is no need to pay for unused capacity.Faster deployment: VMs can be provisioned quickly and easily, which makes them ideal for anyone who wants to deploy new services or applications fast.Who should use virtual machines?VMs are a great fit for the following:Web hosting and application hostingDevelopment and testing environmentsRunning multiple apps with varying demandsStartups and growing businesses requiring scalabilityBusinesses seeking cost-effective, flexible solutionsWhich should you choose?There’s no one-size-fits-all answer. Your choice should depend on the needs of your workload:Choose bare metal if you need dedicated performance, low-latency access to hardware, or tighter control over security and compliance.Choose virtual servers if your priority is flexible scaling, faster deployment, and optimized cost.If your application uses GPU-based inference or AI training, check out our dedicated guide to VM vs. BM for AI workloads.Get started with Gcore BM or VMs todayAt Gcore, we provide both bare metal and virtual machine solutions, offering flexibility, performance, and reliability to meet your business needs. Gcore Bare Metal has the power and reliability needed for demanding workloads, while Gcore Virtual Machines offers customizable configurations, free egress traffic, and flexibility.Compare Gcore BM and VM pricing now

Optimize your workload: a guide to selecting the best virtual machine configuration

Virtual machines (VMs) offer the flexibility, scalability, and cost-efficiency that businesses need to optimize workloads. However, choosing the wrong setup can lead to poor performance, wasted resources, and unnecessary costs.In this guide, we’ll walk you through the essential factors to consider when selecting the best virtual machine configuration for your specific workload needs.﹟1 Understand your workload requirementsThe first step in choosing the right virtual machine configuration is understanding the nature of your workload. Workloads can range from light, everyday tasks to resource-intensive applications. When making your decision, consider the following:Compute-intensive workloads: Applications like video rendering, scientific simulations, and data analysis require a higher number of CPU cores. Opt for VMs with multiple processors or CPUs for smoother performance.Memory-intensive workloads: Databases, big data analytics, and high-performance computing (HPC) jobs often need more RAM. Choose a VM configuration that provides sufficient memory to avoid memory bottlenecks.Storage-intensive workloads: If your workload relies heavily on storage, such as file servers or applications requiring frequent read/write operations, prioritize VM configurations that offer high-speed storage options, such as SSDs or NVMe.I/O-intensive workloads: Applications that require frequent network or disk I/O, such as cloud services and distributed applications, benefit from VMs with high-bandwidth and low-latency network interfaces.﹟2 Consider VM size and scalabilityOnce you understand your workload’s requirements, the next step is to choose the right VM size. VM sizes are typically categorized by the amount of CPU, memory, and storage they offer.Start with a baseline: Select a VM configuration that offers a balanced ratio of CPU, RAM, and storage based on your workload type.Scalability: Choose a VM size that allows you to easily scale up or down as your needs change. Many cloud providers offer auto-scaling capabilities that adjust your VM’s resources based on real-time demand, providing flexibility and cost savings.Overprovisioning vs. underprovisioning: Avoid overprovisioning (allocating excessive resources) unless your workload demands peak capacity at all times, as this can lead to unnecessary costs. Similarly, underprovisioning can affect performance, so finding the right balance is essential.﹟3 Evaluate CPU and memory considerationsThe central processing unit (CPU) and memory (RAM) are the heart of a virtual machine. The configuration of both plays a significant role in performance. Workloads that need high processing power, such as video encoding, machine learning, or simulations, will benefit from VMs with multiple CPU cores. However, be mindful of CPU architecture—look for VMs that offer the latest processors (e.g., Intel Xeon, AMD EPYC) for better performance per core.It’s also important that the VM has enough memory to avoid paging, which occurs when the system uses disk space as virtual memory, significantly slowing down performance. Consider a configuration with more RAM and support for faster memory types like DDR4 for memory-heavy applications.﹟4 Assess storage performance and capacityStorage performance and capacity can significantly impact the performance of your virtual machine, especially for applications requiring large data volumes. Key considerations include:Disk type: For faster read/write operations, opt for solid-state drives (SSDs) over traditional hard disk drives (HDDs). 
#5 Weigh up your network requirements

When working with cloud-based VMs, network performance is a critical consideration. High-speed, low-latency networking can make a difference for applications such as online gaming, video conferencing, and real-time analytics.

- Bandwidth: Check whether the VM configuration offers the necessary bandwidth for your workload. For applications that handle large data transfers, such as cloud backup or file servers, make sure that the network interface provides high throughput.
- Network latency: Low latency is crucial for applications where real-time performance is key (e.g., trading systems, gaming). Choose VMs with low-latency networking options to minimize delays and improve the user experience.
- Network isolation and security: Check if your VM configuration provides the necessary network isolation and security features, especially when handling sensitive data or operating in multi-tenant environments.

#6 Factor in cost considerations

While it's essential that your VM has the right configuration, cost is always an important factor to consider. Cloud providers typically charge based on the resources allocated, so optimizing for cost efficiency can significantly impact your budget. Consider whether a pay-as-you-go or reserved model (which offers discounted rates in exchange for a long-term commitment) fits your usage pattern. The reserved option can provide significant savings if your workload runs continuously. You can also use monitoring tools to track your VM's performance and resource usage over time. This data will help you make informed decisions about scaling up or down so you're not paying for unused resources.

#7 Evaluate security features

Security is a primary concern when selecting a VM configuration, especially for workloads handling sensitive data. Consider the following:

- Built-in security: Look for VMs that offer integrated security features such as DDoS protection, web application firewall (WAF), and encryption.
- Compliance: Check that the VM configuration meets industry standards and regulations, such as GDPR, ISO 27001, and PCI DSS.
- Network security: Evaluate the VM's network isolation capabilities and the availability of cloud firewalls to manage incoming and outgoing traffic.

#8 Consider geographic location

The geographic location of your VM can impact latency and compliance. Therefore, it's a good idea to choose VM locations that are geographically close to your end users to minimize latency and improve performance. In addition, it's essential to select VM locations that comply with local data sovereignty laws and regulations.

#9 Assess backup and recovery options

Backup and recovery are critical for maintaining data integrity and availability. Look for VMs that offer automated backup solutions so that data is regularly saved. You should also evaluate disaster recovery capabilities, including the ability to quickly restore data and applications in case of failure.

#10 Test and iterate

Finally, once you've chosen a VM configuration, testing its performance under real-world conditions is essential. Most cloud providers offer performance monitoring tools that allow you to assess how well your VM is meeting your workload requirements. If you notice any performance bottlenecks, be prepared to adjust the configuration. This could involve increasing CPU cores, adding more memory, or upgrading storage. Regular testing and fine-tuning keep your VM optimized.
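As a complement to your provider's dashboards, you can spot-check resource pressure from inside a Linux VM with standard utilities. This is a minimal sketch, assuming iostat is available via the sysstat package (vmstat and free ship with most distributions by default):

vmstat 5 5      # CPU, run queue, memory, and swap activity, sampled every 5 seconds
iostat -xz 5 3  # extended per-device stats; %util staying near 100 suggests a disk bottleneck
free -h         # remaining memory headroom in human-readable units

If swap usage keeps growing, the VM likely needs more RAM; if the run queue reported by vmstat stays consistently above the number of vCPUs, it likely needs more cores.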
Choosing a virtual machine that suits your requirements

Selecting the best virtual machine configuration is a key step toward running your workloads efficiently, cost-effectively, and without unnecessary performance bottlenecks. By understanding your workload's needs, considering factors like CPU, memory, storage, and network performance, and continuously monitoring resource usage, you can make informed decisions that lead to better outcomes and savings.

Whether you're running a small application or large-scale enterprise software, the right VM configuration can significantly improve performance and cost. Gcore offers a wide range of virtual machine options designed to meet diverse workload requirements, providing dedicated vCPUs, high-speed storage, and low-latency networking across 30+ global regions. You can scale compute resources on demand, benefit from free egress traffic, and enjoy flexible pricing models by paying only for the resources you use, maximizing the value of your cloud investments.

Contact us to discuss your VM needs

How to get the size of a directory in Linux

Understanding how to check directory size in Linux is critical for managing storage space efficiently, whether you're assessing the space a specific folder takes up or trying to prevent storage issues. This comprehensive guide covers the commands and tools you need to calculate and analyze directory sizes in a Linux environment. We will guide you step-by-step through three methods: du, ncdu, and ls -la. They're all effective, and each offers different benefits.

What is a Linux directory?

A Linux directory is a special type of file that functions as a container for storing files and subdirectories. It plays a key role in organizing the Linux file system by creating a hierarchical structure. This arrangement simplifies file management, making it easier to locate, access, and organize related files. Directories are fundamental components that help ensure smooth system operations by maintaining order and facilitating seamless file access in Linux environments.

#1 Get Linux directory size using the du command

Using the du command, you can easily determine a directory's size by displaying the disk space used by its files and subdirectories. The output can be customized to human-readable formats like kilobytes (KB), megabytes (MB), or gigabytes (GB).

Check the size of a specific directory in Linux

To get the size of a specific directory, open your terminal and type the following command:

du -sh /path/to/directory

In this command, replace /path/to/directory with the actual path of the directory you want to assess. The -s flag stands for "summary" and will only display the total size of the specified directory. The -h flag makes the output human-readable, showing sizes in a more understandable format.

Example: Here, we used the path /home/ubuntu/, where ubuntu is the name of our username directory. Running du against it returned 32K, indicating a size of 32 KB.

Check the size of all directories in Linux

To list the sizes of all files and directories within a given directory, use the following command:

sudo du -h /path/to/directory

Example: In this instance, we again used the path /home/ubuntu/, with ubuntu representing our username directory. Using du -h, we obtained an output listing all files and directories within that path. To find the largest subdirectories at a glance, see the sketch below.
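A common follow-up is ranking subdirectories by size to see what's consuming disk space. A minimal sketch combining du with sort and head, assuming GNU coreutils (standard on most Linux distributions); the path and the count of 10 are illustrative:

du -h --max-depth=1 /path/to/directory | sort -hr | head -n 10

The --max-depth=1 option limits the report to immediate subdirectories, and sort -hr understands the human-readable suffixes that du -h emits, so 1G correctly ranks above 500M. The largest entries appear at the top.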
#2 Get Linux directory size using ncdu

If you're looking for a more interactive and feature-rich approach to exploring directory sizes, consider using the ncdu (NCurses Disk Usage) tool. ncdu provides a visual representation of disk usage and allows you to navigate through directories, view size details, and identify large files with ease.

For Debian or Ubuntu, install it with this command:

sudo apt-get install ncdu

Once installed, run ncdu followed by the path to the directory you want to analyze:

ncdu /path/to/directory

This launches the ncdu interface, which shows a breakdown of file and subdirectory sizes, largest first. Use the arrow keys to navigate and explore the folders, and press q to exit the tool.

#3 Get Linux directory size using ls -la

You can alternatively use the ls command to list the files and directories within a directory. The options -l and -a modify its default behavior as follows:

- -l (long listing format): displays detailed information for each file and directory, including file permissions, the number of links, owner, group, file size, the timestamp of the last modification, and the file or directory name.
- -a (all files): instructs ls to include all files, including hidden files and directories; hidden files on Linux typically have names beginning with a . (dot).

So ls -la lists all files (including hidden ones) in long format, providing detailed information such as permissions, owner, group, size, and last modification time. This command is especially useful when you want to inspect file attributes or see hidden files and directories. When you enter ls -la and press Enter, each line of the output includes:

- File type and permissions (e.g., drwxr-xr-x): the first character indicates the file type (- for a regular file, d for a directory, l for a symbolic link). The next nine characters are permissions in groups of three (rwx): r = read, w = write, x = execute, shown for three classes of users: owner, group, and others.
- Number of links (e.g., 2): for regular files, this usually indicates the number of hard links; for directories, it often reflects subdirectory links (e.g., the . and .. entries).
- Owner and group (e.g., user group)
- File size (e.g., 4096 or 1045 bytes)
- Modification date and time (e.g., Jan 7 09:34)
- File name (e.g., .bashrc, notes.txt, Documents): files or directories that begin with a dot (.) are hidden (e.g., .bashrc)

Note that for directories, ls reports the size of the directory entry itself (often 4096 bytes), not the total size of its contents; use du when you need the latter.
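If you're using ls to hunt for large files, sorting the listing by size can help. A minimal sketch, assuming GNU ls; the path is an illustrative placeholder:

ls -lahS /path/to/directory

The -S flag sorts entries by file size, largest first, and -h prints sizes in human-readable units. Keep in mind that this ranks individual files; it does not total up subdirectory contents.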
Conclusion

That's it! You can now determine the size of a directory in Linux. Measuring directory sizes is a crucial skill for efficient storage management. Whether you choose the straightforward du command, use the visual advantages of the ncdu tool, or opt for the versatility of ls -la, this expertise helps you maintain an organized and efficient Linux environment.

Looking to deploy Linux in the cloud? With Gcore Edge Cloud, you can choose from a wide range of pre-configured virtual machines suitable for Linux:

- Affordable shared compute resources starting from €3.2 per month
- Deploy across 50+ cloud regions with dedicated servers for low-latency applications
- Secure apps and data with DDoS protection, WAF, and encryption at no additional cost

Get started today

How to Run Hugging Face Spaces on Gcore Inference at the Edge

Running machine learning models, especially large-scale models like GPT-3 or BERT, requires significant computing power and comes with high latency, which makes real-time applications resource-intensive and challenging to deliver. Running ML models at the edge is a lightweight approach that offers significant advantages for latency, privacy, and resource optimization. Gcore Inference at the Edge makes it simple to deploy and manage custom models efficiently, giving you the ability to deploy and scale your favorite Hugging Face models globally in just a few clicks.

In this guide, we'll walk you through how easy it is to harness the power of Gcore's edge AI infrastructure to deploy a Hugging Face Space model. Whether you're developing NLP solutions or cutting-edge computer vision applications, deploying at the edge has never been simpler or more powerful.

Step 1: Log In to the Gcore Customer Portal

Go to gcore.com and log in to the Gcore Customer Portal. If you don't yet have an account, go ahead and create one; it's free.

Step 2: Go to Inference at the Edge

In the Gcore Customer Portal, click Inference at the Edge in the left navigation menu. Then click Deploy custom model.

Step 3: Choose a Hugging Face Model

Open huggingface.com and browse the available models. Select the model you want to deploy, then navigate to its corresponding Hugging Face Space. Click Files in the Space and locate the Docker option, then copy the Docker image link and startup command from the Space.

Step 4: Deploy the Model on Gcore

Return to the Gcore Customer Portal deployment page and enter the following details:

- Model image URL: registry.hf.space/ethux-mistral-pixtral-demo:latest
- Startup command: python app.py
- Container port: 7860

Configure the pod as follows:

- GPU-optimized: 1x L40S
- vCPUs: 16
- RAM: 232GiB

For optimal performance, choose any available region for routing placement. Name your deployment and click Deploy.

Step 5: Interact with Your Model

Once the model is up and running, you'll be provided with an endpoint. You can now interact with the model via this endpoint to test and use your deployed model at the edge, as in the sketch below.
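As a quick smoke test, you can call the endpoint from the command line before wiring it into an application. A minimal sketch using curl; the URL is a hypothetical placeholder for the endpoint shown in your deployment details:

curl -i https://your-deployment-endpoint.example.com/

A 200 response confirms that the container is serving traffic. Since Spaces like this one typically expose a web UI on the container port, opening the same URL in a browser is usually the easiest way to interact with the deployed model.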
Powerful, Simple AI Deployment with Gcore

Gcore Inference at the Edge is the future of AI deployment, combining the ease of Hugging Face integration with the robust infrastructure needed for real-time, scalable, and global solutions. By leveraging edge computing, you can optimize model performance and simultaneously futureproof your business in a world that increasingly demands fast, secure, and localized AI applications.

Deploying models to the edge allows you to capitalize on real-time insights, improve customer experiences, and outpace your competitors. Whether you're leading a team of developers or spearheading a new AI initiative, Gcore Inference at the Edge offers the tools you need to innovate at the speed of tomorrow.

Explore Gcore Inference at the Edge