
Create Serverless Functions with OpenFaaS

  • By Gcore
  • April 7, 2023
  • 13 min read

OpenFaaS is a serverless functions framework that runs on top of Docker and Kubernetes. In this tutorial, you’ll learn how to:

  • Deploy OpenFaaS to a Kubernetes cluster
  • Set up the OpenFaaS CLI
  • Create, build, and deploy serverless functions using the CLI
  • Invoke serverless functions using the CLI
  • Update an existing serverless function
  • Deploy serverless functions using the web interface
  • Monitor your serverless functions with Prometheus and Grafana

Prerequisites

  • A Kubernetes cluster. If you don’t have a running Kubernetes cluster, follow the instructions from the Set Up a Kubernetes Cluster with Kind section below.
  • A Docker Hub Account. See the Docker Hub page for details about creating a new account.
  • kubectl. Refer to the Install and Set Up kubectl page for details about installing kubectl.
  • Node.js 10 or higher. To check if Node.js is installed on your computer, type the following command:
node --version

The following example output shows that Node.js is installed on your computer:

v10.16.3

If Node.js is not installed or you’re running an older version, you can download the installer from the Downloads page.

  • This tutorial assumes basic familiarity with Docker and Kubernetes.
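The Node.js version check above can also be expressed in code. The following sketch is purely illustrative (the `isSupported` helper is not part of the tutorial); it parses the major version out of the `vMAJOR.MINOR.PATCH` string that `node --version` prints:

```javascript
// Hypothetical helper: extract the major version from a "vMAJOR.MINOR.PATCH"
// string and compare it against the minimum required by this tutorial.
const isSupported = (version) => parseInt(version.slice(1).split(".")[0], 10) >= 10

console.log(isSupported("v10.16.3"))      // → true
console.log(isSupported("v8.17.0"))       // → false
console.log(isSupported(process.version)) // checks the Node.js running this script
```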

Set Up a Kubernetes Cluster with Kind (Optional)

With Kind, you can run a local Kubernetes cluster using Docker containers as nodes. The steps in this section are optional. Follow them only if you don’t have a running Kubernetes cluster.

  1. Create a file named openfaas-cluster.yaml, and copy in the following spec:
# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
  2. Use the kind create cluster command to create a Kubernetes cluster with one control plane and two worker nodes:
kind create cluster --config openfaas-cluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.17.0) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊

Deploy OpenFaaS to a Kubernetes Cluster

You can install OpenFaaS using Helm, plain YAML files, or its own installer, arkade, which provides a quick and easy way to get OpenFaaS running. In this section, you’ll deploy OpenFaaS with arkade.

  1. Enter the following command to install arkade:
curl -sLS https://dl.get-arkade.dev | sudo sh
Downloading package https://github.com/alexellis/arkade/releases/download/0.1.10/arkade-darwin as /Users/andrei/Desktop/openFaaS/faas-hello-world/arkade-darwin
Download complete.
Running with sufficient permissions to attempt to move arkade to /usr/local/bin
New version of arkade installed to /usr/local/bin
Creating alias 'ark' for 'arkade'.

Get Kubernetes apps the easy way

Version: 0.1.10
Git Commit: cf96105d37ed97ed644ab56c0660f0d8f4635996
  2. Now, install OpenFaaS with:
arkade install openfaas
Using kubeconfig: /Users/andrei/.kube/config
Using helm3
Node architecture: "amd64"
Client: "x86_64", "Darwin"
2020/03/10 16:20:40 User dir established as: /Users/andrei/.arkade/
https://get.helm.sh/helm-v3.1.1-darwin-amd64.tar.gz
/Users/andrei/.arkade/bin/helm3/darwin-amd64 darwin-amd64/
/Users/andrei/.arkade/bin/helm3/README.md darwin-amd64/README.md
/Users/andrei/.arkade/bin/helm3/LICENSE darwin-amd64/LICENSE
/Users/andrei/.arkade/bin/helm3/helm darwin-amd64/helm
2020/03/10 16:20:43 extracted tarball into /Users/andrei/.arkade/bin/helm3: 3 files, 0 dirs (1.633976582s)
"openfaas" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ibm-charts" chart repository
...Successfully got an update from the "openfaas" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈ Happy Helming!⎈
VALUES values.yaml
Command: /Users/andrei/.arkade/bin/helm3/helm [upgrade --install openfaas openfaas/openfaas --namespace openfaas --values /var/folders/nz/2gtkncgx56sgrpqvr40qhhrw0000gn/T/charts/openfaas/values.yaml --set gateway.directFunctions=true --set faasnetes.imagePullPolicy=Always --set gateway.replicas=1 --set queueWorker.replicas=1 --set clusterRole=false --set operator.create=false --set openfaasImagePullPolicy=IfNotPresent --set basicAuthPlugin.replicas=1 --set basic_auth=true --set serviceType=NodePort]
Release "openfaas" does not exist. Installing it now.
NAME: openfaas
LAST DEPLOYED: Tue Mar 10 16:21:03 2020
NAMESPACE: openfaas
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that openfaas has started, run:

  kubectl -n openfaas get deployments -l "release=openfaas, app=openfaas"

=======================================================================
= OpenFaaS has been installed.                                        =
=======================================================================

# Get the faas-cli
curl -SLsf https://cli.openfaas.com | sudo sh

# Forward the gateway to your machine
kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# If basic auth is enabled, you can now log into your gateway:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin

faas-cli store deploy figlet
faas-cli list

# For Raspberry Pi
faas-cli store list \
 --platform armhf

faas-cli store deploy figlet \
 --platform armhf

# Find out more at:
# https://github.com/openfaas/faas

Thanks for using arkade!
  3. To verify that the deployments were created, run the kubectl get deployments command. Specify the namespace and the selector using the -n and -l parameters as follows:
kubectl get deployments -n openfaas -l "release=openfaas, app=openfaas"

If the deployments are not yet ready, you should see something similar to the following example output:

NAME                READY   UP-TO-DATE   AVAILABLE   AGE
alertmanager        0/1     1            0           45s
basic-auth-plugin   1/1     1            1           45s
faas-idler          0/1     1            0           45s
gateway             0/1     1            0           45s
nats                1/1     1            1           45s
prometheus          1/1     1            1           45s
queue-worker        1/1     1            1           45s

Once the installation is finished, the output should look like this:

NAME                READY   UP-TO-DATE   AVAILABLE   AGE
alertmanager        1/1     1            1           75s
basic-auth-plugin   1/1     1            1           75s
faas-idler          1/1     1            1           75s
gateway             1/1     1            1           75s
nats                1/1     1            1           75s
prometheus          1/1     1            1           75s
queue-worker        1/1     1            1           75s
  4. Check the rollout status of the gateway deployment:
kubectl rollout status -n openfaas deploy/gateway

The following example output shows that the gateway deployment has been successfully rolled out:

deployment "gateway" successfully rolled out
  5. Use the kubectl port-forward command to forward all requests made to http://localhost:8080 to the pod running the gateway service:
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
[1] 78674
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

Note that the trailing ampersand (&) runs the process in the background. You can use the jobs command to check the status of your background processes:

jobs
[1]  + running    kubectl port-forward -n openfaas svc/gateway 8080:8080
  6. Issue the following command to retrieve your password and save it into an environment variable named PASSWORD:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
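The jsonpath output is base64-encoded, which is why the command pipes it through base64 --decode. That decoding step can be sketched in Node.js (the secret value below is made up for illustration, not a real gateway password):

```javascript
// Kubernetes stores secret values base64-encoded. "czNjcjN0" is a made-up
// example value; a real basic-auth secret would come from kubectl.
const encoded = "czNjcjN0"
const decoded = Buffer.from(encoded, "base64").toString("utf8")
console.log(decoded) // → s3cr3t
```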

Set Up the OpenFaaS CLI

OpenFaaS provides a command-line utility you can use to build and deploy your serverless functions. You can install it by following the steps from the Installation page.

Create a Serverless Function Using the CLI

Now that OpenFaaS and the faas-cli command-line utility are installed, you can create and deploy serverless functions using the built-in template engine. OpenFaaS provides two types of templates:

  • The Classic templates are based on the Classic Watchdog and use stdio to communicate with your serverless function. Refer to the Watchdog page for more details about how OpenFaaS Watchdog works.
  • The of-watchdog templates use HTTP to communicate with your serverless function. These templates are available through the OpenFaaS Incubator GitHub repository.

In this tutorial, you’ll use a classic template.
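Before generating a template, it helps to see the shape of a Classic-style handler: the watchdog reads the HTTP request body, passes it to your function, and returns the resolved value as the response. The sketch below is a simplified model (the real template wires the handler through its own index.js), so treat the names as illustrative:

```javascript
// A handler in the style of the Classic Node.js template: an async function
// that receives the request payload and returns the response body.
const handler = async (context) => {
  return { status: "done", received: context.trim() }
}

// Outside OpenFaaS, the handler can be exercised directly:
handler("Berlin\n").then((res) => console.log(JSON.stringify(res)))
```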

  1. Run the following command to see the templates available in the official store:
faas-cli template store list
NAME                     SOURCE             DESCRIPTION
csharp                   openfaas           Classic C# template
dockerfile               openfaas           Classic Dockerfile template
go                       openfaas           Classic Golang template
java8                    openfaas           Classic Java 8 template
node                     openfaas           Classic NodeJS 8 template
php7                     openfaas           Classic PHP 7 template
python                   openfaas           Classic Python 2.7 template
python3                  openfaas           Classic Python 3.6 template
python3-dlrs             intel              Deep Learning Reference Stack v0.4 for ML workloads
ruby                     openfaas           Classic Ruby 2.5 template
node10-express           openfaas-incubator Node.js 10 powered by express template
ruby-http                openfaas-incubator Ruby 2.4 HTTP template
python27-flask           openfaas-incubator Python 2.7 Flask template
python3-flask            openfaas-incubator Python 3.6 Flask template
python3-http             openfaas-incubator Python 3.6 with Flask and HTTP
node8-express            openfaas-incubator Node.js 8 powered by express template
golang-http              openfaas-incubator Golang HTTP template
golang-middleware        openfaas-incubator Golang Middleware template
python3-debian           openfaas           Python 3 Debian template
powershell-template      openfaas-incubator Powershell Core Ubuntu:16.04 template
powershell-http-template openfaas-incubator Powershell Core HTTP Ubuntu:16.04 template
rust                     booyaa             Rust template
crystal                  tpei               Crystal template
csharp-httprequest       distantcam         C# HTTP template
csharp-kestrel           burtonr            C# Kestrel HTTP template
vertx-native             pmlopes            Eclipse Vert.x native image template
swift                    affix              Swift 4.2 Template
lua53                    affix              Lua 5.3 Template
vala                     affix              Vala Template
vala-http                affix              Non-Forking Vala Template
quarkus-native           pmlopes            Quarkus.io native image template
perl-alpine              tmiklas            Perl language template based on Alpine image
node10-express-service   openfaas-incubator Node.js 10 express.js microservice template
crystal-http             koffeinfrei        Crystal HTTP template
rust-http                openfaas-incubator Rust HTTP template
bash-streaming           openfaas-incubator Bash Streaming template

☞ Note that you can specify an alternative store for templates. The following example command lists the templates from a repository named andreipope:

faas-cli template store list -u https://raw.githubusercontent.com/andreipope/my-custom-store/master/templates.json
  2. Download the official templates locally:
faas-cli template pull
Fetch templates from repository: https://github.com/openfaas/templates.git at master
2020/03/11 20:51:22 Attempting to expand templates from https://github.com/openfaas/templates.git
2020/03/11 20:51:25 Fetched 19 template(s) : [csharp csharp-armhf dockerfile go go-armhf java11 java11-vert-x java8 node node-arm64 node-armhf node12 php7 python python-armhf python3 python3-armhf python3-debian ruby] from https://github.com/openfaas/templates.git

☞ By default, the above command downloads the templates from the OpenFaaS official GitHub repository. If you want to use a custom repository, then you should specify the URL of your repository. The following example command pulls the templates from a repository named andreipope:

faas-cli template pull https://github.com/andreipope/my-custom-store/
  3. To create a new serverless function, run the faas-cli new command specifying:
  • The name of your new function (appfleet-hello-world)
  • The --lang parameter followed by the programming language template (node)
faas-cli new appfleet-hello-world --lang node
Folder: appfleet-hello-world created.

Function created in folder: appfleet-hello-world
Stack file written: appfleet-hello-world.yml

Notes:
You have created a new function which uses Node.js 12.13.0 and the OpenFaaS
Classic Watchdog.

npm i --save can be used to add third-party packages like request or cheerio
npm documentation: https://docs.npmjs.com/

For high-throughput services, we recommend you use the node12 template which
uses a different version of the OpenFaaS watchdog.

At this point, your directory structure should look like the following:

tree . -L 2
.
├── appfleet-hello-world
│   ├── handler.js
│   └── package.json
├── appfleet-hello-world.yml
└── template
    ├── csharp
    ├── csharp-armhf
    ├── dockerfile
    ├── go
    ├── go-armhf
    ├── java11
    ├── java11-vert-x
    ├── java8
    ├── node
    ├── node-arm64
    ├── node-armhf
    ├── node12
    ├── php7
    ├── python
    ├── python-armhf
    ├── python3
    ├── python3-armhf
    ├── python3-debian
    └── ruby

21 directories, 3 files

Things to note:

  • The appfleet-hello-world/handler.js file contains the code of your serverless function. You can use the cat command to list the contents of this file:
cat appfleet-hello-world/handler.js
"use strict"

module.exports = async (context, callback) => {
    return {status: "done"}
}
  • You can specify the dependencies required by your serverless function in the package.json file. The automatically generated file is just an empty shell:
cat appfleet-hello-world/package.json
{
  "name": "function",
  "version": "1.0.0",
  "description": "",
  "main": "handler.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
  • The spec of the appfleet-hello-world function is stored in the appfleet-hello-world.yml file:
cat appfleet-hello-world.yml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  appfleet-hello-world:
    lang: node
    handler: ./appfleet-hello-world
    image: appfleet-hello-world:latest

Build Your Serverless Function

  1. Open the appfleet-hello-world.yml file in a plain-text editor, and update the image field by prepending your Docker Hub user name to it. The following example prepends my username (andreipopescu12) to the image field:
image: andreipopescu12/appfleet-hello-world:latest

Once you’ve made this change, the appfleet-hello-world.yml file should look similar to the following:

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  appfleet-hello-world:
    lang: node
    handler: ./appfleet-hello-world
    image: <YOUR-DOCKER-HUB-ACCOUNT>/appfleet-hello-world:latest
  2. Build the function. Enter the faas-cli build command specifying the -f argument with the name of the YAML file you edited in the previous step (appfleet-hello-world.yml):
faas-cli build -f appfleet-hello-world.yml
[0] > Building appfleet-hello-world.
Clearing temporary build folder: ./build/appfleet-hello-world/
Preparing: ./appfleet-hello-world/ build/appfleet-hello-world/function
Building: andreipopescu12/appfleet-hello-world:latest with node template. Please wait..
Sending build context to Docker daemon  10.24kB
Step 1/24 : FROM openfaas/classic-watchdog:0.18.1 as watchdog
 ---> 94b5e0bef891
Step 2/24 : FROM node:12.13.0-alpine as ship
 ---> 69c8cc9212ec
Step 3/24 : COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
 ---> Using cache
 ---> ebab4b723c16
Step 4/24 : RUN chmod +x /usr/bin/fwatchdog
 ---> Using cache
 ---> 7952724b5872
Step 5/24 : RUN addgroup -S app && adduser app -S -G app
 ---> Using cache
 ---> 33c7f04595d2
Step 6/24 : WORKDIR /root/
 ---> Using cache
 ---> 77b9dee16c79
Step 7/24 : ENV NPM_CONFIG_LOGLEVEL warn
 ---> Using cache
 ---> a3d3c0bb4480
Step 8/24 : RUN mkdir -p /home/app
 ---> Using cache
 ---> 65457e03fcb1
Step 9/24 : WORKDIR /home/app
 ---> Using cache
 ---> 50ab672e5660
Step 10/24 : COPY package.json ./
 ---> Using cache
 ---> 6143e79de873
Step 11/24 : RUN npm i --production
 ---> Using cache
 ---> a41566487c6e
Step 12/24 : COPY index.js ./
 ---> Using cache
 ---> 566633e78d2c
Step 13/24 : WORKDIR /home/app/function
 ---> Using cache
 ---> 04c9de75f170
Step 14/24 : COPY function/*.json ./
 ---> Using cache
 ---> 85cf909b646a
Step 15/24 : RUN npm i --production || :
 ---> Using cache
 ---> c088cbcad583
Step 16/24 : COPY --chown=app:app function/ .
 ---> Using cache
 ---> 192db89e5941
Step 17/24 : WORKDIR /home/app/
 ---> Using cache
 ---> ee2b7d7e8bd4
Step 18/24 : RUN chmod +rx -R ./function     && chown app:app -R /home/app     && chmod 777 /tmp
 ---> Using cache
 ---> 81831389293e
Step 19/24 : USER app
 ---> Using cache
 ---> ca0cade453f5
Step 20/24 : ENV cgi_headers="true"
 ---> Using cache
 ---> afe8d7413349
Step 21/24 : ENV fprocess="node index.js"
 ---> Using cache
 ---> 5471cfe85461
Step 22/24 : EXPOSE 8080
 ---> Using cache
 ---> caaa8ae11dc7
Step 23/24 : HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
 ---> Using cache
 ---> 881b4d2adb92
Step 24/24 : CMD ["fwatchdog"]
 ---> Using cache
 ---> 82b586f039df
Successfully built 82b586f039df
Successfully tagged andreipopescu12/appfleet-hello-world:latest
Image: andreipopescu12/appfleet-hello-world:latest built.
[0] < Building appfleet-hello-world done in 2.25s.
[0] Worker done.
Total build time: 2.25s
  3. You can list your Docker images with:
docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
andreipopescu12/appfleet-hello-world   latest              82b586f039df        25 minutes ago      96MB

Push Your Image to Docker Hub

  1. Log in to Docker Hub. Run the docker login command with the --username flag followed by your Docker Hub user name. The following example command logs you in as andreipopescu12:
docker login --username andreipopescu12

Next, you will be prompted to enter your Docker Hub password:

Password:
Login Succeeded
  2. Use the faas-cli push command to push your serverless function to Docker Hub:
faas-cli push -f appfleet-hello-world.yml
The push refers to repository [docker.io/andreipopescu12/appfleet-hello-world]
073c41b18852: Pushed
a5c05e98c215: Pushed
f749ad113dce: Pushed
e4f29400b370: Pushed
b7d0eb42e645: Pushed
84fba0eb2756: Pushed
cf2a3f2bc398: Pushed
942d3272b7d4: Pushed
037b653b7d4e: Pushed
966655dc62be: Pushed
08d8e0925a73: Pushed
6ce16b164ed0: Pushed
d76ecd300100: Pushed
77cae8ab23bf: Pushed
latest: digest: sha256:4150d4cf32e7e5ffc8fd15efeed16179bbf166536f1cc7a8c4105d01a4042928 size: 3447
[0] < Pushing appfleet-hello-world [andreipopescu12/appfleet-hello-world:latest] done.
[0] Worker done.

Deploy Your Function Using the CLI

  1. With your serverless function pushed to Docker Hub, log in to your local instance of the OpenFaaS gateway by entering the following command:
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
  2. Run the faas-cli deploy command to deploy your serverless function:
faas-cli deploy -f appfleet-hello-world.yml
Deploying: appfleet-hello-world.
WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
Handling connection for 8080
Handling connection for 8080
Deployed. 202 Accepted.
URL: http://127.0.0.1:8080/function/appfleet-hello-world

☞ OpenFaaS provides an auto-scaling mechanism based on the number of requests per second, which is read from Prometheus. For the sake of simplicity, we won’t cover auto-scaling in this tutorial. To further your knowledge, you can refer to the Auto-scaling page.
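As a toy illustration of the idea behind that mechanism (not the actual OpenFaaS or Prometheus code), a requests-per-second rate can be derived from two samples of a monotonically increasing invocation counter:

```javascript
// Each sample pairs a Unix timestamp (in seconds) with a counter reading,
// such as the gateway's total invocation count for one function.
const ratePerSecond = (earlier, later) =>
  (later.value - earlier.value) / (later.timestamp - earlier.timestamp)

const t0 = { timestamp: 100, value: 40 }
const t1 = { timestamp: 110, value: 90 }
console.log(ratePerSecond(t0, t1)) // → 5 requests per second
```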

  3. Use the faas-cli list command to list the functions deployed to your local OpenFaaS gateway:
faas-cli list
Function                      	Invocations    	Replicas
appfleet-hello-world          	0              	1

☞ Note that you can also list the functions deployed to a different gateway by providing the URL of the gateway as follows:

faas-cli list --gateway https://<YOUR-GATEWAY-URL>:<YOUR-GATEWAY-PORT>
  4. You can use the faas-cli describe command to retrieve more details about the appfleet-hello-world function:
faas-cli describe appfleet-hello-world
Name:                appfleet-hello-world
Status:              Ready
Replicas:            1
Available replicas:  1
Invocations:         1
Image:               andreipopescu12/appfleet-hello-world:latest
Function process:    node index.js
URL:                 http://127.0.0.1:8080/function/appfleet-hello-world
Async URL:           http://127.0.0.1:8080/async-function/appfleet-hello-world
Labels:              faas_function : appfleet-hello-world
Annotations:         prometheus.io.scrape : false

Invoke Your Serverless Function Using the CLI

  1. To see your serverless function in action, issue the faas-cli invoke command, specifying:
  • The -f flag with the name of the YAML file that describes your function (appfleet-hello-world.yml)
  • The name of your function (appfleet-hello-world)
faas-cli invoke -f appfleet-hello-world.yml appfleet-hello-world
Reading from STDIN - hit (Control + D) to stop.
  2. Type CTRL+D. The following example output shows that your serverless function works as expected:
appfleet
Handling connection for 8080
{"status":"done"}

Update Your Function

The function you created, deployed, and then invoked in the previous sections is just an empty shell. In this section, we’ll update it to:

  • Read the name of a city from stdin
  • Fetch the weather forecast from openweathermap.org
  • Print the weather forecast to the console
  1. Create an OpenWeatherMap account by following the instructions from the Sign Up page.
  2. Log in to OpenWeatherMap and then select API KEYS:
  3. From here, you can either copy the value of the default key or create a new API key, and then copy its value:
  4. Now that you have an OpenWeatherMap API key, you must use npm to install a few dependencies. The following command moves into the appfleet-hello-world directory and then installs the get-stdin and request packages:
cd appfleet-hello-world && npm i --save get-stdin request
  5. Replace the content of the handler.js file with:
"use strict"

const getStdin = require('get-stdin')
const request = require('request');

let handler = (req) => {
  request(`http://api.openweathermap.org/data/2.5/weather?q=${req}&units=metric&APPID=<YOUR-OPENWEATHERMAP-API-KEY>`, function (error, response, body) {
    console.error('error:', error)
    console.log('statusCode:', response && response.statusCode)
    console.log('body:', JSON.stringify(body))
  })
};

getStdin().then(val => {
   handler(val);
}).catch(e => {
  console.error(e.stack);
});

module.exports = handler

☞ To try this function, replace <YOUR-OPENWEATHERMAP-API-KEY> with your OpenWeatherMap API key.
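If you want the function to log something friendlier than the raw body, you could post-process the response before printing it. The summarize helper below is hypothetical (it is not part of the tutorial's handler); its field names follow the shape of an OpenWeatherMap /data/2.5/weather response, in which temperatures default to Kelvin:

```javascript
// Hypothetical helper: turn an OpenWeatherMap /data/2.5/weather response
// into a one-line summary. Default API units are Kelvin, hence the -273.15.
const summarize = (body) => {
  const data = typeof body === "string" ? JSON.parse(body) : body
  const description = data.weather[0].description
  const tempC = data.main.temp - 273.15
  return `${data.name}: ${description}, ${tempC.toFixed(1)}°C`
}

// Sample shaped like a real response for Berlin:
const sample = {
  name: "Berlin",
  weather: [{ description: "scattered clouds" }],
  main: { temp: 282.25 }
}
console.log(summarize(sample)) // → Berlin: scattered clouds, 9.1°C
```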

  6. You can use the faas-cli remove command to remove the function you’ve deployed earlier in this tutorial:
faas-cli remove appfleet-hello-world
Deleting: appfleet-hello-world.
Handling connection for 8080
Removing old function.
  7. Now that the old function has been removed, you must rebuild, push, and deploy your modified function. Instead of issuing three separate commands, you can use the faas-cli up command as in the following example:
faas-cli up -f appfleet-hello-world.yml
[0] > Building appfleet-hello-world.
Clearing temporary build folder: ./build/appfleet-hello-world/
Preparing: ./appfleet-hello-world/ build/appfleet-hello-world/function
Building: andreipopescu12/appfleet-hello-world:latest with node template. Please wait..
Sending build context to Docker daemon  43.01kB
Step 1/24 : FROM openfaas/classic-watchdog:0.18.1 as watchdog
 ---> 94b5e0bef891
Step 2/24 : FROM node:12.13.0-alpine as ship
 ---> 69c8cc9212ec
Step 3/24 : COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
 ---> Using cache
 ---> ebab4b723c16
Step 4/24 : RUN chmod +x /usr/bin/fwatchdog
 ---> Using cache
 ---> 7952724b5872
Step 5/24 : RUN addgroup -S app && adduser app -S -G app
 ---> Using cache
 ---> 33c7f04595d2
Step 6/24 : WORKDIR /root/
 ---> Using cache
 ---> 77b9dee16c79
Step 7/24 : ENV NPM_CONFIG_LOGLEVEL warn
 ---> Using cache
 ---> a3d3c0bb4480
Step 8/24 : RUN mkdir -p /home/app
 ---> Using cache
 ---> 65457e03fcb1
Step 9/24 : WORKDIR /home/app
 ---> Using cache
 ---> 50ab672e5660
Step 10/24 : COPY package.json ./
 ---> Using cache
 ---> 6143e79de873
Step 11/24 : RUN npm i --production
 ---> Using cache
 ---> a41566487c6e
Step 12/24 : COPY index.js ./
 ---> Using cache
 ---> 566633e78d2c
Step 13/24 : WORKDIR /home/app/function
 ---> Using cache
 ---> 04c9de75f170
Step 14/24 : COPY function/*.json ./
 ---> Using cache
 ---> f5765914bd05
Step 15/24 : RUN npm i --production || :
 ---> Using cache
 ---> a300be28c096
Step 16/24 : COPY --chown=app:app function/ .
 ---> 91cd72d8ad7a
Step 17/24 : WORKDIR /home/app/
 ---> Running in fce50a76475a
Removing intermediate container fce50a76475a
 ---> 0ff17b0a9faf
Step 18/24 : RUN chmod +rx -R ./function     && chown app:app -R /home/app     && chmod 777 /tmp
 ---> Running in 6d0c4c92fac1
Removing intermediate container 6d0c4c92fac1
 ---> 1e543bfbf6b0
Step 19/24 : USER app
 ---> Running in 6d33f5ec237d
Removing intermediate container 6d33f5ec237d
 ---> cb7cf5dfab12
Step 20/24 : ENV cgi_headers="true"
 ---> Running in 972c23374934
Removing intermediate container 972c23374934
 ---> 21c6e8198b21
Step 21/24 : ENV fprocess="node index.js"
 ---> Running in 3be91f9d5228
Removing intermediate container 3be91f9d5228
 ---> aafb7a756d38
Step 22/24 : EXPOSE 8080
 ---> Running in da3183bd88c5
Removing intermediate container da3183bd88c5
 ---> 5f6fd7e66a95
Step 23/24 : HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
 ---> Running in a590c91037ae
Removing intermediate container a590c91037ae
 ---> fbe20c32941f
Step 24/24 : CMD ["fwatchdog"]
 ---> Running in 59cd231f0576
Removing intermediate container 59cd231f0576
 ---> 88cd8ac65ade
Successfully built 88cd8ac65ade
Successfully tagged andreipopescu12/appfleet-hello-world:latest
Image: andreipopescu12/appfleet-hello-world:latest built.
[0] < Building appfleet-hello-world done in 13.95s.
[0] Worker done.
Total build time: 13.95s
[0] > Pushing appfleet-hello-world [andreipopescu12/appfleet-hello-world:latest].
The push refers to repository [docker.io/andreipopescu12/appfleet-hello-world]
04643e0c999f: Pushed
db3ccc4403b8: Pushed
24d1d5a62262: Layer already exists
adfa28db7666: Layer already exists
b7d0eb42e645: Layer already exists
84fba0eb2756: Layer already exists
cf2a3f2bc398: Layer already exists
942d3272b7d4: Layer already exists
037b653b7d4e: Layer already exists
966655dc62be: Layer already exists
08d8e0925a73: Layer already exists
6ce16b164ed0: Layer already exists
d76ecd300100: Layer already exists
77cae8ab23bf: Layer already exists
latest: digest: sha256:818d92b10d276d32bcc459e2918cb537051a14025e694eb59a9b3caa0bb4e41c size: 3456
[0] < Pushing appfleet-hello-world [andreipopescu12/appfleet-hello-world:latest] done.
[0] Worker done.
Deploying: appfleet-hello-world.
WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
Handling connection for 8080
Handling connection for 8080
Deployed. 202 Accepted.
URL: http://127.0.0.1:8080/function/appfleet-hello-world

☞ Note that you can skip the push or the deploy steps:

  • The following example command skips the push step:
faas-cli up -f appfleet-hello-world.yml --skip-push
  • The following example command skips the deploy step:
faas-cli up -f appfleet-hello-world.yml --skip-deploy
  8. To verify that the updated serverless function works as expected, invoke it as follows:
faas-cli invoke -f appfleet-hello-world.yml appfleet-hello-world
Reading from STDIN - hit (Control + D) to stop.
Berlin
Handling connection for 8080
Hello, you are currently in Berlin
statusCode: 200
body: "{\"coord\":{\"lon\":13.41,\"lat\":52.52},\"weather\":[{\"id\":802,\"main\":\"Clouds\",\"description\":\"scattered clouds\",\"icon\":\"03d\"}],\"base\":\"stations\",\"main\":{\"temp\":282.25,\"feels_like\":270.84,\"temp_min\":280.93,\"temp_max\":283.15,\"pressure\":1008,\"humidity\":61},\"visibility\":10000,\"wind\":{\"speed\":13.9,\"deg\":260,\"gust\":19},\"clouds\":{\"all\":40},\"dt\":1584107132,\"sys\":{\"type\":1,\"id\":1275,\"country\":\"DE\",\"sunrise\":1584077086,\"sunset\":1584119213},\"timezone\":3600,\"id\":2950159,\"name\":\"Berlin\",\"cod\":200}"
  9. To clean up, run the faas-cli remove command with the name of your serverless function (appfleet-hello-world) as an argument:
faas-cli remove appfleet-hello-world
Deleting: appfleet-hello-world.
Handling connection for 8080
Removing old function.

Deploy Serverless Functions Using the Web Interface

OpenFaaS provides a web-based user interface. In this section, you’ll learn how you can use it to deploy a serverless function.

  1. First, use the echo command to display the password you stored in the PASSWORD environment variable:
echo $PASSWORD
49IoP28G8247MZcj6a1FWUYUx
  2. Open a browser and visit http://localhost:8080. To log in, use the admin username and the password you retrieved in the previous step. You will be redirected to the OpenFaaS home page. Select the DEPLOY NEW FUNCTION button:
  3. A new window will be displayed. Select the Custom tab, and then type:
  • docker.io/andreipopescu12/appfleet-hello-world in the Docker Image input box
  • appfleet-hello-world in the Function name input box
  4. Once you’ve filled in the Docker image and Function name input boxes, select the DEPLOY button:
  5. Your new function will be visible in the left navigation bar. Click on it:

You’ll be redirected to the invoke function page:

  6. In the Request body input box, type in the name of the city you want to retrieve the weather forecast for, and then select the INVOKE button:

If everything works well, the weather forecast will be displayed in the Response Body field:

Monitor Your Serverless Functions with Prometheus and Grafana

The OpenFaaS gateway exposes Prometheus metrics such as function invocation counts, invocation duration, and replica counts; the full list is documented at https://docs.openfaas.com/architecture/metrics/.

In this section, you will learn how to set up Prometheus and Grafana to track the health of your serverless functions.

  1. Use the following command to list your deployments:
kubectl get deployments -n openfaas -l "release=openfaas, app=openfaas"
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
alertmanager        1/1     1            1           15m
basic-auth-plugin   1/1     1            1           15m
faas-idler          1/1     1            1           15m
gateway             1/1     1            1           15m
nats                1/1     1            1           15m
prometheus          1/1     1            1           15m
queue-worker        1/1     1            1           15m
  2. To expose the prometheus deployment, create a service object named prometheus-ui:
kubectl expose deployment prometheus -n openfaas --type=NodePort --name=prometheus-ui
service/prometheus-ui exposed

☞ The --type=NodePort flag exposes the prometheus-ui service on a static port on each node’s IP address. A ClusterIP service, to which the NodePort service routes, is also created automatically. You’ll use the NodePort service to connect to prometheus-ui from outside the cluster.

  3. To inspect the prometheus-ui service, enter the following command:
kubectl get svc prometheus-ui -n openfaas
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
prometheus-ui   NodePort   10.96.129.204   <none>        9090:31369/TCP   8m1s
  4. Forward all requests made to http://localhost:9090 to the pod running the prometheus-ui service:
kubectl port-forward -n openfaas svc/prometheus-ui 9090:9090 &
  5. Now, you can point your browser to http://localhost:9090, and you should see a page similar to the following screenshot:
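You can also query Prometheus over its HTTP API instead of the web UI. A sketch, assuming the port-forward above is still running and using gateway_function_invocation_total, one of the gateway metrics described earlier:

```shell
# Ask Prometheus for the per-second invocation rate over the last minute.
# -G sends the URL-encoded PromQL expression as a GET query parameter.
curl -s -G 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=rate(gateway_function_invocation_total[1m])'
```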
  6. To deploy Grafana, you’ll use the stefanprodan/faas-grafana:4.6.3 image. Run the following command:
kubectl run grafana -n openfaas --image=stefanprodan/faas-grafana:4.6.3 --port=3000
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/grafana created
  7. Now, you can list your deployments with:
kubectl get deployments -n openfaas
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
alertmanager        1/1     1            1           46m
basic-auth-plugin   1/1     1            1           46m
faas-idler          1/1     1            1           46m
gateway             1/1     1            1           46m
grafana             1/1     1            1           107s
nats                1/1     1            1           46m
prometheus          1/1     1            1           46m
queue-worker        1/1     1            1           46m
  8. Use the following kubectl expose deployment command to create a service object that exposes the grafana deployment:
kubectl expose deployment grafana -n openfaas --type=NodePort --name=grafana
service/grafana exposed
  9. Retrieve details about your new service with:
kubectl get service grafana -n openfaas
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
grafana   NodePort   10.96.194.59   <none>        3000:32464/TCP   60s
  10. Forward all requests made to http://localhost:3000 to the pod running the grafana service:
kubectl port-forward -n openfaas svc/grafana 3000:3000 &
[3] 3973
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
  11. Now that you’ve set up the port forwarding, you can access Grafana by pointing your browser to http://localhost:3000:
  12. Log in to Grafana using the username admin and password admin. The Home Dashboard page will be displayed:
  13. From the left menu, select Dashboards → Import:
  14. Type https://grafana.com/grafana/dashboards/3434 in the Grafana.com Dashboard input box. Then, select the Load button:
  15. In the Import Dashboard dialog box, set the Prometheus data source to faas, and then select Import:

An empty dashboard will be displayed:

  16. Now, you can invoke your function a couple of times using the faas-cli invoke command as follows:
faas-cli invoke -f appfleet-hello-world.yml appfleet-hello-world
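To generate enough traffic for the dashboard panels to show activity, you can wrap the invocation in a small loop. A sketch, piping an example request body to each call:

```shell
# Invoke the function 20 times with a sample request body.
for i in $(seq 1 20); do
  echo "Berlin" | faas-cli invoke -f appfleet-hello-world.yml appfleet-hello-world
done
```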
  17. Switch back to the browser window that opened Grafana. Your dashboard should be automatically updated and look similar to the following screenshot:

We hope this tutorial was useful for learning the basics of deploying serverless functions with OpenFaaS.

Thanks for reading!

Discover more with Gcore Function as a Service

Related articles

Query your cloud with natural language: A developer’s guide to Gcore MCP

What if you could ask your infrastructure questions and get real answers?With Gcore’s open-source implementation of the Model Context Protocol (MCP), now you can. MCP turns generative AI into an agent that understands your infrastructure, responds to your queries, and takes action when you need it to.In this post, we’ll demo how to use MCP to explore and inspect your Gcore environment just by prompting, to list resources, check audit logs, and generate cost reports. We’ll also walk through a fun bonus use case: provisioning infrastructure and exporting it to Terraform.What is MCP and why do devs love it?Originally developed by Anthropic, the Model Context Protocol (MCP) is an open standard that turns language models into agents that interact with structured tools: APIs, CLIs, or internal systems. Gcore’s implementation makes this protocol real for our customers.With MCP, you can:Ask questions about your infrastructureList, inspect, or filter cloud resourcesView cost data, audit logs, or deployment metadataExport configs to TerraformChain multi-step operations via natural languageGcore MCP removes friction from interacting with your infrastructure. Instead of wiring together scripts or context-switching across dashboards and CLIs, you can just…ask.That means:Faster debugging and auditsMore accessible infra visibilityFewer repetitive setup tasksBetter team collaborationBecause it’s open source, backed by the Gcore Python SDK, you can plug it into other APIs, extend tool definitions, or even create internal agents tailored to your stack. Explore the GitHub repo for yourself.What can you do with it?This isn’t just a cute chatbot. Gcore MCP connects your cloud to real-time insights. 
Here are some practical prompts you can use right away.Infrastructure inspection“List all VMs running in the Frankfurt region”“Which projects have over 80% GPU utilization?”“Show all volumes not attached to any instance”Audit and cost analysis“Get me the API usage for the last 24 hours”“Which users deployed resources in the last 7 days?”“Give a cost breakdown by region for this month”Security and governance“Show me firewall rules with open ports”“List all active API tokens and their scopes”Experimental automation“Create a secure network in Tokyo, export to Terraform, then delete it”We’ll walk through that last one in the full demo below.Full video demoWatch Gcore’s AI Software Engineer, Algis Dumbris, walk through setting up MCP on your machine and show off some use cases. If you prefer reading, we’ve broken down the process step-by-step below.Step-by-step walkthroughThis section maps to the video and shows exactly how to replicate the workflow locally.1. Install MCP locally (0:00–1:28)We use uv to isolate the environment and pull the project directly from GitHub.curl -Ls https://astral.sh/uv/install.sh | sh uvx add gcore-mcp-server https://github.com/G-Core/gcore-mcp-server Requirements:PythonGcore account + API keyTool config file (from the repo)2. Set up your environment (1:28–2:47)Configure two environment variables:GCORE_API_KEY for authGCORE_TOOLS to define what the agent can access (e.g., regions, instances, costs, etc.)Soon, tool selection will be automatic, but today you can define your toolset in YAML or JSON.3. Run a basic query (3:19–4:11)Prompt:“Find the Gcore region closest to Antalya.”The agent maps this to a regions.list call and returns: IstanbulNo need to dig through docs or write an API request.4. Provision, export, and clean up (4:19–5:32)This one’s powerful if you’re experimenting with CI/CD or infrastructure-as-code.Prompt:“Create a secure network in Tokyo. Export to Terraform. 
Then clean up.”The agent:Provisions the networkExports it to Terraform formatDestroys the resources afterwardYou get usable .tf output with no manual scripting. Perfect for testing, prototyping, or onboarding.Gcore: always building for developersTry it now:Clone the repoInstall UVX + configure your environmentStart prompting your infrastructureOpen issues, contribute tools, or share your use casesThis is early-stage software, and we’re just getting started. Expect more tools, better UX, and deeper integrations soon.Watch how easy it is to deploy an inference instance with Gcore

Cloud computing: types, deployment models, benefits, and how it works

Cloud computing is a model for enabling on-demand network access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services that can be rapidly provisioned and released with minimal management effort or service provider interaction. According to research by Gartner (2024), the global cloud computing market size is projected to reach $1.25 trillion by 2025, reflecting the rapid growth and widespread adoption of these services.The National Institute of Standards and Technology (NIST) defines five core characteristics that distinguish cloud computing from traditional IT infrastructure. These include on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.Each characteristic addresses specific business needs while enabling organizations to access computing resources without maintaining physical hardware on-premises.Cloud computing services are organized into three main categories that serve different business requirements and technical needs. Infrastructure as a Service (IaaS) provides basic computing resources, Platform as a Service (PaaS) offers development environments and tools, and Software as a Service (SaaS) delivers complete applications over the internet. Major cloud providers typically guarantee 99.9% or higher uptime in service level agreements to ensure reliable access to these services.Organizations can choose from four primary use models based on their security, compliance, and operational requirements. Public cloud services are offered over the internet to anyone, private clouds are proprietary networks serving limited users, hybrid clouds combine public and private cloud features, and community clouds serve specific groups with shared concerns. 
Each model provides different levels of control, security, and cost structures.Over 90% of enterprises use some form of cloud services as of 2024, according to Forrester Research (2024), making cloud computing knowledge important for modern business operations. This widespread adoption reflects how cloud computing has become a cornerstone of digital change and competitive advantage across industries.What is cloud computing?Cloud computing is a model that delivers computing resources like servers, storage, databases, and software over the internet on demand, allowing users to access and use these resources without owning or managing the physical infrastructure. Instead of buying and maintaining your own servers, you can rent computing power from cloud providers and scale resources up or down based on your needs.Over 90% of enterprises now use some form of cloud services, with providers typically guaranteeing 99.9% or higher uptime in their service agreements.The three main service models offer different levels of control and management. Infrastructure as a Service (IaaS) provides basic computing resources like virtual machines and storage. Platform as a Service (PaaS) adds development tools and runtime environments, and Software as a Service (SaaS) delivers complete applications that are ready to use. Each model handles different aspects of the technology stack, so you only manage what you need while the provider handles the rest.Cloud use models vary by ownership and access control. Public clouds serve multiple customers over the internet, private clouds operate exclusively for one organization, and hybrid clouds combine both approaches for flexibility. 
This variety lets organizations choose the right balance of cost, control, and security for their specific needs while maintaining the core benefits of cloud computing's flexible, elastic infrastructure.What are the main types of cloud computing services?The main types of cloud computing services refer to the different service models that provide computing resources over the internet with varying levels of management and control. The main types of cloud computing services are listed below.Infrastructure as a service (IaaS): This model provides basic computing infrastructure, including virtual machines, storage, and networking resources over the internet. Users can install and manage their own operating systems, applications, and development frameworks while the provider handles the physical hardware.Platform as a service (PaaS): This service offers a complete development and use environment in the cloud, including operating systems, programming languages, databases, and web servers. Developers can build, test, and use applications without managing the underlying infrastructure complexity.Software as a service (SaaS): This model delivers fully functional software applications over the internet through a web browser or mobile app. Users access the software on a subscription basis without needing to install, maintain, or update the applications locally.Function as a service (FaaS): Also known as serverless computing, this model allows developers to run individual functions or pieces of code in response to events. The cloud provider automatically manages server provisioning, scaling, and maintenance while charging only for actual compute time used.Database as a service (DBaaS): This service provides managed database solutions in the cloud, handling database administration tasks like backups, updates, and scaling. 
Organizations can access database functionality without maintaining physical database servers or hiring specialized database administrators.Storage as a service (STaaS): This model offers flexible cloud storage solutions for data backup, archiving, and file sharing needs. Users can store and retrieve data from anywhere with internet access while paying only for the storage space they actually use.What are the different cloud deployment models?Cloud use models refer to the different ways organizations can access and manage cloud computing resources based on ownership, location, and access control. The cloud use models are listed below.Public cloud: Services are delivered over the internet and shared among multiple organizations by third-party providers. Anyone can purchase and use these services on a pay-as-you-go basis, making them cost-effective for businesses without large upfront investments.Private cloud: Computing resources are dedicated to a single organization and can be hosted on-premises or by a third party. This model offers greater control, security, and customization options but requires higher costs and more management overhead.Hybrid cloud: Organizations combine public and private cloud environments, allowing data and applications to move between them as needed. This approach provides flexibility to keep sensitive data in private clouds while using public clouds for less critical workloads.Community cloud: Multiple organizations with similar requirements share cloud infrastructure and costs. Government agencies, healthcare organizations, or financial institutions often use this model to meet specific compliance and security standards.Multi-cloud: Organizations use services from multiple cloud providers to avoid vendor lock-in and improve redundancy. 
This plan allows businesses to choose the best services from different providers while reducing dependency on any single vendor.How does cloud computing work?Cloud computing works by delivering computing resources like servers, storage, databases, and software over the internet on an on-demand basis. Instead of owning physical hardware, users access these resources through web browsers or applications, while cloud providers manage the underlying infrastructure in data centers worldwide.The system operates through a front-end and back-end architecture. The front end includes your device, web browser, and network connection that you use to access cloud services.The back end consists of servers, storage systems, databases, and applications housed in the provider's data centers. When you request a service, the cloud infrastructure automatically allocates the necessary resources from its shared pool.The technology achieves its flexibility through virtualization, which creates multiple virtual instances from single physical servers. Resource pooling allows providers to serve multiple customers from the same infrastructure, while rapid elasticity automatically scales resources up or down based on demand.This elastic scaling can reduce resource costs by up to 30% compared to fixed infrastructure, according to McKinsey (2024), making cloud computing both flexible and cost-effective for businesses of all sizes.What are the key benefits of cloud computing?The key benefits of cloud computing refer to the advantages organizations and individuals gain from using internet-based computing services instead of traditional on-premises infrastructure. The key benefits of cloud computing are listed below.Cost reduction: Organizations eliminate upfront hardware investments and reduce ongoing maintenance expenses by paying only for resources they actually use. 
Cloud providers handle infrastructure management, reducing IT staffing costs and operational overhead.Flexibility and elasticity: Computing resources can expand or contract automatically based on demand, ensuring best performance during traffic spikes. This flexibility prevents over-provisioning during quiet periods and under-provisioning during peak usage.Improved accessibility: Users can access applications and data from any device with an internet connection, enabling remote work and global collaboration. This mobility supports modern work patterns and increases productivity across distributed teams.Enhanced reliability: Cloud providers maintain multiple data centers with redundant systems and backup infrastructure to ensure continuous service availability.Automatic updates and maintenance: Software updates, security patches, and system maintenance happen automatically without user intervention. This automation reduces downtime and ensures systems stay current with the latest features and security protections.Disaster recovery: Cloud services include built-in backup and recovery capabilities that protect against data loss from hardware failures or natural disasters. Recovery times are typically faster than traditional backup methods since data exists across multiple locations.Environmental effectiveness: Shared cloud infrastructure uses resources more effectively than individual company data centers, reducing overall energy consumption. Large cloud providers can achieve better energy effectiveness through economies of scale and advanced cooling technologies.What are the drawbacks and challenges of cloud computing?The drawbacks and challenges of cloud computing refer to the potential problems and limitations organizations may face when adopting cloud-based services. They are listed below.Security concerns: Organizations lose direct control over their data when it's stored on third-party servers. 
Data breaches, unauthorized access, and compliance issues become shared responsibilities between the provider and customer. Sensitive information may be vulnerable to cyber attacks targeting cloud infrastructure.Internet dependency: Cloud services require stable internet connections to function properly. Poor connectivity or outages can completely disrupt business operations and prevent access to critical applications. Remote locations with limited bandwidth face particular challenges accessing cloud resources.Vendor lock-in: Switching between cloud providers can be difficult and expensive due to proprietary technologies and data formats. Organizations may become dependent on specific platforms, limiting their flexibility to negotiate pricing or change services. Migration costs and technical complexity often discourage switching providers.Limited customization: Cloud services offer standardized solutions that may not meet specific business requirements. Organizations can't modify underlying infrastructure or install custom software configurations. This restriction can force businesses to adapt their processes to fit the cloud platform's limitations.Ongoing costs: Monthly subscription fees can accumulate to exceed traditional on-premise infrastructure costs over time. Unexpected usage spikes or data transfer charges can lead to budget overruns. Organizations lose the asset value that comes with owning physical hardware.Performance variability: Shared cloud resources can experience slower performance during peak usage periods. Network latency affects applications requiring real-time processing or frequent data transfers. Organizations can't guarantee consistent performance levels for mission-critical applications.Compliance complexity: Meeting regulatory requirements becomes more challenging when data is stored across multiple locations. Organizations must verify that cloud providers meet industry-specific compliance standards. 
Audit trails and data governance become shared responsibilities that require careful coordination.Gcore Edge CloudWhen building AI applications that require serious computational power, the infrastructure you choose can make or break your project's success. Whether you're training large language models, running complex inference workloads, or tackling high-performance computing challenges, having access to the latest GPU technology without performance bottlenecks becomes critical.Gcore's AI GPU Cloud Infrastructure addresses these demanding requirements with bare metal NVIDIA H200. H100. A100. L40S, and GB200 GPUs, delivering zero virtualization overhead for maximum performance. The platform's ultra-fast InfiniBand networking and multi-GPU cluster support make it particularly well-suited for distributed training and large-scale AI workloads, starting from just €1.25/hour. Multi-instance GPU (MIG) support also allows you to improve resource allocation and costs for smaller inference tasks.Discover how Gcore's bare metal GPU performance can accelerate your AI training and inference workloads at https://gcore.com/gpu-cloud.Frequently asked questionsPeople often have questions about cloud computing basics, costs, and how it fits their specific needs. These answers cover the key service models, use options, and practical considerations that help clarify what cloud computing can do for your organization.What's the difference between cloud computing and traditional hosting?Cloud computing delivers resources over the internet on demand, while traditional hosting provides fixed server resources at dedicated locations. Cloud offers elastic growth and pay-as-you-go pricing, whereas traditional hosting requires upfront capacity planning and fixed costs regardless of actual usage.What is cloud computing security?Cloud computing security protects data, applications, and infrastructure in cloud environments through shared responsibility models between providers and users. 
Cloud providers secure the underlying infrastructure while users protect their data, applications, and access controls.What is virtualization in cloud computing?Virtualization in cloud computing creates multiple virtual machines (VMs) on a single physical server using hypervisor software that separates computing resources. This technology allows cloud providers to increase hardware effectiveness and offer flexible, isolated environments to multiple users simultaneously.Is cloud computing secure for business data?Yes, cloud computing is secure for business data when proper security measures are in place, with major providers offering encryption, access controls, and compliance certifications that often exceed what most businesses can achieve on-premises. Cloud service providers typically guarantee 99.9% or higher uptime in service level agreements while maintaining enterprise-grade security standards.How much does cloud computing cost compared to on-premises infrastructure?Cloud computing typically costs 20-40% less than on-premises infrastructure due to shared resources, reduced hardware purchases, and lower maintenance expenses, according to IDC (2024). However, costs vary primarily based on usage patterns, with predictable workloads sometimes being cheaper on-premises while variable workloads benefit more from cloud's pay-as-you-go model.How do I choose between IaaS, PaaS, and SaaS?Choose based on your control needs. IaaS gives you full infrastructure control, PaaS handles infrastructure so you focus on development, and SaaS provides ready-to-use applications with no technical management required.

Pre-configure your dev environment with Gcore VM init scripts

Provisioning new cloud instances can be repetitive and time-consuming if you’re doing everything manually: installing packages, configuring environments, copying SSH keys, and more. With cloud-init, you can automate these tasks and launch development-ready instances from the start.Gcore Edge Cloud VMs support cloud-init out of the box. With a simple YAML script, you can automatically set up a development-ready instance at boot, whether you’re launching a single machine or spinning up a fleet.In this guide, we’ll walk through how to use cloud-init on Gcore Edge Cloud to:Set a passwordInstall packages and system updatesAdd users and SSH keysMount disks and write filesRegister services or install tooling like Docker or Node.jsLet’s get started.What is cloud-init?cloud-init is a widely used tool for customizing cloud instances during the first boot. It reads user-provided configuration data—usually YAML—and uses it to run commands, install packages, and configure the system. In this article, we will focus on Linux-based virtual machines.How to use cloud-init on GcoreFor Gcore Cloud VMs, cloud-init scripts are added during instance creation using the User data field in the UI or API.Step 1: Create a basic scriptStart with a simple YAML script. Here’s one that updates packages and installs htop:#cloud-config package_update: true packages: - htop Step 2: Launch a new VM with your scriptGo to the Gcore Customer Portal, navigate to VMs, and start creating a new instance (or just click here). When you reach the Additional options section, enable the User data option. Then, paste in your YAML cloud-init script.Once the VM boots, it will automatically run the script. 
This works the same way for all supported Linux distributions available through Gcore.3 real-world examplesLet’s look at three examples of how you can use this.Example 1: Add a password for a specific userThe below script sets the for the default user of the selected operating system:#cloud-config password: <password> chpasswd: {expire: False} ssh_pwauth: True Example 2: Dev environment with Docker and GitThe following script does the following:Installs Docker and GitAdds a new user devuser with sudo privilegesAuthorizes an SSH keyStarts Docker at boot#cloud-config package_update: true packages: - docker.io - git users: - default - name: devuser sudo: ALL=(ALL) NOPASSWD:ALL groups: docker shell: /bin/bash ssh-authorized-keys: - ssh-rsa AAAAB3Nza...your-key-here runcmd: - systemctl enable docker - systemctl start docker Example 3: Install Node.js and clone a repoThis script installs Node.js and clones a GitHub repo to your Gcore VM at launch:#cloud-config packages: - curl runcmd: - curl -fsSL https://deb.nodesource.com/setup_18.x | bash - - apt-get install -y nodejs - git clone https://github.com/example-user/dev-project.git /home/devuser/project Reusing and versioning your scriptsTo avoid reinventing the wheel, keep your cloud-init scripts:In version control (e.g., Git)Templated for different environments (e.g., dev vs staging)Modular so you can reuse base blocks across projectsYou can also use tools like Ansible or Terraform with cloud-init blocks to standardize provisioning across your team or multiple Gcore VM environments.Debugging cloud-initIf your script doesn’t behave as expected, SSH into the instance and check the cloud-init logs:sudo cat /var/log/cloud-init-output.log This file shows each command as it ran and any errors that occurred.Other helpful logs:/var/log/cloud-init.log /var/lib/cloud/instance/user-data.txt Pro tip: Echo commands or write log files in your script to help debug tricky setups—especially useful if you’re automating multi-node 
workflows across Gcore Cloud.Tips and best practicesIndentation matters! YAML is picky. Use spaces, not tabs.Always start the file with #cloud-config.runcmd is for commands that run at the end of boot.Use write_files to write configs, env variables, or secrets.Cloud-init scripts only run on the first boot. To re-run, you’ll need to manually trigger cloud-init or re-create the VM.Automate it all with GcoreIf you're provisioning manually, you're doing it wrong. Cloud-init lets you treat your VM setup as code: portable, repeatable, and testable. Whether you’re spinning up ephemeral dev boxes or preparing staging environments, Gcore’s support for cloud-init means you can automate it all.For more on managing virtual machines with Gcore, check out our product documentation.Explore Gcore VM product docs

How to cut egress costs and speed up delivery using Gcore CDN and Object Storage

If you’re serving static assets (images, videos, scripts, downloads) from object storage, you’re probably paying more than you need to, and your users may be waiting longer than they should.In this guide, we explain how to front your bucket with Gcore CDN to cache static assets, cut egress bandwidth costs, and get faster TTFB globally. We’ll walk through setup (public or private buckets), signed URL support, cache control best practices, debugging tips, and automation with the Gcore API or Terraform.Why bother?Serving directly from object storage hits your origin for every request and racks up egress charges. With a CDN in front, cached files are served from edge—faster for users, and cheaper for you.Lower TTFB, better UXWhen content is cached at the edge, it doesn’t have to travel across the planet to get to your user. Gcore CDN caches your assets at PoPs close to end users, so requests don’t hit origin unless necessary. Once cached, assets are delivered in a few milliseconds.Lower billsMost object storage providers charge $80–$120 per TB in egress fees. By fronting your storage with a CDN, you only pay egress once per edge location—then it’s all cache hits after that. If you’re using Gcore Storage and Gcore CDN, there’s zero egress fee between the two.Caching isn’t the only way you save. Gcore CDN can also compress eligible file types (like HTML, CSS, JavaScript, and JSON) on the fly, further shrinking bandwidth usage and speeding up file delivery—all without any changes to your storage setup.Less origin traffic and less data to transfer means smaller bills. And your storage bucket doesn’t get slammed under load during traffic spikes.Simple scaling, globallyThe CDN takes the hit, not your bucket. That means fewer rate-limit issues, smoother traffic spikes, and more reliable performance globally. 
Gcore CDN spans the globe, so you’re good whether your users are in Tokyo, Toronto, or Tel Aviv.Setup guide: Gcore CDN + Gcore Object StorageLet’s walk through configuring Gcore CDN to cache content from a storage bucket. This works with Gcore Object Storage and other S3-compatible services.Step 1: Prep your bucketPublic? Check files are publicly readable (via ACL or bucket policy).Private? Use Gcore’s AWS Signature V4 support—have your access key, secret, region, and bucket name ready.Gcore Object Storage URL format: https://<bucket-name>.<region>.cloud.gcore.lu/<object> Step 2: Create CDN resource (UI or API)In the Gcore Customer Portal:Go to CDN > Create CDN ResourceChoose "Accelerate and protect static assets"Set a CNAME (e.g. cdn.yoursite.com) if you want to use your domainConfigure origin:Public bucket: Choose None for authPrivate bucket: Choose AWS Signature V4, and enter credentialsChoose HTTPS as the origin protocolGcore will assign a *.gcdn.co domain. If you’re using a custom domain, add a CNAME: cdn.yoursite.com CNAME .gcdn.co Here’s how it works via Terraform: resource "gcore_cdn_resource" "cdn" { cname = "cdn.yoursite.com" origin_group_id = gcore_cdn_origingroup.origin.id origin_protocol = "HTTPS" } resource "gcore_cdn_origingroup" "origin" { name = "my-origin-group" origin { source = "mybucket.eu-west.cloud.gcore.lu" enabled = true } } Step 3: Set caching behaviorSet Cache-Control headers in your object metadata: Cache-Control: public, max-age=2592000 Too messy to handle in storage? Override cache logic in Gcore:Force TTLs by path or extensionIgnore or forward query strings in cache keyStrip cookies (if unnecessary for cache decisions)Pro tip: Use versioned file paths (/img/logo.v3.png) to bust cache safely.Secure access with signed URLsWant your assets to be private, but still edge-cacheable? 
Use Gcore’s Secure Token feature:

1. Enable Secure Token in the CDN settings.
2. Set a secret key.
3. Generate time-limited tokens in your app.

Python example:

```python
import base64, hashlib, time

secret = 'your_secret'
path = '/videos/demo.mp4'
expires = int(time.time()) + 3600

string = f"{expires}{path} {secret}"
token = base64.urlsafe_b64encode(hashlib.md5(string.encode()).digest()).decode().strip('=')
url = f"https://cdn.yoursite.com{path}?md5={token}&expires={expires}"
```

Signed URLs are verified at the CDN edge. Invalid or expired? The request is blocked before the origin is ever touched.

Optional: Bind the token to an IP to prevent link sharing.

Debug and tune the cache

Use curl or browser devtools:

curl -I https://cdn.yoursite.com/img/logo.png

Look for:

- Cache: HIT or MISS
- Cache-Control
- X-Cached-Since

Cache not working? Check for the following problems:

- The origin doesn’t return Cache-Control
- The CDN override TTL isn’t applied
- The cache key includes query strings unintentionally

You can trigger purges from the Gcore Customer Portal or automate them via the API using POST /cdn/purge. Choose one of three ways:

- Purge all: Clear the entire domain’s cache at once.
- Purge by URL: Target a specific full path (e.g., /images/logo.png).
- Purge by pattern: Target a set of files using a wildcard at the end of the pattern (e.g., /videos/*).

Monitor and optimize at scale

After rollout:

- Watch origin bandwidth drop
- Check the hit ratio (aim for >90%)
- Audit latency (TTFB on HIT vs. MISS)

Consider logging with Gcore’s CDN logs uploader to analyze cache behavior, top requested paths, or cache churn rates.

For maximum savings, combine Gcore Object Storage with Gcore CDN: egress traffic between them is 100% free. That means you can serve cached assets globally without paying a cent in bandwidth fees.

Using external storage? You’ll still slash egress costs by caching at the edge and cutting direct origin traffic, but you’ll unlock the biggest savings when you stay inside the Gcore ecosystem.

Save money and boost performance with Gcore

Still serving assets directly from storage?
You’re probably leaving money on the table and compromising performance. Front your bucket with Gcore CDN. Set smart cache headers or use overrides. Enable signed URLs if you need access control. Monitor cache HITs and purge when needed. Automate the setup with Terraform. Done.

Next steps:

- Create your CDN resource
- Use private object storage with Signature V4
- Secure your CDN with signed URLs

Create a free CDN resource now
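As a closing aside on signed URLs: the token-generation snippet shown earlier pairs naturally with an edge-side check. The sketch below mirrors the article's MD5 token format for both halves; the exact string layout your Secure Token configuration expects may differ, so treat this as illustrative rather than a drop-in implementation:

```python
import base64, hashlib, time

def make_token(secret: str, path: str, expires: int) -> str:
    """Build a URL-safe MD5 token, matching the article's example format."""
    raw = f"{expires}{path} {secret}"
    digest = hashlib.md5(raw.encode()).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")

def verify(secret: str, path: str, token: str, expires: int, now: int) -> bool:
    """Edge-side check: reject expired links first, then compare tokens."""
    if now > expires:
        return False
    return make_token(secret, path, expires) == token

now = int(time.time())
expires = now + 3600
token = make_token("your_secret", "/videos/demo.mp4", expires)

print(verify("your_secret", "/videos/demo.mp4", token, expires, now))         # valid link
print(verify("your_secret", "/videos/demo.mp4", token, expires, now + 7200))  # expired link
```

Because the expiry is part of the signed string, tampering with the `expires` query parameter invalidates the token as well.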

Bare metal vs. virtual machines: performance, cost, and use case comparison

Choosing the right type of server infrastructure is critical to how your application performs, scales, and fits your budget. For most workloads, the decision comes down to two core options: bare metal servers and cloud virtual machines (VMs). Both can be deployed in the cloud, but they differ significantly in terms of performance, control, scalability, and cost.

In this article, we break down the core differences between bare metal and virtual servers, highlight when to choose each, and explain how Gcore can help you deploy the right infrastructure for your needs. If you want to learn about either BM or VMs in detail, we’ve got articles for those: here’s the one for bare metal, and here’s a deep dive into virtual machines.

Bare metal vs. virtual machines at a glance

When evaluating whether bare metal or virtual machines are right for your company, consider your specific workload requirements, performance priorities, and business objectives. Here’s a quick breakdown to help you decide what works best for you.

| Factor | Bare metal servers | Virtual machines |
| --- | --- | --- |
| Performance | Dedicated resources; ideal for high-performance workloads | Shared resources; suitable for moderate or variable workloads |
| Scalability | Often requires manual scaling; less flexible | Highly elastic; easy to scale up or down |
| Customization | Full control over hardware, OS, and configuration | Limited by hypervisor and provider’s environment |
| Security | Isolated by default; no hypervisor layer | Shared environment with strong isolation protocols |
| Cost | Higher upfront cost; dedicated hardware | Pay-as-you-go pricing; cost-effective for flexible workloads |
| Best for | HPC, AI/ML, compliance-heavy workloads | Startups, dev/test, fast-scaling applications |

All about bare metal servers

A bare metal server is a single-tenant physical server rented from a cloud provider. Unlike virtual servers, the hardware is not shared with other users, giving you full access to all resources and deeper control over configurations.
You get exclusive access to, and control over, the hardware via the cloud provider, which offers the stability and security needed for high-demand applications.

The benefits of bare metal servers

Here are some of the business advantages of opting for a bare metal server:

- Maximized performance: Because they are dedicated resources, bare metal servers provide top-tier performance without sharing processing power, memory, or storage with other users. This makes them ideal for resource-intensive applications like high-performance computing (HPC), big data processing, and game hosting.
- Greater control: Since you have direct access to the hardware, you can customize the server to meet your specific requirements. This is especially important for businesses with complex, specialized needs that require fine-tuned configurations.
- High security: Bare metal servers offer a higher level of security than their alternatives due to the absence of virtualization. With no shared resources or hypervisor layer, there’s less risk of the vulnerabilities that come with multi-tenant environments.
- Dedicated resources: Because you aren’t sharing the server with other users, all server resources are dedicated to your application, so you consistently get the performance you need.

Who should use bare metal servers?

Here are examples of instances where bare metal servers are the best option for a business:

- High-performance computing (HPC)
- Big data processing and analytics
- Resource-intensive applications, such as AI/ML workloads
- Game and video streaming servers
- Businesses requiring enhanced security and compliance

All about virtual machines

A virtual server (or virtual machine) runs on top of a physical server that’s been partitioned by a cloud provider using a hypervisor. This allows multiple VMs to share the same hardware while remaining isolated from each other.

Unlike bare metal servers, virtual machines share the underlying hardware with other cloud provider customers.
That means you’re using (and paying for) part of one server, providing cost efficiency and flexibility.

The benefits of virtual machines

Here are some advantages of using a shared virtual machine:

- Scalability: Virtual machines are ideal for businesses that need to scale quickly and are starting at a small scale. With cloud-based virtualization, you can adjust your server resources (CPU, memory, storage) on demand to match changing workloads.
- Cost efficiency: You pay only for the resources you use with VMs, making them cost-effective for companies with fluctuating resource needs, as there is no need to pay for unused capacity.
- Faster deployment: VMs can be provisioned quickly and easily, which makes them ideal for anyone who wants to deploy new services or applications fast.

Who should use virtual machines?

VMs are a great fit for the following:

- Web hosting and application hosting
- Development and testing environments
- Running multiple apps with varying demands
- Startups and growing businesses requiring scalability
- Businesses seeking cost-effective, flexible solutions

Which should you choose?

There’s no one-size-fits-all answer. Your choice should depend on the needs of your workload:

- Choose bare metal if you need dedicated performance, low-latency access to hardware, or tighter control over security and compliance.
- Choose virtual servers if your priority is flexible scaling, faster deployment, and optimized cost.

If your application uses GPU-based inference or AI training, check out our dedicated guide to VM vs. BM for AI workloads.

Get started with Gcore BM or VMs today

At Gcore, we provide both bare metal and virtual machine solutions, offering flexibility, performance, and reliability to meet your business needs. Gcore Bare Metal has the power and reliability needed for demanding workloads, while our virtual machines offer customizable configurations, free egress traffic, and flexibility.

Compare Gcore BM and VM pricing now

Optimize your workload: a guide to selecting the best virtual machine configuration

Virtual machines (VMs) offer the flexibility, scalability, and cost-efficiency that businesses need to optimize workloads. However, choosing the wrong setup can lead to poor performance, wasted resources, and unnecessary costs.

In this guide, we’ll walk you through the essential factors to consider when selecting the best virtual machine configuration for your specific workload needs.

#1 Understand your workload requirements

The first step in choosing the right virtual machine configuration is understanding the nature of your workload. Workloads can range from light, everyday tasks to resource-intensive applications. When making your decision, consider the following:

- Compute-intensive workloads: Applications like video rendering, scientific simulations, and data analysis require a higher number of CPU cores. Opt for VMs with multiple processors or CPUs for smoother performance.
- Memory-intensive workloads: Databases, big data analytics, and high-performance computing (HPC) jobs often need more RAM. Choose a VM configuration that provides sufficient memory to avoid memory bottlenecks.
- Storage-intensive workloads: If your workload relies heavily on storage, such as file servers or applications requiring frequent read/write operations, prioritize VM configurations that offer high-speed storage options, such as SSDs or NVMe.
- I/O-intensive workloads: Applications that require frequent network or disk I/O, such as cloud services and distributed applications, benefit from VMs with high-bandwidth, low-latency network interfaces.

#2 Consider VM size and scalability

Once you understand your workload’s requirements, the next step is to choose the right VM size. VM sizes are typically categorized by the amount of CPU, memory, and storage they offer.

- Start with a baseline: Select a VM configuration that offers a balanced ratio of CPU, RAM, and storage based on your workload type.
- Scalability: Choose a VM size that allows you to easily scale up or down as your needs change.
Many cloud providers offer auto-scaling capabilities that adjust your VM’s resources based on real-time demand, providing flexibility and cost savings.

- Overprovisioning vs. underprovisioning: Avoid overprovisioning (allocating excessive resources) unless your workload demands peak capacity at all times, as this can lead to unnecessary costs. Similarly, underprovisioning can affect performance, so finding the right balance is essential.

#3 Evaluate CPU and memory considerations

The central processing unit (CPU) and memory (RAM) are the heart of a virtual machine, and the configuration of both plays a significant role in performance. Workloads that need high processing power, such as video encoding, machine learning, or simulations, will benefit from VMs with multiple CPU cores. However, be mindful of CPU architecture: look for VMs that offer the latest processors (e.g., Intel Xeon, AMD EPYC) for better performance per core.

It’s also important that the VM has enough memory to avoid paging, which occurs when the system uses disk space as virtual memory, significantly slowing down performance. Consider a configuration with more RAM and support for faster memory types like DDR4 for memory-heavy applications.

#4 Assess storage performance and capacity

Storage performance and capacity can significantly impact the performance of your virtual machine, especially for applications requiring large data volumes. Key considerations include:

- Disk type: For faster read/write operations, opt for solid-state drives (SSDs) over traditional hard disk drives (HDDs). Some cloud providers also offer NVMe storage, which can provide even greater speed for highly demanding workloads.
- Disk size: Choose the right size based on the amount of data you need to store and process. Over-allocating storage space might seem like a safe bet, but it can also increase costs unnecessarily.
You can always resize disks later, so avoid over-allocating them upfront.

- IOPS and throughput: Some workloads require high input/output operations per second (IOPS). If this is a priority for your workload (e.g., databases), make sure that your VM configuration includes high-IOPS storage options.

#5 Weigh up your network requirements

When working with cloud-based VMs, network performance is a critical consideration. High-speed, low-latency networking can make a difference for applications such as online gaming, video conferencing, and real-time analytics.

- Bandwidth: Check whether the VM configuration offers the necessary bandwidth for your workload. For applications that handle large data transfers, such as cloud backup or file servers, make sure that the network interface provides high throughput.
- Network latency: Low latency is crucial for applications where real-time performance is key (e.g., trading systems, gaming). Choose VMs with low-latency networking options to minimize delays and improve the user experience.
- Network isolation and security: Check whether your VM configuration provides the necessary network isolation and security features, especially when handling sensitive data or operating in multi-tenant environments.

#6 Factor in cost considerations

While it’s essential that your VM has the right configuration, cost is always an important factor to consider. Cloud providers typically charge based on the resources allocated, so optimizing for cost efficiency can significantly impact your budget.

Consider whether a pay-as-you-go or reserved model (which offers discounted rates in exchange for a long-term commitment) fits your usage pattern. The reserved option can provide significant savings if your workload runs continuously. You can also use monitoring tools to track your VM’s performance and resource usage over time.
This data will help you make informed decisions about scaling up or down so you’re not paying for unused resources.

#7 Evaluate security features

Security is a primary concern when selecting a VM configuration, especially for workloads handling sensitive data. Consider the following:

- Built-in security: Look for VMs that offer integrated security features such as DDoS protection, WAAP security, and encryption.
- Compliance: Check that the VM configuration meets industry standards and regulations, such as GDPR, ISO 27001, and PCI DSS.
- Network security: Evaluate the VM’s network isolation capabilities and the availability of cloud firewalls to manage incoming and outgoing traffic.

#8 Consider geographic location

The geographic location of your VM can impact latency and compliance. It’s a good idea to choose VM locations that are geographically close to your end users to minimize latency and improve performance. In addition, it’s essential to select VM locations that comply with local data sovereignty laws and regulations.

#9 Assess backup and recovery options

Backup and recovery are critical for maintaining data integrity and availability. Look for VMs that offer automated backup solutions so that data is regularly saved. You should also evaluate disaster recovery capabilities, including the ability to quickly restore data and applications in case of failure.

#10 Test and iterate

Finally, once you’ve chosen a VM configuration, testing its performance under real-world conditions is essential. Most cloud providers offer performance monitoring tools that allow you to assess how well your VM is meeting your workload requirements.

If you notice any performance bottlenecks, be prepared to adjust the configuration. This could involve increasing CPU cores, adding more memory, or upgrading storage.
Regular testing and fine-tuning keep your VM optimized as your workload evolves.

Choosing a virtual machine that suits your requirements

Selecting the best virtual machine configuration is a key step toward running your workloads efficiently, cost-effectively, and without unnecessary performance bottlenecks. By understanding your workload’s needs, considering factors like CPU, memory, storage, and network performance, and continuously monitoring resource usage, you can make informed decisions that lead to better outcomes and savings.

Whether you’re running a small application or large-scale enterprise software, the right VM configuration can significantly improve performance and cost. Gcore provides flexible virtual machine options that can meet your unique requirements. Our virtual machines are designed to meet diverse workload requirements, providing dedicated vCPUs, high-speed storage, and low-latency networking across 30+ global regions. You can scale compute resources on demand, benefit from free egress traffic, and enjoy flexible pricing models by paying only for the resources in use, maximizing the value of your cloud investments.

Contact us to discuss your VM needs
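The pay-as-you-go versus reserved tradeoff discussed above comes down to hours of use per month. A minimal sketch of the break-even calculation, with made-up rates standing in for real provider quotes:

```python
# Rough break-even between pay-as-you-go and reserved VM pricing.
# Both rates below are illustrative placeholders, not provider quotes.

on_demand_per_hour = 0.12  # $/hour, assumed
reserved_per_month = 60.0  # $/month flat, assumed

# Hours of uptime per month at which the reserved plan becomes cheaper:
break_even_hours = reserved_per_month / on_demand_per_hour
print(f"reserved wins above {break_even_hours:.0f} hours/month")

# A VM that runs 24/7 uses roughly 730 hours a month:
always_on_cost = on_demand_per_hour * 730
print(f"24/7 on-demand: ${always_on_cost:.2f}/month vs ${reserved_per_month:.2f} reserved")
```

With these placeholder rates, anything running more than about two-thirds of the month favors the reserved plan; a dev/test VM used a few hours a day favors pay-as-you-go.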
