
Create Serverless Functions with OpenFaaS

  • By Gcore
  • April 7, 2023
  • 13 min read

OpenFaaS is a serverless functions framework that runs on top of Docker and Kubernetes. In this tutorial, you’ll learn how to:

  • Deploy OpenFaaS to a Kubernetes cluster
  • Set up the OpenFaaS CLI
  • Create, build, and deploy serverless functions using the CLI
  • Invoke serverless functions using the CLI
  • Update an existing serverless function
  • Deploy serverless functions using the web interface
  • Monitor your serverless functions with Prometheus and Grafana

Prerequisites

  • A Kubernetes cluster. If you don’t have a running Kubernetes cluster, follow the instructions from the Set Up a Kubernetes Cluster with Kind section below.
  • A Docker Hub account. See the Docker Hub page for details about creating a new account.
  • kubectl. Refer to the Install and Set Up kubectl page for details about installing kubectl.
  • Node.js 10 or higher. To check if Node.js is installed on your computer, type the following command:
node --version

The following example output shows that Node.js is installed on your computer:

v10.16.3

If Node.js is not installed or you’re running an older version, you can download the installer from the Downloads page.

  • This tutorial assumes basic familiarity with Docker and Kubernetes.

Set Up a Kubernetes Cluster with Kind (Optional)

With Kind, you can run a local Kubernetes cluster using Docker containers as nodes. The steps in this section are optional. Follow them only if you don’t have a running Kubernetes cluster.

  1. Create a file named openfaas-cluster.yaml, and copy in the following spec:
# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
  2. Use the kind create cluster command to create a Kubernetes cluster with one control plane and two worker nodes, passing the file you just created with the --config flag:
kind create cluster --config openfaas-cluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.17.0) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊

Deploy OpenFaaS to a Kubernetes Cluster

You can install OpenFaaS using Helm, plain YAML files, or its own installer, arkade, which provides a quick and easy way to get OpenFaaS running. In this section, you’ll deploy OpenFaaS with arkade.

  1. Enter the following command to install arkade:
curl -sLS https://dl.get-arkade.dev | sudo sh
Downloading package https://github.com/alexellis/arkade/releases/download/0.1.10/arkade-darwin as /Users/andrei/Desktop/openFaaS/faas-hello-world/arkade-darwin
Download complete.

Running with sufficient permissions to attempt to move arkade to /usr/local/bin
New version of arkade installed to /usr/local/bin
Creating alias 'ark' for 'arkade'.
            _             _
  __ _ _ __| | ____ _  __| | ___
 / _` | '__| |/ / _` |/ _` |/ _ \
| (_| | |  |   < (_| | (_| |  __/
 \__,_|_|  |_|\_\__,_|\__,_|\___|

Get Kubernetes apps the easy way

Version: 0.1.10
Git Commit: cf96105d37ed97ed644ab56c0660f0d8f4635996
  2. Now, install openfaas with:
arkade install openfaas
Using kubeconfig: /Users/andrei/.kube/config
Using helm3
Node architecture: "amd64"
Client: "x86_64", "Darwin"
2020/03/10 16:20:40 User dir established as: /Users/andrei/.arkade/
https://get.helm.sh/helm-v3.1.1-darwin-amd64.tar.gz
/Users/andrei/.arkade/bin/helm3/darwin-amd64 darwin-amd64/
/Users/andrei/.arkade/bin/helm3/README.md darwin-amd64/README.md
/Users/andrei/.arkade/bin/helm3/LICENSE darwin-amd64/LICENSE
/Users/andrei/.arkade/bin/helm3/helm darwin-amd64/helm
2020/03/10 16:20:43 extracted tarball into /Users/andrei/.arkade/bin/helm3: 3 files, 0 dirs (1.633976582s)
"openfaas" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ibm-charts" chart repository
...Successfully got an update from the "openfaas" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈ Happy Helming!⎈
VALUES values.yaml
Command: /Users/andrei/.arkade/bin/helm3/helm [upgrade --install openfaas openfaas/openfaas --namespace openfaas --values /var/folders/nz/2gtkncgx56sgrpqvr40qhhrw0000gn/T/charts/openfaas/values.yaml --set gateway.directFunctions=true --set faasnetes.imagePullPolicy=Always --set gateway.replicas=1 --set queueWorker.replicas=1 --set clusterRole=false --set operator.create=false --set openfaasImagePullPolicy=IfNotPresent --set basicAuthPlugin.replicas=1 --set basic_auth=true --set serviceType=NodePort]
Release "openfaas" does not exist. Installing it now.
NAME: openfaas
LAST DEPLOYED: Tue Mar 10 16:21:03 2020
NAMESPACE: openfaas
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that openfaas has started, run:

  kubectl -n openfaas get deployments -l "release=openfaas, app=openfaas"

=======================================================================
OpenFaaS has been installed.
=======================================================================

# Get the faas-cli
curl -SLsf https://cli.openfaas.com | sudo sh

# Forward the gateway to your machine
kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# If basic auth is enabled, you can now log into your gateway:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin

faas-cli store deploy figlet
faas-cli list

# For Raspberry Pi
faas-cli store list \
 --platform armhf

faas-cli store deploy figlet \
 --platform armhf

# Find out more at:
# https://github.com/openfaas/faas

Thanks for using arkade!
  3. To verify that the deployments were created, run the kubectl get deployments command. Specify the namespace and the selector using the -n and -l parameters as follows:
kubectl get deployments -n openfaas -l "release=openfaas, app=openfaas"

If the deployments are not yet ready, you should see something similar to the following example output:

NAME                READY   UP-TO-DATE   AVAILABLE   AGE
alertmanager        0/1     1            0           45s
basic-auth-plugin   1/1     1            1           45s
faas-idler          0/1     1            0           45s
gateway             0/1     1            0           45s
nats                1/1     1            1           45s
prometheus          1/1     1            1           45s
queue-worker        1/1     1            1           45s

Once the installation is finished, the output should look like this:

NAME                READY   UP-TO-DATE   AVAILABLE   AGE
alertmanager        1/1     1            1           75s
basic-auth-plugin   1/1     1            1           75s
faas-idler          1/1     1            1           75s
gateway             1/1     1            1           75s
nats                1/1     1            1           75s
prometheus          1/1     1            1           75s
queue-worker        1/1     1            1           75s
  4. Check the rollout status of the gateway deployment:
kubectl rollout status -n openfaas deploy/gateway

The following example output shows that the gateway deployment has been successfully rolled out:

deployment "gateway" successfully rolled out
  5. Use the kubectl port-forward command to forward all requests made to http://localhost:8080 to the pod running the gateway service:
kubectl port-forward -n openfaas svc/gateway 8080:8080 &
[1] 78674
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

Note that the ampersand (&) runs the process in the background. You can use the jobs command to show the status of your background processes:

jobs
[1]  + running    kubectl port-forward -n openfaas svc/gateway 8080:8080
  6. Issue the following command to retrieve your password and save it into an environment variable named PASSWORD:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)

Set Up the OpenFaaS CLI

OpenFaaS provides a command-line utility you can use to build and deploy your serverless functions. You can install it by following the steps from the Installation page.
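If you don’t want to follow the full Installation page, a minimal sketch of the install is the official script shown in the arkade post-install notes earlier in this tutorial; this assumes curl is available and that you can use sudo:

```shell
# Download and install the latest faas-cli release from the official script
curl -sSL https://cli.openfaas.com | sudo sh

# Confirm that the binary is on your PATH
faas-cli version
```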

Create a Serverless Function Using the CLI

Now that OpenFaaS and the faas-cli command-line utility are installed, you can create and deploy serverless functions using the built-in template engine. OpenFaaS provides two types of templates:

  • The Classic templates are based on the Classic Watchdog and use stdio to communicate with your serverless function. Refer to the Watchdog page for more details about how OpenFaaS Watchdog works.
  • The of-watchdog templates use HTTP to communicate with your serverless function. These templates are available through the OpenFaaS Incubator GitHub repository.

In this tutorial, you’ll use a classic template.

  1. Run the following command to see the templates available in the official store:
faas-cli template store list
NAME                     SOURCE             DESCRIPTION
csharp                   openfaas           Classic C# template
dockerfile               openfaas           Classic Dockerfile template
go                       openfaas           Classic Golang template
java8                    openfaas           Classic Java 8 template
node                     openfaas           Classic NodeJS 8 template
php7                     openfaas           Classic PHP 7 template
python                   openfaas           Classic Python 2.7 template
python3                  openfaas           Classic Python 3.6 template
python3-dlrs             intel              Deep Learning Reference Stack v0.4 for ML workloads
ruby                     openfaas           Classic Ruby 2.5 template
node10-express           openfaas-incubator Node.js 10 powered by express template
ruby-http                openfaas-incubator Ruby 2.4 HTTP template
python27-flask           openfaas-incubator Python 2.7 Flask template
python3-flask            openfaas-incubator Python 3.6 Flask template
python3-http             openfaas-incubator Python 3.6 with Flask and HTTP
node8-express            openfaas-incubator Node.js 8 powered by express template
golang-http              openfaas-incubator Golang HTTP template
golang-middleware        openfaas-incubator Golang Middleware template
python3-debian           openfaas           Python 3 Debian template
powershell-template      openfaas-incubator Powershell Core Ubuntu:16.04 template
powershell-http-template openfaas-incubator Powershell Core HTTP Ubuntu:16.04 template
rust                     booyaa             Rust template
crystal                  tpei               Crystal template
csharp-httprequest       distantcam         C# HTTP template
csharp-kestrel           burtonr            C# Kestrel HTTP template
vertx-native             pmlopes            Eclipse Vert.x native image template
swift                    affix              Swift 4.2 Template
lua53                    affix              Lua 5.3 Template
vala                     affix              Vala Template
vala-http                affix              Non-Forking Vala Template
quarkus-native           pmlopes            Quarkus.io native image template
perl-alpine              tmiklas            Perl language template based on Alpine image
node10-express-service   openfaas-incubator Node.js 10 express.js microservice template
crystal-http             koffeinfrei        Crystal HTTP template
rust-http                openfaas-incubator Rust HTTP template
bash-streaming           openfaas-incubator Bash Streaming template

☞ Note that you can specify an alternative store for templates. The following example command lists the templates from a repository named andreipope:

faas-cli template store list -u https://raw.githubusercontent.com/andreipope/my-custom-store/master/templates.json
  2. Download the official templates locally:
faas-cli template pull
Fetch templates from repository: https://github.com/openfaas/templates.git at master
2020/03/11 20:51:22 Attempting to expand templates from https://github.com/openfaas/templates.git
2020/03/11 20:51:25 Fetched 19 template(s) : [csharp csharp-armhf dockerfile go go-armhf java11 java11-vert-x java8 node node-arm64 node-armhf node12 php7 python python-armhf python3 python3-armhf python3-debian ruby] from https://github.com/openfaas/templates.git

☞ By default, the above command downloads the templates from the OpenFaaS official GitHub repository. If you want to use a custom repository, then you should specify the URL of your repository. The following example command pulls the templates from a repository named andreipope:

faas-cli template pull https://github.com/andreipope/my-custom-store/
  3. To create a new serverless function, run the faas-cli new command specifying:
  • The name of your new function (appfleet-hello-world)
  • The --lang parameter followed by the programming language template (node):
faas-cli new appfleet-hello-world --lang node
Folder: appfleet-hello-world created.
  ___                   _____           ____
 / _ \ _ __   ___ _ __ |  ___|_ _  __ _/ ___|
| | | | '_ \ / _ \ '_ \| |_ / _` |/ _` \___ \
| |_| | |_) |  __/ | | |  _| (_| | (_| |___) |
 \___/| .__/ \___|_| |_|_|  \__,_|\__,_|____/
      |_|

Function created in folder: appfleet-hello-world
Stack file written: appfleet-hello-world.yml

Notes:
You have created a new function which uses Node.js 12.13.0 and the OpenFaaS
Classic Watchdog.

npm i --save can be used to add third-party packages like request or cheerio
npm documentation: https://docs.npmjs.com/

For high-throughput services, we recommend you use the node12 template which
uses a different version of the OpenFaaS watchdog.

At this point, your directory structure should look like the following:

tree . -L 2
.
├── appfleet-hello-world
│   ├── handler.js
│   └── package.json
├── appfleet-hello-world.yml
└── template
    ├── csharp
    ├── csharp-armhf
    ├── dockerfile
    ├── go
    ├── go-armhf
    ├── java11
    ├── java11-vert-x
    ├── java8
    ├── node
    ├── node-arm64
    ├── node-armhf
    ├── node12
    ├── php7
    ├── python
    ├── python-armhf
    ├── python3
    ├── python3-armhf
    ├── python3-debian
    └── ruby

21 directories, 3 files

Things to note:

  • The appfleet-hello-world/handler.js file contains the code of your serverless function. You can use the cat command to list the contents of this file:
cat appfleet-hello-world/handler.js
"use strict"

module.exports = async (context, callback) => {
    return {status: "done"}
}
  • You can specify the dependencies required by your serverless function in the package.json file. The automatically generated file is just an empty shell:
cat appfleet-hello-world/package.json
{
  "name": "function",
  "version": "1.0.0",
  "description": "",
  "main": "handler.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
  • The spec of the appfleet-hello-world function is stored in the appfleet-hello-world.yml file:
cat appfleet-hello-world.yml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  appfleet-hello-world:
    lang: node
    handler: ./appfleet-hello-world
    image: appfleet-hello-world:latest

Build Your Serverless Function

  1. Open the appfleet-hello-world.yml file in a plain-text editor, and update the image field by prepending your Docker Hub user name to it. The following example prepends my username (andreipopescu12) to the image field:
image: andreipopescu12/appfleet-hello-world:latest

Once you’ve made this change, the appfleet-hello-world.yml file should look similar to the following:

version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  appfleet-hello-world:
    lang: node
    handler: ./appfleet-hello-world
    image: <YOUR-DOCKER-HUB-ACCOUNT>/appfleet-hello-world:latest
  2. Build the function. Enter the faas-cli build command specifying the -f argument with the name of the YAML file you edited in the previous step (appfleet-hello-world.yml):
faas-cli build -f appfleet-hello-world.yml
[0] > Building appfleet-hello-world.
Clearing temporary build folder: ./build/appfleet-hello-world/
Preparing: ./appfleet-hello-world/ build/appfleet-hello-world/function
Building: andreipopescu12/appfleet-hello-world:latest with node template. Please wait..
Sending build context to Docker daemon  10.24kB
Step 1/24 : FROM openfaas/classic-watchdog:0.18.1 as watchdog
 ---> 94b5e0bef891
Step 2/24 : FROM node:12.13.0-alpine as ship
 ---> 69c8cc9212ec
Step 3/24 : COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
 ---> Using cache
 ---> ebab4b723c16
Step 4/24 : RUN chmod +x /usr/bin/fwatchdog
 ---> Using cache
 ---> 7952724b5872
Step 5/24 : RUN addgroup -S app && adduser app -S -G app
 ---> Using cache
 ---> 33c7f04595d2
Step 6/24 : WORKDIR /root/
 ---> Using cache
 ---> 77b9dee16c79
Step 7/24 : ENV NPM_CONFIG_LOGLEVEL warn
 ---> Using cache
 ---> a3d3c0bb4480
Step 8/24 : RUN mkdir -p /home/app
 ---> Using cache
 ---> 65457e03fcb1
Step 9/24 : WORKDIR /home/app
 ---> Using cache
 ---> 50ab672e5660
Step 10/24 : COPY package.json ./
 ---> Using cache
 ---> 6143e79de873
Step 11/24 : RUN npm i --production
 ---> Using cache
 ---> a41566487c6e
Step 12/24 : COPY index.js ./
 ---> Using cache
 ---> 566633e78d2c
Step 13/24 : WORKDIR /home/app/function
 ---> Using cache
 ---> 04c9de75f170
Step 14/24 : COPY function/*.json ./
 ---> Using cache
 ---> 85cf909b646a
Step 15/24 : RUN npm i --production || :
 ---> Using cache
 ---> c088cbcad583
Step 16/24 : COPY --chown=app:app function/ .
 ---> Using cache
 ---> 192db89e5941
Step 17/24 : WORKDIR /home/app/
 ---> Using cache
 ---> ee2b7d7e8bd4
Step 18/24 : RUN chmod +rx -R ./function     && chown app:app -R /home/app     && chmod 777 /tmp
 ---> Using cache
 ---> 81831389293e
Step 19/24 : USER app
 ---> Using cache
 ---> ca0cade453f5
Step 20/24 : ENV cgi_headers="true"
 ---> Using cache
 ---> afe8d7413349
Step 21/24 : ENV fprocess="node index.js"
 ---> Using cache
 ---> 5471cfe85461
Step 22/24 : EXPOSE 8080
 ---> Using cache
 ---> caaa8ae11dc7
Step 23/24 : HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
 ---> Using cache
 ---> 881b4d2adb92
Step 24/24 : CMD ["fwatchdog"]
 ---> Using cache
 ---> 82b586f039df
Successfully built 82b586f039df
Successfully tagged andreipopescu12/appfleet-hello-world:latest
Image: andreipopescu12/appfleet-hello-world:latest built.
[0] < Building appfleet-hello-world done in 2.25s.
[0] Worker done.
Total build time: 2.25s
  3. You can list your Docker images with:
docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
andreipopescu12/appfleet-hello-world   latest              82b586f039df        25 minutes ago      96MB

Push Your Image to Docker Hub

  1. Log in to Docker Hub. Run the docker login command with the --username flag followed by your Docker Hub user name. The following example command logs you in as andreipopescu12:
docker login --username andreipopescu12

Next, you will be prompted to enter your Docker Hub password:

Password:
Login Succeeded
  2. Use the faas-cli push command to push your serverless function to Docker Hub:
faas-cli push -f appfleet-hello-world.yml
The push refers to repository [docker.io/andreipopescu12/appfleet-hello-world]
073c41b18852: Pushed
a5c05e98c215: Pushed
f749ad113dce: Pushed
e4f29400b370: Pushed
b7d0eb42e645: Pushed
84fba0eb2756: Pushed
cf2a3f2bc398: Pushed
942d3272b7d4: Pushed
037b653b7d4e: Pushed
966655dc62be: Pushed
08d8e0925a73: Pushed
6ce16b164ed0: Pushed
d76ecd300100: Pushed
77cae8ab23bf: Pushed
latest: digest: sha256:4150d4cf32e7e5ffc8fd15efeed16179bbf166536f1cc7a8c4105d01a4042928 size: 3447
[0] < Pushing appfleet-hello-world [andreipopescu12/appfleet-hello-world:latest] done.
[0] Worker done.

Deploy Your Function Using the CLI

  1. With your serverless function pushed to Docker Hub, log in to your local instance of the OpenFaaS gateway by entering the following command:
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
  2. Run the faas-cli deploy command to deploy your serverless function:
faas-cli deploy -f appfleet-hello-world.yml
Deploying: appfleet-hello-world.

WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
Handling connection for 8080
Handling connection for 8080

Deployed. 202 Accepted.
URL: http://127.0.0.1:8080/function/appfleet-hello-world

☞ OpenFaaS provides an auto-scaling mechanism based on the number of requests per second, which is read from Prometheus. For the sake of simplicity, we won’t cover auto-scaling in this tutorial. To further your knowledge, you can refer to the Auto-scaling page.

  3. Use the faas-cli list command to list the functions deployed to your local OpenFaaS gateway:
faas-cli list
Function                      	Invocations    	Replicas
appfleet-hello-world          	0              	1

☞ Note that you can also list the functions deployed to a different gateway by providing the URL of the gateway as follows:

faas-cli list --gateway https://<YOUR-GATEWAY-URL>:<YOUR-GATEWAY-PORT>
  4. You can use the faas-cli describe command to retrieve more details about the appfleet-hello-world function:
faas-cli describe appfleet-hello-world
Name:                appfleet-hello-world
Status:              Ready
Replicas:            1
Available replicas:  1
Invocations:         1
Image:               andreipopescu12/appfleet-hello-world:latest
Function process:    node index.js
URL:                 http://127.0.0.1:8080/function/appfleet-hello-world
Async URL:           http://127.0.0.1:8080/async-function/appfleet-hello-world
Labels:              faas_function : appfleet-hello-world
Annotations:         prometheus.io.scrape : false

Invoke Your Serverless Function Using the CLI

  1. To see your serverless function in action, issue the faas-cli invoke command, specifying:
  • The -f flag with the name of the YAML file that describes your function (appfleet-hello-world.yml)
  • The name of your function (appfleet-hello-world)
faas-cli invoke -f appfleet-hello-world.yml appfleet-hello-world
Reading from STDIN - hit (Control + D) to stop.
  2. Type some input (for example, appfleet), and then press CTRL+D. The following example output shows that your serverless function works as expected:
appfleet
Handling connection for 8080
{"status":"done"}
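Under the hood, faas-cli invoke sends the stdin data as an HTTP request body to the gateway. As a sketch (not part of the original steps), assuming the port-forward set up earlier is still running, you can invoke the function directly with curl:

```shell
# POST a request body straight to the function's gateway URL
curl -s -X POST -d "appfleet" http://127.0.0.1:8080/function/appfleet-hello-world
```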

Update Your Function

The function you created, deployed, and then invoked in the previous sections is just an empty shell. In this section, we’ll update it to:

  • Read the name of a city from stdin
  • Fetch the weather forecast from openweathermap.org
  • Print the weather forecast to the console
  1. Create an OpenWeatherMap account by following the instructions from the Sign Up page.
  2. Log in to OpenWeatherMap and then select API KEYS:
  3. From here, you can either copy the value of the default key or create a new API key, and then copy its value:
  4. Now that you have an OpenWeatherMap API key, you must use npm to install a few dependencies. The following command moves into the appfleet-hello-world directory and then installs the get-stdin and request packages:
cd appfleet-hello-world && npm i --save get-stdin request
  5. Replace the content of the handler.js file with:
"use strict"

const getStdin = require('get-stdin')
const request = require('request');

let handler = (req) => {
  request(`http://api.openweathermap.org/data/2.5/weather?q=${req}&units=metric&APPID=<YOUR-OPENWEATHERMAP-API-KEY>`, function (error, response, body) {
    console.error('error:', error)
    console.log('statusCode:', response && response.statusCode)
    console.log('body:', JSON.stringify(body))
  })
};

getStdin().then(val => {
   handler(val);
}).catch(e => {
  console.error(e.stack);
});

module.exports = handler

☞ To try this function, replace <YOUR-OPENWEATHERMAP-API-KEY> with your OpenWeatherMap API KEY.
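The body returned by OpenWeatherMap is a JSON string like the one shown when the function is invoked later in this tutorial. If you want a friendlier summary than the raw body, you could parse it first. The following is a hypothetical helper (not part of the tutorial’s handler) that extracts the city name and temperature; it assumes the standard response shape:

```javascript
// Hypothetical helper: summarize an OpenWeatherMap current-weather response.
// Assumes the standard shape: { name, main: { temp }, weather: [{ description }] }.
function summarizeWeather(body) {
  const data = typeof body === 'string' ? JSON.parse(body) : body;
  const description = data.weather && data.weather[0] ? data.weather[0].description : 'n/a';
  // Without the units=metric parameter, temp is in Kelvin; convert to Celsius.
  const celsius = (data.main.temp - 273.15).toFixed(1);
  return `${data.name}: ${description}, ${celsius}°C`;
}

// Example input, shaped like the response shown later in this tutorial:
const sample = '{"weather":[{"description":"scattered clouds"}],"main":{"temp":282.25},"name":"Berlin"}';
console.log(summarizeWeather(sample));
```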

  6. You can use the faas-cli remove command to remove the function you’ve deployed earlier in this tutorial:
faas-cli remove appfleet-hello-world
Deleting: appfleet-hello-world.
Handling connection for 8080
Removing old function.
  7. Now that the old function has been removed, you must rebuild, push, and deploy your modified function. Instead of issuing three separate commands, you can use the faas-cli up command as in the following example:
faas-cli up -f appfleet-hello-world.yml
[0] > Building appfleet-hello-world.
Clearing temporary build folder: ./build/appfleet-hello-world/
Preparing: ./appfleet-hello-world/ build/appfleet-hello-world/function
Building: andreipopescu12/appfleet-hello-world:latest with node template. Please wait..
Sending build context to Docker daemon  43.01kB
Step 1/24 : FROM openfaas/classic-watchdog:0.18.1 as watchdog
 ---> 94b5e0bef891
Step 2/24 : FROM node:12.13.0-alpine as ship
 ---> 69c8cc9212ec
Step 3/24 : COPY --from=watchdog /fwatchdog /usr/bin/fwatchdog
 ---> Using cache
 ---> ebab4b723c16
Step 4/24 : RUN chmod +x /usr/bin/fwatchdog
 ---> Using cache
 ---> 7952724b5872
Step 5/24 : RUN addgroup -S app && adduser app -S -G app
 ---> Using cache
 ---> 33c7f04595d2
Step 6/24 : WORKDIR /root/
 ---> Using cache
 ---> 77b9dee16c79
Step 7/24 : ENV NPM_CONFIG_LOGLEVEL warn
 ---> Using cache
 ---> a3d3c0bb4480
Step 8/24 : RUN mkdir -p /home/app
 ---> Using cache
 ---> 65457e03fcb1
Step 9/24 : WORKDIR /home/app
 ---> Using cache
 ---> 50ab672e5660
Step 10/24 : COPY package.json ./
 ---> Using cache
 ---> 6143e79de873
Step 11/24 : RUN npm i --production
 ---> Using cache
 ---> a41566487c6e
Step 12/24 : COPY index.js ./
 ---> Using cache
 ---> 566633e78d2c
Step 13/24 : WORKDIR /home/app/function
 ---> Using cache
 ---> 04c9de75f170
Step 14/24 : COPY function/*.json ./
 ---> Using cache
 ---> f5765914bd05
Step 15/24 : RUN npm i --production || :
 ---> Using cache
 ---> a300be28c096
Step 16/24 : COPY --chown=app:app function/ .
 ---> 91cd72d8ad7a
Step 17/24 : WORKDIR /home/app/
 ---> Running in fce50a76475a
Removing intermediate container fce50a76475a
 ---> 0ff17b0a9faf
Step 18/24 : RUN chmod +rx -R ./function     && chown app:app -R /home/app     && chmod 777 /tmp
 ---> Running in 6d0c4c92fac1
Removing intermediate container 6d0c4c92fac1
 ---> 1e543bfbf6b0
Step 19/24 : USER app
 ---> Running in 6d33f5ec237d
Removing intermediate container 6d33f5ec237d
 ---> cb7cf5dfab12
Step 20/24 : ENV cgi_headers="true"
 ---> Running in 972c23374934
Removing intermediate container 972c23374934
 ---> 21c6e8198b21
Step 21/24 : ENV fprocess="node index.js"
 ---> Running in 3be91f9d5228
Removing intermediate container 3be91f9d5228
 ---> aafb7a756d38
Step 22/24 : EXPOSE 8080
 ---> Running in da3183bd88c5
Removing intermediate container da3183bd88c5
 ---> 5f6fd7e66a95
Step 23/24 : HEALTHCHECK --interval=3s CMD [ -e /tmp/.lock ] || exit 1
 ---> Running in a590c91037ae
Removing intermediate container a590c91037ae
 ---> fbe20c32941f
Step 24/24 : CMD ["fwatchdog"]
 ---> Running in 59cd231f0576
Removing intermediate container 59cd231f0576
 ---> 88cd8ac65ade
Successfully built 88cd8ac65ade
Successfully tagged andreipopescu12/appfleet-hello-world:latest
Image: andreipopescu12/appfleet-hello-world:latest built.
[0] < Building appfleet-hello-world done in 13.95s.
[0] Worker done.
Total build time: 13.95s

[0] > Pushing appfleet-hello-world [andreipopescu12/appfleet-hello-world:latest].
The push refers to repository [docker.io/andreipopescu12/appfleet-hello-world]
04643e0c999f: Pushed
db3ccc4403b8: Pushed
24d1d5a62262: Layer already exists
adfa28db7666: Layer already exists
b7d0eb42e645: Layer already exists
84fba0eb2756: Layer already exists
cf2a3f2bc398: Layer already exists
942d3272b7d4: Layer already exists
037b653b7d4e: Layer already exists
966655dc62be: Layer already exists
08d8e0925a73: Layer already exists
6ce16b164ed0: Layer already exists
d76ecd300100: Layer already exists
77cae8ab23bf: Layer already exists
latest: digest: sha256:818d92b10d276d32bcc459e2918cb537051a14025e694eb59a9b3caa0bb4e41c size: 3456
[0] < Pushing appfleet-hello-world [andreipopescu12/appfleet-hello-world:latest] done.
[0] Worker done.

Deploying: appfleet-hello-world.

WARNING! Communication is not secure, please consider using HTTPS. Letsencrypt.org offers free SSL/TLS certificates.
Handling connection for 8080
Handling connection for 8080

Deployed. 202 Accepted.
URL: http://127.0.0.1:8080/function/appfleet-hello-world

☞ Note that you can skip the push or the deploy steps:

  • The following example command skips the push step:
faas-cli up -f appfleet-hello-world.yml --skip-push
  • The following example command skips the deploy step:
faas-cli up -f appfleet-hello-world.yml --skip-deploy
  8. To verify that the updated serverless function works as expected, invoke it as follows:
faas-cli invoke -f appfleet-hello-world.yml appfleet-hello-world
Reading from STDIN - hit (Control + D) to stop.
Berlin
Handling connection for 8080
Hello, you are currently in Berlin
statusCode: 200
body: "{\"coord\":{\"lon\":13.41,\"lat\":52.52},\"weather\":[{\"id\":802,\"main\":\"Clouds\",\"description\":\"scattered clouds\",\"icon\":\"03d\"}],\"base\":\"stations\",\"main\":{\"temp\":282.25,\"feels_like\":270.84,\"temp_min\":280.93,\"temp_max\":283.15,\"pressure\":1008,\"humidity\":61},\"visibility\":10000,\"wind\":{\"speed\":13.9,\"deg\":260,\"gust\":19},\"clouds\":{\"all\":40},\"dt\":1584107132,\"sys\":{\"type\":1,\"id\":1275,\"country\":\"DE\",\"sunrise\":1584077086,\"sunset\":1584119213},\"timezone\":3600,\"id\":2950159,\"name\":\"Berlin\",\"cod\":200}"
  9. To clean up, run the faas-cli remove command with the name of your serverless function (appfleet-hello-world) as an argument:
faas-cli remove appfleet-hello-world
Deleting: appfleet-hello-world.
Handling connection for 8080
Removing old function.

Deploy Serverless Functions Using the Web Interface

OpenFaaS provides a web-based user interface. In this section, you’ll learn how you can use it to deploy a serverless function.

  1. First, you must use the echo command to retrieve your password:
echo $PASSWORD
49IoP28G8247MZcj6a1FWUYUx
  2. Open a browser and visit http://localhost:8080. To log in, use the admin username and the password you retrieved in the previous step. You will be redirected to the OpenFaaS home page. Select the DEPLOY NEW FUNCTION button:
  3. A new window will be displayed. Select the Custom tab, and then type:
  • docker.io/andreipopescu12/appfleet-hello-world in the Docker Image input box
  • appfleet-hello-world in the Function name input box
  4. Once you’ve filled in the Docker image and Function name input boxes, select the DEPLOY button:
  5. Your new function will be visible in the left navigation bar. Click on it:

You’ll be redirected to the invoke function page:

  6. In the Request body input box, type in the name of the city you want to retrieve the weather forecast for, and then select the INVOKE button:

If everything works well, the weather forecast will be displayed in the Response Body field:

Monitor Your Serverless Functions with Prometheus and Grafana

The OpenFaaS gateway exposes a set of Prometheus metrics, including function invocation counts and durations. The full list is documented at https://docs.openfaas.com/architecture/metrics/.

In this section, you will learn how to set up Prometheus and Grafana to track the health of your serverless functions.

  1. Use the following command to list your deployments:
kubectl get deployments -n openfaas -l "release=openfaas, app=openfaas"
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
alertmanager        1/1     1            1           15m
basic-auth-plugin   1/1     1            1           15m
faas-idler          1/1     1            1           15m
gateway             1/1     1            1           15m
nats                1/1     1            1           15m
prometheus          1/1     1            1           15m
queue-worker        1/1     1            1           15m
  2. To expose the prometheus deployment, create a service object named prometheus-ui:
kubectl expose deployment prometheus -n openfaas --type=NodePort --name=prometheus-ui
service/prometheus-ui exposed

☞ The --type=NodePort flag exposes the prometheus-ui service on a static port of each node’s IP address; a cluster IP is allocated as well. You’ll use this service to connect to Prometheus from outside the cluster.

  3. To inspect the prometheus-ui service, enter the following command:
kubectl get svc prometheus-ui -n openfaas
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
prometheus-ui   NodePort   10.96.129.204   <none>        9090:31369/TCP   8m1s
  4. Forward all requests made to http://localhost:9090 to the pod running the prometheus-ui service:
kubectl port-forward -n openfaas svc/prometheus-ui 9090:9090 &
  5. Now, you can point your browser to http://localhost:9090, and you should see a page similar to the following screenshot:
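To confirm that Prometheus is actually scraping the gateway, you can run a query either in the expression box of the Prometheus UI or against its HTTP query API. A sketch using the API, where the metric name is the gateway's invocation counter documented by OpenFaaS:

```shell
# Ask Prometheus for the current value of the gateway's
# invocation counter via the HTTP query API.
# Requires the port-forward from the previous step to be active.
curl 'http://localhost:9090/api/v1/query?query=gateway_function_invocation_total'
```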
  6. To deploy Grafana, you’ll use the stefanprodan/faas-grafana:4.6.3 image. Run the following command:
kubectl run grafana -n openfaas --image=stefanprodan/faas-grafana:4.6.3 --port=3000
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/grafana created
  7. Now, you can list your deployments with:
kubectl get deployments -n openfaas
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
alertmanager        1/1     1            1           46m
basic-auth-plugin   1/1     1            1           46m
faas-idler          1/1     1            1           46m
gateway             1/1     1            1           46m
grafana             1/1     1            1           107s
nats                1/1     1            1           46m
prometheus          1/1     1            1           46m
queue-worker        1/1     1            1           46m
  8. Use the following kubectl expose deployment command to create a service object that exposes the grafana deployment:
kubectl expose deployment grafana -n openfaas --type=NodePort --name=grafana
service/grafana exposed
  9. Retrieve details about your new service with:
kubectl get service grafana -n openfaas
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
grafana   NodePort   10.96.194.59   <none>        3000:32464/TCP   60s
  10. Forward all requests made to http://localhost:3000 to the pod running the grafana service:
kubectl port-forward -n openfaas svc/grafana 3000:3000 &
[3] 3973
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
  11. Now that you’ve set up port forwarding, you can access Grafana by pointing your browser to http://localhost:3000:
  12. Log into Grafana using the username admin and password admin. The Home Dashboard page will be displayed:
  13. From the left menu, select Dashboards –> Import:
  14. Type https://grafana.com/grafana/dashboards/3434 in the Grafana.com Dashboard input box. Then, select the Load button:
  15. In the Import Dashboard dialog box, set the Prometheus data source to faas, and then select Import:

An empty dashboard will be displayed:

  16. Now, you can invoke your function a few times using the faas-cli invoke command, piping in a city name as the request body:
echo "Berlin" | faas-cli invoke -f appfleet-hello-world.yml appfleet-hello-world
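A single invocation produces only one data point, so it can help to generate a little traffic in a loop. A quick sketch, assuming the same function and YAML file as above:

```shell
# Invoke the function ten times so the Grafana dashboard
# has some traffic to display.
for i in $(seq 1 10); do
  echo "Berlin" | faas-cli invoke -f appfleet-hello-world.yml appfleet-hello-world
done
```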
  17. Switch back to the browser window with Grafana open. Your dashboard should update automatically and look similar to the following screenshot:

We hope this tutorial was useful for learning the basics of deploying serverless functions with OpenFaaS.

Thanks for reading!

Discover more with Gcore Function as a Service

Related articles

What's the difference between multi-cloud and hybrid cloud?

Multi-cloud and hybrid cloud represent two distinct approaches to distributed computing architecture that build upon the foundation of cloud computing to help organizations improve their IT infrastructure.Multi-cloud environments involve using multiple public cloud providers simultaneously to distribute workloads across different platforms. This approach allows organizations to select the best services from each provider while reducing vendor lock-in risk by up to 60%.Companies typically choose multi-cloud strategies to access specialized tools and improve performance for specific applications.Hybrid cloud architecture combines private cloud infrastructure with one or more public cloud services to create a unified computing environment. These deployments are growing at a compound annual growth rate of 22% through 2025, driven by organizations seeking to balance security requirements with flexibility needs. The hybrid model allows sensitive data to remain on private servers while taking advantage of public cloud resources for less critical workloads.The architectural differences between these approaches center on infrastructure ownership and management complexity.Multi-cloud focuses exclusively on public cloud providers and requires managing multiple distinct platforms with unique tools and configurations. Hybrid cloud integrates both private and public resources, creating different challenges related to connectivity, data synchronization, and unified management across diverse environments.Understanding these cloud strategies is important because the decision directly impacts an organization's operational flexibility, security posture, and long-term technology costs. 
The right choice depends on specific business requirements, regulatory compliance needs, and existing infrastructure investments.What is multi-cloud?Multi-cloud is a strategy that utilizes multiple public cloud providers simultaneously to distribute workloads, applications, and data across different cloud platforms, rather than relying on a single vendor. Organizations adopt this approach to improve performance by matching specific workloads to the best-suited cloud services, reducing vendor lock-in risks, and maintaining operational flexibility. According to Precedence Research (2024), 85% of enterprises will adopt a multi-cloud plan by 2025, reflecting the growing preference for distributed cloud architectures that can reduce vendor dependency risks by up to 60%.What is hybrid cloud?Hybrid cloud is a computing architecture that combines private cloud infrastructure with one or more public cloud services, creating a unified and flexible IT environment. This approach allows organizations to keep sensitive data and critical applications on their private infrastructure while using public clouds for less sensitive workloads, development environments, or handling traffic spikes.The combination of private and public clouds enables cooperation in data and application portability, giving businesses the control and security of private infrastructure alongside the flexibility and cost benefits of public cloud services. Organizations report up to 40% cost savings by using hybrid cloud for peak demand management, offloading non-critical workloads to public clouds during high usage periods.What are the key architectural differences?Key architectural differences refer to the distinct structural and operational approaches between multi-cloud and hybrid cloud environments. 
The key architectural differences are listed below.Infrastructure composition: Multi-cloud environments utilize multiple public cloud providers simultaneously, distributing workloads across various platforms, including major cloud providers. Hybrid cloud combines private infrastructure with public cloud services to create a unified environment.Data placement plan: Multi-cloud spreads data across various public cloud platforms based on performance and cost optimization needs. Hybrid cloud keeps sensitive data on private infrastructure while moving less critical workloads to public clouds.Network connectivity: Multi-cloud requires separate network connections to each public cloud provider, creating multiple pathways for data flow. A hybrid cloud establishes dedicated connections between private and public environments to facilitate cooperation.Management complexity: Multi-cloud environments require separate management tools and processes for each cloud provider, resulting in increased operational overhead. Hybrid cloud focuses on unified management platforms that coordinate between private and public resources.Security architecture: Multi-cloud implements security policies independently across each cloud platform, requiring multiple security frameworks. Hybrid cloud maintains centralized security controls that extend from private infrastructure to public cloud resources.Workload distribution: Multi-cloud assigns specific applications to different providers based on specialized capabilities and regional requirements. Hybrid cloud flexibly moves workloads between private and public environments based on demand and compliance needs.Combination approach: Multi-cloud typically operates with loose coupling between different cloud environments, maintaining platform independence. 
Hybrid cloud requires tight communication protocols to ensure smooth data flow between private and public components.What are the benefits of multi-cloud?The benefits of multi-cloud refer to the advantages organizations gain from using multiple public cloud providers simultaneously to distribute workloads and reduce dependency on a single vendor. The benefits of multi-cloud are listed below.Vendor independence: Multi-cloud strategies prevent organizations from becoming locked into a single provider's ecosystem and pricing structure. Companies can switch providers or redistribute workloads if one vendor changes terms or experiences service issues.Cost optimization: Organizations can select the most cost-effective provider for each specific workload or service type. This approach allows companies to take advantage of competitive pricing across different platforms and avoid paying premium rates for all services.Performance improvement: Different cloud providers excel in various geographic regions and service types, enabling optimal workload placement. Companies can route traffic to the fastest-performing provider for each user location or application requirement.Risk mitigation: Distributing workloads across multiple providers reduces the impact of service outages or security incidents. If one provider experiences downtime, critical applications can continue running on alternative platforms.Access to specialized services: Each cloud provider offers unique tools and services that may be best-in-class for specific use cases. Organizations can combine the strongest AI services from one provider with the best database solutions from another.Compliance flexibility: Multi-cloud environments enable organizations to meet different regulatory requirements by selecting providers with appropriate certifications for each jurisdiction. 
This approach is particularly valuable for companies operating across multiple countries with varying data protection laws.Negotiating power: Using multiple providers strengthens an organization's position when negotiating contracts and pricing. Vendors are more likely to offer competitive rates and better terms when they know customers have alternatives readily available.What are the benefits of hybrid cloud?The benefits of hybrid cloud refer to the advantages organizations gain from combining private cloud infrastructure with public cloud services in a unified environment. The benefits of hybrid cloud are listed below.Cost optimization: Organizations can keep predictable workloads on cost-effective private infrastructure while using public clouds for variable demands. This approach can reduce overall IT spending by 20-40% compared to all-public or all-private models.Enhanced security control: Sensitive data and critical applications remain on private infrastructure under direct organizational control. Public cloud resources handle less sensitive workloads, creating a balanced security approach that meets compliance requirements.Improved flexibility: Companies can quickly scale resources up or down by moving workloads between private and public environments. This flexibility enables businesses to handle traffic spikes without maintaining expensive, idle on-premises capacity.Workload optimization: Different applications can run on the most suitable infrastructure based on performance, security, and cost requirements. Database servers may remain private, while web applications utilize public cloud resources for a broader global reach.Disaster recovery capabilities: Organizations can replicate critical data and applications across both private and public environments. 
This redundancy provides multiple recovery options and reduces downtime risks during system failures.Regulatory compliance: Companies in regulated industries can keep sensitive data on private infrastructure while using public clouds for approved workloads. This separation helps meet industry-specific compliance requirements without sacrificing cloud benefits.Reduced vendor dependency: Hybrid environments prevent complete reliance on a single cloud provider by maintaining private infrastructure options. Organizations retain the ability to shift workloads if public cloud costs increase or service quality declines.When should you use multi-cloud vs hybrid cloud?You should use multi-cloud when your organization needs maximum flexibility across different public cloud providers, while hybrid cloud works best when you must keep sensitive data on-premises while accessing public cloud flexibility.Choose a multi-cloud approach when you want to avoid vendor lock-in and require specialized services from multiple providers. This approach works well when your team has expertise managing multiple platforms and you can handle increased operational complexity. Multi-cloud becomes essential when compliance requirements vary by region or when you need best-of-breed services that no single provider offers completely.Select hybrid cloud when regulatory requirements mandate on-premises data storage, but you still need public cloud benefits.This model fits organizations with existing private infrastructure investments that want gradual cloud migration. 
Hybrid cloud works best when you need consistent performance for critical applications while using public clouds for development, testing, or seasonal workload spikes.Consider multi-cloud when your budget allows for higher management overhead in exchange for reduced vendor dependency.Choose a hybrid cloud when you need tighter security control over core systems while maintaining cost-effectiveness through selective public cloud use for non-sensitive workloads.What are the challenges of multi-cloud?Multi-cloud challenges refer to the difficulties organizations face when managing workloads across multiple public cloud providers simultaneously. The multi-cloud challenges are listed below.Increased management complexity: Managing multiple cloud platforms requires teams to master different interfaces, APIs, and operational procedures. Each provider has unique tools and configurations, making it difficult to maintain consistent governance across environments.Security and compliance gaps: Different cloud providers employ varying security models and hold different compliance certifications, creating potential vulnerabilities. Organizations must ensure consistent security policies across all platforms while meeting regulatory requirements in each environment.Data combination difficulties: Moving and synchronizing data between different cloud platforms can be complex and costly. Each provider uses different data formats and transfer protocols, making cooperation challenging.Cost management complexity: Tracking and improving costs across multiple cloud providers becomes increasingly difficult. Different pricing models, billing cycles, and cost structures make it hard to compare expenses and identify optimization opportunities.Skill and training requirements: IT teams need expertise in multiple cloud platforms, requiring wide training and certification programs. 
This increases hiring costs and creates potential knowledge gaps when staff turnover occurs.Network connectivity issues: Establishing reliable, high-performance connections between different cloud providers can be technically challenging. Latency and bandwidth limitations may affect application performance and user experience.Vendor-specific lock-in risks: While multi-cloud reduces overall vendor dependency, organizations may still face lock-in with specific services or applications. Moving workloads between providers often requires significant re-architecture and development effort.What are the challenges of hybrid cloud?Challenges of hybrid cloud refer to the technical, operational, and planned difficulties organizations face when combining private and public cloud infrastructure. The challenges of hybrid cloud are listed below.Complex combination: Connecting private and public cloud environments requires careful planning and technical work. Different systems often use incompatible protocols, making cooperation in data flow difficult to achieve.Security gaps: Managing security across multiple environments creates potential weak points where data can be exposed. Organizations must maintain consistent security policies between private infrastructure and public cloud services.Network latency: Data transfer between private and public clouds can create delays that affect application performance. This latency becomes more noticeable for real-time applications that need instant responses.Cost management: Tracking expenses across hybrid environments proves challenging when costs come from multiple sources. Organizations often struggle to predict total spending when workloads shift between private and public resources.Skills shortage: Managing hybrid cloud requires expertise in both private infrastructure and public cloud platforms. 
Many IT teams lack the specialized knowledge needed to handle this complex environment effectively.Compliance complexity: Meeting regulatory requirements becomes more challenging when data is transferred between different cloud environments. Organizations must ensure that both private and public components meet industry standards and comply with relevant legal requirements.Vendor lock-in risks: Choosing specific public cloud services can make it difficult to switch providers later. This dependency limits flexibility and can increase long-term costs as organizations become tied to particular platforms.Can you combine multi-cloud and hybrid cloud strategies?Yes, you can combine multi-cloud and hybrid cloud strategies to create a flexible infrastructure that uses multiple public cloud providers while maintaining private cloud components. This combined approach allows organizations to place sensitive workloads on private infrastructure while distributing other applications across public clouds for best performance and cost effectiveness.The combination works by using hybrid cloud architecture as your foundation, then extending public cloud components across multiple providers rather than relying on just one. For example, you might keep customer data on private servers, while using one public cloud for web applications and another for data analytics and machine learning workloads.This dual plan maximizes both security and flexibility.You get the data control and compliance benefits of hybrid cloud while avoiding vendor lock-in through multi-cloud distribution. 
Many large enterprises adopt this approach to balance regulatory requirements with operational agility; however, it requires more complex management tools and expertise to coordinate effectively across multiple platforms.How does Gcore support multi-cloud and hybrid cloud deployments?When using multi-cloud or hybrid cloud strategies, success often depends on having the right infrastructure foundation that can seamlessly connect and manage resources across different environments.Gcore's global infrastructure, with over 210 points of presence and an average latency of 30ms, provides the connectivity backbone that multi-cloud and hybrid deployments require. Our edge cloud services bridge the gap between your private infrastructure and public cloud resources, while our CDN ensures consistent performance across all environments. This integrated approach helps organizations achieve the 30% performance improvements and 40% cost savings that well-architected hybrid deployments typically deliver.Whether you're distributing workloads across multiple public clouds or combining private infrastructure with cloud resources, having reliable, low-latency connectivity becomes the foundation that makes everything else possible.Explore how Gcore's infrastructure can support your multi-cloud and hybrid cloud plan at gcore.com.Frequently asked questionsIs multi-cloud more expensive than hybrid cloud?Multi-cloud is typically more expensive than hybrid cloud due to higher management complexity, multiple vendor contracts, and increased operational overhead. Multi-cloud requires managing separate billing, security policies, and combination tools across different public cloud providers, while hybrid cloud focuses resources on improving one private-public cloud relationship.Do I need special tools to manage multi-cloud environments?Yes, multi-cloud environments require specialized management tools to handle the complexity of multiple cloud platforms. 
These tools include cloud management platforms (CMPs), infrastructure-as-code solutions, and unified monitoring systems that provide centralized control across different providers.Can I migrate from hybrid cloud to multi-cloud?Yes, you can migrate from hybrid cloud to multi-cloud by transitioning your workloads from the combined private-public model to multiple public cloud providers. This migration requires careful planning to redistribute applications across different platforms while maintaining performance and security standards.How do I ensure security across multiple clouds?You can ensure security across multiple clouds by using centralized identity management, consistent security policies, and unified monitoring tools. This approach maintains security standards regardless of which cloud provider hosts your workloads.

What is multi-cloud? Strategy, benefits, and best practices

Multi-cloud is a cloud usage model where an organization utilizes public cloud services from two or more cloud service providers, often combining public, private, and hybrid clouds, as well as different service models, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). According to the 2024 State of the Cloud Report by Flexera, 92% of enterprises now use multiple cloud services.Multi-cloud architecture works by distributing applications and data across multiple cloud providers, using each provider's strengths and geographic locations to improve performance, cost, and compliance. This approach enables workload, data, traffic, and workflow portability across different cloud platforms, creating enhanced flexibility and resilience for organizations.Multi-cloud environments can reduce latency by up to 30% through geographical distribution of processing requests to physically closer cloud units.The main types of multi-cloud deployments include hybrid cloud with multi-cloud services and workload-specific multi-cloud configurations. In hybrid multi-cloud setups, sensitive data remains on private clouds, while flexible workloads run across multiple public clouds. Workload-specific multi-cloud matches different applications to the cloud provider best suited for their specific requirements and performance needs.Multi-cloud offers several key benefits that drive enterprise adoption across industries.Over 80% of enterprises report improved disaster recovery capabilities with multi-cloud strategies, as organizations can distribute their infrastructure across multiple providers to avoid single points of failure. 
This approach also provides cost optimization opportunities, vendor independence, and access to specialized services from different providers.Understanding multi-cloud architecture is important because it represents the dominant cloud plan for modern enterprises seeking to balance performance, cost, security, and compliance requirements. Organizations that master multi-cloud use gain competitive advantages through increased flexibility, improved disaster recovery, and the ability to choose the best services from each provider.What is multi-cloud?Multi-cloud is a planned approach to cloud use where organizations utilize services from two or more cloud providers simultaneously. Creating an integrated environment that combines public, private, and hybrid clouds, along with different service models like IaaS. PaaS and SaaS. This architecture enables workload and data portability across different platforms, allowing businesses to distribute applications based on each provider's strengths, geographic locations, and specific capabilities. According to Flexera (2024), 92% of enterprises now use multiple cloud services, reflecting the growing adoption of this integrated approach. Multi-cloud differs from simply using multiple isolated cloud environments by focusing on unified management and planned distribution rather than maintaining separate, disconnected cloud silos.How does multi-cloud architecture work?Multi-cloud architecture works by distributing applications, data, and workloads across multiple cloud service providers to create an integrated computing environment. Organizations connect and manage services from different cloud platforms through centralized orchestration tools and APIs, treating the diverse infrastructure as a unified system rather than separate silos. The architecture operates through several key mechanisms.First, workload distribution allows companies to place specific applications on the cloud platform best suited for each task. 
Compute-intensive processes might run on one provider while data analytics runs on another. Second, data replication and synchronization tools keep information consistent across platforms, enabling failover and backup capabilities.Third, network connectivity solutions, such as VPNs and dedicated connections, securely link the different cloud environments. Management is facilitated through cloud orchestration platforms that provide a single control plane for monitoring, utilizing, and scaling resources across all connected providers. These tools consistently handle authentication, resource allocation, and policy enforcement, regardless of the underlying cloud platform.Load balancers and traffic management systems automatically route user requests to the most suitable cloud location, based on factors such as geographic proximity, current capacity, and performance requirements. This distributed approach enables organizations to avoid vendor lock-in while improving costs through competitive pricing negotiations.It also improves disaster recovery by spreading risk across multiple platforms and helps meet regulatory compliance requirements by placing data in specific geographic regions as needed.What are the types of multi-cloud deployments?Types of multi-cloud deployments refer to the different architectural approaches organizations use to distribute workloads and services across multiple cloud providers. The types of multi-cloud deployments are listed below.Hybrid multi-cloud: This approach combines private cloud infrastructure with services from multiple public cloud providers. Organizations store sensitive data and critical applications on private clouds, while utilizing different public clouds for specific workloads, such as development, testing, or seasonal growth.Workload-specific multi-cloud: Different applications and workloads are matched to the cloud provider that best serves their specific requirements. 
For example, compute-intensive tasks may run on one provider, while machine learning workloads utilize another provider's specialized AI services.Geographic multi-cloud: Services are distributed across multiple cloud providers based on geographic regions to meet data sovereignty requirements and reduce latency. This use ensures compliance with local regulations while improving performance for users in different locations.Disaster recovery multi-cloud: Primary workloads run on one cloud provider while backup systems and disaster recovery infrastructure operate on different providers. This approach creates redundancy and ensures business continuity in the event that one provider experiences outages.Cost-optimized multi-cloud: Organizations carefully place workloads across different providers based on pricing models and cost structures. This usage type enables companies to benefit from competitive pricing and avoid vendor lock-in situations.Compliance-driven multi-cloud: Different cloud providers are used to meet specific regulatory and compliance requirements across various jurisdictions. Financial services and healthcare organizations often use this approach to satisfy industry-specific regulations while maintaining operational flexibility.What are the benefits of multi-cloud?The benefits of multi-cloud refer to the advantages organizations gain from using cloud services across multiple providers in an integrated approach. The benefits of multi-cloud are listed below.Vendor independence: Multi-cloud prevents organizations from becoming locked into a single provider's ecosystem and pricing structure. Companies can switch between providers or negotiate better terms when they're not dependent on one vendor.Cost optimization: Organizations can choose the most cost-effective provider for each specific workload or service type. 
This approach allows companies to negotiate up to 20% better pricing by using competition among providers.Improved disaster recovery: Distributing workloads across multiple cloud providers creates natural redundancy and backup options. Over 80% of enterprises report improved disaster recovery capabilities with multi-cloud strategies in place.Regulatory compliance: Multi-cloud enables organizations to meet data sovereignty requirements by storing data in specific geographic regions. Financial and healthcare companies can comply with local regulations while maintaining global operations.Performance optimization: Different providers excel in different services, allowing organizations to match workloads with the best-suited platform. Multi-cloud environments can reduce latency by up to 30% through geographic distribution of processing requests.Risk mitigation: Spreading operations across multiple providers reduces the impact of service outages or security incidents. If one provider experiences downtime, critical operations can continue on alternative platforms.Access to specialized services: Each cloud provider offers unique tools and capabilities that may not be available elsewhere. Organizations can combine the best machine learning tools from one provider with superior storage solutions from another.What are the challenges of multi-cloud?Challenges of multi-cloud refer to the difficulties and obstacles organizations face when managing and operating cloud services across multiple cloud providers. The challenges of multi-cloud are listed below.Increased complexity: Managing multiple cloud environments creates operational overhead that can overwhelm IT teams, leading to inefficiencies and increased costs. Each provider has different interfaces, APIs, and management tools that require specialized knowledge and training.Security management: Maintaining consistent cloud security policies across different cloud platforms becomes exponentially more difficult. 
Organizations must monitor and secure multiple attack surfaces while ensuring compliance standards are met across all environments.Cost visibility: Tracking and controlling expenses across multiple cloud providers creates billing complexity that's hard to manage. Without proper monitoring tools, organizations often face unexpected costs and struggle to improve spending across platforms.Data combination: Moving and synchronizing data between different cloud environments introduces latency and compatibility issues. Organizations must also handle varying data formats and transfer protocols between different providers.Skill requirements: Multi-cloud environments demand expertise in multiple platforms, creating significant training costs and talent acquisition challenges. IT teams need to master different cloud architectures, tools, and best practices simultaneously.Vendor management: Coordinating with multiple cloud providers for support, updates, and service-level agreements creates an administrative burden. Organizations must maintain separate relationships and contracts while ensuring consistent service quality.Network connectivity: Establishing reliable, high-performance connections between different cloud environments requires careful planning and often expensive dedicated links. Latency and bandwidth limitations can impact application performance across distributed workloads.How to implement a multi-cloud strategyYou use a multi-cloud plan by selecting multiple cloud providers, designing an integrated architecture, and establishing unified management processes across all platforms.First, assess your organization's specific needs and define clear objectives for multi-cloud adoption. Identify which workloads require high availability, which need cost optimization, and which must comply with data sovereignty requirements. 
Document your current infrastructure, performance requirements, and budget constraints to guide provider selection.

Next, select 2-3 cloud providers based on their strengths for different use cases. Choose providers that excel in areas matching your workload requirements: one might offer superior compute services while another provides better data analytics tools. Avoid selecting too many providers initially, as this increases management complexity.

Then, design your multi-cloud architecture with clear workload distribution rules. Map specific applications and data types to the most suitable cloud platforms based on performance, compliance, and cost factors. Plan for data synchronization and communication pathways between different cloud environments.

After that, establish unified identity and access management across all selected platforms. Set up single sign-on solutions and consistent security policies to maintain control while enabling seamless user access. This prevents security gaps that often emerge when managing multiple separate cloud accounts.

Adopt centralized monitoring and management tools that provide visibility across all cloud environments. Cloud management platforms or multi-cloud orchestration tools can track performance, costs, and security metrics from a single dashboard.

Create standardized deployment processes and automation workflows that work consistently across different cloud platforms. Use infrastructure-as-code tools and containerization to ensure that applications can be deployed and managed uniformly, regardless of the underlying cloud provider.

Finally, establish clear governance policies for data placement, workload migration, and cost management. Define which types of data can be stored where, set up automated cost alerts, and create procedures for moving workloads between clouds when needed.
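The cross-platform cost tracking and automated cost alerts mentioned in the steps above can be sketched as a small aggregation script. This is a minimal illustration, assuming each provider's billing export has already been normalized into (provider, service, monthly cost) records; the provider names, services, and budget threshold below are hypothetical placeholders.

```python
# Minimal sketch of cross-provider cost aggregation with a budget alert.
# Assumes billing data has already been exported and normalized per provider;
# the records and threshold below are hypothetical placeholders.

from collections import defaultdict

billing_records = [
    ("provider-a", "compute", 12_400.0),
    ("provider-a", "storage", 3_100.0),
    ("provider-b", "compute", 8_900.0),
    ("provider-b", "analytics", 5_600.0),
]

MONTHLY_BUDGET_PER_PROVIDER = 15_000.0  # hypothetical governance threshold

def summarize(records):
    """Total monthly spend per provider, for a single dashboard view."""
    totals = defaultdict(float)
    for provider, _service, cost in records:
        totals[provider] += cost
    return dict(totals)

def over_budget(totals, budget):
    """Providers whose spend exceeds the agreed threshold."""
    return [p for p, total in totals.items() if total > budget]

totals = summarize(billing_records)
print(totals)                                            # spend per provider
print(over_budget(totals, MONTHLY_BUDGET_PER_PROVIDER))  # providers that trigger an alert
```

In practice the same shape of report would be fed by each provider's billing API rather than a hard-coded list, but the governance logic (aggregate, compare against a threshold, alert) stays this simple.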
Start with a pilot project using two providers before expanding to additional platforms; this allows you to refine your processes and identify potential integration challenges early.

What is the difference between multi-cloud and hybrid cloud?

Multi-cloud differs from hybrid cloud primarily in provider diversity, infrastructure composition, and management scope. Multi-cloud uses services from multiple public cloud providers to avoid vendor lock-in and optimize specific workloads, while hybrid cloud combines public and private cloud infrastructure to balance security, control, and flexibility within a unified environment.

Infrastructure architecture distinguishes these approaches. Multi-cloud distributes workloads across different public cloud platforms, with each provider handling specific applications based on its strengths: one might excel at machine learning, while another offers better database services. Hybrid cloud integrates on-premises private infrastructure with public cloud resources, creating a bridge between internal systems and external cloud capabilities that organizations can control directly.

Management complexity varies considerably between the two models. Multi-cloud requires coordinating multiple vendor relationships, different APIs, security protocols, and billing systems across various platforms. Hybrid cloud focuses on managing the connection and data flow between private and public environments, typically involving fewer vendors but requiring deeper integration between on-premises and cloud infrastructure.

Cost and compliance considerations also differ substantially. Multi-cloud enables organizations to negotiate better pricing by playing providers against each other and selecting the most cost-effective service for each workload; according to Flexera (2024), 92% of enterprises now use multiple cloud services.
Hybrid cloud prioritizes data sovereignty and regulatory compliance by keeping sensitive information on private infrastructure, with public clouds handling less critical workloads; this split is particularly valuable in industries with strict data governance requirements.

What are multi-cloud best practices?

Multi-cloud best practices are proven methods and strategies for effectively managing and operating workloads across multiple cloud service providers. The main best practices are listed below.

- Develop a clear multi-cloud strategy: Define specific business objectives for using multiple cloud providers before implementation. The strategy should identify which workloads belong on which platforms and establish clear criteria for cloud selection based on performance, cost, and compliance requirements.
- Establish consistent security policies: Create unified security frameworks that maintain consistent protection across all cloud environments. This includes standardized identity and access management, encryption protocols, and security monitoring that spans multiple platforms.
- Use cloud-agnostic tools: Select management and monitoring tools that operate across cloud platforms to minimize complexity. These tools help maintain visibility and control over resources regardless of which provider hosts them.
- Plan for data governance: Implement precise data classification and management policies that define where different types of data can be stored. This includes considering data sovereignty requirements and ensuring compliance with regulations across all cloud environments.
- Design for portability: Build applications and configure workloads so they can move between cloud providers when needed. This approach prevents vendor lock-in and maintains flexibility for future changes in cloud strategy.
- Monitor costs across platforms: Track spending and resource usage across all cloud providers to identify optimization opportunities. Regular cost analysis helps ensure the multi-cloud approach delivers the expected financial benefits.
- Establish disaster recovery procedures: Create backup and recovery plans that work across multiple cloud environments to improve resilience. This includes testing failover procedures and ensuring that data can be recovered from any provider in the event of an outage.

How does Gcore support multi-cloud strategies?

When building multi-cloud strategies, the success of your approach depends heavily on having infrastructure partners that can bridge different cloud environments while maintaining consistent performance. Gcore's global infrastructure supports multi-cloud deployments with over 210 points of presence worldwide, delivering an average latency of 30ms that helps reduce the geographic performance gaps that often challenge multi-cloud architectures.

Our edge cloud and CDN services work across your existing cloud providers, creating the unified connectivity layer that multi-cloud environments need, while avoiding the vendor lock-in concerns that drive organizations toward multi-cloud strategies in the first place. This approach typically reduces the operational complexity that can cause 40% increases in management overhead, while maintaining the flexibility to distribute workloads based on each provider's strengths. Discover how Gcore's infrastructure can support your multi-cloud strategy at gcore.com.

Frequently asked questions

What is an example of multi-cloud?

An example of multi-cloud is a company using cloud services from multiple providers, such as running databases on one platform, web applications on another, and data analytics on a third, while managing them as one integrated system.
This differs from simply having separate accounts with different providers because it creates unified management and workload distribution across platforms.

How many cloud providers do I need for multi-cloud?

Most organizations need 2-3 cloud providers for an effective multi-cloud setup. This typically includes one primary provider for core workloads and one or two secondary providers for specific services, disaster recovery, or compliance requirements.

Can small businesses use multi-cloud?

Yes, small businesses can adopt a multi-cloud approach by starting with two cloud providers for specific workloads, such as backup and primary operations. This helps them avoid vendor lock-in and improve disaster recovery without the complexity of managing many platforms at once.

What is the difference between multi-cloud and multitenancy?

Multi-cloud uses multiple cloud providers for various services, whereas multitenancy enables multiple customers to share the same cloud infrastructure. Multi-cloud is about distributing workloads across different cloud platforms for flexibility and avoiding vendor lock-in. In contrast, multitenancy involves sharing resources, where a single provider serves multiple isolated customer environments on shared hardware.

Which industries benefit most from multi-cloud?

Financial services, healthcare, retail, and manufacturing benefit most from multi-cloud strategies due to their strict compliance requirements and diverse workload needs. These sectors use multi-cloud to meet data sovereignty laws, improve disaster recovery, and reduce costs by drawing on different providers' specialized services.

Can I use Kubernetes for multi-cloud?

Yes. Kubernetes supports multi-cloud deployments through its cloud-agnostic architecture and standardized APIs that work across different cloud providers. You can run Kubernetes clusters on multiple clouds simultaneously, distribute workloads based on specific requirements, and maintain consistent application deployment patterns regardless of the underlying infrastructure. Read more about Gcore's Managed Kubernetes service here.
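The redundancy argument made throughout this article (if one provider experiences downtime, operations continue on an alternative platform) can be illustrated with a toy failover selector. This is a sketch under stated assumptions: the endpoint names and the health map are hypothetical stand-ins for real provider health probes or load-balancer status.

```python
# Toy illustration of multi-cloud failover: pick the first healthy endpoint
# from an ordered preference list. Endpoint names are hypothetical; in a real
# setup the health map would come from active probes or a load balancer.

PREFERRED_ENDPOINTS = [
    "eu-cluster.provider-a.example",  # primary
    "eu-cluster.provider-b.example",  # same-region fallback on another provider
    "us-cluster.provider-b.example",  # cross-region last resort
]

def select_endpoint(health, preferred=PREFERRED_ENDPOINTS):
    """Return the first endpoint reported healthy, or None if all are down."""
    for endpoint in preferred:
        if health.get(endpoint):
            return endpoint
    return None

# Simulate an outage at the primary provider:
health = {
    "eu-cluster.provider-a.example": False,  # provider A outage
    "eu-cluster.provider-b.example": True,
    "us-cluster.provider-b.example": True,
}
print(select_endpoint(health))  # -> eu-cluster.provider-b.example
```

Real failover systems add health-check hysteresis, DNS or load-balancer integration, and automated failback, but the core decision is this ordered-preference check.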

What is cloud migration? Benefits, strategy, and best practices

Cloud migration is the process of transferring digital assets, such as data, applications, and IT resources, from on-premises data centers to cloud platforms, including public, private, hybrid, or multi-cloud environments. Organizations can reduce IT infrastructure costs by up to 30% through cloud migration, making this transition a critical business priority.

The migration process involves six distinct approaches that organizations can choose based on their specific needs and technical requirements: rehosting (lift-and-shift), replatforming (making small changes), refactoring (redesigning applications for the cloud), repurchasing (switching to new cloud-based software), retiring (decommissioning old systems), and retaining (keeping some systems on-premises). Each approach offers a different level of complexity and potential benefit.

Cloud migration follows a structured sequence of phases that ensure a successful transition: planning and assessment, selecting cloud service providers, designing the target cloud architecture, migrating workloads, testing and validation, and post-migration optimization. Proper execution of these phases helps reduce risks and downtime during the migration process.

The business advantages of cloud migration extend beyond simple cost reduction to include increased flexibility, improved performance, and enhanced security capabilities. Cloud environments also enable faster development cycles and provide better support for remote work and global collaboration.

Understanding cloud migration is crucial for modern businesses, as downtime during migration can result in revenue losses averaging $5,600 per minute.
Conversely, successful migrations can drive competitive advantage through improved operational efficiency and enhanced technological capabilities.

What is cloud migration?

Cloud migration is the process of moving digital assets, applications, data, and IT resources from on-premises infrastructure to cloud-based environments, which can include public, private, hybrid, or multi-cloud platforms. This planned shift allows organizations to replace traditional physical servers and data centers with flexible, internet-accessible computing resources hosted by cloud service providers. The migration process involves careful planning, assessment of existing systems, and systematic transfer of workloads to improve performance, reduce costs, and increase operational flexibility in modern IT environments.

What are the types of cloud migration?

Types of cloud migration are the different strategies and approaches organizations use to move their digital assets, applications, and data from on-premises infrastructure to cloud environments. The types are listed below.

- Rehosting: Moves applications to the cloud without changes to code or architecture. Also known as "lift-and-shift," it's the fastest migration method and works well for applications that don't require immediate optimization.
- Replatforming: Involves making minor changes to applications during migration to take advantage of cloud benefits. Organizations might upgrade database versions or modify configurations while keeping the core architecture intact.
- Refactoring: Redesigns applications specifically for cloud-native architectures to maximize cloud benefits. While more time-intensive, refactoring can improve performance by up to 50% and enable better flexibility and cost efficiency.
- Repurchasing: Replaces existing applications with cloud-based software-as-a-service (SaaS) solutions. Organizations switch from licensed software to subscription-based cloud alternatives that offer similar functionality.
- Retiring: Involves decommissioning applications that are no longer needed or useful. Organizations identify redundant or outdated systems and shut them down instead of migrating them, reducing costs and complexity.
- Retaining: Keeps certain applications on-premises due to compliance requirements, technical limitations, or business needs. Organizations maintain hybrid environments where some workloads remain in traditional data centers while others migrate to the cloud.

What are the phases of cloud migration?

The phases of cloud migration are the structured stages organizations follow when moving their digital assets, applications, and IT resources from on-premises infrastructure to cloud environments. The phases are listed below.

- Planning and assessment: Organizations evaluate their current IT infrastructure, applications, and data to determine what can be migrated to the cloud. This phase includes identifying dependencies, assessing security requirements, and creating a detailed migration roadmap with timelines and resource allocation.
- Cloud provider selection: Teams research and compare cloud service providers based on their specific technical requirements, compliance needs, and budget constraints. The selection process involves evaluating service offerings, pricing models, geographic availability, and support capabilities.
- Architecture design: IT teams design the target cloud environment, including network configurations, security controls, and resource allocation strategies. This phase involves creating detailed technical specifications for how applications and data will operate in the new cloud infrastructure.
- Migration execution: The actual transfer of applications, data, and workloads from on-premises systems to the cloud. Organizations often migrate in phases, starting with less critical systems to reduce business disruption and risk.
- Testing and validation: Migrated systems undergo thorough testing to ensure they function correctly in the cloud environment and meet performance requirements. This phase includes user acceptance testing, security validation, and performance benchmarking against pre-migration baselines.
- Optimization and monitoring: After successful migration, teams fine-tune cloud resources for cost-effectiveness and performance while establishing ongoing monitoring processes. This final phase focuses on right-sizing resources, implementing automated scaling, and setting up alerting for continuous improvement.

What are the benefits of cloud migration?

The benefits of cloud migration are the advantages organizations gain when moving their digital assets, applications, and IT infrastructure from on-premises data centers to cloud environments. The benefits are listed below.

- Cost reduction: Organizations can reduce IT infrastructure costs by up to 30% by eliminating the need for physical hardware maintenance, cooling systems, and dedicated IT staff. The pay-as-you-use model means companies only pay for resources they actually consume, avoiding overprovisioning expenses.
- Improved flexibility: Cloud platforms enable businesses to scale resources up or down instantly in response to demand, eliminating the need for additional hardware purchases. This flexibility is particularly valuable during peak seasons or unexpected traffic spikes, when traditional infrastructure would take weeks or months to expand.
- Enhanced performance: Applications often run faster in cloud environments due to optimized infrastructure and global content delivery networks. Refactoring applications for the cloud can improve performance by up to 50% compared to legacy on-premises systems.
- Better security: Cloud providers invest billions in security infrastructure, offering advanced threat detection, encryption, and compliance certifications that most organizations can't afford independently. Multi-layered security protocols and automatic updates protect against emerging threats more effectively than traditional IT setups.
- Increased accessibility: Cloud migration enables remote work and global collaboration by making applications and data accessible from anywhere with an internet connection. Teams can work on the same projects simultaneously, regardless of their physical location.
- Faster innovation: Cloud environments provide access to advanced technologies such as artificial intelligence, machine learning, and advanced analytics without requiring specialized hardware investments. Development teams can deploy new features and applications much faster than with traditional infrastructure.
- Automatic updates: Cloud platforms handle software updates, security patches, and system maintenance automatically, reducing the burden on internal IT teams. This keeps systems current with the latest features and security improvements without manual intervention.

What are the challenges of cloud migration?

Cloud migration challenges are the obstacles and difficulties organizations face when moving their digital assets, applications, and IT infrastructure from on-premises environments to cloud platforms. The challenges are listed below.

- Security and compliance risks: Moving sensitive data to cloud environments creates new security vulnerabilities and regulatory compliance concerns. Organizations must ensure that data protection standards are maintained throughout the migration and that cloud configurations meet industry-specific requirements such as HIPAA or GDPR.
- Legacy application compatibility: Older applications often weren't designed for cloud environments and may require significant modifications or complete rebuilds. This compatibility gap can lead to unexpected technical issues, extended timelines, and increased costs.
- Downtime and business disruption: Migration activities can cause service interruptions that impact business operations and customer experience. Even brief outages can result in revenue losses, with downtime during cloud migration averaging $5,600 per minute in financial impact.
- Cost overruns and budget management: Initial cost estimates often fall short due to unexpected technical requirements, data transfer fees, and extended migration timelines. Organizations frequently underestimate the resources needed for testing, training, and post-migration optimization.
- Data transfer complexity: Moving large volumes of data to the cloud can be time-consuming and expensive, especially when dealing with bandwidth limitations. Network constraints and data transfer costs can significantly impact migration schedules and budgets.
- Skills and knowledge gaps: Cloud migration requires specialized expertise that many internal IT teams lack. Organizations often struggle to find qualified personnel or need to invest heavily in training existing staff on cloud technologies and best practices.
- Vendor lock-in concerns: Choosing specific cloud platforms can create dependencies that make future migrations difficult and expensive. Organizations worry about losing flexibility and negotiating power once their systems are deeply integrated with a particular provider's services.

How to create a cloud migration strategy

You create a cloud migration strategy by assessing your current infrastructure, defining clear objectives, choosing the right migration approach, and planning the execution in phases with proper risk management.

First, conduct a complete inventory of your current IT infrastructure, including applications, databases, storage systems, and network configurations. Document dependencies between systems, performance requirements, and compliance needs to understand what you're working with.

Next, define your business objectives for the migration, such as cost reduction targets, performance improvements, or flexibility requirements. Set specific, measurable goals, such as reducing infrastructure costs by 25% or improving application response times by 40%.

Then, evaluate and select your target cloud environment based on your requirements. Consider factors such as data residency rules, integration capabilities with existing systems, and whether a public, private, or hybrid cloud model best suits your needs.

Choose the appropriate migration approach for each workload. Use lift-and-shift for simple applications that require quick migration, replatforming for applications that benefit from minor cloud optimizations, or refactoring for applications that can achieve significant performance improvements through cloud-native redesign.

Create a detailed migration timeline with phases, starting with less critical applications as pilots. Plan for testing periods, rollback procedures, and staff training to ensure smooth transitions without disrupting business operations.

Establish security and compliance frameworks for your cloud environment before migration begins.
Set up identity management, data encryption, network security controls, and monitoring systems that meet your industry's regulatory requirements.

Finally, develop a complete testing and validation plan that includes performance benchmarks, security assessments, and user acceptance criteria. Plan for post-migration optimization to fine-tune performance and costs once systems are running in the cloud. Start with a pilot migration of non-critical applications to validate your strategy and identify potential issues before moving mission-critical systems.

What are cloud migration tools and services?

Cloud migration tools and services are the software platforms, applications, and professional services that help organizations move their digital assets from on-premises infrastructure to cloud environments. The main categories are listed below.

- Assessment and discovery tools: Scan existing IT infrastructure to identify applications, dependencies, and migration readiness. They create detailed inventories of current systems and recommend the best migration approach for each workload.
- Data migration services: Specialized platforms that transfer large volumes of data from on-premises storage to cloud environments with minimal downtime. These services often include data validation, encryption, and progress monitoring to ensure secure and complete transfers.
- Application migration platforms: Tools that help move applications to the cloud through automated lift-and-shift processes or guided refactoring. They handle compatibility issues and provide testing environments to validate application performance before going live.
- Database migration tools: Services designed to move databases between environments while maintaining data integrity and reducing service interruptions. They support various database types and can handle schema conversions when moving between different database systems.
- Network migration solutions: Tools that establish secure connections between on-premises and cloud environments during the migration. They manage bandwidth optimization and traffic routing, and ensure consistent network performance throughout the transition.
- Backup and disaster recovery services: Solutions that create secure copies of critical data and applications before migration begins. These services provide rollback capabilities and ensure business continuity if issues arise during the migration.
- Migration management platforms: End-to-end orchestration tools that coordinate the key aspects of cloud migration projects. They provide project tracking, resource allocation, timeline management, and reporting capabilities for complex enterprise migrations.

How long does cloud migration take?

Cloud migration doesn't have a fixed timeline and can range from weeks to several years, depending on the complexity of your infrastructure and the migration strategy. Simple lift-and-shift migrations of small applications might complete in 2-4 weeks, while complex enterprise transformations involving application refactoring can take 12-24 months or longer.

The timeline depends on several key factors. Your chosen migration strategy plays the biggest role: rehosting existing applications takes much less time than refactoring them for cloud-native architectures. The size and complexity of your current infrastructure also matter greatly, as does the amount of data you're moving and the number of applications that need migration.

Organizations typically see faster results when they break large migrations into smaller phases rather than attempting everything at once.
This phased approach reduces risk and allows teams to learn from early migrations to improve later ones. Planning and assessment phases alone can take 2-8 weeks for enterprise environments, while the actual migration work varies widely based on your specific requirements and available resources.

What are cloud migration best practices?

Cloud migration best practices are the proven methods and strategies organizations follow to successfully move their digital assets from on-premises infrastructure to cloud environments. The best practices are listed below.

- Assessment and planning: Conduct a complete inventory of your current IT infrastructure, applications, and data before starting migration. This assessment helps identify dependencies, security requirements, and the best migration approach for each workload.
- Choose the right migration approach: Select from the six main approaches: rehosting (lift-and-shift), replatforming, refactoring, repurchasing, retiring, or retaining systems. Match each application to the most appropriate approach based on complexity, business value, and technical requirements.
- Start with low-risk workloads: Begin migration with non-critical applications and data that have minimal dependencies. This allows your team to gain experience and refine processes before moving mission-critical systems.
- Test thoroughly before going live: Run comprehensive testing in the cloud environment, including performance, security, and integration tests. Create rollback plans for each workload in case issues arise during or after migration.
- Monitor costs continuously: Set up cost monitoring and alerts from day one to avoid unexpected expenses. Cloud costs can escalate quickly without proper governance and resource management.
- Train your team: Provide cloud skills training for IT staff before and during migration. Teams need new expertise in cloud-native tools, security models, and cost optimization techniques.
- Plan for minimal downtime: Schedule migrations during low-usage periods and use techniques like blue-green deployments to reduce service interruptions. Downtime during cloud migration can cause revenue losses averaging $5,600 per minute.
- Build in security from the start: Apply cloud security best practices, including encryption, access controls, and compliance frameworks appropriate for your industry. Cloud security models differ significantly from on-premises approaches.

How does Gcore support cloud migration?

When planning your cloud migration strategy, having the right infrastructure foundation is critical for success. Gcore's global cloud infrastructure supports migration with 210+ points of presence worldwide and 30ms average latency, ensuring your applications maintain peak performance throughout the transition.

Beyond infrastructure reliability, our edge cloud services are designed to handle the complex demands of modern migration projects, from lift-and-shift operations to complete application refactoring. Gcore addresses common migration challenges such as downtime risks and cost overruns by providing flexible resources that adapt to your specific migration timeline and requirements. With integrated CDN, edge computing, and AI infrastructure services, you can modernize your applications while maintaining the flexibility to use hybrid or multi-cloud strategies as your business needs evolve. Discover how Gcore's cloud infrastructure can support your migration strategy.

Frequently asked questions

Can I migrate to multiple clouds simultaneously?

Yes, you can migrate to multiple clouds simultaneously using parallel migration strategies and multi-cloud management tools.
This approach requires careful coordination to avoid resource conflicts and to ensure consistent security policies across all target platforms.

What happens to my data during cloud migration?

Your data moves from your current servers to cloud infrastructure through secure, encrypted transfer protocols. During migration, data is typically copied (not moved) first, so your original files remain intact until you verify the transfer completed successfully.

Do I need to migrate everything to the cloud?

No, you don't need to migrate everything to the cloud. Most successful organizations adopt a hybrid approach, keeping critical legacy systems on-premises while moving suitable workloads to cloud platforms. Only 45% of enterprise workloads are expected to be in the cloud by 2025, with many companies retaining key applications in their existing infrastructure.

How do I minimize downtime during migration?

You can reduce downtime during migration to under four hours using phased migration strategies, automated failover systems, and parallel environment testing. Plan migrations during low-traffic periods and maintain rollback procedures to ensure quick recovery if issues arise.

Should I use a migration service provider?

Yes, migration service providers reduce project complexity and risk by handling the technical challenges that cause 70% of DIY migrations to exceed budget or timeline. These providers bring specialized expertise in cloud architecture, security compliance, and automated migration tools that most internal teams lack for large-scale enterprise migrations.
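The point above about keeping original files intact until you verify the transfer completed can be sketched with a simple checksum comparison. This is a minimal, self-contained illustration using temporary files; real migrations would compare checksums (or provider-reported digests) over object storage or database exports instead.

```python
# Sketch of post-transfer verification: compare SHA-256 digests of the source
# and the migrated copy before the original is decommissioned.

import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large datasets don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(source: Path, copy: Path) -> bool:
    """True only if the copy is byte-identical to the source."""
    return sha256_of(source) == sha256_of(copy)

# Demonstrate with temporary files standing in for migrated data:
with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "source.dat"
    dst = Path(d) / "copy.dat"
    src.write_bytes(b"customer records")
    dst.write_bytes(src.read_bytes())      # simulate the transfer
    print(verify_copy(src, dst))           # -> True
    dst.write_bytes(b"corrupted payload")  # simulate a bad transfer
    print(verify_copy(src, dst))           # -> False
```

Only after this check passes for every object should the on-premises originals be retired, which is why copy-then-verify is the standard pattern rather than move-in-place.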

What is a private cloud? Benefits, use cases, and implementation

A private cloud is a cloud computing environment dedicated exclusively to a single organization, providing a single-tenant infrastructure that improves security, control, and customization compared to public clouds.

Private cloud environments can be deployed in two primary models based on location and management approach. Organizations can host private clouds on-premises within their own data centers, maintaining direct control over hardware and infrastructure, or outsource to third-party providers through hosted and managed private cloud services that deliver dedicated resources without the burden of physical maintenance.

The technical foundation of private clouds relies on several core architectural components working together to create isolated, flexible environments: virtualization technologies such as hypervisors and container platforms, software-defined networking that enables flexible network management, software-defined storage systems, cloud management platforms for orchestration, and advanced security protocols that protect sensitive data and applications.

Private cloud adoption delivers measurable business value through improved operational efficiency and cost control. Well-managed private cloud environments can reduce IT operational costs by up to 30% compared to traditional on-premises infrastructure while achieving average uptime rates exceeding 99.9%, making them attractive for organizations with strict performance and reliability requirements.

Understanding private cloud architecture and deployment becomes essential as organizations seek to balance the benefits of cloud computing with the need for enhanced security, regulatory compliance, and direct control over their IT infrastructure.

What is a private cloud?

A private cloud is a cloud computing environment dedicated exclusively to a single organization, providing complete control over infrastructure, data, and security policies.
This single-tenant model means all computing resources, servers, storage, and networking serve only one organization, unlike public clouds, where resources are shared among multiple users. Private clouds can be hosted on-premises within an organization's own data center or managed by third-party providers while maintaining the exclusive access model. This approach offers enhanced security, customization capabilities, and regulatory compliance control that many enterprises require for sensitive workloads.

The foundation of private cloud architecture relies on virtualization technologies and software-defined infrastructure to create flexible environments. Hypervisors like VMware ESXi, Microsoft Hyper-V, and KVM enable multiple virtual machines to run on physical servers, while container platforms such as Docker and Kubernetes provide lightweight application isolation. Software-defined networking (SDN) allows flexible network management and security micro-segmentation, while software-defined storage (SDS) pools storage resources for efficient allocation. Cloud management platforms like OpenStack, VMware vRealize, and Nutanix orchestrate these components, providing automated provisioning, self-service portals, and policy management that simplify operations.

Private clouds excel in scenarios with strict security, compliance, or performance requirements. Financial institutions use private clouds to maintain complete control over sensitive customer data while meeting regulations like GDPR and PCI DSS. Healthcare organizations use private clouds to securely process patient records while ensuring HIPAA compliance. Government agencies use private clouds with advanced security controls and network isolation to protect classified information.
Manufacturing companies use private clouds to safeguard intellectual property and maintain operational control over critical systems.

The operational benefits of private clouds include improved resource control, predictable performance, and customizable security policies. Organizations can configure hardware specifications, security protocols, and compliance measures to meet specific requirements without the constraints of shared public cloud environments. Private clouds also enable better cost predictability for consistent workloads, as organizations aren't subject to variable pricing based on demand fluctuations. Resource provisioning in well-managed private clouds typically completes within minutes, providing the agility benefits of cloud computing while maintaining complete environmental control.

How does a private cloud work?

A private cloud works by creating a dedicated computing environment that serves only one organization, using virtualized resources managed through software-defined infrastructure. The system pools physical servers, storage, and networking equipment into shared resources that can be flexibly allocated to different applications and users within the organization.

The core mechanism relies on virtualization technology, where hypervisors like VMware ESXi or Microsoft Hyper-V create multiple virtual machines from physical hardware. These virtual environments run independently while sharing the same underlying infrastructure, allowing for better resource utilization and isolation. Container platforms, such as Docker and Kubernetes, provide an additional layer of virtualization for applications.

Software-defined networking (SDN) controls how data flows through the private cloud, creating virtual networks that can be configured and modified through software rather than physical hardware changes. This allows IT teams to set up secure network segments, manage traffic, and apply security policies flexibly.
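The default-deny, segment-based policies that SDN micro-segmentation applies can be illustrated with a short sketch. This is not any vendor's API; the segment names, ports, and rules below are hypothetical:

```python
# Illustrative sketch of SDN-style micro-segmentation (hypothetical names):
# traffic between two segments is allowed only when an explicit rule permits
# that (source, destination, port) combination; everything else is denied
# by default.
ALLOW_RULES = {
    ("web", "app", 8080),   # web tier may call the app tier
    ("app", "db", 5432),    # app tier may query the database
}

def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default-deny check, as a micro-segmented virtual network would apply."""
    return (src_segment, dst_segment, port) in ALLOW_RULES
```

Here app-to-database traffic on the permitted port passes, while a direct web-to-database connection is rejected, which is exactly the kind of east-west containment micro-segmentation provides.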
Software-defined storage (SDS) works similarly, abstracting storage resources so they can be managed and allocated as needed.

Cloud management platforms serve as the control center, providing self-service portals where users can request resources, automated provisioning systems that deploy new services quickly, and monitoring tools that track performance and usage. These platforms handle the orchestration of all components, ensuring resources are available when needed and properly secured in accordance with organizational policies.

What are the benefits of a private cloud?

The benefits of a private cloud refer to the advantages organizations gain from using dedicated, single-tenant cloud computing environments. The benefits of a private cloud are listed below.

- Enhanced security control: Private clouds provide isolated environments where organizations maintain complete control over security policies and access controls. This single-tenant architecture reduces exposure to external threats and allows for custom security configurations tailored to specific compliance requirements.
- Improved data governance: Organizations can enforce strict data residency and handling policies since they control where data is stored and processed. This level of control is essential for industries such as healthcare and finance that must comply with regulations such as HIPAA or PCI DSS.
- Customizable infrastructure: Private clouds allow organizations to tailor hardware, software, and network configurations to meet specific performance and operational requirements. This flexibility enables optimization for unique workloads that might not perform well in standardized public cloud environments.
- Predictable performance: Dedicated resources eliminate the "noisy neighbor" effect common in shared environments, providing consistent performance for critical applications. Organizations can guarantee specific performance levels and resource availability for their most important workloads.
- Cost predictability: While initial setup costs may be higher, private clouds offer more predictable ongoing expenses compared to usage-based public cloud pricing. Organizations can better forecast IT budgets and avoid unexpected charges from traffic spikes or resource overuse.
- Regulatory compliance: Private clouds make it easier to meet strict industry regulations by providing complete visibility and control over data handling processes. Organizations can implement specific compliance frameworks and undergo audits more easily when they control the entire infrastructure stack.
- Reduced latency: On-premises private clouds can provide faster response times for applications that require low latency, as data doesn't need to travel to external data centers. This proximity benefit is particularly valuable for real-time applications and high-frequency trading systems.

What are common private cloud use cases?

Common private cloud use cases refer to specific business scenarios and applications where organizations use dedicated, single-tenant cloud environments to meet their operational needs. These use cases are listed below.

- Regulatory compliance: Organizations in heavily regulated industries use private clouds to meet strict data governance requirements. Financial institutions utilize private clouds to comply with regulations such as SOX and Basel III, while healthcare providers ensure HIPAA compliance to protect patient data.
- Sensitive data protection: Companies handling confidential information choose private clouds for enhanced security controls and data isolation.
Government agencies and defense contractors use private clouds to protect classified information and maintain complete control over data access and storage locations.
- Legacy application modernization: Businesses modernize outdated systems by migrating them to private cloud environments while maintaining existing integrations. This approach enables organizations to reap the benefits of the cloud, such as flexibility and automation, without having to completely rebuild their critical applications.
- Disaster recovery and backup: Private clouds serve as secure backup environments for business-critical data and applications. Organizations can replicate their production environments in private clouds to ensure rapid recovery times and reduce downtime during outages.
- Development and testing environments: IT teams use private clouds to create isolated development and testing spaces that mirror production systems. This setup enables faster application development cycles while maintaining security boundaries between different project environments.
- High-performance computing: Research institutions and engineering firms use private clouds to handle computationally intensive workloads. These environments provide dedicated resources for tasks like scientific modeling, financial analysis, and complex simulations without resource contention.
- Hybrid cloud integration: Organizations use private clouds as secure foundations for hybrid cloud strategies, connecting internal systems with public cloud services. This approach allows companies to keep sensitive workloads private while using public clouds for less critical applications.

What are the challenges of private cloud implementation?

Challenges of private cloud implementation refer to the technical, financial, and operational obstacles organizations face when deploying dedicated cloud infrastructure.
The challenges of private cloud implementation are listed below.

- High upfront costs: Private cloud deployments require significant initial investment in hardware, software licenses, and infrastructure setup. Organizations typically spend 40-60% more in the first year compared to public cloud alternatives.
- Complex technical expertise requirements: Managing private clouds demands specialized skills in virtualization, software-defined networking, and cloud orchestration platforms. Many organizations struggle to find qualified staff with experience in technologies like OpenStack, VMware vSphere, or Kubernetes.
- Resource planning difficulties: Determining the right amount of compute, storage, and network capacity proves challenging without historical usage data. Over-provisioning leads to wasted resources, while under-provisioning causes performance issues and user frustration.
- Integration with existing systems: Legacy applications and infrastructure often don't work smoothly with modern private cloud platforms. Organizations must invest time and money in application modernization or complex integration solutions to ensure seamless operations.
- Ongoing maintenance overhead: Private clouds require continuous monitoring, security updates, and performance optimization. IT teams spend 30-40% of their time on routine maintenance tasks that cloud providers handle automatically in public cloud environments.
- Scalability limitations: Physical hardware constraints limit how quickly organizations can expand their private cloud capacity. Adding new resources often takes weeks or months, compared to the near-instant scaling available in public clouds.
- Security and compliance complexity: While private clouds offer better control, organizations must design and maintain their own security frameworks. Meeting regulatory requirements, such as GDPR or HIPAA, becomes the organization's full responsibility rather than being shared with a provider.

How to develop a private cloud strategy

You develop a private cloud strategy by assessing your organization's requirements, choosing the right deployment model, and creating a detailed implementation roadmap that aligns with your business goals and technical needs.

First, conduct a complete assessment of your current IT infrastructure, workloads, and business requirements. Document your data sensitivity levels, compliance needs, performance requirements, and existing hardware capacity to understand what you're working with today.

Next, define your security and compliance requirements based on your industry regulations. Identify specific standards, such as HIPAA for healthcare, PCI DSS for payment processing, or GDPR for European data handling, that will influence your private cloud design.

Then, choose your deployment model from on-premises, hosted, or managed private cloud options. On-premises solutions offer maximum control but require a significant capital investment, while hosted solutions reduce infrastructure costs but may limit customization options.

Next, select your core technology stack, which includes virtualization platforms, software-defined networking solutions, and cloud management tools. Consider technologies such as VMware vSphere, Microsoft Hyper-V, or open-source options like OpenStack, based on your team's expertise and budget constraints.

Create a detailed migration plan that prioritizes workloads based on business criticality and technical complexity. Start with less critical applications to test your processes before moving mission-critical systems to the private cloud environment.

Establish governance policies for resource allocation, access controls, and cost management.
Define who can provision resources, set spending limits, and create approval workflows to prevent cloud sprawl and maintain security standards.

Finally, develop a monitoring and optimization plan that includes performance metrics, capacity planning, and regular security audits. Set up automated alerts for resource utilization, security incidents, and system performance to keep operations running smoothly.

Start with a pilot project involving 2-3 non-critical applications to validate your strategy and refine processes before scaling to your entire infrastructure.

Gcore private cloud solutions

When building a private cloud infrastructure, the foundation you choose determines your long-term success in achieving the security, performance, and compliance benefits these environments promise. Gcore's private cloud solutions address the core challenges organizations face with dedicated infrastructure that combines enterprise-grade security with the flexibility needed for dynamic workloads. Our platform delivers the 99.9%+ uptime reliability that well-managed private clouds require, while our global infrastructure, with over 210 points of presence, ensures consistent 30ms latency performance across all your locations.

What sets our approach apart is the elimination of common private cloud adoption barriers, from complex setup processes to unpredictable scaling costs, while maintaining the single-tenant isolation and customizable security controls that make private clouds attractive for regulated industries.
Our managed private cloud options provide the dedicated resources and compliance capabilities you need without the overhead of building and maintaining the infrastructure yourself.

Discover how Gcore private cloud solutions can provide the secure, flexible foundation your organization needs.

Frequently asked questions

Is private cloud more secure than public cloud?

No. A private cloud isn't inherently more secure than a public cloud; security depends on implementation, management, and specific use cases rather than the deployment model alone. Private clouds offer enhanced control over security configurations, dedicated infrastructure that eliminates multi-tenant risks, and customizable compliance frameworks that can reduce security incidents by up to 40% in well-managed environments. However, public clouds benefit from enterprise-grade security teams, automatic updates, and massive security investments that many organizations can't match internally.

How does private cloud differ from on-premises infrastructure?

Private cloud differs from on-premises infrastructure by providing cloud-native services and self-service capabilities through virtualization and software-defined management, while on-premises infrastructure typically uses dedicated physical servers without cloud orchestration. On-premises infrastructure relies on fixed hardware allocations, whereas private cloud pools resources flexibly and offers automated provisioning through cloud management platforms.

What happens to my data if I switch private cloud providers?

Your data remains yours and can be migrated to a new provider, though the process requires careful planning and may involve temporary service disruptions. Most private cloud providers offer data portability tools and migration assistance, but you'll need to account for differences in storage formats, security protocols, and API structures between platforms.
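The self-service provisioning flow described in this article can be made concrete with a minimal sketch of the capacity bookkeeping a cloud management platform performs when it grants or rejects resource requests. The class and names below are hypothetical, not a real platform's API:

```python
# Sketch of a self-service provisioning pool (hypothetical, for illustration):
# requests draw virtual CPUs from a fixed pool of private cloud capacity and
# are rejected once the pool is exhausted, until resources are released.
class ResourcePool:
    def __init__(self, total_vcpus: int):
        self.total_vcpus = total_vcpus
        self.allocations = {}  # request id -> vCPUs granted

    def provision(self, request_id: str, vcpus: int) -> bool:
        used = sum(self.allocations.values())
        if used + vcpus > self.total_vcpus:
            return False  # out of capacity; a real platform would queue or alert
        self.allocations[request_id] = vcpus
        return True

    def release(self, request_id: str) -> None:
        self.allocations.pop(request_id, None)
```

A pool of 16 vCPUs can serve two 8-vCPU environments, rejects a further request, and accepts it again once an environment is released: the same admission logic that keeps a fixed-capacity private cloud from being over-provisioned.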

What is a cloud GPU? Definition, types, and benefits

A cloud GPU is a remotely rented graphics processing unit hosted in a cloud provider's data center, accessible over the internet via APIs or virtual machines. These virtualized resources allow users to access powerful computing capabilities without the need for physical hardware ownership, with hourly pricing typically ranging from $0.50 to $3.00 depending on the GPU model and provider.

Cloud GPU computing operates through virtualization technology that partitions physical GPU resources in data centers, enabling multiple users to share hardware capacity. Major cloud providers use NVIDIA, AMD, or Intel hardware to create flexible computing environments where GPU instances can be provisioned within minutes. This system allows users to scale their GPU capacity up or down based on demand, paying only for the resources they actually consume.

The distinction between physical and virtual GPU resources centers on ownership, access, and performance characteristics. Physical GPUs are dedicated hardware components installed locally on devices or servers, providing direct access to all GPU cores and memory. Virtual GPUs represent shared physical hardware that has been partitioned among multiple users, offering flexible resource allocation with slightly reduced performance compared to dedicated hardware.

Cloud GPU services come in different configurations to meet varied computing needs and budget requirements. These include dedicated instances that provide exclusive access to entire GPU units, shared instances that partition GPU resources among multiple users, and specialized configurations optimized for specific workloads like machine learning or graphics rendering.
Leading platforms offer different pricing models, from pay-per-hour usage to monthly subscriptions with committed capacity.

Understanding cloud GPU technology has become important as organizations increasingly require powerful computing resources for artificial intelligence, data processing, and graphics-intensive applications. NVIDIA currently holds over 80% of the GPU market share for AI and cloud computing hardware, making these virtualized resources a critical component of modern computing infrastructure.

What is a cloud GPU?

A cloud GPU is a graphics processing unit that runs in a remote data center and can be accessed over the internet, allowing users to rent GPU computing power on demand without owning the physical hardware. Instead of buying expensive GPU hardware upfront, you can access powerful graphics processors through cloud providers like Gcore. Cloud GPU instances can be set up within minutes and scaled from single GPUs to thousands of units depending on your computing needs, making them ideal for AI training, 3D rendering, and scientific simulations that require massive parallel processing power.

How does cloud GPU computing work?

Cloud GPU computing works by virtualizing graphics processing units in remote data centers and making them accessible over the internet through APIs or virtual machines. Instead of buying and maintaining physical GPU hardware, you rent computing power from cloud providers who manage massive GPU clusters in their facilities.

The process starts when you request GPU resources through a cloud platform's interface. The provider's orchestration system allocates available GPU capacity from its hardware pool, which typically includes high-end cards like NVIDIA A100s or H100s. Your workload runs on these virtualized GPU instances, with the actual processing happening in the data center while you access it remotely.

Cloud providers use virtualization technology to partition physical GPUs among multiple users.
This sharing model reduces costs since you're only paying for the compute time you actually use, rather than the full cost of owning dedicated hardware. The virtualization layer manages resource allocation, ensuring each user gets their allocated GPU memory and processing cores.

You can scale your GPU usage up or down in real time based on your needs. If you're training a machine learning model that requires more processing power, you can instantly provision additional GPU instances. When the job completes, you can release those resources and stop paying for them. This flexibility makes cloud GPUs particularly valuable for AI training, scientific computing, and graphics rendering workloads with variable resource requirements.

What's the difference between a physical GPU and a cloud GPU?

Physical GPUs differ from cloud GPUs primarily in ownership model, accessibility, and resource allocation. Physical GPUs are dedicated hardware components installed directly in your local machine or server, giving you complete control and direct access to all GPU cores. Cloud GPUs are virtualized graphics processing units hosted in remote data centers that you access over the internet through APIs or virtual machines.

Physical GPUs provide superior performance consistency since you have dedicated access to all processing cores without sharing resources. They deliver the full computational power of the hardware with minimal latency for local operations. Cloud GPUs run on shared physical hardware through virtualization, which typically delivers 80-95% of dedicated GPU performance.
However, cloud GPUs can scale instantly from single instances to clusters with thousands of GPUs, while physical GPUs require hardware procurement that takes weeks or months.

Physical GPUs work best for applications requiring consistent performance, data privacy, or minimal latency, such as real-time gaming, sensitive research, or production systems with predictable workloads. Cloud GPUs excel for variable workloads like AI model training, batch processing, or development environments where you need flexible scaling. A startup can spin up dozens of cloud GPU instances for a training job, then scale back down immediately after completion.

Cost structures differ markedly between the approaches. Physical GPUs require substantial upfront investment, often $5,000-$40,000 per high-end unit, plus ongoing maintenance and power costs. Cloud GPUs operate on pay-per-use pricing, typically ranging from $0.50 to $3.00 per hour, depending on the GPU model and provider. This makes cloud GPUs more cost-effective for intermittent use, while physical GPUs become economical for continuous, long-term workloads.

What are the types of cloud GPU services?

Types of cloud GPU services refer to the different categories and delivery models of graphics processing units available through cloud computing platforms. The types of cloud GPU services are listed below.

- Infrastructure as a Service (IaaS) GPUs provide raw GPU compute power through virtual machines that users can configure and manage. Gcore offers various GPU instance types with different performance levels and pricing models.
- Platform as a Service (PaaS) GPU solutions offer pre-configured environments optimized for specific workloads like machine learning or rendering. Users get access to GPU resources without managing the underlying infrastructure or software stack.
- Container-based GPU services allow users to deploy GPU-accelerated applications using containerization technologies like Docker and Kubernetes.
This approach provides better resource isolation and easier application deployment across different environments.
- Serverless GPU computing automatically scales GPU resources based on demand without requiring users to provision or manage servers. Users pay only for actual compute time, making it cost-effective for sporadic workloads.
- Specialized AI/ML GPU platforms are specifically designed for artificial intelligence and machine learning workloads with optimized frameworks and tools. They often include pretrained models, development environments, and automated scaling features.
- Graphics rendering services focus on visual computing tasks like 3D rendering, video processing, and game streaming. They're optimized for graphics-intensive applications rather than general compute workloads.
- Multi-tenant shared GPU services allow multiple users to share the same physical GPU resources through virtualization technology. This approach reduces costs while still providing adequate performance for many applications.

What are the benefits of cloud GPU?

The benefits of cloud GPU refer to the advantages organizations and individuals gain from using remotely hosted graphics processing units instead of physical hardware. The benefits of cloud GPU are listed below.

- Cost effectiveness: Cloud GPUs eliminate the need for large upfront hardware investments, allowing users to pay only for actual usage time. Organizations can access high-end GPU power for $0.50 to $3.00 per hour instead of purchasing hardware that costs thousands of dollars.
- Instant scalability: Users can scale GPU resources up or down within minutes based on current workload demands. This flexibility allows teams to handle varying computational needs without maintaining excess hardware capacity during low-demand periods.
- Access to the latest hardware: Cloud providers regularly update their GPU offerings with the newest models, giving users access to advanced technology.
Users can switch between different GPU types, like NVIDIA A100s or H100s, without purchasing new hardware.
- Reduced maintenance overhead: Cloud providers handle all hardware maintenance, updates, and replacements, freeing users from technical management tasks. This approach eliminates downtime from hardware failures and reduces IT staff requirements.
- Global accessibility: Teams can access powerful GPU resources from anywhere with an internet connection, enabling remote work and collaboration. Multiple users can share and coordinate GPU usage across different geographic locations.
- Rapid deployment: Cloud GPU instances can be provisioned and ready for use within minutes, compared to weeks or months for physical hardware procurement. This speed enables faster project starts and quicker response to business opportunities.
- Flexible resource allocation: Organizations can allocate GPU resources flexibly across different projects and teams based on priority and deadlines. This approach maximizes resource usage and prevents GPU hardware from sitting idle.

What are cloud GPUs used for?

Cloud GPUs are remotely hosted graphics processing units accessed over the internet for a range of computational tasks. The uses of cloud GPUs are listed below.

- Machine learning training: Cloud GPUs accelerate the training of deep learning models by processing massive datasets in parallel. Training complex neural networks that might take weeks on CPUs can be completed in hours or days with powerful GPU clusters.
- AI inference: Cloud GPUs serve trained AI models to make real-time predictions and classifications for applications. This includes powering chatbots, image recognition systems, and recommendation engines that need fast response times.
- 3D rendering and animation: Cloud GPUs handle computationally intensive graphics rendering for movies, games, and architectural visualization.
Studios can access high-end GPU power without investing in expensive local hardware that sits idle between projects.
- Scientific computing: Researchers use cloud GPUs for complex simulations in physics, chemistry, and climate modeling that require massive parallel processing. These workloads benefit from GPU acceleration while avoiding the high costs of dedicated supercomputing infrastructure.
- Cryptocurrency mining: Cloud GPUs provide the computational power needed for mining various cryptocurrencies through parallel hash calculations. Miners can scale their operations up or down based on market conditions without hardware commitments.
- Video processing and streaming: Cloud GPUs encode, decode, and transcode video content for streaming platforms and content delivery networks. This includes real-time video compression and format conversion for different devices and bandwidth requirements.
- Game streaming services: Cloud GPUs render games remotely and stream the video output to users' devices, enabling high-quality gaming without local hardware. Players can access demanding games on smartphones, tablets, or low-powered computers.

What are the limitations of cloud GPUs?

The limitations of cloud GPUs refer to the constraints and drawbacks organizations face when using remotely hosted graphics processing units accessed over the internet. They are listed below.

- Network latency: Cloud GPUs depend on internet connectivity, which introduces delays between your application and the GPU. This latency can slow down real-time applications like gaming or interactive simulations that need immediate responses.
- Limited control: You can't modify hardware configurations or install custom drivers on cloud GPUs since they're managed by the provider. This restriction limits your ability to optimize performance for specific workloads or use specialized software.
- Data transfer costs: Moving large datasets to and from cloud GPUs can be expensive and time-consuming.
Organizations working with terabytes of data often face significant bandwidth charges and upload delays.
- Performance variability: Shared cloud infrastructure means your GPU performance can fluctuate based on other users' workloads. You might experience slower processing during peak usage times when resources are in high demand.
- Ongoing subscription costs: Cloud GPU pricing accumulates over time, making long-term projects potentially more expensive than owning hardware. Extended usage can cost more than purchasing dedicated GPUs outright.
- Security concerns: Your data and computations run on third-party infrastructure, which may not meet strict compliance requirements. Industries handling sensitive information often can't use cloud GPUs due to regulatory restrictions.
- Internet dependency: Cloud GPUs become completely inaccessible during internet outages or connectivity issues. This dependency can halt critical operations that would otherwise continue with local hardware.

How to get started with cloud GPUs

You get started with cloud GPUs by choosing a provider, setting up an account, selecting the right GPU instance for your workload, and configuring your development environment.

- Choose a cloud GPU provider: Consider your options based on geographic needs, budget, and required GPU models. Look for providers offering the latest NVIDIA GPUs (H100s, A100s, L40S) with global infrastructure for low-latency access. Consider factors like available GPU types, pricing models, and support quality.
- Create an account and configure billing with your chosen provider: Many platforms offer trial credits or pay-as-you-go options that let you test GPU performance before committing to reserved instances. Set up usage alerts to monitor spending during initial testing.
- Select the appropriate GPU instance type for your workload: High-memory GPUs like H100s or A100s excel at large-scale AI training, while L40S instances provide cost-effective options for inference and rendering.
Match your GPU selection to your specific memory, compute, and budget requirements.
- Launch your GPU instance: This can be done through the web console, API, or command-line interface. Choose from pre-configured images with popular ML frameworks (PyTorch, TensorFlow, CUDA) already installed, or start with a clean OS image for custom configurations. Deployment typically takes under 60 seconds with modern cloud platforms.
- Configure your development environment: Connect via SSH or remote desktop, install required packages, and set up your workflow. Use integrated cloud storage for efficient data transfer rather than uploading large datasets through your local connection. Configure persistent storage to preserve your work between sessions.
- Test with a sample workload: Verify performance and compatibility before scaling up. Run benchmark tests relevant to your use case, monitor resource utilization, and validate that your application performs as expected. Start with shorter rental periods while optimizing your setup.
- Optimize for production: Implement auto-scaling policies, set up monitoring dashboards, and establish backup procedures. Configure security groups and access controls to protect your instances and data.

Start with shorter rental periods and smaller instances while you learn the platform's interface and optimize your workflows for cloud environments.

Gcore cloud GPU solutions

When choosing between cloud and physical GPU solutions for your AI workloads, the decision often comes down to balancing performance requirements with operational flexibility. Gcore cloud GPU infrastructure addresses this challenge by providing dedicated GPU instances with near-native performance while maintaining the flexibility advantages of cloud computing.
This is all accessible through our global network of 210+ points of presence with 30ms average latency.Our cloud GPU solutions eliminate the weeks-long procurement cycles typical of physical hardware, allowing you to provision high-performance GPU instances within minutes and scale from single instances to large clusters as your training demands evolve. This approach typically reduces infrastructure costs by 30-40% compared to maintaining fixed on-premise capacity, while our enterprise-grade infrastructure ensures 99.9% uptime for mission-critical AI workloads.Discover how Gcore cloud GPU solutions can accelerate your AI projects while reducing operational overhead.Explore Gcore GPU CloudFrequently asked questionsHow does cloud GPU performance compare to local GPUs?Cloud GPU performance typically delivers 80-95% of local GPU performance while offering instant flexibility and lower upfront costs. Local GPUs provide maximum performance and predictable latency but lack the flexibility to scale resources on demand.What are the security considerations for cloud GPUs?Yes, cloud GPUs have several critical security considerations, including data encryption, access controls, and compliance requirements. Key concerns include securing data in transit and at rest, managing multi-tenant isolation in shared GPU environments, and meeting regulatory standards like GDPR or HIPAA for sensitive workloads.What programming frameworks work with cloud GPUs?Yes, all major programming frameworks work with cloud GPUs including TensorFlow, PyTorch, JAX, CUDA-based applications, and other parallel computing libraries. Cloud GPU providers typically offer pre-configured environments with GPU drivers, CUDA toolkits, and popular ML frameworks already installed.How much do cloud GPUs cost compared to buying hardware?Cloud GPUs cost $0.50-$3.00 per hour while comparable physical GPUs require $5,000-$40,000 upfront plus ongoing maintenance costs. 
For occasional use, cloud GPUs are cheaper, but heavy continuous workloads favor owned hardware after 6-12 months of usage.
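The rent-versus-buy trade-off in the answer above can be made concrete with a quick break-even sketch. The $2.00/hour rate and $20,000 hardware price below are illustrative picks from the ranges quoted, not actual quotes, and the model ignores power, cooling, and maintenance, which shift the balance further toward cloud for light use:

```python
def breakeven_hours(hourly_rate: float, hardware_cost: float) -> float:
    """Hours of cloud GPU rental at which cumulative rental cost
    matches buying the card outright."""
    return hardware_cost / hourly_rate

# Illustrative figures: a $2.00/hour cloud instance vs. a $20,000 GPU purchase.
hours = breakeven_hours(2.00, 20_000)
print(f"Break-even after {hours:,.0f} rental hours "
      f"(~{hours / 720:.0f} months of 24/7 use)")
```

Run occasionally, you may never reach the break-even point; run around the clock, you cross it in roughly a year at these rates, which is consistent with the 6-12 month figure above for cheaper cards.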

# What is cloud networking: benefits, components, and implementation strategies

Cloud networking is the use and management of network resources, including hardware and software, hosted on public or private cloud infrastructure rather than on-premises equipment. Over 90% of enterprises are expected to adopt cloud networking solutions by 2025, indicating rapid industry-wide adoption for IT infrastructure modernization.

Cloud networking operates through technologies that decouple network management from hardware dependencies. Software-Defined Networking (SDN) serves as the core technology, separating network control from hardware to allow centralized, programmable management and automation of network configurations. This approach lets organizations manage their entire network infrastructure through software interfaces rather than by manipulating physical devices.

The main components of cloud networking work together to create flexible network environments. Virtual Private Clouds (VPCs) provide isolated virtual network environments within the cloud, allowing organizations to define IP ranges, subnets, and routing for enhanced security and control. Virtual network functions (VNFs) replace traditional hardware devices like firewalls, load balancers, and routers with software-based equivalents for easier deployment and greater flexibility.

Cloud networking delivers significant advantages that transform how organizations approach network infrastructure management. These solutions can reduce network operational costs by up to 30% compared to traditional on-premises networking through reduced hardware requirements, lower maintenance overhead, and improved resource utilization. Cloud networks can scale bandwidth and compute resources within seconds to minutes, far faster than traditional manual provisioning.

Understanding cloud networking has become essential for modern businesses seeking to modernize their IT infrastructure and improve operational efficiency. This technology enables organizations to build more flexible, cost-effective network solutions that adapt quickly to changing business requirements.

## What is cloud networking?

Cloud networking is the use and management of network resources through virtualized, software-defined environments hosted on cloud infrastructure rather than traditional on-premises hardware. This approach uses technologies like Software-Defined Networking (SDN) to separate network control from physical devices, allowing centralized management and programmable automation of network configurations. Virtual Private Clouds (VPCs) create isolated network environments within the cloud, while virtual network functions replace traditional hardware like firewalls and load balancers with flexible software alternatives that can scale within seconds to meet changing demands.

## How does cloud networking work?

Cloud networking works by moving your network infrastructure from physical hardware to virtualized, software-defined environments hosted in the cloud. Instead of managing routers, switches, and firewalls in your data center, you access these network functions as services running on cloud platforms.

The core mechanism relies on Software-Defined Networking (SDN), which separates network control from the underlying hardware. This means you can configure, manage, and modify your entire network through software interfaces rather than physically touching equipment. When you need a new subnet or firewall rule, you simply define it through an API or web console, and the cloud platform instantly creates the virtual network components.

Virtual Private Clouds (VPCs) form the foundation of cloud networking by creating isolated network environments within the shared cloud infrastructure. You define your own IP address ranges, create subnets across different availability zones, and set up routing tables exactly as you would with physical networks.
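The VPC design just described, picking an IP range and carving it into subnets, can be sketched with Python's standard `ipaddress` module. The `10.0.0.0/16` block and the three-tier split below are illustrative assumptions, not a provider's defaults:

```python
import ipaddress

# Hypothetical VPC address range (RFC 1918 private space).
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 into /24 subnets, e.g. one per application tier.
web, app, db, *spare = vpc.subnets(new_prefix=24)

print(web, app, db)          # 10.0.0.0/24 10.0.1.0/24 10.0.2.0/24
print(vpc.num_addresses)     # 65536 addresses available in the VPC
print(db.supernet(new_prefix=16) == vpc)  # True: every subnet rolls up to the VPC block
```

This is exactly the kind of arithmetic a cloud console performs for you: non-overlapping subnets are guaranteed by construction, and the remaining `/24` blocks stay free for future tiers or availability zones.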
The difference is that all these components exist as software abstractions that can be modified in seconds.

Network functions that traditionally required dedicated hardware appliances now run as Virtual Network Functions (VNFs). Load balancers, firewalls, VPN gateways, and intrusion detection systems all operate as software services that you can deploy, scale, or remove on demand. This approach can reduce network operational costs by up to 30% compared to traditional on-premises networking while providing the flexibility to scale bandwidth and compute resources within seconds to minutes.

## What are the main components of cloud networking?

The main components of cloud networking are the key technologies and services that enable network infrastructure to operate in virtualized cloud environments. They are listed below.

- **Software-defined networking (SDN):** SDN separates network control from hardware devices, allowing centralized management through software controllers. This approach enables automated network configuration and policy enforcement across cloud resources.
- **Virtual private clouds (VPCs):** VPCs create isolated network environments within public cloud infrastructure, giving organizations control over IP addressing, subnets, and routing. They provide secure boundaries between different workloads and applications.
- **Virtual network functions (VNFs):** VNFs replace traditional hardware appliances like firewalls, load balancers, and routers with software-based alternatives. These functions can be deployed quickly and scaled on demand without physical hardware constraints.
- **Cloud load balancers:** These distribute incoming network traffic across multiple servers or resources to prevent overload and maintain performance. They automatically adjust traffic routing based on server health and capacity.
- **Network security services:** Cloud-native security tools include distributed firewalls, intrusion detection systems, and encryption services that protect data in transit. These services integrate directly with cloud infrastructure for consistent security policies.
- **Hybrid connectivity solutions:** VPN gateways and dedicated network connections link on-premises infrastructure with cloud resources. These components enable secure data transfer between different network environments.
- **Network monitoring and analytics:** Real-time monitoring tools track network performance, bandwidth usage, and security events across cloud infrastructure. They provide visibility into traffic patterns and help identify potential issues before they affect users.

## What are the benefits of cloud networking?

The benefits of cloud networking are the advantages organizations gain when they move their network infrastructure from physical hardware to virtualized, cloud-based environments. They are listed below.

- **Cost reduction:** Cloud networking eliminates the need for expensive physical hardware like routers, switches, and firewalls. Organizations can reduce network operational costs by up to 30% compared to traditional on-premises networking through reduced maintenance, power consumption, and hardware replacement expenses.
- **Instant scalability:** Cloud networks can scale bandwidth and compute resources within seconds to minutes based on demand. This elasticity allows businesses to handle traffic spikes during peak periods without over-provisioning resources during normal operations.
- **Centralized management:** Software-Defined Networking (SDN) enables administrators to control entire network infrastructures from a single dashboard. This centralized approach simplifies configuration changes, policy enforcement, and troubleshooting across distributed locations.
- **Enhanced security:** Virtual Private Clouds (VPCs) create isolated network environments that prevent unauthorized access between different applications or tenants.
Cloud networking supports compliance with strict standards like GDPR and HIPAA through built-in encryption and access controls.

- **High availability:** Cloud providers maintain network uptime SLAs of 99.99% or higher through redundant infrastructure and automatic failover mechanisms. This reliability exceeds what most organizations can achieve with on-premises equipment.
- **Reduced complexity:** Network-as-a-Service (NaaS) models eliminate the need for specialized networking staff to manage physical infrastructure. Organizations can focus on their core business while cloud providers handle network maintenance and updates.
- **Global reach:** Cloud networking enables instant deployment of network resources across multiple geographic regions. This global presence improves application performance for users worldwide without requiring physical infrastructure investments in each location.

## What's the difference between cloud networking and traditional networking?

Cloud networking differs from traditional networking primarily in infrastructure location, resource management, and scaling mechanisms. Traditional networking relies on physical hardware like routers, switches, and firewalls installed and maintained on-premises, while cloud networking delivers these functions as virtualized services managed remotely through cloud platforms.

### Infrastructure and management approaches

Traditional networks require organizations to purchase, install, and configure physical equipment in data centers or offices. IT teams must handle hardware maintenance, software updates, and capacity planning manually.

Cloud networking operates through software-defined infrastructure where network functions run as virtual services. Administrators manage entire network configurations through web interfaces and APIs, enabling centralized control across multiple locations without physical hardware access.

### Flexibility and speed

Traditional networking scales through hardware procurement processes that often take weeks or months to complete. Adding network capacity requires purchasing equipment, scheduling installations, and configuring devices individually.

Cloud networks scale instantly through software provisioning, allowing organizations to add or remove bandwidth, create new network segments, or apply security policies in minutes. This agility enables businesses to respond quickly to changing demands without infrastructure investments.

### Cost structure and resource allocation

Traditional networking involves significant upfront capital expenses for hardware purchases, plus ongoing costs for power, cooling, and maintenance staff. Organizations must estimate future capacity needs and often over-provision to handle peak loads.

Cloud networking operates on pay-as-you-go models where costs align with actual usage. According to industry case studies (2024), cloud networking can reduce network operational costs by up to 30% compared to traditional on-premises networking through improved resource efficiency and reduced maintenance overhead.

## What are common cloud networking use cases?

Common cloud networking use cases are the scenarios and applications in which organizations use cloud-based networking solutions to meet their infrastructure and connectivity needs. Below are some common examples.

- **Hybrid cloud connectivity:** Organizations connect their on-premises infrastructure with cloud resources to create cohesive hybrid cloud environments.
This approach allows companies to maintain sensitive data locally while using cloud services for scalability.

- **Multi-cloud networking:** Businesses distribute workloads across multiple cloud providers to avoid vendor lock-in and improve redundancy. This strategy enables organizations to choose the best services from different providers while maintaining consistent network policies.
- **Remote workforce enablement:** Companies provide secure network access for distributed teams through cloud-based VPN and zero-trust network solutions. These implementations support remote work by ensuring employees can safely access corporate resources from any location.
- **Application modernization:** Organizations migrate legacy applications to cloud environments while maintaining network performance and security requirements. Cloud networking supports containerized applications and microservices architectures that require flexible connectivity.
- **Disaster recovery and backup:** Businesses replicate their network infrastructure in the cloud to ensure continuity during outages or disasters. Cloud networking enables rapid failover and recovery processes that reduce downtime and data loss.
- **Global content delivery:** Companies distribute content and applications closer to end users through cloud-based edge networking solutions. This approach reduces latency and improves user experience for geographically dispersed audiences.
- **Development and testing environments:** Teams create isolated network environments in the cloud for application development, testing, and staging. These environments can be quickly provisioned and torn down without affecting production systems.

## How to implement a cloud networking strategy

You implement a cloud networking strategy by defining your network architecture requirements, selecting appropriate cloud services, and establishing security and connectivity frameworks that align with your business objectives.

First, assess your current network infrastructure and identify which components can move to the cloud. Document your existing bandwidth requirements, security policies, and compliance needs to establish a baseline for your cloud network design.

Next, design your Virtual Private Cloud (VPC) architecture by defining IP address ranges, subnets, and routing tables. Create separate subnets for different application tiers and establish network segmentation to isolate critical workloads from less sensitive traffic. We can assist you with this; have a look at our virtual private cloud services.

Then, establish connectivity between your on-premises infrastructure and cloud resources through VPN connections or dedicated network links. Configure hybrid connectivity to ensure seamless communication while maintaining security boundaries between environments.

After that, use Software-Defined Networking (SDN) controls to centralize network management and enable automated configuration changes. Set up network policies that can dynamically adjust bandwidth allocation and routing based on application demands.

Configure cloud-native security services, including network access control lists, security groups, and distributed firewalls. Apply the principle of least privilege by restricting network access to only the ports and protocols each service requires.

Use network monitoring and analytics tools to track performance metrics like latency, throughput, and packet loss.
Establish baseline performance measurements and set up automated alerts for network anomalies or capacity thresholds.

Finally, create disaster recovery and backup procedures for your network configurations. Document your network topology and maintain version control for configuration changes to enable quick recovery during outages.

Start with a pilot using non-critical workloads to validate your network design and performance before migrating mission-critical applications to your new cloud networking environment.

Learn more about building a faster, more flexible network with Gcore Cloud.

## Frequently asked questions

### What's the difference between cloud networking and SD-WAN?

Cloud networking is a broad infrastructure approach that virtualizes entire network environments in the cloud, while SD-WAN is a specific technology that connects and manages multiple network locations through software-defined controls. Cloud networking includes virtual networks, security services, and compute resources hosted by cloud providers, whereas SD-WAN focuses on connecting branch offices, data centers, and cloud resources through intelligent traffic routing and centralized management.

### Is cloud networking secure?

Yes, cloud networking is secure when properly configured, offering advanced security features like encryption, network isolation, and centralized access controls. Major cloud providers maintain 99.99% uptime SLAs and comply with strict security standards, including GDPR and HIPAA, through technologies like Virtual Private Clouds that isolate network traffic.

### How much does cloud networking cost compared to traditional networking?

Cloud networking typically costs 20-40% less than traditional networking due to reduced hardware expenses, maintenance, and staffing requirements. Organizations save on upfront capital expenditures while gaining predictable monthly operational costs through subscription-based cloud services.

### How does cloud networking affect network performance?

Cloud networking can both improve and reduce network performance, depending on your specific setup and requirements. It typically improves performance through global content delivery networks that reduce latency by 40-60%, automatic scaling that handles traffic spikes within seconds, and advanced routing that optimizes data paths. However, performance can decrease if you're moving from a well-optimized local network to a poorly configured cloud setup, or if your applications require extremely low latency, since internet routing and virtualization layers add overhead.

### What happens if cloud networking services experience outages?

Cloud networking outages cause service disruptions, including loss of connectivity, reduced application performance, and potential data access issues lasting from minutes to several hours. Most major cloud providers maintain 99.99% uptime guarantees and use redundant systems to reduce outage impact through automatic failover to backup infrastructure.
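As a closing illustration, the round-robin traffic distribution described under cloud load balancers earlier can be modeled in a few lines of Python. This is a toy sketch with made-up backend addresses; real cloud balancers also weigh server health and capacity when routing:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of a load balancer's round-robin backend selection."""

    def __init__(self, backends):
        if not backends:
            raise ValueError("need at least one backend")
        # cycle() yields the backends in order, forever.
        self._ring = cycle(backends)

    def next_backend(self) -> str:
        # Each call hands back the next server in the rotation.
        return next(self._ring)

lb = RoundRobinBalancer(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
print([lb.next_backend() for _ in range(4)])
# → ['10.0.0.11', '10.0.0.12', '10.0.0.13', '10.0.0.11']
```

The rotation wraps around after the last backend, which is why four requests land the first server twice; swapping in a health-aware or weighted strategy would only change `next_backend`.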
