
How to Manage Hidden Vulnerabilities in Kubernetes RBAC Permissions

  • By Gcore
  • June 10, 2024
  • 8 min read

This article was originally published on The New Stack. It’s written by Dmitrii Bubnov, a DevSecOps engineer at Gcore with 14 years of experience in IT.


Role-based access control (RBAC) is the default access control approach in Kubernetes. This model categorizes permissions using specific verbs to define allowed interactions with resources. Within this system, three lesser-known permissions—escalate, bind, and impersonate—can override existing role limitations, grant unauthorized access to restricted areas, expose confidential data, or even allow complete control over a cluster. This article explains these potent permissions, offering insights into their functions and guidance on mitigating their associated risks.

A Quick Reminder About RBAC Roles and Verbs

In this article, I assume you are already familiar with the key concepts of Kubernetes RBAC. If not, please refer to Kubernetes’ own documentation.

However, we do need to briefly recall one concept directly relevant to this article: the role. A role describes access rights to K8s resources within a specific namespace and the operations available on them. Roles consist of a list of rules, and each rule lists verbs: the operations allowed on the resources it covers.

Here is an example of a role from the K8s documentation that grants read access to pods:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" points to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Verbs like get, watch, and list are commonly used. However, more intriguing ones also exist.

Three Lesser-Known Kubernetes RBAC Permissions

For more granular and complex permissions management, the K8s RBAC has the following verbs:

  • escalate: Allows users to create and edit roles even if they don’t have initial permissions to do so.
  • bind: Allows users to create and edit role bindings and cluster role bindings with permissions that they haven’t been assigned.
  • impersonate: Allows users to act as other users or groups and exercise their privileges in the cluster. Critical data can be accessed using this verb.
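
In manifest form, these verbs appear as ordinary rules, which is part of why they’re easy to overlook. Here is a hedged illustration (the role name is invented for this sketch; the sections below build real, working examples):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: rbac
  name: lesser-known-verbs
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles"]
  verbs: ["escalate", "bind"]
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["impersonate"]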

Below, we’ll examine each of these verbs in more detail. But first, let’s create a test namespace and name it rbac:

kubectl create ns rbac

Then, create a test service account (SA) named privesc:

kubectl -n rbac create sa privesc

We’ll use them throughout the rest of this tutorial.

Escalate

By default, the Kubernetes RBAC API doesn’t allow users to escalate privileges by simply editing a role or role binding. This restriction works at the API level even if the RBAC authorizer is disabled. The only exception is if the role has the escalate verb.

In the image below, the SA with only update and patch permissions can’t add a new verb to the role. But if we add a new role with the escalate verb, it becomes possible:

Figure 1: Adding the escalate verb to the role allows the user to change the role permissions and add a new verb

Let’s see how it works in more detail.

Create a role that allows read-only access to pods and roles in this namespace:

kubectl -n rbac create role view --verb=list,watch,get --resource=role,pod

Bind this role to the SA privesc:

kubectl -n rbac create rolebinding view --role=view --serviceaccount=rbac:privesc
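
For reference, the command above generates a RoleBinding roughly equivalent to this manifest (a sketch; verify with kubectl -n rbac get rolebinding view -oyaml):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: view
subjects:
- kind: ServiceAccount
  name: privesc
  namespace: rbac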

Check if the role can be updated:

kubectl auth can-i update role -n rbac --as=system:serviceaccount:rbac:privesc
no

As we can see, the SA can read roles but can’t edit them.

Create a new role that allows role editing in the rbac namespace:

kubectl -n rbac create role edit --verb=update,patch --resource=role

Bind this new role to the SA privesc:

kubectl -n rbac create rolebinding edit --role=edit --serviceaccount=rbac:privesc

Check if the role can be updated:

kubectl auth can-i update role -n rbac --as=system:serviceaccount:rbac:privesc
yes

Check if the role can be deleted:

kubectl auth can-i delete role -n rbac --as=system:serviceaccount:rbac:privesc
no

The SA can now edit roles but can’t delete them.

For the sake of experimental accuracy, let’s check the SA capabilities. To do this, we’ll use a JWT (JSON Web Token):

TOKEN=$(kubectl -n rbac create token privesc --duration=8h)

We should remove the old authentication parameters from the config, because Kubernetes checks the user’s certificate first and won’t check the token if a certificate is already configured.

cp ~/.kube/config ~/.kube/rbac.conf
export KUBECONFIG=~/.kube/rbac.conf
kubectl config delete-user kubernetes-admin
kubectl config set-credentials privesc --token=$TOKEN
kubectl config set-context --current --user=privesc

The edit role confirms that we can update and patch other roles:

kubectl -n rbac get role edit -oyaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: edit
  namespace: rbac
rules:
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  verbs:
  - update
  - patch

Let’s try to add a new verb, list, which we have already used in the view role:

kubectl -n rbac edit role edit

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: edit
  namespace: rbac
rules:
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  verbs:
  - update
  - patch
  - list   # the new verb we added

Success.

Now, let’s try to add a new verb, delete, which we haven’t used in other roles:

kubectl -n rbac edit role edit

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: edit
  namespace: rbac
rules:
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  verbs:
  - update
  - patch
  - delete   # trying to add a new verb

error: roles.rbac.authorization.k8s.io "edit" could not be patched: roles.rbac.authorization.k8s.io "edit" is forbidden: user "system:serviceaccount:rbac:privesc" (groups=["system:serviceaccounts" "system:serviceaccounts:rbac" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
{APIGroups:["rbac.authorization.k8s.io"], Resources:["roles"], Verbs:["delete"]}

This confirms that Kubernetes doesn’t allow users or service accounts to grant permissions they don’t currently hold; they can only grant permissions that are already bound to them through existing roles.

Let’s extend the privesc SA permissions. We’ll do this by using the admin config and adding a new role with the escalate verb:

KUBECONFIG=~/.kube/config kubectl -n rbac create role escalate --verb=escalate --resource=role

Now, let’s bind the privesc SA to the new role:

KUBECONFIG=~/.kube/config kubectl -n rbac create rolebinding escalate --role=escalate --serviceaccount=rbac:privesc

Check again if we can add a new verb to the role:

kubectl -n rbac edit role edit

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: edit
  namespace: rbac
rules:
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - roles
  verbs:
  - update
  - patch
  - delete   # the new verb we added

role.rbac.authorization.k8s.io/edit edited

Now it works. The user can escalate the SA’s privileges by editing the existing role. In other words, the escalate verb effectively grants admin privileges, up to the level of namespace admin or even cluster admin.

Bind

The bind verb allows a user to create or edit RoleBindings and ClusterRoleBindings to escalate privileges, just as escalate allows a user to edit Roles and ClusterRoles.

In the image below, the SA with the role binding that has the update, patch, and create verbs can’t add delete until we create a new role with the bind verb.

Figure 2: Adding the new role with the bind verb allows the user to extend the role’s binding permissions

Now, let’s take a closer look at how this works.

Let’s switch the kubeconfig back to admin:

export KUBECONFIG=~/.kube/config

Remove old roles and bindings:

kubectl -n rbac delete rolebinding view edit escalate
kubectl -n rbac delete role view edit escalate

Allow the SA to view and edit the role binding and pod resources in the namespace:

kubectl -n rbac create role view --verb=list,watch,get --resource=role,rolebinding,pod
kubectl -n rbac create rolebinding view --role=view --serviceaccount=rbac:privesc
kubectl -n rbac create role edit --verb=update,patch,create --resource=rolebinding,pod
kubectl -n rbac create rolebinding edit --role=edit --serviceaccount=rbac:privesc

Create separate roles for working with pods, but don’t bind them yet:

kubectl -n rbac create role pod-view-edit --verb=get,list,watch,update,patch --resource=pod
kubectl -n rbac create role delete-pod --verb=delete --resource=pod

Change the kubeconfig to the SA privesc and try to edit the role binding:

export KUBECONFIG=~/.kube/rbac.conf
kubectl -n rbac create rolebinding pod-view-edit --role=pod-view-edit --serviceaccount=rbac:privesc
rolebinding.rbac.authorization.k8s.io/pod-view-edit created

The new role has been successfully bound to the SA. Note that the pod-view-edit role contains only verbs and resources that were already granted to the SA through the view and edit role bindings.

Now, let’s try to bind a role with a new verb, delete, which is missing in the roles that are bound to the SA:

kubectl -n rbac create rolebinding delete-pod --role=delete-pod --serviceaccount=rbac:privesc
error: failed to create rolebinding: rolebindings.rbac.authorization.k8s.io "delete-pod" is forbidden: user "system:serviceaccount:rbac:privesc" (groups=["system:serviceaccounts" "system:serviceaccounts:rbac" "system:authenticated"]) is attempting to grant RBAC permissions not currently held:
{APIGroups:[""], Resources:["pods"], Verbs:["delete"]}

Kubernetes doesn’t allow this, even though we have permission to edit and create role bindings. But we can fix that with the bind verb. Let’s do so using the admin config:

KUBECONFIG=~/.kube/config kubectl -n rbac create role bind --verb=bind --resource=role
role.rbac.authorization.k8s.io/bind created

KUBECONFIG=~/.kube/config kubectl -n rbac create rolebinding bind --role=bind --serviceaccount=rbac:privesc
rolebinding.rbac.authorization.k8s.io/bind created

Try once more to create a role binding with the new delete verb:

kubectl -n rbac create rolebinding delete-pod --role=delete-pod --serviceaccount=rbac:privesc
rolebinding.rbac.authorization.k8s.io/delete-pod created

Now it works. So, using the bind verb, the SA can bind any role to itself or any user.

Impersonate

The impersonate verb in K8s is like sudo in Linux. If users have impersonate access, they can authenticate as other users and run commands on their behalf. kubectl has the --as, --as-group, and --as-uid options, which allow commands to be run as a different user, group, or UID (unique identifier), respectively. A user granted impersonation permissions can effectively become the namespace admin, or even the cluster admin if a service account with cluster-admin privileges exists in the namespace.
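
In practice, impersonation looks like this (the jane user and developers group here are hypothetical, used only to illustrate the flags):

kubectl -n rbac get pod --as=system:serviceaccount:rbac:privesc
kubectl -n rbac get pod --as=jane --as-group=developers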

Impersonation is also helpful for checking the RBAC permissions delegated to a user: an admin can run a command of the form kubectl auth can-i --as=$USERNAME -n $NAMESPACE $VERB $RESOURCE and verify that authorization works as designed.

In our example, the SA can’t get info about pods in the rbac namespace by simply running kubectl -n rbac get pod. But it becomes possible if there is a role with the impersonate verb:

kubectl auth can-i get pod -n rbac --as=system:serviceaccount:rbac:privesc
yes

Figure 3: Getting info about pods with a role that has the impersonate verb

Let’s create a new service account, impersonator, in the rbac namespace; this SA will have no permissions:

KUBECONFIG=~/.kube/config kubectl -n rbac create sa impersonator
serviceaccount/impersonator created

Now, create a role with the impersonate verb and a role binding:

KUBECONFIG=~/.kube/config kubectl -n rbac create role impersonate --resource=serviceaccounts --verb=impersonate --resource-name=privesc

(Look at the --resource-name parameter in the above command: it only allows impersonation as the privesc SA.)

role.rbac.authorization.k8s.io/impersonate created

KUBECONFIG=~/.kube/config kubectl -n rbac create rolebinding impersonator --role=impersonate --serviceaccount=rbac:impersonator
rolebinding.rbac.authorization.k8s.io/impersonator created

Create a new context:

TOKEN=$(KUBECONFIG=~/.kube/config kubectl -n rbac create token impersonator --duration=8h)
kubectl config set-credentials impersonate --token=$TOKEN
User "impersonate" set.
kubectl config set-context impersonate@kubernetes --user=impersonate --cluster=kubernetes
Context "impersonate@kubernetes" created.
kubectl config use-context impersonate@kubernetes
Switched to context "impersonate@kubernetes".

Check the permissions:

kubectl auth can-i --list -n rbac
Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
...
serviceaccounts                                 []                  [privesc]        [impersonate]

No additional permissions exist besides impersonate, as specified in the role. But if the impersonator SA impersonates the privesc SA, it gets the same permissions the privesc SA has:

kubectl auth can-i --list -n rbac --as=system:serviceaccount:rbac:privesc
Resources                                       Non-Resource URLs   Resource Names   Verbs
roles.rbac.authorization.k8s.io                 []                  [edit]           [bind escalate]
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
pods                                            []                  []               [get list watch update patch delete create]
...
rolebindings.rbac.authorization.k8s.io          []                  []               [list watch get update patch create bind escalate]
roles.rbac.authorization.k8s.io                 []                  []               [list watch get update patch create bind escalate]
configmaps                                      []                  []               [update patch create delete]
secrets                                         []                  []               [update patch create delete]

Thus, the impersonator SA has all of its own privileges plus all the privileges of the SA it impersonates, including any namespace admin privileges.

How to Mitigate Potential Threats

The escalate, bind, and impersonate verbs can be used to create flexible permissions, resulting in granular management of access to K8s infrastructure. But they also open the door to malicious use, since, in some cases, they enable a user to access crucial infrastructure components with admin privileges.

Three practices can help you mitigate the potential dangers of misuse or malicious use of these verbs:

  • Regularly check RBAC manifests
  • Use the resourceNames field in Role and ClusterRole manifests
  • Use external tools to monitor roles

Let’s look at each in turn.

Regularly Check RBAC Manifests

To prevent unauthorized access and RBAC misconfiguration, periodically check your cluster RBAC manifests:

kubectl get clusterrole -oyaml | yq '.items[] | select(.rules[].verbs[] | (. == "escalate" or . == "bind" or . == "impersonate")) | .metadata.name'
kubectl get role -A -oyaml | yq '.items[] | select(.rules[].verbs[] | (. == "escalate" or . == "bind" or . == "impersonate")) | .metadata.name'

Use the resourceNames Field

To restrict the use of escalate, bind, impersonate, or any other verbs, configure the resourceNames field in the Role and ClusterRole manifests. There, you can—and should—enter the names of resources that can be used.

Here is an example of a ClusterRole that only allows the creation of ClusterRoleBindings whose roleRef is named edit or view:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-grantor
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles"]
  verbs: ["bind"]
  resourceNames: ["edit","view"]

You can do the same with escalate and impersonate.
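
For instance, here is a sketch of the manifest equivalent of the impersonate role created earlier with kubectl, which restricts impersonation to the privesc SA:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: impersonate
  namespace: rbac
rules:
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["impersonate"]
  resourceNames: ["privesc"]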

Note that in the case of bind, an admin sets permissions in a role, and users can only bind that role to themselves if allowed in resourceNames. With escalate, users can write any parameters within a role and become admins of a namespace or cluster. So, bind restricts users, while escalate gives them more options. Keep this in mind if you need to grant these permissions.

Use External Tools to Monitor Roles

Consider using automated systems, such as Falco or Tetragon, that monitor the creation and editing of roles with suspicious content.

You can also redirect Kubernetes audit logs to a log management system like Gcore Managed Logging, which is useful for analyzing and parsing K8s logs. To prevent accidental resource deletion, create a separate service account with the delete verb and allow users to impersonate only that service account. This is the principle of least privilege. To simplify this process, you can use the kubectl plugin kubectl-sudo.
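
As a minimal sketch of that least-privilege pattern, reusing the command style from the walkthrough above (the deleter SA, role names, and developers group are illustrative):

kubectl -n rbac create sa deleter
kubectl -n rbac create role deleter --verb=delete --resource=pod
kubectl -n rbac create rolebinding deleter --role=deleter --serviceaccount=rbac:deleter
# Allow regular users to impersonate only the deleter SA:
kubectl -n rbac create role sudo-delete --resource=serviceaccounts --verb=impersonate --resource-name=deleter
kubectl -n rbac create rolebinding sudo-delete --role=sudo-delete --group=developers

A user then deletes a pod explicitly with kubectl -n rbac delete pod mypod --as=system:serviceaccount:rbac:deleter, similar in spirit to what kubectl-sudo automates.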

At Gcore, we use these methods to make our Managed Kubernetes service more secure. We recommend that all our customers do the same. Using managed services doesn’t guarantee that your services are 100% secured by default, but at Gcore we do everything possible to ensure our customers’ protection, including encouraging RBAC best practices.

Conclusion

The escalate, bind, and impersonate verbs allow admins to manage access to K8s infrastructure flexibly and let users escalate their privileges. These are powerful tools that, if abused, can cause significant damage to a K8s cluster. Carefully review any use of these verbs and ensure that the least privilege rule is followed: Users must have the minimum privileges necessary to operate, no more.

Looking for a simple way to manage your K8s clusters? Try Gcore Managed Kubernetes. We offer Virtual Machines and Bare Metal servers with GPU worker nodes to boost your AI/ML workloads. Prices for worker nodes are the same as for our Virtual Machines and Bare Metal servers. We provide free, production-grade cluster management with a 99.9% SLA for your peace of mind.

Explore Gcore Managed Kubernetes
