Explore Gcore Cloud Edge Services
Virtual instances
Deploy projects faster with ready-made virtual instance images.
- Learn more →
Application marketplace
Get access to ready-made systems and applied services or suggest your own product.
Powerful computing resources and services in 50+ locations around the world
Deploy instances in optimal locations around the world within 15 minutes. No need to search for Tier IV data centers, purchase advanced hardware, or worry about upgrades.
Test your latency
Solutions for all sectors
Gaming
Create games and integrate online entertainment solutions of any complexity.
A simple way to control your Cloud
The Cloud control panel is integrated with other infrastructure products: CDN, Streaming, Storage, DDoS Protection, and DNS Hosting.

Manage all Cloud services with one simple and efficient panel.
Read about new locations, configurations, and features in our blog
More articles →

Introducing Gcore for Startups: created for builders, by builders
Building a startup is tough. Every decision about your infrastructure can make or break your speed to market and burn rate. Your time, team, and budget are stretched thin. That’s why you need a partner that helps you scale without compromise.

At Gcore, we get it. We’ve been there ourselves, and we’ve helped thousands of engineering teams scale global applications under pressure. That’s why we created the Gcore Startups Program: to give early-stage founders the infrastructure, support, and pricing they actually need to launch and grow.

“At Gcore, we launched the Startups Program because we’ve been in their shoes. We know what it means to build under pressure, with limited resources, and big ambitions. We wanted to offer early-stage founders more than just short-term credits and fine print; our goal is to give them robust, long-term infrastructure they can rely on.”
— Dmitry Maslennikov, Head of Gcore for Startups

What you get when you join

The program is open to startups across industries, whether you’re building in fintech, AI, gaming, media, or something entirely new. Here’s what founders receive:

- Startup-friendly pricing on Gcore’s cloud and edge services
- Cloud credits to help you get started without risk
- White-labeled dashboards to track usage across your team or customers
- Personalized onboarding and migration support
- Go-to-market resources to accelerate your launch

You also get direct access to all Gcore products, including Everywhere Inference, GPU Cloud, Managed Kubernetes, Object Storage, CDN, and security services. They’re available globally via our single, intuitive Gcore Customer Portal, and ready for your production workloads.

“When startups join the program, they get access to powerful cloud and edge infrastructure at startup-friendly pricing, personal migration support, white-labeled dashboards for tracking usage, and go-to-market resources. Everything we provide is tailored to the specific startup’s unique needs and designed to help them scale faster and smarter.”
— Dmitry Maslennikov

Why startups are choosing Gcore

We understand that performance and flexibility are key for startups. From high-throughput AI inference to real-time media delivery, our infrastructure was designed to support demanding, distributed applications at scale.

But what sets us apart is how we work with founders. We don’t force startups into rigid plans or abstract SLAs. We build with you 24/7, because we know your hustle isn’t a 9–5. One recent success story: an AI startup that migrated from a major hyperscaler told us they cut their inference costs by over 40% and got actual human support for the first time.

“What truly sets us apart is our flexibility: we’re not a faceless hyperscaler. We tailor offers, support, and infrastructure to each startup’s stage and needs.”
— Dmitry Maslennikov

We’re excited to support startups working on AI, machine learning, video, gaming, and real-time apps. Gcore for Startups is delivering serious value to founders in industries where performance, cost efficiency, and responsiveness make or break product experience.

Ready to scale smarter?

Apply today and get hands-on support from engineers who’ve been in your shoes. If you’re an early-stage startup with a working product and funding (pre-seed to Series A), we’ll review your application quickly and tailor infrastructure that matches your stage, stack, and goals. To get started, head over to our Gcore for Startups page and book a demo.

Discover Gcore for Startups
09 Jul 2025 - The cloud control gap: why EU companies are auditing jurisdiction in 2025
Europe’s cloud priorities are changing fast, and rightly so. With new regulations taking effect, concerns about jurisdictional control rising, and trust becoming a key differentiator, more companies are asking a simple question: Who really controls our data?

For years, European companies have relied on global cloud giants headquartered outside the EU. These providers offered speed, scale, and a wide range of services. But 2025 is a different landscape. Recent developments have shown that data location doesn’t always mean data protection. A service hosted in an EU data center may still be subject to laws from outside the EU, like the US CLOUD Act, which could require the provider to hand over customer data regardless of where it’s stored.

For regulated industries, government contractors, and data-sensitive businesses, that’s a growing problem. Sovereignty today goes beyond compliance. It’s central to business trust, operational transparency, and long-term risk management.

Rising risks of non-EU cloud dependency

In 2025, the conversation has shifted from “is this provider GDPR-compliant?” to “what happens if this provider is forced to act against our interests?” Here are three real concerns European companies now face:

- Foreign jurisdiction risk: Cloud providers based outside Europe may be legally required to share customer data with foreign authorities, even if it’s stored in the EU.
- Operational disruption: Geopolitical tensions or executive decisions abroad could affect service availability or create new barriers to access.
- Reputational and compliance exposure: Customers and regulators increasingly expect companies to use providers aligned with European standards and legal protections.

European leaders are actively pushing for “full-stack European solutions” across cloud and AI infrastructure, citing sovereignty and legal clarity as top concerns. Leading European firms like Deutsche Telekom and Airbus have criticized proposals that would grant non-EU tech giants access to sensitive EU cloud data. This reinforces a broader industry consensus: jurisdictional control is a serious strategic issue for European businesses across industries. Relying on foreign cloud services introduces risks that no business can control, and that few can absorb.

What European companies must do next

European businesses can’t wait for disruption to happen. They must build resilience now, before potentially devastating problems occur:

- Audit their cloud stack to identify data locations and associated legal jurisdictions.
- Repatriate sensitive workloads to EU-based providers with clear legal accountability frameworks.
- Consider deploying hybrid or multi-cloud architectures, blending hyperscaler agility with EU sovereign assurance.

Over 80% of European firms using cloud infrastructure are actively exploring or migrating to sovereign solutions. This is a smart strategic maneuver in an increasingly complex and regulated cloud landscape.

Choosing a futureproof path

If your business depends on the cloud, sovereignty should be part of your planning. It’s not about political trends or buzzwords. It’s about control, continuity, and credibility. European cloud providers like Gcore support organizations in achieving key sovereignty milestones:

- EU legal jurisdiction over data
- Alignment with sectoral compliance requirements
- Resilience to legal and geopolitical disruption
- Trust with EU customers, partners, and regulators

In 2025, that’s a serious competitive edge that shows your customers that you take their data protection seriously. A European provider is quickly becoming a non-negotiable for European businesses.

Want to explore what digital sovereignty looks like in practice? Gcore’s infrastructure is fully self-owned, jurisdictionally transparent, and compliant with EU data laws. As a European provider, we understand the legal, operational, and reputational demands on EU businesses. Talk to us about sovereignty strategies for cloud, AI, network, and security that protect your data, your customers, and your business. We’re ready to provide a free, customized consultation to help your European business prepare for sovereignty challenges.

Auditing your cloud stack is the first step. Knowing what to look for in a provider comes next. Not all EU-based cloud providers guarantee sovereignty. Learn what to evaluate in infrastructure, ownership, and legal control to make the right decision.

Learn how to verify EU cloud control in our blog
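The audit step described above, mapping each workload to the legal jurisdiction of its provider rather than just the region where the data sits, can be sketched in a few lines. This is a minimal, hypothetical illustration: the provider names, workload inventory, and jurisdiction table are all invented for the example, and a real audit would pull this data from your asset inventory and verify each provider’s corporate structure.

```python
# Hypothetical audit sketch: flag workloads whose provider answers to
# non-EU law, even when the data is physically hosted in an EU region.
# All names and data below are illustrative, not real inventory.

# Jurisdiction of each provider's parent company (illustrative; an EU data
# center can still fall under non-EU law such as the US CLOUD Act).
PROVIDER_JURISDICTION = {
    "us-hyperscaler": "US",
    "eu-provider": "EU",
}

def audit_workloads(workloads):
    """Return workloads whose provider is subject to non-EU jurisdiction."""
    findings = []
    for w in workloads:
        jurisdiction = PROVIDER_JURISDICTION.get(w["provider"], "unknown")
        if jurisdiction != "EU":
            findings.append({
                "name": w["name"],
                "region": w["region"],
                "risk": f"provider under {jurisdiction} jurisdiction",
            })
    return findings

workloads = [
    {"name": "billing-db", "provider": "us-hyperscaler", "region": "eu-central"},
    {"name": "media-cdn", "provider": "eu-provider", "region": "eu-west"},
]

for f in audit_workloads(workloads):
    # billing-db is flagged: it sits in an EU region, but the provider
    # itself is subject to US law.
    print(f["name"], "->", f["risk"])
```

The point of the sketch is the distinction it encodes: the region field alone looks compliant, and only the provider-to-jurisdiction mapping reveals the risk.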
29 Aug 2025 - Outpacing cloud‑native threats: How to secure distributed workloads at scale
The cloud never stops. Neither do the threats. Every shift toward containers, microservices, and hybrid clouds creates new opportunities for innovation, and for attackers. Legacy security, built for static systems, crumbles under the speed, scale, and complexity of modern cloud-native environments. To survive, organizations need a new approach: one that’s dynamic, AI-driven, automated, and rooted in zero trust.

In this article, we break down the hidden risks of cloud-native architectures and show how intelligent, automated security can outpace threats, protect distributed workloads, and power secure growth at scale.

The challenges of cloud-native environments

Cloud-native architectures are designed for maximum flexibility and speed. Applications run in containers that can scale in seconds. Microservices split large applications into smaller, independent parts. Hybrid and multi-cloud deployments stretch workloads across public clouds, private clouds, and on-premises infrastructure.

But this agility comes at a cost. It expands the attack surface dramatically, and traditional perimeter-based security can’t keep up. Containers share host resources, which means that if one container is breached, attackers may gain access to others on the same system. Microservices rely heavily on APIs to communicate, and every exposed API is a potential attack vector. Hybrid cloud environments create inconsistent security controls across platforms, making gaps easier for attackers to exploit.

Legacy security tools, built for unchanging, centralized environments, lack the real-time visibility, scalability, and automated response needed to secure today’s dynamic systems. Organizations must rethink cloud security from the ground up, prioritizing speed, automation, and continuous monitoring.

Solution #1: AI-powered threat detection for smarter defenses

Modern threats evolve faster than any manual security process can track. Rule-based defenses simply can’t adapt fast enough. The solution? AI-driven threat detection.

Instead of relying on static rules, AI models monitor massive volumes of data in real time, spotting subtle anomalies that signal an attack before real damage is done. For example, an AI-based platform can detect an unauthorized process in a container trying to access confidential data, flag it as suspicious, and isolate the threat within milliseconds, before attackers can move laterally or exfiltrate information.

This proactive approach learns, adapts, and neutralizes new attack vectors before they become widespread. By continuously monitoring system behavior and automatically responding to abnormal activity, AI closes the gap between detection and action, which is critical in cloud-native, regulated environments where even milliseconds matter.

Solution #2: Zero trust as the new security baseline

“Trust but verify” no longer cuts it. In a cloud-native world, the new rule is “trust nothing, verify everything”. Zero-trust security assumes that threats exist both inside and outside the network perimeter. Every request, whether from a user, device, or application, must be authenticated, authorized, and validated.

In distributed architectures, zero trust isolates workloads, meaning that even if attackers breach one component, they can’t easily pivot across systems. Strict identity and access management controls limit the blast radius, minimizing potential damage. Combined with AI-driven monitoring, zero trust provides deep, continuous verification, blocking insider threats, compromised credentials, and advanced persistent threats before they escalate.

Solution #3: Automated security policies for scaling protection

Manual security management is impossible in dynamic environments where thousands of containers and microservices are spun up and down in real time. Automation is the way forward. AI-powered security policies can continuously analyze system behavior, detect deviations, and adjust defenses automatically, without human intervention.

This eliminates the lag between detection and response, shrinks the attack window, and drastically reduces the risk of human error. It also ensures consistent security enforcement across all environments: public cloud, private cloud, and on-premises. For example, if a system detects an unusual spike in API calls, an automated security policy can immediately apply rate limiting or restrict access, shutting down the threat without impacting overall performance.

Automation doesn’t just respond faster. It maintains resilience and operational continuity even in the face of complex, distributed threats.

Unifying security across cloud environments

Securing distributed workloads isn’t just about having smarter tools; it’s about making them work together. Different cloud platforms, technologies, and management protocols create fragmentation, opening cracks that attackers can exploit. Security gaps between systems are as dangerous as the threats themselves.

Modern cloud-native security demands a unified approach. Organizations need centralized platforms that pull real-time data from every endpoint, regardless of platform or location, and present it through a single management dashboard. This gives IT and security teams full, end-to-end visibility over threats, system health, and compliance posture. It also allows security policies to be deployed, updated, and enforced consistently across every environment, without relying on multiple, siloed tools. Unification strengthens security, simplifies operations, and dramatically reduces overhead, which is critical for scaling securely at cloud-native speeds.
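The automated response to an API-call spike described above can be sketched as a sliding-window rate limiter: once the number of calls in the window exceeds a threshold, further calls are rejected without human intervention. The thresholds and window size here are illustrative; a production policy would learn its baseline from observed traffic rather than hard-coding it.

```python
import time
from collections import deque

# Minimal sketch of an automated rate-limiting policy: reject calls once
# an unusual burst exceeds the per-window budget. Numbers are illustrative.

class SpikeLimiter:
    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()  # timestamps of calls inside the window

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the sliding window.
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False  # spike detected: throttle automatically
        self.calls.append(now)
        return True

limiter = SpikeLimiter(max_calls=100, window_seconds=1.0)
# Simulate a burst of 150 calls arriving at the same instant:
results = [limiter.allow(now=0.5) for _ in range(150)]
# The first 100 calls fit the budget; the 50 beyond it are throttled.
print(results.count(True), results.count(False))
```

Because old timestamps age out of the window, the limiter recovers on its own once the burst subsides, which is exactly the "respond, then restore" behavior the article attributes to automated policies.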
That’s why at Gcore, our integrated suite of products includes security for cloud, network, and AI workloads, all managed in a single, intuitive interface.

Why choose Gcore for cloud-native security?

Securing cloud-native workloads requires more than legacy firewalls and patchwork solutions. It demands dynamic, intelligent protection that moves as fast as your business does. Gcore Edge Security delivers robust, AI-driven security built for the cloud-native era. By combining real-time AI threat detection, zero-trust enforcement, automated responses, and compliance-first design, Gcore security solutions protect distributed applications without slowing down development cycles.

Discover why WAAP is essential for cloud security in 2025
What we guarantee
- Premium technical support: our experts will help with integration
- GDPR compliance
- 24/7 availability: real-time monitoring and maintenance
- DDoS protection at the network and transport layers
- SLA: 99.95% with financial guarantees
Recent case studies

More case studies →
How ProSieben scaled Germany’s Next Top Model TOPSHOT for real-time AI portraits with Gcore
To celebrate the 20th season of Germany’s Next Top Model (GNTM), ProSieben’s marketing team launched GNTM TOPSHOT, an AI-powered feature in the Joyn app that instantly transforms user photos into studio-grade, show-inspired portraits.

At broadcast scale, the challenge was steep: handle massive primetime spikes, deliver results instantly, and guarantee strict EU privacy compliance. To make it possible, ProSieben turned to Gcore Everywhere Inference.

“We needed something that could scale on demand, deliver in seconds, and keep data local. Gcore’s infrastructure let us bring a creative idea to life, without breaking the user experience.”
— Simon Hawe, Technical Lead, Joyn

The challenge: broadcast-scale creativity in real time

TOPSHOT had to satisfy five tough, non-negotiable requirements:

- Primetime surges. Usage surged before, during, and after live broadcasts across Germany, Austria, and Switzerland.
- Ultra-low latency. Fans expected results within 5-10 seconds per portrait, end to end.
- Privacy by design. Joyn deletes all photos immediately to keep user data truly private: no caching, no storage. Every request had to be generated fresh.
- High fidelity. Each portrait required a ~100-stage pipeline for segmentation, relighting, skin/pose preservation, and compositing to match GNTM’s signature aesthetic.
- Frictionless UX. No login required. The feature had to “just work,” even on mobile connections.

Turning fans into models in real time

ProSieben needed an inference platform that was scalable, privacy-compliant, and easy to integrate. Gcore Everywhere Inference delivered:

- One endpoint, nearest node. Smart Routing automatically sent each request to the closest GPU endpoint, minimizing jitter and wait times.
- Autoscaling GPUs. Serverless orchestration spun GPU capacity up or down in real time, handling unpredictable primetime peaks.
- EU-ready deployments. Hybrid support (on-prem, Gcore Cloud, public cloud) gave ProSieben full flexibility on data residency.
- Optimized for image workloads. Everywhere Inference ran TOPSHOT’s complex pipelines on NVIDIA H100, A100, and L40S GPUs, well suited to generative image models.

“Our biggest challenge was combining visual fidelity with real-time response. Gcore’s Smart Routing and autoscaling made that possible at primetime scale.”
— Benjamin Risom, Chief Product Officer, Joyn

The architecture allowed ProSieben to:

- Route all traffic through a single inference endpoint, fronted by their own load balancer
- Keep portrait generation under 10 seconds, even during broadcast surges
- Meet strict privacy guarantees: no logins, no storage of inputs or outputs
- Deliver a seamless experience inside the Joyn app

TOPSHOT went live with five portrait scenes in April 2025, with three more added weeks later.

“We could focus on the creative, knowing the infrastructure would scale with us. That made it possible to deliver something really special for our viewers.”
— Sebastian v. Wyschetzki, Creative Lead, Seven.One Entertainment Group

Real-time engagement, broadcast scale

TOPSHOT launched into a season already driving cultural buzz:

- The GNTM Season 20 finale (June 19, 2025) drew 3.87M viewers with a 22.4% share in the 14-49 demo.
- Joyn saw 10M viewers in April 2025 (+80% YoY) and a 40% increase in watch time in Q1 2025 YoY.
- TV Total host Sebastian Pufpaff demoed TOPSHOT live on air, praising the visuals and sparking organic uptake.
- Trade press highlighted the “scalable Gcore infrastructure” behind the feature.

“Gcore’s platform gave us regional performance, privacy control, and GPU scaling without the heavy lifting of building and managing infrastructure ourselves.”
— Paolo Garri, Infrastructure Architect, Joyn

What’s next?

Building on TOPSHOT’s success, ProSieben is exploring new fan-facing AI experiences: video portraits, real-time filters, and stylized animations. With Gcore’s flexible infrastructure, the team is free to keep experimenting without re-architecting.

“The success of TOPSHOT showed us what’s possible. Now we’re asking: how far can we take this?”
— Jutta Meyer, Executive VP Marketing & Creation, Seven.One Entertainment Group
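The request pattern this case study describes, a single inference endpoint, a hard end-to-end deadline, and no client-side storage of user photos, can be sketched from the client’s perspective. The endpoint URL, wire format, and helper names below are hypothetical illustrations, not Gcore’s or Joyn’s actual API; server-side smart routing is what lets the client see only one URL.

```python
# Hypothetical client sketch: one endpoint, a 10-second latency budget,
# and nothing cached or written to disk (privacy by design).
import urllib.request

ENDPOINT = "https://inference.example.com/portrait"  # hypothetical URL
DEADLINE_SECONDS = 10  # the feature targets 5-10 seconds end to end

def http_transport(photo_bytes: bytes) -> bytes:
    """POST the raw photo and return the generated portrait bytes."""
    req = urllib.request.Request(
        ENDPOINT,
        data=photo_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    # The timeout enforces the end-to-end budget; routing to the nearest
    # GPU node happens server-side, so the client sees a single URL.
    with urllib.request.urlopen(req, timeout=DEADLINE_SECONDS) as resp:
        return resp.read()

def generate_portrait(photo_bytes: bytes, transport=http_transport) -> bytes:
    """Generate one portrait per request; nothing is cached between calls,
    matching the requirement that every request be generated fresh."""
    if not photo_bytes:
        raise ValueError("empty photo")
    return transport(photo_bytes)
```

The injectable `transport` parameter keeps the sketch testable without a network; in the app, the default HTTP transport would be used as-is.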
Higgsfield AI kickstarts partnership with Gcore for scalable AI infrastructure and Managed Kubernetes support
Founded in 2023, Higgsfield is building the Video Reasoning engine for the attention economy. Its AI-native, browser-based platform condenses ideation, editing, and post-production into a single workflow, enabling creators and enterprises to produce cinematic-quality short-form video in minutes instead of weeks.

Higgsfield delivers fast, controllable, and scalable outcomes that preserve narrative continuity and cultural resonance across media, marketing, and brand communication. With operations spanning the US, Europe, and Asia, Higgsfield is headquartered in Silicon Valley and backed by world-class investors and veteran technologists with a track record of billion-scale products and outcomes.

As it prepared for scale and increasing demand, Higgsfield needed robust, flexible infrastructure that could meet the needs of its dynamic workloads and rapidly growing user base.

What it takes to power generative AI at scale

Higgsfield had worked with other GPU providers, but struggled with limited capacity and a lack of scalable orchestration options. The generative platform relies on running large-scale AI models efficiently, so the team’s key infrastructure priorities were:

- Instant access to high-performance H100 GPUs, with the ability to scale up and down based on project demand
- Autoscaling GPU infrastructure to support unpredictable, high-volume generative AI workloads
- Managed Kubernetes with GPU worker nodes, load balancers, and cloud networking for ease of orchestration, autoscaling, and reliability
- Fast onboarding and close support to move quickly from testing to deployment
- Transparent and predictable pricing, with fast and simple contracting and both PAYG and commitment models available
- Availability and flexibility for future expansion

Why Gcore infrastructure stood out from the crowd

Higgsfield approached Gcore needing a large volume of H100 GPUs immediately, with the flexibility to scale on demand. Gcore provided rapid access to the required H100 GPUs, helping Higgsfield eliminate supply constraints and meet fast-moving development timelines.

Transparent pricing gave Higgsfield budget predictability and easier cost control, which was essential for its high-frequency release cycles. The team also valued Gcore’s commitment to sustainable hardware design, high reliability and uptime, and 24/7 availability of DevOps engineering support. Additionally, deploying infrastructure through the Gcore Sines 3 cluster in Portugal provided the regional flexibility and high performance Higgsfield needed to support its platform.

Higgsfield chose Gcore for its ability to deliver Managed Kubernetes with GPU worker nodes, enabling it to scale dynamically, flexing compute resources based on real-time user demand. Speed and flexibility are essential to Higgsfield’s operations: the company expects to start cooperating with partners quickly and scale capacity on demand. The streamlined service offering, fast onboarding, and highly responsive support that Gcore provides enabled it to do exactly that.

“The combination of GPU scaling, H100 availability, and Managed Kubernetes was invaluable for us. Gcore gave us the control and flexibility our engineering team needed to move flexibly and fast.”
— Alex Mashrabov, CEO, Higgsfield AI

A fast, hands-on start with dedicated engineering support

Gcore’s team provided dedicated engineering support and helped Higgsfield test its workloads through a one-week trial. After validating performance, Higgsfield quickly transitioned to full deployment.

“Direct access to Gcore’s engineering team made the onboarding smooth and efficient. We could test and validate quickly, then scale up without friction.”
— Anwar Omar, Lead Infrastructure Engineer, Higgsfield AI

Scalable performance and a strong foundation for growth

While it’s early days, Higgsfield is already live and actively running GPU-powered workloads with Gcore in production. The key outcomes so far include:

- Seamless deployment to a managed Kubernetes environment with GPU worker nodes and autoscaling
- On-demand access to H100 GPUs for compute-intensive generative workloads
- Kubernetes-based orchestration for efficient container scaling and resource optimization
- Scalable infrastructure that flexes based on demand
- A strong foundation for future product growth and global scaling

What’s next?

Higgsfield is currently exploring the possibility of extending the relationship beyond model training into distributed inference with Everywhere Inference. Its product roadmap involves releasing new features at high velocity, often requiring larger GPU volumes for short periods, making flexible infrastructure a must. Gcore’s scalable, on-demand model supports this cadence without overprovisioning.

“We’re excited about the potential of our partnership with Gcore. The team has been incredibly responsive, and the infrastructure is a great fit for Higgsfield. We’re actively exploring additional possibilities, from Everywhere Inference to broader scaling, and we’re looking forward to seeing where this collaboration can take us next.”
— Alex Mashrabov, CEO, Higgsfield AI
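The demand-based GPU scaling this case study describes, flexing worker-node count with real-time load while avoiding overprovisioning, follows the same control-loop shape a Kubernetes autoscaler uses: measure demand, compute the replica count it implies, and clamp to a floor and ceiling. The numbers and function below are an illustrative sketch, not Higgsfield’s or Gcore’s actual configuration.

```python
import math

# Illustrative control-loop step: how many GPU worker nodes does the
# current queue of generation jobs call for? Floor = availability,
# ceiling = budget cap. All values are hypothetical.

def desired_workers(queued_jobs, jobs_per_worker, min_workers=1, max_workers=32):
    """Scale worker count to demand, clamped to [min_workers, max_workers]."""
    needed = math.ceil(queued_jobs / jobs_per_worker) if queued_jobs else 0
    return max(min_workers, min(max_workers, needed))

print(desired_workers(0, 10))    # idle: hold at the minimum -> 1
print(desired_workers(95, 10))   # 95 jobs at 10 per worker -> 10
print(desired_workers(900, 10))  # demand beyond the cap -> 32
```

An orchestrator runs a loop like this every few seconds and reconciles the cluster toward the returned value; the clamp is what prevents both cold-start starvation and runaway spend during short bursts.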
Leonardo AI delivers high-speed, global content creation with Gcore AI services
Leonardo.Ai helps creators turn ideas into stunning AI-generated content in seconds. Headquartered in Australia and now part of Canva, the company gives game developers, designers, and marketers powerful tools to generate and refine images, videos, and creative assets in real time.

As James Stewart, DevOps Engineering Manager at Leonardo.Ai, explains, the team’s top priority is speed. Their north-star value is “go fast”: taking ideas to prototype and release at an impressive pace. But delivering that kind of speed at scale takes serious GPU infrastructure and deep expertise in orchestration.

Seeking speed, scale, and infrastructure maturity under pressure

Delivering AI speed at scale for customers worldwide requires powerful, on-demand GPU inference infrastructure. Early on, Leonardo found that limited GPU availability and high costs were bottlenecks.

“GPUs make up a significant part of our operating costs, so competitive pricing and availability are crucial for us.”
— James Stewart, DevOps Engineering Manager, Leonardo.Ai

With big growth goals ahead, Leonardo needed an efficient, flexible GPU provider that would support its plans for speed and scale. The team looked at AI providers ranging from global hyperscalers to local GPU services. Some providers looked promising but had no availability. Others offered low prices or easy access (no long-term commitment) but were missing essential features like private networking, infrastructure-as-code, or 24/7 support.

“Cheap GPUs alone weren’t enough for us. We needed a mature platform with Terraform support, private networking, and reliable support. Otherwise, deployment and maintenance become really painful for our engineers at scale.”
— James Stewart, DevOps Engineering Manager, Leonardo.Ai

Fortunately, they found what they were looking for in Gcore: solid GPU availability thanks to its Northern Data partnership, a fully featured cloud platform, and genuinely helpful technical support.

“We chose Gcore for overall platform integration, features, and support. Compared to some of the less capable GPU providers we’ve utilized, when using Gcore our engineers don’t need to battle with manual infrastructure deployment or performance issues. Which means they can focus on the part of the job that they love: actually building.”
— James Stewart, DevOps Engineering Manager, Leonardo.Ai

Finding a flexible provider that can meet Leonardo’s need for speed

Leonardo AI needed infrastructure that wouldn’t slow innovation or momentum. With Gcore, it found a fast, flexible, and reliable AI platform able to match its speed of development and ambition. Leonardo chose to run its inference on Gcore GPU Cloud with Bare Metal hosting, offering isolation, power, and flexibility for its AI workloads. Its demanding inference workloads run on current-gen NVIDIA H100 and A100 GPUs with zero virtualization overhead. This means its image and video generation services deliver fast, high-res output with no lag or slowdowns, even under the heaviest loads.

On-demand pricing lets Leonardo AI scale GPU usage based on traffic, product cycles, or model testing needs. There’s no overprovisioning or unnecessary spending. Leonardo gets a lean, responsive setup that adapts to the business’s scale, coupled with tailored support so the team can get the most out of the infrastructure.

“We push our infrastructure hard and Gcore handles it with ease. The combination of raw GPU power, availability, fast and easy provisioning, and flexible scaling lets us move as fast as we need to. What really sets Gcore apart, though, is the hands-on, personalized support. Their team really understands our setup and helps us optimize it to our specific needs.”
— James Stewart, DevOps Engineering Manager, Leonardo.Ai

Delivering real-time creation with top-tier AI infrastructure

Partnering with Gcore helps Leonardo maintain its famously rapid pace of development and consistently deliver innovative new features to Leonardo.Ai users.

“With Gcore, we can spin up GPU nodes instantly and trust that they’ll work reliably and consistently. Knowing that Gcore has the capacity that we need, when we need it, allows us to quickly and confidently develop new, cutting-edge features for Leonardo customers without worrying whether or not we’ll have the GPUs available to power them.”
— James Stewart, DevOps Engineering Manager, Leonardo.Ai

The team now uses Terraform to provision GPUs on demand, and containerised workflows to “go fast” when deploying the suite of Gcore AI services powering Leonardo.Ai.

Powering global AI creativity

Gcore GPU Cloud has become part of the backbone of Leonardo AI’s infrastructure. By offloading infrastructure complexity to Gcore, the Leonardo AI team can stay focused on its customers and innovation.

“Our partnership with Gcore gives us the flexibility and performance to innovate without limits. We can scale our AI workloads globally and keep our customers creating.”
— James Stewart, DevOps Engineering Manager, Leonardo.Ai

Ready to scale your AI workloads globally? Discover how Gcore’s AI services can power your next-generation applications. Find out more about GPU Cloud and Everywhere Inference, see how easy it is to deploy with just three clicks, or get in touch with our AI team for a personalized consultation.
Funcom delivers the successful launch of Dune: Awakening in South America with Gcore
Founded in 1993, Funcom is a leading developer and publisher of online multiplayer and open-world games. Known for its rich storytelling and immersive universes, Funcom has developed acclaimed titles like Conan Exiles, The Secret World, and Anarchy Online. With its latest and most ambitious project, Dune: Awakening, Funcom is building an expansive open-world multiplayer survival game on a massive scale set in the iconic sci-fi universe of Dune.Launching Dune: Awakening with low-latency performance for South American playersIn preparation for the global launch of Dune: Awakening, Funcom faced a critical challenge: delivering a smooth, high-performance multiplayer experience for players in South America, a region often underserved by traditional infrastructure providers.With a large and passionate LATAM player base, the stakes were high. Funcom needed to deploy compute-intensive workloads capable of powering real-time gameplay and matchmaking with minimal latency, all while providing resilience against potential DDoS attacks during the launch window.Choosing Gcore for high-frequency compute power and managed orchestrationTo meet these infrastructure demands, Funcom partnered with Gcore to deploy:Bare Metal Servers configured with AMD Ryzen 9 9950x CPUs for high single-threaded performanceManaged Kubernetes clusters to orchestrate scalable multiplayer backend services on bare metal serversBuilt-in advanced DDoS Protection to secure critical launch infrastructureThe robust presence of Gcore in Latin America, supported by its global backbone and edge PoPs, made it possible for Funcom to deliver a high-quality experience to South American players comparable to what’s typically available in North America or Europe.The Gcore infrastructure in South America is purpose-built to support latency-sensitive workloads like online multiplayer gaming. 
With multi-terabit capacity in São Paulo, participation in IX.br (the region’s largest internet exchange), and private peering agreements with major ISPs such as Claro and TIM, Gcore ensures stable, low-latency connectivity across the region. Crucially, DDoS mitigation is handled locally, eliminating the need for long-haul traffic rerouting and enabling faster, more reliable protection at scale.

The ability to directly deploy high-frequency bare metal nodes in the region has been a cornerstone of our South American launch strategy. Gcore allows us to reach players in regions where performance at this level is not usually possible.
Stian Drageset, CFO & COO, Funcom

Guaranteeing smooth operations with Kubernetes and low-latency infrastructure

With Gcore Managed Kubernetes, Funcom was able to dynamically manage containers across a cluster of powerful bare metal nodes, which is crucial for maintaining game state, matchmaking, and multiplayer interactions in real time. This setup enables flexible scaling in response to player demand, whether it spikes on launch day or ramps up as more players join.

Thanks to Gcore’s managed services, our team can focus on game logic and player experience, not orchestration or hardware.
Rui Casais, CEO, Funcom

Proving performance at scale during beta and beyond

Anticipation was already high leading up to the launch. During the invite-only beta weekend in May 2025, the game attracted nearly 40,000 concurrent players, a strong early signal of the momentum behind the title. Behind the scenes, Gcore supported Funcom with high-performance Bare Metal Servers and Managed Kubernetes to provide uninterrupted performance at scale during this critical milestone. That success laid the groundwork for a smooth and stable full launch in South America.

Monitoring results post-launch

As Dune: Awakening prepared for its launch, Funcom and Gcore closely monitored infrastructure performance and prepared for a high-concurrency environment.
Post-launch data included:
- Reaching the top ten most-played games on Steam globally within 24 hours of launch, climbing to number two within the first week
- A peak of 142,000 concurrent players in the first couple of days, and 189,000 by the end of the week

Expanding into underserved gaming regions

This deployment showcases how Gcore’s infrastructure helps game studios expand into emerging regions like South America, where consistent low-latency, high-frequency compute has traditionally been harder to access. South America is often seen as a “blue ocean” market in the gaming industry: vast, underserved, and perceived as difficult to serve due to infrastructure limitations. With a population of over 400 million, the region holds immense potential. Gcore makes it easy for publishers like Funcom to unlock that opportunity, delivering a seamless experience to players across LATAM without compromise.

Gcore’s ability to deliver high-frequency compute in South America gives us a real advantage in reaching players where latency and infrastructure have long been challenges for online multiplayer gaming.
Stian Drageset, CFO & COO, Funcom

Powering next-gen multiplayer survival games globally

By choosing Gcore Bare Metal Servers and Managed Kubernetes, Funcom is positioned to deliver a high-performance multiplayer experience to players in South America and beyond. The flexibility of Gcore infrastructure ensures optimal resource usage, rapid scaling, and reliable DDoS protection, foundational components for a smooth multiplayer survival game launch.

Scale your multiplayer experience everywhere

Looking to launch your next multiplayer title in regions others can’t reach? Gcore offers flexible, high-performance infrastructure tailored for real-time gaming. Contact us to learn more about how we can help you reach every corner of the globe.

Contact us
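The demand-driven scaling described above, adjusting backend capacity as player concurrency spikes or ramps up, can be sketched as simple target-tracking replica math. This is a minimal illustration only; the pod-size and replica-limit numbers are assumptions for the example, not Funcom's or Gcore's actual configuration:

```python
import math

def desired_replicas(concurrent_players: int,
                     players_per_pod: int = 500,
                     min_replicas: int = 2,
                     max_replicas: int = 400) -> int:
    """Target-tracking scaling sketch: aim for one backend pod per
    `players_per_pod` concurrent players, clamped to a safety range.
    All thresholds are illustrative assumptions, not real settings."""
    needed = math.ceil(concurrent_players / players_per_pod)
    return max(min_replicas, min(max_replicas, needed))
```

At the beta weekend's roughly 40,000 concurrent players, this logic would target 80 pods; inside a Kubernetes cluster, a Horizontal Pod Autoscaler driven by a custom concurrency metric applies the same idea automatically.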
Customers that trust Gcore to power their business and infrastructure
Cloud services
Virtual data center with 50+ Cloud services.
Content Delivery Network
Next-gen CDN for dynamic and static content delivery worldwide.
DDoS Protection
Reliable infrastructure protection against L3, L4 & L7 DDoS attacks.