Celebrating Gcore’s 10th Anniversary—A Decade of Innovation

  • February 27, 2024
  • 3 min read

Ten years ago, Gcore embarked on a bold journey. Starting with a focus on the gaming sector, we drew on our deep passion for gaming and our understanding of its demands, then expanded our horizons to become a global leader in edge computing and AI. Throughout our first decade, Gcore has been driven by a belief in the transformative power of technology and a commitment to innovation. We are humbled by and proud of our achievements, and moved to reflect on our journey so far.

How It All Began

Our story began with a dream to revolutionize gaming experiences worldwide. In the heart of Luxembourg, a group of gamers recognized the critical demand for seamless, lag-free gaming experiences, a challenge we aimed to meet by developing low-latency infrastructure solutions for this sector. Thus Gcore was founded, with the original goal of helping gaming companies captivate and inspire their audiences around the world.

This led us to focus on edge computing, bringing our customers’ services closer to users around the globe. We soon realized our potential impact extended far beyond gaming, as increasingly diverse industries looked for lower-latency options than traditional cloud providers were offering. We launched our CDN and hosting services and expanded our global reach by opening new points of presence. Gcore was no longer just about amazing gaming experiences, but about providing a better internet experience for everyone.

Innovation Through Challenges

Back in our earliest months as a startup, every day brought a new set of challenges. In one memorable instance, our team personally transported equipment by airplane to ensure we met a customer deadline. On another occasion, the infrastructure did not meet our exacting standards, and equipment couldn’t be installed as planned. We custom-designed a solution to ensure our technology could operate to the highest standard, regardless of the challenges along the way.

As we continued to expand to regions outside the well-developed infrastructures of Europe and the US, we encountered a lack of data centers, diverse regulatory environments, and language barriers, complicating our efforts to serve a truly global audience. The logistical hurdles of transporting equipment across borders and customizing solutions to meet local standards tested our resilience and innovation. We navigated a complex ecosystem of vendors and partners to build edge computing solutions that could deliver our customers’ content and applications globally.

Our hands-on approach and willingness to tackle the complexities of global deployment were key to our growth. We evolved our offerings by listening to our customers’ needs, making it our goal to offer an impressive range of IT solutions under a single digital roof with exceptional customer experience at the heart of our services. Our customers wanted to focus on their core business without the hassle of dealing with multiple vendors or worrying about the underlying infrastructure, so we stepped up to the plate. Meeting and preempting our customers’ needs has always been a driving force for innovation at Gcore.

As an increasing number and range of businesses moved online, the demand for robust, secure cloud and edge computing solutions surged. We embarked on a mission to build a truly global network, delivering innovative solutions to businesses across six continents.

Preempting AI’s Rise

Our strategic pivot towards AI began in 2020, responding to the tech community’s growing recognition of AI’s transformative potential. We understood that high-performance computing (HPC) capacities needed to be automated and made accessible as a service available from anywhere. Our vision was validated by the rapid rise in popularity of large language models (LLMs) and broader adoption of AI technologies in 2022.

We integrated AI technologies across our services and continue to launch new Gcore Edge AI services, with some exciting new offerings planned for 2024. Our collaboration with industry leaders like NVIDIA is poised to address the most challenging workloads in the coming years, such as building capacity for training AI models and performing AI inference at the edge. Our vision is to connect the world to AI, anywhere, anytime.

Today, we are focused on delivering innovative and robust edge AI, cloud, network, and security solutions. We remain driven to serve our customers’ IT needs and continue to innovate ceaselessly to drive technological progress.

Here’s to the Next Ten Years

A laser focus on our customers and our mission, together with relentless innovation, has been the key to Gcore’s success over the past decade. These remain our North Star today. As we step into our next decade, we’re poised to provide trailblazing edge services with AI at the forefront, actively shaping the future of technology.

Thank you to our employees for your continued support and dedication over the past ten years. To our customers, partners, and stakeholders: You keep us motivated to deliver innovative edge solutions and AI-driven automation that redefine the boundaries of technology. Thank you for trusting us with your business.

Here’s to ten years of innovation, collaboration, and growth—and many more to come.

Related articles

Introducing Gcore Everywhere AI: 3-click AI training and inference for any environment

For enterprises, telcos, and CSPs, AI adoption sounds promising…until you start measuring impact. Most projects stall or even fail before ROI starts to appear. ML engineers lose momentum setting up clusters. Infrastructure teams battle to balance performance, cost, and compliance. Business leaders see budgets rise while value stays locked in prototypes.

Gcore Everywhere AI changes that. It simplifies AI training, deployment, and scaling across on-premises, hybrid, and cloud environments, giving every team the speed, reliability, and control they need to turn AI initiatives into real outcomes.

Why we built Everywhere AI

Enterprises need AI that runs where it makes the most sense: on-premises for privacy, in the cloud for scale, or across both for hybrid agility. Not all enterprises are “AI-ready”, meaning that for many, the complexity of integrating AI offsets its benefits. We noticed that fragmented toolchains, complex provisioning, and compliance overhead can hinder the value of AI adoption.

That’s why we built Everywhere AI: to simplify deployment, orchestration, and scaling for AI workloads across any environment, all controlled in one intuitive platform. We’re on a mission to bring every enterprise, CSP, and telco team a consistent, secure, and simple way to make AI efficient—everywhere.

There are many tools on the market that promise similar benefits, but no other simplifies the deployment process to the point where it’s accessible to anyone in the business, regardless of technical expertise. To use Everywhere AI, you don’t need a Ph.D. in machine learning or years of experience as an infrastructure engineer. Everywhere AI is for everyone at your organization.

Enterprises today need AI that simply works, whether on-premises, in the cloud, or in hybrid deployments.

“With Everywhere AI, we’ve taken the complexity out of AI deployment, giving customers an easier, faster way to deploy high-performance AI with a streamlined user experience, stronger ROI, and simplified compliance across environments. This launch is a major step toward our goal at Gcore to make enterprise-grade AI accessible, reliable, and performant.”
— Seva Vayner, Product Director of Edge Cloud and AI at Gcore

Features and benefits

Everywhere AI brings together everything needed to train, deploy, and scale AI securely and efficiently:

  • Deploy in just 3 clicks: Move from concept to training in minutes using JupyterLab or Slurm. Or simply select your tool, cluster size, and location, and let Everywhere AI handle your setup, orchestration, and scaling automatically.
  • Unified control plane: Manage training, inference, and scaling from one dashboard, across on-prem, hybrid, and cloud. Operate in public or private clouds, or in fully air-gapped environments when data can’t leave your network.
  • Gcore Smart Routing: Inference requests automatically reach the nearest compliant GPU region for ultra-low latency and simplified regulatory alignment. Built on Gcore’s global edge network (210+ PoPs), Smart Routing delivers uncompromising performance worldwide.
  • Auto-scaling: Handle demand spikes seamlessly. Scale to zero when idle to reduce costs, or burst instantly for inference peaks.
  • Privacy and sovereignty: Designed for regulated industries, Everywhere AI supports hard multitenancy for project isolation and sovereign privacy for sensitive workloads. Whether hybrid or fully disconnected, your models stay under your control.

Proven results

Enterprises deploying Everywhere AI can expect measurable, repeatable improvements:

  • 2× higher GPU utilization: Boost efficiency from ~40% to 80–95% with multi-tenancy and auto-scaling.
  • 80% lower infrastructure admin load: Infrastructure teams are more productive with automated software rollout and updates.
  • From POC to results in one week: Enterprise teams take less than a week to onboard, test, and start seeing performance improvements from Everywhere AI.

Early adopters are already validating Everywhere AI’s performance and flexibility.

“Gcore Everywhere AI and HPE GreenLake streamline operations by removing manual provisioning, improving GPU utilization, and meeting application requirements, including fully air-gapped environments and ultra-low latency. By simplifying AI deployment and management, we’re helping enterprises deliver AI faster and create applications that deliver benefits regardless of scale: good for ML engineers, infrastructure teams, and business leaders.”
— Vijay Patel, Global Director Service Providers and Co-Location Business, HPE

Purpose-built for regulated industries

Everywhere AI is designed for organizations where privacy, uptime, and compliance are non-negotiable.

  • Telcos: Use CDN-integrated Smart Routing to deliver real-time inference at carrier scale with consistent QoS.
  • Finance firms: Deploy risk and fraud prevention models on-premises for data residency while benefiting from auto-scaling and multi-tenancy for maximum efficiency.
  • Healthcare providers: Run imaging and diagnostics AI inside hospital networks to protect PHI.
  • Public-sector agencies: Deliver robust AI-driven citizen services securely under strict compliance regimes.
  • Industrial enterprises: Leverage model and GPU health checks on edge deployments to keep critical predictive maintenance models running in remote sites.

Run AI on your terms

Whether you’re training large models on-premises, scaling inference at the edge, or operating across multiple regions, Gcore Everywhere AI gives you full control over performance, cost, and compliance.

Ready to deploy AI everywhere you need it? Discover how Everywhere AI can simplify and accelerate your AI operations.

Learn more about Everywhere AI
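At its core, the Smart Routing idea described above is a constrained selection: send each inference request to the lowest-latency GPU region that satisfies the workload's compliance rule. The sketch below is a toy illustration of that logic; the region names, latency figures, and the `pick_region` helper are invented for this example and are not Gcore's API.

```python
# Toy model of compliance-aware routing: choose the lowest-latency
# region whose jurisdiction satisfies the request's residency rule.
# All names and numbers here are illustrative, not real measurements.

REGIONS = [
    {"name": "eu-frankfurt", "jurisdiction": "EU", "latency_ms": 12},
    {"name": "us-ashburn",   "jurisdiction": "US", "latency_ms": 95},
    {"name": "eu-paris",     "jurisdiction": "EU", "latency_ms": 18},
]

def pick_region(regions, required_jurisdiction):
    """Return the lowest-latency region allowed by the compliance rule."""
    allowed = [r for r in regions if r["jurisdiction"] == required_jurisdiction]
    if not allowed:
        raise LookupError("no compliant region available")
    return min(allowed, key=lambda r: r["latency_ms"])

print(pick_region(REGIONS, "EU")["name"])  # eu-frankfurt
```

A real router would of course work from live latency and capacity data per point of presence, but the shape of the decision (filter by compliance, then minimize latency) stays the same.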

Gcore partners with AVEQ to elevate streaming performance monitoring

At Gcore, delivering exceptional streaming experiences to users across our global network is at the heart of what we do. We're excited to share how we're taking our CDN performance monitoring to new heights through our partnership with AVEQ and their innovative Surfmeter solution.

Operating a massive global network spanning 210 points of presence across six continents comes with unique challenges. While our globally distributed caching infrastructure already ensures optimal performance for end users, we recognized the need for deeper insights into the complex interactions between applications and our underlying network. We needed to move beyond traditional server-side monitoring to truly understand what our customers' users experience in the real world.

Real-world performance visibility

That's where AVEQ's Surfmeter comes in. We're now using Surfmeter to gain unprecedented visibility into our network performance through automated, active measurements that simulate actual streaming video quality, exactly as end users experience it.

This isn't about checking boxes or reviewing server logs. It's about measuring what users see on their screens at home. With Surfmeter, our engineering teams can identify and resolve potential bottlenecks or suboptimal configurations, and collaborate more effectively with our customers to continuously improve Quality of Experience (QoE).

How we use Surfmeter

AVEQ helps us simulate and analyze real-world scenarios where users access different video streams. Their software runs both on network nodes close to our data center CDN caches and at selected end-user locations with genuine ISP connections.

What sets Surfmeter apart is its authentic approach: it opens video streams from the same platforms and players that end users rely on, ensuring measurements truly represent real-world conditions. Unlike monitoring solutions that simply check stream availability, Surfmeter doesn't make assumptions or use third-party playback engines. Instead, it precisely replicates how video players request and decode data served from our CDN.

Rapid issue resolution

When performance issues arise, Surfmeter provides our engineers with the deep insights needed to quickly identify root causes. Whether the problem lies within our CDN, with peering partners, or on the server side, we can pinpoint it with precision.

By monitoring individual video requests, including headers and timings, and combining this data with our internal logging, we gain complete visibility into our entire streaming pipeline. Surfmeter can also perform ping and traceroute tests from the same device that measures video QoE, allowing our engineers to access all collected data through one API rather than manually connecting to devices for troubleshooting.

Competitive benchmarking and future capabilities

Surfmeter also enables us to benchmark our performance against other services and network providers. By deploying Surfmeter probes at customer-like locations, we can measure streaming from any source via different ISPs.

This partnership reflects our commitment to transparency and data-driven service excellence. By leveraging AVEQ's Surfmeter solution, we ensure that our customers receive the best possible streaming performance, backed by objective, end-user-centric insights.

Learn more about Gcore CDN
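As a rough illustration of the kind of end-user-centric indicator such measurements enable, a stall ratio can be derived from playback event timings. The event-log format and `stall_ratio` helper below are invented for this example (they are not AVEQ's API), and real QoE models such as ITU-T P.1203 are far more sophisticated.

```python
# Illustrative only: compute a simple QoE indicator (stall ratio)
# from a list of playback events. Not AVEQ's or Gcore's API.

def stall_ratio(events):
    """Fraction of the session spent stalled.

    events: list of (state, duration_seconds) tuples,
    where state is "playing" or "stalled".
    """
    total = sum(d for _, d in events)
    stalled = sum(d for s, d in events if s == "stalled")
    return stalled / total if total else 0.0

session = [("playing", 58.0), ("stalled", 2.0), ("playing", 60.0)]
print(round(stall_ratio(session), 3))  # 0.017
```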

How we engineered a single pipeline for LL-HLS and LL-DASH

Viewers in sports, gaming, and interactive events expect real-time, low-latency streaming experiences. To deliver this, the industry has rallied around two powerful protocols: Low-Latency HLS (LL-HLS) and Low-Latency DASH (LL-DASH).

While they share a goal, their methods are fundamentally different. LL-HLS delivers video in a sequence of tiny, discrete files. LL-DASH delivers it as a continuous, chunked download of a larger file. This isn't just a minor difference in syntax; it implies completely different behaviors for the packager, the CDN, and the player.

This duality presents a major architectural challenge: How do you build a single, efficient, and cost-effective pipeline that can serve both protocols simultaneously from one source?

At Gcore, we took on this unification problem. The result is a robust, single-source pipeline that delivers streams with a glass-to-glass latency of approximately 2.0 seconds for LL-DASH and 3.0 seconds for LL-HLS. This is the story of how we designed it.

Understanding the duality

To build a unified system, we first had to deeply understand the differences in how each protocol operates at the delivery level.

LL-DASH: the continuous feed

MPEG-DASH has always been flexible, using a single manifest file to define media segments by their timing. Low-Latency DASH builds on this by using chunked CMAF segments.

Imagine a file that is still being written to on the server. Instead of waiting for the whole file to be finished, the player can request it, and the server can send it piece by piece using chunked transfer encoding. The player receives a continuous stream of bytes and can start playback as soon as it has enough data.

  • Single, long-lived files: A segment might be 2–6 seconds long, but it’s delivered as it’s being generated.
  • Timing-based requests: The player knows when a segment should be available and requests it. The server uses chunked transfer to send what it has so far.
  • Player-driven latency: The manifest contains a targetLatency attribute, giving the player a strong hint about how close to the live edge it should play.

LL-HLS: the rapid-fire delivery

LL-HLS takes a different approach. It extends traditional playlist-based HLS by breaking segments into even smaller chunks called Parts.

Think of it like getting breaking news updates. The server pre-announces upcoming Parts in the manifest before they are fully available. The player then requests a Part, but the server holds that request open until the Part is ready to be delivered at full speed. This is called a Blocking Playlist Reload.

  • Many tiny files (Parts): A 2-second segment might be broken into four 0.5-second Parts, each requested individually.
  • Manifest-driven updates: The server constantly updates the manifest with new Parts, and uses special tags like #EXT-X-PART-INF and #EXT-X-SERVER-CONTROL to manage delivery.
  • Server-enforced timing: The server controls when the player receives data by holding onto requests, which helps synchronize all viewers.

[Figure: a simplified diagram comparing LL-HLS delivery of many small Parts with LL-DASH chunked transfer of a single, larger segment over the same time period.]

These two philosophies demand different things from a CDN. LL-DASH requires the CDN to intelligently cache and serve partially complete files. LL-HLS requires the CDN to handle a massive volume of short, bursty requests and hold connections open for manifest updates. A traditional CDN is optimized for neither.

Forging a unified strategy

With two different delivery models, where do you start? You find the one thing they both depend on: the keyframe.

Playback can only start from a keyframe (or I-frame). Therefore, the placement of keyframes, which defines the Group of Pictures (GOP), is the foundational layer that both protocols must respect.
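Before moving on, the LL-HLS tags mentioned above can be made concrete with a hypothetical playlist fragment. The URIs, durations, and version number here are invented for illustration and are not taken from Gcore's pipeline:

```text
#EXTM3U
#EXT-X-VERSION:9
#EXT-X-TARGETDURATION:2
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.5
#EXT-X-PART-INF:PART-TARGET=0.5
#EXTINF:2.0,
segment100.m4s
#EXT-X-PART:DURATION=0.5,URI="segment101.part1.m4s"
#EXT-X-PART:DURATION=0.5,URI="segment101.part2.m4s"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment101.part3.m4s"
```

With PART-TARGET=0.5, the server advertises a new Part roughly twice per second, and CAN-BLOCK-RELOAD=YES lets the player hold its manifest request open until the next Part exists: the Blocking Playlist Reload behavior described above.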
By enforcing a consistent keyframe interval on the source stream, we could create a predictable media timeline. This timeline can then be described in two different “languages” in the manifests for LL-HLS and LL-DASH.

[Figure: a single timeline with consistent GOPs being packaged for both protocols.]

This realization led us to a baseline configuration, but each parameter involved a critical engineering trade-off:

  • GOP: 1 second. We chose a frequent, 1-second GOP. The primary benefit is extremely fast stream acquisition; a player never has to wait more than a second for a keyframe to begin playback. The trade-off is a higher bitrate: a 1-second GOP can increase bitrate by 10–15% compared to a more standard 2-second GOP because you’re storing more full-frame data. For real-time, interactive use cases, we prioritized startup speed over bitrate savings.
  • Segment size: 2 seconds. A 2-second segment duration provides a sweet spot. For LL-DASH and modern HLS players, it’s short enough to keep manifest sizes manageable. For older, standard HLS clients, it prevents them from falling too far behind the live edge, keeping latency reduced even on legacy players.
  • Part size: 0.5 seconds. For LL-HLS, this means we deliver four updates per segment. This frequency is aggressive enough to achieve sub-3-second latency while being coarse enough to avoid overwhelming networks with excessive request overhead, which can happen with part durations in the 100–200ms range.

Cascading challenges through the pipeline

1. Ingest: predictability is paramount

To produce a clean, synchronized output, you need a clean, predictable input. We found that the encoder settings of the source stream are critical. An unstable source with a variable bitrate or erratic keyframe placement will wreck any attempt at low-latency delivery.

For our users, we recommend settings that prioritize speed and predictability over compression efficiency:

  • Rate control: Constant bitrate (CBR)
  • Keyframe interval: A fixed interval (e.g., every 30 frames for 30 FPS, to match our 1s GOP)
  • Encoder tune: zerolatency
  • Advanced options: Disable B-frames (bframes=0) and scene-cut detection (scenecut=0) to ensure keyframes are placed exactly where you command them to be

Here is an example ffmpeg command in Bash that encapsulates these principles:

    ffmpeg -re -i "source.mp4" -c:a aac -c:v libx264 \
      -profile:v baseline -tune zerolatency -preset veryfast \
      -x264opts "bframes=0:scenecut=0:keyint=30" \
      -f flv "rtmp://your-ingest-url"

2. Transcoding and packaging

Our transcoding and just-in-time packaging (JITP) layer is where the unification truly happens. This component does more than convert codecs; it has to operate on a stream that is fundamentally incomplete.

The primary challenge is that the packager must generate manifests and parts from media files that are still being written by the transcoder. This requires a tightly coupled architecture in which the packager can safely read from the transcoder’s buffer.

To handle the unpredictable nature of live sources, especially user-generated content via WebRTC, we use a hybrid workflow:

  • GPU workers (NVIDIA/Intel): These handle the heavy lifting of decoding and encoding. Offloading to GPU hardware is crucial for minimizing processing latency and preserving advanced color formats like HDR+.
  • Software workers and filters: These provide flexibility. When a live stream from a mobile device suddenly changes resolution or its framerate drops due to a poor connection, a rigid hardware pipeline would crash. Our software layer handles these context changes gracefully, for instance by scaling the erratic source and overlaying it on a stable, black-background canvas, so the output stream never stops.

This makes our JITP a universal packager, creating three synchronized content types from a single, resilient source:

  • LL-DASH (CMAF)
  • LL-HLS (CMAF)
  • Standard HLS (MPEG-TS) for backward compatibility

3. CDN delivery: solving two problems at once

This was the most intensive part of the engineering effort. Our CDN had to be taught how to excel at two completely different, high-performance tasks simultaneously.

For LL-DASH, we developed a custom caching module we call chunked-proxy. When the first request for a new .m4s segment arrives, our edge server requests it from the origin. As bytes flow in from the origin, the chunked-proxy immediately forwards them to the client. When a second client requests the same file, our edge server serves all the bytes it has already cached and then appends the new bytes to both clients’ streams simultaneously. It’s a multi-client cache for in-flight data.

For LL-HLS, the challenges were different:

  • Handling blocked requests: Our edge servers needed to be optimized to hold thousands of manifest requests open for hundreds of milliseconds without consuming excessive resources.
  • Intelligent caching: We needed to correctly handle cache statuses (MISS, EXPIRED) for manifests to ensure only one request goes to the origin per update, preventing a “thundering herd” problem.
  • High request volume: LL-HLS generates a storm of requests for tiny part-files. Our infrastructure was scaled and optimized to serve these small files with minimal overhead.

The payoff: ultimate flexibility for developers

This engineering effort wasn't just an academic exercise. It provides tangible benefits to developers building with our platform.
The primary benefit is simplicity through unification, but the most powerful benefit is the ability to optimize for every platform.

Consider the complex landscape of Apple devices. With our unified pipeline, you can create player logic that does this:

  • On iOS 17.1+: Use LL-DASH with the new Managed Media Source (MMS) API for ~2.0-second latency.
  • On iOS 14.0–17.0: Use native LL-HLS for ~3.0-second latency.
  • On older iOS versions: Automatically fall back to standard HLS with a reduced latency of ~9 seconds.

This lets you provide the best possible experience on every device, all from a single backend and a single live source, without any extra configuration.

Don't fly blind: observability in a low-latency world

A complex system is useless without visibility, and traditional metrics can be misleading for low-latency streaming. Simply looking at response_time from a CDN log is not enough. We had to rethink what to measure. For example:

  • For an LL-HLS manifest, a high response_time (e.g., 500ms) is expected behavior, as it reflects the server correctly holding the request while waiting for the next part. A low response_time could actually indicate a problem. We monitor “Manifest Hold Time” to ensure this blocking mechanism is working as intended.
  • For LL-DASH, a player requesting a chunk that isn't ready yet might receive a 404 Not Found error. While occasional 404s are normal, a spike can indicate origin-to-edge latency issues. This metric, combined with monitoring player liveCatchup behavior, gives a true picture of stream health.

Gcore: one pipeline to serve them all

The paths of LL-HLS and LL-DASH may be different, but their destination is the same: real-time interaction with a global audience. By starting with a common foundation—the keyframe—and custom-engineering every component of our pipeline to handle this duality, we successfully solved the unification problem.

The result is a single, robust system that gives developers the power of both protocols without the complexity of running two separate infrastructures. It’s how we deliver ~2.0s latency with LL-DASH and ~3.0s with LL-HLS, and it’s the foundation upon which we’ll build to push the boundaries of real-time streaming even further.
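The chunked-proxy behavior described above can be sketched minimally: a late client first replays the bytes already cached for an in-flight segment, then receives each new byte range as it arrives from the origin. The class and method names below are invented for illustration; this is a toy model, not Gcore's production module.

```python
# Minimal, illustrative model of a multi-client cache for in-flight
# segments: the first request triggers an origin fetch; later requests
# replay cached bytes, then receive new bytes as they arrive.
# Not Gcore's production chunked-proxy.

class InFlightSegment:
    def __init__(self):
        self.chunks = []        # byte chunks received from origin so far
        self.clients = []       # per-client output buffers
        self.complete = False

    def attach(self, client_buffer):
        # New client: replay everything cached so far, then subscribe
        # for future chunks if the segment is still being written.
        for chunk in self.chunks:
            client_buffer.append(chunk)
        if not self.complete:
            self.clients.append(client_buffer)

    def on_origin_bytes(self, chunk):
        # Cache the chunk and fan it out to all attached clients.
        self.chunks.append(chunk)
        for client_buffer in self.clients:
            client_buffer.append(chunk)

    def on_origin_complete(self):
        self.complete = True
        self.clients.clear()

seg = InFlightSegment()
a = []
seg.attach(a)                    # first client, nothing cached yet
seg.on_origin_bytes(b"0000")
b = []
seg.attach(b)                    # late client replays cached bytes
seg.on_origin_bytes(b"1111")
print(b"".join(a), b"".join(b))  # both clients end with b"00001111"
```

A production implementation would stream over sockets with backpressure and expire completed segments into a normal cache, but the fan-out of in-flight data is the essential idea.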

Gcore successfully stops 6 Tbps DDoS attack

Gcore recently detected and mitigated one of the most powerful distributed denial-of-service (DDoS) attacks of the year, peaking at 6 Tbps and 5.3 billion packets per second (Bpps).

This surge, linked to the AISURU botnet, reflects a growing trend of large-scale attacks. It reminds us how crucial effective protection has become for companies that depend on high availability and low latency.

The attack in numbers

  • Peak traffic: 6 Tbps
  • Packet rate: 5.3 Bpps
  • Main protocol: UDP, typical of volumetric floods designed to overwhelm bandwidth
  • Geographic concentration: 51% of sources originated in Brazil and 23.7% in the US, together accounting for nearly 75% of all traffic

This regional concentration shows how today’s botnets are expanding across areas with high device connectivity and weaker security measures, creating an ideal environment for mass exploitation.

How to strengthen your defenses

The 6 Tbps attack is not an isolated incident. It marks an escalation in DDoS activity across industries where performance and availability are critical to customer satisfaction and company revenue. To protect your business from large-scale DDoS attacks, consider the following key strategies:

  • Adopt adaptive DDoS protection that detects and mitigates attacks automatically.
  • Leverage edge infrastructure to absorb malicious traffic closer to its source.
  • Prepare for high traffic volumes by upgrading your infrastructure or partnering with a reliable DDoS protection provider that has the global capacity and resources to keep your services online during large-scale attacks.

Keeping your business safe with Gcore

To stay ahead of these evolving threats, companies need solutions that deliver real-time detection, intelligent mitigation, and global reach. Gcore’s DDoS Protection was built to do precisely that, leveraging AI-driven traffic analysis and worldwide network capacity to block attacks before they impact your users.

As attacks grow larger and more complex, staying resilient means being prepared. With the right protection in place, your customers will never know an attack happened in the first place.

Learn more about 2025 cyberattack trends
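The two headline figures imply a telling detail: 6 Tbps spread over 5.3 billion packets per second works out to an average packet size of roughly 140 bytes, the small-packet signature typical of UDP floods. A quick check of the arithmetic:

```python
# Average packet size implied by the attack's peak figures.
bits_per_second = 6e12        # 6 Tbps
packets_per_second = 5.3e9    # 5.3 Bpps

avg_packet_bytes = bits_per_second / 8 / packets_per_second
print(round(avg_packet_bytes))  # 142 bytes per packet
```

Floods built from many tiny packets stress packet-processing capacity (routers, firewalls, load balancers) at least as hard as they stress raw bandwidth, which is why both Tbps and Bpps matter when sizing defenses.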

Gcore CDN updates: Dedicated IP and BYOIP now available

We’re pleased to announce two new premium features for Gcore CDN: Dedicated IP and Bring Your Own IP (BYOIP). These capabilities give customers more control over their CDN configuration, helping you meet strict security, compliance, and branding requirements.

Many organizations, especially in finance and other regulated sectors, require full control over their network identity. With these new features, Gcore enables customers to use unique, dedicated IP addresses to meet compliance or security standards; retain ownership and visibility over IP reputation and routing; and deliver content globally while maintaining trusted, verifiable IP associations.

Read on for more information about the benefits of both updates.

Dedicated IP: exclusive addresses for your CDN resources

The Dedicated IP feature enables customers to assign a dedicated IP address to their CDN configuration rather than using shared ones. This is ideal for:

  • Businesses that are subject to strict security or legal frameworks.
  • Customers who want isolated IP resources to ensure consistent access and reputation.
  • Teams using WAAP or other advanced security solutions where dedicated IPs simplify policy management.

BYOIP: bring your own IP range to Gcore CDN

With Bring Your Own IP (BYOIP), customers can use their own public IP address range while leveraging the performance and global reach of Gcore CDN. This option is especially useful for:

  • Resellers who prefer to keep Gcore infrastructure invisible to end clients.
  • Enterprises maintaining brand consistency and control over IP reputation.

How to get started

Both features are currently available as paid add-ons and are configured manually by the Gcore team. To request activation or learn more, please contact Gcore Support or your account manager.

We’re working on making these features easier to manage and automate in future releases. As always, we welcome your feedback on both the feature functionality and the request process—your insights help us improve the Gcore CDN experience for everyone.

Get in touch for more information

Introducing AI Cloud Stack: turning GPU clusters into revenue-generating AI clouds

Enterprises and cloud providers face major roadblocks when deploying GPU infrastructure at scale: long time-to-market, operational inefficiencies, and difficulty bringing new capacity to market profitably. Establishing AI environments with hyperscaler-grade functionality typically requires years of engineering effort, multiple partner integrations, and complex operational tooling.

Not anymore. With Gcore AI Cloud Stack, organizations can transform bare NVIDIA GPU clusters into a fully cloud-enabled environment, complete with orchestration, observability, billing, and go-to-market support, in a fraction of the time it would take to build from scratch, all while maximizing GPU utilization.

This proven solution is the latest addition to the Gcore AI product suite. It enables enterprises and cloud providers to accelerate AI cloud deployment through better GPU utilization, monetization, reduced complexity, and hyperscaler-grade functionality in their own AI environments. Gcore AI Cloud Stack already powers leading technology providers, including VAST and Nokia.

Why we built AI Cloud Stack

Buying and efficiently operating GPUs at large scale requires significant investment, time, and expertise. Most organizations need to hit the ground running, bypassing years of in-house R&D. Without a robust reference architecture, infrastructure and network preparation, 24/7 monitoring, dynamic resource allocation, orchestration abstraction, and clear paths to utilization or commercialization, enterprises can spend years before seeing ROI.

“Gcore brings together the key pieces—compute, networking, and storage—into a usable stack. That integration helps service providers stand up AI clouds faster and onboard clients sooner, accelerating time to revenue. Combined with the advanced multi-tenant capabilities of VAST’s AI Operating System, it delivers a reliable, scalable, and futureproof AI infrastructure. Gcore offers operators a valuable option to move quickly without building everything themselves.”
— Dan Chester, CSP Director EMEA, VAST Data

At Gcore, we understand that organizations across industries will continue to invest heavily in GPUs to power the next wave of AI innovation, meaning these challenges aren’t going away. AI Cloud Stack solves today’s challenges and anticipates tomorrow’s, ensuring that the GPU infrastructure at the core of AI innovation delivers maximum value to enterprises.

How AI Cloud Stack works

The solution is structured across three stages.

1. Provision and launch

Gcore handles the complexities of initial deployment, from physical infrastructure setup to orchestration, enabling enterprises to go live quickly with a reliable GPU cloud.

2. Operations and management

The solution includes monitoring, orchestration, ticket management, and ongoing support to keep environments stable, secure, and efficient. This includes automated GPU failure handling and optimized resource management.

3. Go-to-market support

Unlike other solutions, AI Cloud Stack goes beyond infrastructure. Building on Gcore’s experience as a trusted NVIDIA Cloud Provider (NCP), it helps customers sell their capacity, including through established reseller channels. This integrated GTM support ensures capacity doesn’t sit idle, losing value and potential.

What sets Gcore apart

Unlike many providers entering this market, Gcore has operated as a global cloud provider for over a decade and was an early player in the global AI landscape. Gcore knows what it takes to build, scale, and sell cloud and AI services, because it has done so for customers and partners worldwide. Gcore AI Cloud Stack has already been deployed on thousands of NVIDIA Hopper GPUs across Europe to build a commercial-grade AI cloud with full orchestration, abstraction, and monetization layers. That real-world experience allows Gcore to deliver the infrastructure, operational playbook, and sales enablement customers need to succeed.

“We’re pleased to collaborate with Gcore, a strong European ISV, to advance a networking reference architecture for AI clouds. Combining Nokia’s open, programmable, and reliable networking with Gcore’s cloud software accelerates deployable blueprints that customers can adopt across data centers and the edge.”
— Mark Vanderhaegen, Head of Business Development, Data Center Networks, Nokia

Key features of AI Cloud Stack

- Cloudification of GPU clusters: Transform raw infrastructure into cloud-like consumption models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), GPU as a Service (GPUaaS), or Model as a Service (MaaS).
- Gcore AI suite integration: Enable serverless inference and training capabilities through Gcore’s enterprise AI suite.
- Hyperscaler functionality: Built-in billing, observability, orchestration, and professional services deliver the tools CSPs and enterprises need to operate, similar to what they are used to on public clouds.
- White-label options: Deliver capacity under your own brand while relying on Gcore’s proven global cloud backbone.
- NVIDIA AI Enterprise-ready: Integrate pretrained models, chatbots, and NVIDIA AI blueprints to accelerate time-to-market.

The future of AI clouds

With Gcore AI Cloud Stack, enterprises no longer need to spend years building the operational, technical, and commercial capabilities required to utilize and monetize GPU infrastructure. Instead, they can launch in a few months with a hyperscaler-grade solution designed for today’s AI demands.

Whether you’re a cloud service provider, an enterprise investing in AI infrastructure, or a partner looking to accelerate GPU monetization, AI Cloud Stack gives you the speed, scalability, and GTM support you need.

Ready to turn your GPU clusters into a fully monetized, production-grade AI cloud? Talk with our AI experts to learn how you can go from bare metal to model-as-a-service in months, not years.

Get a customized consultation
