Choosing Between Managed and Self-Managed Kubernetes: Key Differences Explained

  • By Gcore
  • May 31, 2024
  • 6 min read

When you decide to use Kubernetes to run your applications, your first choice is typically between a managed Kubernetes service and self-hosted Kubernetes. These two ways of running K8s clusters differ in many respects, from infrastructure provisioning to day-to-day maintenance. This article guides you through the differences between them. By the end, you’ll have a detailed understanding of how managed and self-managed Kubernetes compare, so you can make an informed decision about what’s best for your use case.

Please note that this article focuses on the technical and administrative side of managing K8s, rather than a financial (TCO) comparison.

Key Differences: Managed vs. Self-Managed K8s Cluster

The chart below summarizes these key differences between a self-hosted and a managed Kubernetes cluster, using Gcore Managed Kubernetes as an example of the latter.

Effort for cluster deployment

| | Self-hosted K8s cluster | Managed K8s cluster |
|---|---|---|
| Number of key steps from provisioning the infrastructure to accessing the deployed cluster | 12 | 2 |
| Average time to get a cluster up and running | From hours to months | 10-15 minutes |
Responsibilities

| Operations | Self-hosted K8s cluster | Managed K8s cluster |
|---|---|---|
| Infrastructure deployment and maintenance (configuring and running VMs or servers for master and worker nodes, network infrastructure) | User | Provider |
| K8s cluster provisioning and maintenance (control plane: API server, etcd, scheduler, controller-manager, cloud-controller-manager; worker nodes: kubelet, kube-proxy) | User | Provider |
| K8s operation management (monitoring, cluster autoscaling) | User | Provider |
| K8s updates and patches | User | Provider |
| Additional tools (CI/CD, service mesh, logging, advanced security features) | User | User/Provider* |
| Technical support | User | User/Provider** |

* Gcore Managed Kubernetes offers a built-in feature to collect logs, Cilium CNI with service mesh capabilities, and advanced DDoS protection.
** A provider typically offers technical support for the infrastructure and control plane.

As you can see, deploying a K8s cluster yourself requires much more effort to get up and running than a managed K8s cluster. Unlike with managed K8s, you’ll also be responsible for most of the maintenance tasks associated with running a production-ready cluster, which requires a high level of expertise and ongoing effort. So, don’t think only about the setup process when estimating your operations team’s total workload for each approach. Consider the full scope of work, including the more substantial element of ongoing maintenance.

Setting Up a Kubernetes Cluster

Let’s look at a practical comparison between the steps required to set up a Kubernetes cluster using each method. On the self-managed side, we’ll look at a well-known guide by K8s evangelist Kelsey Hightower, “Kubernetes The Hard Way.” For the managed approach, we’ll use Gcore Managed Kubernetes.

Here is a comparison of the key steps in both scenarios with the links to the documentation that guides you through each step:

Self-hosted K8s cluster:

  1. Provisioning virtual or physical machines
  2. Setting up the Jumpbox
  3. Provisioning compute resources
  4. Provisioning the CA and generating TLS certificates
  5. Generating Kubernetes configuration files for authentication
  6. Generating the data encryption config and key
  7. Bootstrapping the etcd cluster
  8. Bootstrapping the Kubernetes control plane
  9. Bootstrapping the Kubernetes worker nodes
  10. Configuring kubectl for remote access
  11. Provisioning pod network routes
  12. Smoke test

Gcore Managed Kubernetes cluster:

  1. Configuring worker nodes and network
  2. Configuring kubectl for remote access
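In both scenarios, the final step is pointing kubectl at the cluster with a kubeconfig file; the difference is that a managed provider generates this file for you, while with the hard way you assemble it yourself from the certificates you created. The shape of the file is the same either way. A minimal sketch, in which the cluster name, server address, and credential paths are all placeholders, looks like this:

```yaml
# Minimal kubeconfig sketch; names, server URL, and paths are placeholders
apiVersion: v1
kind: Config
clusters:
  - name: example-cluster
    cluster:
      server: https://203.0.113.10:6443
      certificate-authority: /path/to/ca.pem
users:
  - name: admin
    user:
      client-certificate: /path/to/admin.pem
      client-key: /path/to/admin-key.pem
contexts:
  - name: example-context
    context:
      cluster: example-cluster
      user: admin
current-context: example-context
```

Once this file is in place (or merged into `~/.kube/config`), `kubectl get nodes` is the usual smoke test that remote access works.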

“Kubernetes The Hard Way” is a step-by-step guide that explains how to bootstrap a K8s cluster manually without scripts or automation tools. Using this guide, you can deploy a basic Kubernetes cluster with control plane components running on a master node and two worker nodes.

When running a Gcore Managed Kubernetes cluster, all you need to do is configure resources for worker nodes and the network and select a K8s version. The master (control plane) node runs automatically under Gcore’s management.
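To illustrate how little input a managed cluster actually needs, a provisioning request can be reduced to a handful of fields: a name, a K8s version, and the worker-pool and network settings. The sketch below builds such a request body in Python; the field names and structure are hypothetical, for illustration only, not Gcore’s actual API schema.

```python
def build_cluster_request(name, k8s_version, node_flavor, node_count, network_id):
    """Assemble a hypothetical managed-K8s creation payload.

    Only worker-node and network settings are supplied by the user;
    the control plane never appears because the provider manages it.
    """
    return {
        "name": name,
        "version": k8s_version,
        "pools": [
            {
                "flavor": node_flavor,      # worker node size
                "node_count": node_count,   # initial worker count
            }
        ],
        "network_id": network_id,
    }


payload = build_cluster_request("demo", "1.29", "g2-standard-4", 3, "net-123")
print(sorted(payload.keys()))  # ['name', 'network_id', 'pools', 'version']
```

Compare this to the self-hosted flow, where the equivalent of this payload is a dozen manual provisioning and bootstrapping steps.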

To summarize:

  • Self-hosted K8s requires twelve key steps to get a cluster up and running and access it. This can take 1.5-2 hours if you’re an experienced Linux user, or months if you’re a novice.
  • Managed K8s requires only two key steps to get a cluster up and running and access it. Even if you’re new to Kubernetes, this takes 10-15 minutes.

Pros and Cons of Each Approach

Both managed and self-managed Kubernetes clusters bring advantages and disadvantages.

Self-hosted K8s cluster

Pros:

  • Full control over all K8s components
  • Can be hosted in the cloud or on-premises and customized as needed, offering flexibility

Cons:

  • Operational overhead and maintenance burden
  • Difficulty hiring Kubernetes experts, such as DevOps engineers
  • Higher ongoing operating costs as the K8s infrastructure grows
  • Potentially slow setup process

Managed K8s cluster

Pros:

  • Simplified deployment and management
  • Reduced operational overhead, which means fewer in-house resources are required
  • Integrated monitoring and security features
  • Cluster high availability and reliability backed by an SLA
  • Cost efficiency for ongoing maintenance of production-grade clusters

Cons:

  • Less customization and control
  • When migrating to a new provider, some manual work may be required to reconfigure cloud resource automation

Managed Kubernetes has more advantages and fewer disadvantages than self-hosted Kubernetes. However, you should carefully consider your requirements and resources to choose the approach that meets your specific needs. Let’s now take a look at what you should keep in mind when selecting a Kubernetes approach.

Which Approach to Choose?

The choice of managed or unmanaged K8s depends on several factors. These include your project requirements, available resources, and level of expertise. Each approach has common use cases and some important factors to consider.

When Self-Hosted Kubernetes May Be Preferable

Full control over K8s. Self-hosted Kubernetes is more appropriate if you need customization, flexibility, and direct control over Kubernetes clusters and their configuration.

An experienced team. If you have two or three experienced K8s or DevOps engineers, they should be able to manage a single, average self-hosted K8s cluster and its associated infrastructure around the clock. However, according to the 2023 Spectro Cloud survey, 56% of organizations run more than ten Kubernetes clusters, and 80% expect further growth. One cluster likely won’t be enough if your product grows, so this point is only relevant if you anticipate a small K8s operation that will stay small.

Predictable and stable workloads. If you run your cluster on servers or VMs and know how many you need, you can prepare your infrastructure in advance. However, if you expect even predictable load peaks, you should also be prepared to scale the infrastructure up and down so that resources are used efficiently and money is not wasted.

When Managed Kubernetes May Be Preferable

Simplified K8s management. If you don’t need granular control over servers or VMs and the control plane, managed Kubernetes is a good choice. Managed Kubernetes automatically handles maintenance, monitoring, and scaling. All you need to do is manage worker nodes and deploy your applications.
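With the control plane handled by the provider, the user-facing work reduces to sizing worker nodes and applying workload manifests. A minimal Deployment, with placeholder names and a stock image, is all it takes to run an application on such a cluster:

```yaml
# Minimal Deployment manifest; name and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, this is the whole deployment workflow; everything beneath it, from the API server to etcd backups, stays on the provider’s side of the responsibility table.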

Reduced operational costs. As a result of simplified K8s management, you need fewer people to manage your K8s stack. This is a significant saving when you consider that the average DevOps engineer in the US earns $141,000 per year. With managed K8s, a basic level of technology knowledge is sufficient. That’s why some organizations manage without a dedicated ops team when using a managed service: developers can perform K8s-related tasks themselves.

Scalability. Managed Kubernetes allows you to seamlessly scale your cluster resources up and down as your needs change. You can also cap autoscaling of compute resources to avoid exceeding planned utilization. This can translate into significant savings on computing resources, as much as 30-50% over self-hosted infrastructure.
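“Controlling the limit of autoscaling” in practice means setting a minimum and maximum node count on a worker pool; the autoscaler then sizes the pool within those bounds. The toy sketch below shows that clamping logic in Python. The sizing formula is purely illustrative, not any provider’s actual algorithm:

```python
import math

def desired_nodes(pending_pods, pods_per_node, min_nodes, max_nodes):
    """Pick a node count that fits demand, clamped to the configured bounds."""
    needed = math.ceil(pending_pods / pods_per_node)
    return max(min_nodes, min(needed, max_nodes))

# Demand within bounds: the pool scales to fit.
print(desired_nodes(pending_pods=25, pods_per_node=10, min_nodes=2, max_nodes=10))   # 3
# Demand above the cap: the max limit protects the budget.
print(desired_nodes(pending_pods=500, pods_per_node=10, min_nodes=2, max_nodes=10))  # 10
```

The max bound is what keeps a traffic spike from turning into an unplanned bill; the min bound keeps baseline capacity available between spikes.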

Other Factors to Consider

The complexity of Kubernetes. Kubernetes is a difficult technology and has a steep learning curve. To be confident working with Kubernetes, an engineer must have years of experience in Linux administration, networking, and virtualization. It’s not easy to find these experts, let alone grow them in-house—but if you have an expert team, they might be able to offer benefits of self-managed Kubernetes that don’t extend to the managed approach, like increased flexibility. A managed K8s provider or outsourced ops team can also handle Kubernetes with expertise.

Engineer scarcity. As mentioned above, you need two or three engineers to run and maintain a self-hosted K8s cluster. In most cases, these should be qualified DevOps engineers who remain in high demand due to their scarcity. According to the 2023 Reveal survey, a shortage of IT professionals with advanced skills was the top challenge in the market for the second year in a row. DevOps engineers were among those highlighted as the most difficult roles to fill. Even if you currently have a strong team, will you be able to maintain in-house K8s management if a team member leaves?

Industry stats. The number of organizations choosing DIY Kubernetes over managed Kubernetes is decreasing every year. VMware, for example, tracks this trend in its “State of Kubernetes” reports, which show that the percentage of organizations running their own K8s clusters dropped from 29% in 2020 to 10% in 2023.

Summing Up the Decision-Making Process

In general, self-managed Kubernetes may be a good choice for organizations with a stable and high level of expertise, sufficient infrastructure, and substantial human resources. A managed Kubernetes service, like Gcore Managed Kubernetes, may be a better option for organizations that lack these resources and view Kubernetes maintenance as a commodity they can delegate to an experienced partner. This allows the business to focus on its core activities.

Why Choose Gcore Managed Kubernetes?

Gcore Managed Kubernetes relieves you of the operational burdens of setting up and running K8s. It also helps ease the engineer scarcity problem, because you need fewer resources to support the day-to-day operations of the cluster. With Gcore Managed Kubernetes, you can get a production-ready K8s cluster in minutes, configure autoscaling, and stop worrying about traffic spikes—we take care of these complexities so you can focus on your core business logic.

Conclusion

Managed and self-managed Kubernetes are different ways to run containerized applications, each of which offers benefits and disadvantages. You should carefully consider your project requirements and resources to choose the approach that is right for you.

If you’re looking for reliable, high-performance, and scalable Kubernetes clusters, try Gcore Managed Kubernetes. We offer free cluster management with a 99.9% SLA. Pricing for worker nodes is the same as for our Virtual Machines and Bare Metal.

Explore Gcore Managed Kubernetes

Related articles

5 ways to keep gaming customers engaged with optimal performance

Nothing frustrates a gamer more than lag, stuttering, or server crashes. When technical issues interfere with gameplay, it can be a deal breaker. Players know that the difference between winning and losing should be down to a player’s skill, not lag, latency issues, or slow connection speed—and they want gaming companies to make that possible every time they play.And gamers aren’t shy about expressing their opinion if a game hasn’t met their expectations. A game can live or die by word-of-mouth, and, in a highly competitive industry, gamers are more than happy to spend their time and money elsewhere. A huge 78% of gamers have “rage-quit” a game due to latency issues.That’s why reliable infrastructure is crucial for your gaming offering. A solid foundation is good for your bottom line and your reputation and, most importantly, provides a great gaming experience for customers, keeping them happy, loyal, and engaged. This article suggests five technologies to boost player engagement in real-world gaming scenarios.The technology powering seamless gaming experiencesHaving the right technology behind the scenes is essential to deliver a smooth, high-performance gaming experience. 
From optimizing game deployment and content delivery to enabling seamless multiplayer scalability, these technologies work together to reduce latency, prevent server overloads, and guarantee fast, reliable connections.Bare Metal Servers provide dedicated compute power for high-performing massive multiplayer games without virtualization overhead.CDN solutions reduce download times and minimize patch distribution delays, allowing players to get into the action faster.Managed Kubernetes simplifies multiplayer game scaling, handling sudden spikes in player activity.Load Balancers distribute traffic intelligently, preventing server overload during peak times.Edge Cloud reduces latency for real-time interactions, improving responsiveness for multiplayer gaming.Let’s look at five real-world scenarios illustrating how the right infrastructure can significantly enhance customer experience—leading to smooth, high-performance gaming, even during peak demand.#1 Running massive multiplayer games with bare metal serversImagine a multiplayer FPS (first-person shooter gaming) game studio that’s preparing for launch and needs low-latency, high-performance infrastructure to handle real-time player interactions. They can strategically deploy Gcore Bare Metal servers across global locations, reducing ping times and providing smooth gameplay.Benefit: Dedicated bare metal resources deliver consistent performance, eliminating lag spikes and server crashes during peak hours. Stable connections and seamless playing are assured for precision gameplay.#2 Seamless game updates and patch delivery with CDN integrationLet’s say you have a game that regularly pushes extensive updates to millions of players worldwide. 
Instead of overwhelming origin servers, they can use Gcore CDN to cache and distribute patches, reducing download times and preventing bottlenecks.Benefit: Faster updates for players, reduced server tension, and seamless game launches and updates.#3 Scaling multiplayer games with Managed KubernetesAfter a big update, a game may experience a sudden spike in the number of players. With Gcore Managed Kubernetes, the game autoscales its infrastructure, dynamically adjusting resources to meet player demand without downtime.Benefit: Elastic, cost-efficient scaling keeps matchmaking fast and smooth, even under heavy loads.#4 Load balancing for high-availability game serversAn online multiplayer game with a global base requires low latency and high availability. Gcore Load Balancers distribute traffic across multiple regional server clusters, reducing ping times and preventing server congestion during peak hours.Benefit: Consistent, lag-free gameplay with improved regional connectivity and failover protection.#5 Supporting live events and seasonal game launchesIn the case of a gaming company hosting a global in-game event, attracting millions of players simultaneously, leveraging Gcore CDN, Load Balancers, and autoscaling cloud infrastructure can prevent crashes and provide a seamless and uninterrupted experience.Benefit: Players enjoy smooth, real-time participation while the infrastructure is stable under extreme load.Building customer loyalty with reliable gaming infrastructureIn a challenging climate, focusing on maintaining customer happiness and loyalty is vital. The most foolproof way to deliver this is by investing in reliable and secure infrastructure behind the scenes. 
With infrastructure that’s both scalable and high-performing, you can deliver uninterrupted, seamless experiences that keep players engaged and satisfied.Since its foundation in 2014, Gcore has been a reliable partner for game studios looking to deliver seamless, high-performance gaming experiences worldwide, including Nitrado, Saber, and Wargaming. If you’d like to learn more about our global infrastructure and how it provides a scalable, high-performance solution for game distribution and real-time games, get in touch.Talk to our gaming infrastructure experts

How cloud infrastructure maximizes efficiency in the gaming industry

The gaming industry is currently facing several challenges, with many companies having laid off staff over the past year due to rising development costs and a fall in product demand post-pandemic. These difficult circumstances mean it’s more important than ever for gaming firms of all sizes to maximize efficiency and keep costs down. One way companies can do this is by implementing reliable infrastructure that supports the speedy development of new games.This article explores how dependable cloud infrastructure at the edge—including virtual machines, bare metal, and GPUs—helps gaming companies work more efficiently. Edge computing allows developers to build, test, and deploy games faster while minimizing latency, reducing server costs, and handling complex rendering and AI workloads.The key benefits of edge cloud infrastructure for gamingReliable cloud infrastructure benefits gaming companies in a variety of ways. It’s a replacement for relying on outdated arrangements such as proprietary on-premises data centers, which lack flexibility, have limited scalability, require significant upfront investment, and need teams that are fully dedicated to their maintenance and management. Cloud compute resources, including virtual machines, bare metal servers, and GPUs, can support your game development and testing more cost-effectively, keeping your gaming company competitive in the market and cost efficient.Here’s how reliable cloud infrastructure can benefit your business:Speeds up development cycles: Cloud-based infrastructure accelerates game builds, testing, and deployment by providing on-demand access to high-performance compute resources. Developers can run several testing environments and collaborate from anywhere.Scales on demand: From indie studios launching a first title to major AAA developers handling millions of players, cloud solutions can scale resources instantly. 
Storage options and load balancing enable infrastructure to adapt to player demand, preventing performance issues during peak times while optimizing costs during off-peak periods.Offers low-latency performance: Cloud solutions reduce lag, optimize the experience for developers and end-users by deploying servers close to players, and improve their in-game experience.Delivers high-performance compute: Bare Metal servers and GPU instances deliver the power required for game development by providing dedicated resources. This enables faster rendering, complex simulations, and seamless real-time processing for graphics-intensive applications, leading to smooth gameplay experiences and faster iteration cycles.Maximizes cost efficiency: Flexible pricing models help studios optimize costs while maintaining high performance. Pay-as-you-go plans mean companies only pay for the resources used. Commitment plans that give discounts for use cases that require consistent/planned capacity are also available.How Gcore cloud infrastructure works: real-life examplesGcore cloud infrastructure can be helpful in many common scenarios for developers. Here are some real-world examples demonstrating how Gcore virtual machines and GPUs can help:Example 1: Faster game building and testing with scalable virtual machinesLet’s say a game studio developing a cross-platform game needs to compile large amounts of code and assets quickly. By leveraging Gcore’s Virtual Machines, they can create automated CI/CD pipelines that speed up game builds and testing across different environments, reducing wait times. Scalable virtual machines allow developers to spin up multiple test environments on demand, running compatibility and performance tests simultaneously.Example 2: High-performance graphics rendering with GPU computeVisually rich games (like open-world role-playing games) need to render complex 3D environments efficiently. 
Instead of investing in expensive local hardware, they can use Gcore’s GPU infrastructure to accelerate rendering and AI-powered animation workflows. Access to powerful GPUs without upfront investment enables faster iteration of visual assets and machine-learning-driven game enhancements.If your business faces rendering challenges, one of our experts can advise you on the most suitable cloud infrastructure package.Partnering for success: why gaming companies choose GcoreIn a challenging gaming industry climate, it’s vital to have the right tools and solutions at your disposal. Cloud infrastructure at the edge can significantly enhance game development efficiency for gaming businesses of all sizes.Gcore was founded in 2014 for gamers, by gamers, and we have been a trusted partner to global gaming companies including Nitrado, Saber, and Wargaming since day one. If you’d like to learn more about our gaming industry expertise and how our cloud infrastructure can help you operate in a more efficient and cost effective way, get in touch.Talk to us about your gaming cloud infrastructure needs

Edge cloud trends 2025: AI, big data, and security

Edge cloud is a distributed computing model that brings cloud resources like compute, storage, and networking closer to end users and devices. Instead of relying on centralized data centers, edge cloud infrastructure processes data at the network’s edge, reducing latency and improving performance for real-time applications.In 2025, the edge cloud landscape will evolve even further, shaping industries from gaming and finance to healthcare and manufacturing. But what are the key trends driving this transformation? In this article, we’ll explore five key trends in edge computing for 2025 and explain how the technology helps with pressing issues in key industries. Read on to discover whether it’s time for your company to adopt edge cloud computing.#1 Edge computing is integral to modern infrastructureEdge computing is on the rise and is set to become an indispensable technology across industries. By the end of this year, at least 40% of larger enterprises are expected to have adopted edge computing as part of their IT infrastructure. And this trend shows no signs of slowing. By the end of 2028, worldwide spending for edge computing is anticipated to reach $378 billion. That’s almost a 50% increase from 2024. There’s no question that edge computing is rapidly becoming integral to modern businesses.#2 Edge computing will power AI-driven, real-time workloadsAs real-time digital experiences become the norm, the demand for edge computing is accelerating. From video streaming and immersive XR applications to AI-powered gaming and financial trading, industries are pushing the limits of latency-sensitive workloads. Edge cloud computing provides the necessary infrastructure to process data closer to users, meeting their demands for performance and responsiveness. 
AI inference will become part of all kinds of applications, and edge computing will deliver faster responses to users than ever before.New AI-powered features in mobile gaming are driving greater demand for edge computing. While game streaming services haven’t yet gained widespread adoption, the high computational demands of AI inference could change that. Since running a large language model (LLM) efficiently on a smartphone is still impractical, these games require high-performance support from edge infrastructure to deliver a smooth experience.Multiplayer games require ultra-low latency for a smooth, real-time experience. With edge computing, game providers can deploy servers closer to players, reducing lag and ensuring high-performance gameplay. Because edge computing is decentralized, it also makes it easier to scale gaming platforms as player demand grows.The same advantage applies to high-frequency trading, where milliseconds can determine profitability. Traders have long benefited from placing servers near financial markets, and edge computing further simplifies deploying infrastructure close to preferred exchanges, optimizing trade execution speeds.#3 Edge computing will handle big dataEmerging real-time applications generate massive volumes of data. IoT devices, stock exchanges, and GenAI models all produce and rely on vast datasets, requiring efficient processing solutions.Traditionally, organizations have managed large-scale data ingestion through horizontal scaling in cloud computing. Edge computing is the next logical step, enabling big data workloads to be processed closer to their source. This distributed approach accelerates data processing, delivering faster insights and improved performance even when handling huge quantities of data.#4 Edge computing will simplify data sovereigntyThe concept of data sovereignty states that data is subject to the same laws and regulations as the user who created it. 
For example, the GDPR in Europe requires organizations to store their citizens’ and residents’ data on servers subject to European laws. This can cause headaches for companies working with a centralized cloud, since they may have to comply with a complex web of fast-changing data sovereignty laws. Put simply: cloud location matters.With data privacy regulations on the rise, edge computing is emerging as a key technology to simplify compliance. Edge cloud means allows running distributed server networks and geofencing data to servers in specific countries. The result is that companies can scale globally without worrying about compliance, since edge cloud companies like Gcore automate most of the regulatory requirement processes.#5 Edge computing will improve securityEdge computing is crucial to solving the issues of a globally connected world, but its security story has until now been a double-edged sword. On the one hand, the edge ensures data doesn’t need to travel great distances on public networks, where it can be exposed to malicious attacks. On the other hand, central data centers are much easier to secure than a distributed server network. More servers mean a higher potential for one to be compromised, making it a potentially risky choice for privacy-sensitive workloads in healthcare and finance.However, cloud providers are starting to add features to their solutions that bring edge security into line with traditional cloud resources. Secure hardware enclaves and encrypted data transmissions deliver end-to-end security, so data will never be accessible in cleartext to an edge location provider or other third parties. If, for any reason, these encryption mechanisms should fail, AI-driven threat scanners can detect and notify quickly.If your business is looking to adopt edge cloud while prioritizing security, look for a provider that specializes in both. Avoid solutions where security is an afterthought or a bolt-on. 
Gcore cloud servers integrate seamlessly with Gcore Edge Security solutions, so your servers are protected to the highest levels at the click of a button.Unlock the next wave of edge computing with GcoreThe trend is clear: Internet-enabled devices are rapidly entering every part of our lives. This raises the bar for performance and security, and edge cloud computing delivers solutions to meet these new requirements. Distributed data processing means GenAI models can scale efficiently, and location-independent deployments enable high-performance real-time workloads from high-frequency trading to XR gaming to IoT.At Gcore, we provide a global edge cloud platform designed to meet the performance, scalability, and security demands of modern businesses. With over 180 points of presence worldwide, our infrastructure ensures ultra-low latency for AI-powered applications, real-time gaming, big data workloads, and more. Our edge solutions help businesses navigate evolving data sovereignty regulations by enabling localized data processing for global operations. And with built-in security features like DDoS protection, WAAP, and AI-driven threat detection, you leverage the full potential of edge computing without compromising on security.Ready to learn more about why edge cloud matters? Dive into our blogs on cloud data sovereignty.Get in touch to discuss your edge cloud 2025 goals

Gcore 2024 round-up: 10 highlights from our 10th year

It’s been a busy and exciting year here at Gcore, not least because we celebrated our 10th anniversary back in February. Starting in 2014 with a focus on gaming, Gcore is now a global edge AI, cloud, network, and security solutions provider, supporting businesses from a wide range of industries worldwide.As we start to look forward to the new year, we took some time to reflect on ten of our highlights from 2024.1. WAAP launchIn September, we launched our WAAP security solution (web application and API protection) following the acquisition of Stackpath’s edge WAAP. Gcore WAAP is a genuinely innovative product that offers customers DDoS protection, bot management, and a web application firewall, helping protect businesses from the ever-increasing threat of cyber attacks. It brings next-gen AI features to customers while remaining intuitive to use, meaning businesses of all sizes can futureproof their web app and API protection against even the most sophisticated threats.My highlight of the year was the Stackpath WAAP acquisition, which enabled us to successfully deliver an enterprise-grade web security solution at the edge to our customers in a very short time.Itamar Eshet, Senior Product Manager, Security2. Fundraising round: investing in the futureIn July, we raised $60m in Series A funding, reflecting investors’ confidence in the continued growth and future of Gcore. Next year will be huge for us in terms of AI development, and this funding will accelerate our growth in this area and allow us to bring even more innovative solutions to our customers.3. Innovations in AIIn 2024, we upped our AI offerings, including improved AI services for Gcore Video Streaming: AI ASR for transcription and translation, and AI content moderation. As AI is at the forefront of our products and services, we also provided insights into how regulations are changing worldwide and how AI will likely affect all aspects of digital experiences. 
We already have many new AI developments in the pipeline for 2025, so watch this space…4. Global expansionsWe had some exciting expansions in terms of new cloud capabilities. We expanded our Edge Cloud offerings in new locations, including Vietnam and South Korea, and in Finland, we boosted our Edge AI capabilities with a new AI cluster and two cutting-edge GPUs. Our AI expansion was further bolstered when we introduced the H200 and GB200 in Luxembourg. We also added new PoPs worldwide in locations such as Munich, Riyadh, and Casablanca, demonstrating our dedication to providing reliable and fast content delivery globally.5. FastEdge launchWe kicked off the year with the launch of FastEdge. This lightweight edge computing solution runs on our global Edge Network and delivers exceptional performance for serverless apps and scripts. This new solution makes handling dynamic content even faster and smoother. We ran an AI image recognition model on FastEdge in an innovative experiment. The Gcore team volunteered their pets to test FastEdge’s performance. Check out the white paper and discover our pets and our technological edge.6. PartnershipsWe formed some exciting global partnerships in 2024. In November, we launched a joint venture with Ezditek, an innovator in data center and digital infrastructure services in Saudi Arabia. The joint venture will build, train, and deploy generative AI solutions locally and globally. We also established some important strategic partnerships. Together with Sesterce, a leading European provider of AI infrastructure, we can help more businesses meet the rising challenges of scaling from AI pilot projects to full-scale implementation. We also partnered with LetzAI, a Luxembourg-based AI startup, to accelerate its mission of developing one of the world’s most comprehensive generative AI platforms.7. EventsIt wasn’t all online. 
We also ventured out into the real world, making new connections at global technology events, including the WAICF AI conference and Viva Tech in Cannes and Paris, respectively; Mobile World Congress in Barcelona; Gamescom in Cologne in August; IBC (the International Broadcasting Convention) in Amsterdam; and Connected World KSA in Saudi Arabia just last month. We look forward to meeting even more of you next year. Here are a few snapshots from 2024.

Gamescom
IBC

8. New container registry solution

September kicked off with the beta launch of Gcore Container Registry, one of the backbones of our cloud offering. It streamlines your image storage and management, keeping your applications running smoothly and consistently across various environments.

9. GigaOm recognition

Being recognized by outside analysts is always a moment to remember. In August, we were thrilled to receive recognition from tech analyst GigaOm, which named Gcore an outperformer in its field. The prestigious accolade highlights Gcore as a leader in platform capability, innovation, and market impact, as assessed by GigaOm’s rigorous criteria.

10. New customer success stories

We were delighted to share some of the work we’ve done for our customers this year: we helped gaming company Fawkes Games mitigate DDoS attacks and provided Austrian sports broadcaster and streaming platform fan.at with the infrastructure for its sports technology offering.

And as a bonus number 11, if you’re looking for something to read in the new year lull, download our informative long reads on topics including selecting a modern content delivery network, cyber attack trends, and using Kubernetes to enhance AI. Download the ebook of your choice below.

The essential guide to selecting a modern CDN eBook
Gcore Radar: DDoS attack trends in Q1-Q2 2024 report
Accelerating AI with Kubernetes

Here’s to 2025!

And that’s it for our 2024 highlights.
It’s been a truly remarkable year, and we thank you for being a part of it. We’ll leave you with some words from our CEO and see you in 2025.

“2024 has been a year of highs, from our tenth anniversary celebrations to the launch of various new products, and from expansion into new markets to connecting with customers (new and old) at events worldwide. Happy New Year to all our readers who are celebrating, and see you for an even bigger and better 2025!”
Andre Reitenbach, CEO

Edge Cloud updates for December 2024

We are pleased to introduce the latest enhancements to our Edge Cloud platform, delivering greater flexibility, reliability, and control over your infrastructure. These updates include multiple public IP support for Bare Metal and strengthened anti-abuse measures. Exclusively for new accounts, we’re offering a special promotion for Bare Metal server activations. Find all the details in this blog.

Multiple public IP support for Bare Metal

We’re introducing multiple public IP support for Bare Metal servers on dedicated public subnetworks, adding flexibility and reliability. With this update, you can configure several public IP addresses for seamless service continuity, making your infrastructure more robust. With multiple IPs, your services will remain online without interruption, even if one IP address fails.

This functionality makes it significantly easier to scale your operations. It’s particularly useful for handling diverse workloads, traffic routing, and complex hosting environments, and it’s an ideal solution for hypervisor environments where segregating traffic across various IPs is crucial.

Here’s what you need to know before getting started:

- This feature works exclusively with a dedicated public subnet.
- To enable this functionality, please place a request with our support team.
- The number of supported public IPs is limited by the size of the dedicated subnet assigned to your Bare Metal server.

Please contact our support team to start using multiple public IPs.

Strengthened anti-abuse measures

We’ve introduced new anti-abuse measures to detect and mitigate abusive traffic patterns, enhancing service reliability and protecting your infrastructure from malicious activity.
These updates help safeguard your network and achieve consistent application performance. Get more information in our Product Documentation.

Try Bare Metal with 35% off this month

Gcore Bare Metal servers are the perfect choice for delivering unmatched performance, designed to handle your most demanding workloads. With global availability, they provide a reliable, high-performance, and scalable solution wherever you need them. For a limited time, new customers can enjoy 35% off High-frequency Bare Metal servers for two months*.

If you’ve been disappointed by your provider during peak season or you’re looking to scale going into 2025, this is the opportunity for you. Take advantage of the offer by January 7 to secure your discount, available for the first 500 customers.

Unlock the full potential of Edge Cloud

These updates reflect our ongoing commitment to supporting your business with tools and features that address your computing needs. Whether enhancing flexibility, simplifying server management, or improving cost oversight, our Edge Cloud platform is built to help you achieve your goals with confidence. We invite you to explore these enhancements today and take full advantage of the capabilities now available.

Discover Gcore Bare Metal

* Note: This promotion is available until January 7, 2025. The discount applies for two months from the subscription date and is valid exclusively for new customers activating high-frequency Bare Metal servers. After two months, the discount will be automatically removed. The offer is limited to the first 500 activations.
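As a practical aside on the multiple public IP feature described in this update: once Gcore support has assigned a dedicated public subnet to your server, secondary addresses are typically attached at the operating-system level. The sketch below is hypothetical and uses standard Linux iproute2 commands; the interface name (eth0) and the documentation-range addresses (203.0.113.x) are illustrative assumptions, not values from Gcore documentation.

```shell
# Hypothetical example: attach two public IPs from a dedicated /28 subnet
# to one interface. Replace eth0 and the 203.0.113.x addresses with the
# interface and subnet actually assigned to your Bare Metal server.

# Add the primary public IP to the interface
ip addr add 203.0.113.10/28 dev eth0

# Add a secondary public IP from the same dedicated subnet,
# so services can keep answering if the first address is withdrawn
ip addr add 203.0.113.11/28 dev eth0

# Confirm that both addresses are now active on the interface
ip -brief addr show dev eth0
```

For a persistent setup you would declare these addresses in your distribution’s network configuration (for example netplan or systemd-networkd) rather than running `ip` commands by hand.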

Edge Cloud Updates for October 2024

Today we’re announcing a range of key enhancements to our Edge Cloud solutions, all crafted to provide you with greater power, flexibility, and control over your cloud infrastructure. Read on to discover why we were named a Major Player in the 2024 IDC MarketScape for European Public Cloud and learn about new Bare Metal availability.

Gcore Named Major Player in IDC MarketScape for European Public Cloud 2024

We’re excited to announce that we have been recognized as a Major Player in the IDC MarketScape: European Public Cloud Infrastructure (IaaS) 2024 report. This report evaluates and compares public cloud infrastructure-as-a-service (IaaS) providers across Europe, including global and regional cloud providers, to identify the most impactful players in the IaaS landscape.

This recognition as a Major Player highlights our commitment at Gcore to providing high-quality cloud services that empower businesses to innovate, scale, and secure their applications with confidence. We strive to support our customers’ needs with robust solutions tailored for performance, security, and scalability, minimizing the complexities of infrastructure management so you can focus on developing your business. We invite you to read the full press release to learn more.

Introducing Additional High-Frequency Bare Metal Servers

Unlock the power of our latest high-frequency Bare Metal servers in Manassas, Amsterdam, Santa Clara, Singapore, Sydney, and Luxembourg. With 128 GB of RAM, this new addition is specifically designed for compute-intensive, latency-sensitive workloads. It provides the performance and reliability to accelerate your most demanding applications: benefit from dedicated compute power, efficiency, and low latency, perfect for high-performance computing, real-time data analysis, and large-scale simulations.

Gcore Bare Metal servers are available in 19 locations on six continents.
With just a few clicks in the Gcore Customer Portal, you can easily set up your new high-frequency server. Or, get in touch if you’d like to talk to a Gcore expert.

Conclusion

With these October 2024 updates, we continue our commitment to delivering the tools, performance, and reliability you need to build and scale your business with confidence. Stay tuned for more updates as we continue to improve our Edge Cloud solutions.

Discover Gcore Edge Cloud
