Gcore has expanded the regional coverage of its cloud services, including Infrastructure as a Service, AI IPU Infrastructure, Logging as a Service, and Managed Kubernetes. New points of presence are located in the US, Asia, and EMEA. Read on to learn more about the services and their locations.
IaaS: 23 Locations
As an IaaS (Infrastructure as a Service) provider, Gcore offers virtual machines, bare metal servers, S3 storage, load balancing, and other cloud services. With Gcore’s IaaS, you get all the necessary building blocks for your cloud infrastructure.
In May 2023, we added new points of presence in Dubai (UAE) and Newport (UK). The total list of Gcore Cloud PoPs now comprises 23 locations across the Americas, Asia, and EMEA.
AI IPU Infrastructure: 3 Locations
Gcore also provides AI Infrastructure as a Service based on Graphcore IPUs. AI IPU infrastructure speeds up machine learning and produces outstanding results for language processing, visual computing, and graph neural networks.
AI IPU infrastructure is now available in Luxembourg, Amsterdam, and Newport.
LaaS: 2 Locations
LaaS (Logging as a Service) is a cloud log management platform designed to collect, store, process, and analyze logs from infrastructure and applications. LaaS makes it easier to analyze events and data from multiple services in a single dashboard. With Gcore’s LaaS, you can detect and resolve errors in your infrastructure, investigate security incidents, check server connectivity, and more.
LaaS is available in Luxembourg and Manassas.
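To illustrate the kind of query a log management platform enables, here is a small, self-contained sketch. The records and field names are invented for the example and are not a Gcore API; think of it as what a dashboard filter does behind the scenes:

```python
# Hypothetical log records, shaped like events a LaaS platform might collect
logs = [
    {"service": "api", "level": "ERROR", "message": "db timeout"},
    {"service": "web", "level": "INFO", "message": "request served"},
    {"service": "api", "level": "INFO", "message": "healthcheck ok"},
]

def errors_for(service, records):
    """Return error-level events for one service, as a dashboard filter would."""
    return [r for r in records if r["service"] == service and r["level"] == "ERROR"]

print(errors_for("api", logs))  # one record: the "db timeout" error
```

A real LaaS deployment runs this kind of filtering server-side across all your services, so you query one place instead of grepping each host.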
FaaS: 9 Locations
FaaS (Function as a Service) is a cloud service that lets you run code without worrying about the underlying infrastructure. You simply write a discrete piece of code called a “function” and deploy it in our cloud environment. A function runs on demand, and you are charged only for its execution. FaaS cuts the cost of deploying simple applications, makes it easier to expand functionality, and reduces time-to-market.
FaaS is now available in core regions in addition to edge regions. Here is the complete list of locations:
- Luxembourg
- Luxembourg-2
- Manassas
- Singapore
- Tokyo
- Santa Clara
- Frankfurt
- Istanbul
- Chicago
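As a sketch of the FaaS programming model, here is what a minimal function might look like. The handler signature and request shape below are illustrative assumptions, not Gcore’s actual FaaS interface:

```python
def handler(request):
    """A hypothetical HTTP-triggered function: runs on demand, billed per execution."""
    name = request.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# The platform invokes the handler per request; locally you can call it directly:
print(handler({"name": "Gcore"})["body"])  # Hello, Gcore!
```

The point is that this is the entire deployable unit: no server process, no container image to maintain, just the function body.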
Managed Kubernetes: 4 Locations
Managed Kubernetes is a platform service that allows you to quickly and easily deploy a prebuilt Kubernetes cluster based on Gcore Cloud resources. Gcore’s team is responsible for maintaining the infrastructure and cluster, so you can quickly launch and easily scale your containerized applications.
The list of the Managed Kubernetes locations includes:
- Luxembourg
- Manassas
- Singapore
- Frankfurt
Gcore Basic: 5 Locations
Gcore Basic is a low-cost virtual machine with partial CPU usage. It is suitable for simple tasks such as hosting a website or blog, running a pet project, and deploying a private VPN. Gcore Basic is a great solution for home users, developers, and small business owners. You can deploy a virtual machine in a couple of minutes and integrate it with any Gcore Cloud service. Benefits include the latest Intel® Xeon® 4314 CPU, free built-in DDoS protection, and free egress traffic.
The list of the Gcore Basic locations includes:
- Frankfurt
- Amsterdam
- Manassas
- Hong Kong
- Tokyo
Take Advantage of Gcore’s Cloud Services
By expanding our regional coverage, we aim to improve the cloud experience for our customers, provide advanced IaaS and PaaS services anywhere, and enable global companies to achieve scalability across their distributed services and teams. Gcore’s network consists of 140+ points of presence worldwide, including 23 unique cloud locations. We are constantly enhancing our services for the convenience and efficiency of our customers, regardless of their location.
Related articles

How to Speed Up Dynamic Content Delivery Using a CDN
In today’s websites and applications, many sections or even entire pages are generated according to user properties and preferences. This means that part of the website content is assembled and delivered dynamically in response to the user’s request.

Originally, CDN providers delivered only static web content, caching it on servers around the world to reduce delivery time to users. Traditional CDNs are not designed for dynamic content acceleration.

In this article, we explore what makes dynamic content special and how Gcore CDN can speed up its delivery.

What is dynamic content?

Generally speaking, dynamic content is content on web pages that is generated when end users request it. Content generation technologies include ASP, JSP, PHP, Perl, CGI requests, and API calls (POST, PUT, and PATCH requests).

What the final page with dynamic content looks like depends on factors such as the behavior and preferences of the users on a site, their geolocation, and so on.

By using dynamic content, businesses can personalize pages. For example:

- Online stores adapt their product feeds to their customers. Users with different order histories and profiles are served different recommendation feeds, which makes it possible to offer more relevant products and increase conversions.
- News outlets offer different versions of their website for different readers. Paying subscribers see full versions of the website, tailored to their interests. Readers without a subscription see only the introductory part of the general news block, along with a pop-up offering a subscription.
- Franchises localize their sites depending on geolocation.
The site’s interface (language, addresses, hours of operation) automatically changes depending on the region from which the user requests the page.

With the proliferation of dynamic content on the modern web, delivering it has become a challenge.

What is the challenge of dynamic content delivery?

If a business is focused on the global market, content needs to reach users quickly, no matter how far they are from the origin server. To optimize the delivery of static content, there is a traditional CDN infrastructure consisting of caching servers located around the world.

Dynamic content, however, cannot be cached, because it is generated individually for each user. This makes it difficult to use traditional CDNs for sites that contain both types of content. Static site files will be delivered to users from the nearest caching Edge server, while dynamic content will be proxied from the origin, resulting in increased download time.

That said, it is still possible to optimize dynamic content delivery. To do so, choose CDNs that provide state-of-the-art delivery acceleration methods. Gcore’s next-gen Edge network architecture uses every available technique to accelerate dynamic content delivery, and we will look at each of these technologies in detail in this article.

How does Gcore’s next-gen CDN accelerate dynamic content delivery?

1. Optimized TCP connections

For the origin server to respond to a user request for dynamic content via HTTP, a TCP connection must be established between them. The TCP protocol is characterized by reliability: when transmitting data, it requires the receiving side to acknowledge that packets were received. If a failure occurs and packets are not received, the affected data segment is resent. However, this reliability comes at the cost of speed.

Gcore CDN uses two approaches to optimize the speed of the TCP connection:

Increasing the congestion window in TCP slow start.
TCP slow start is the default network mechanism for safely determining the maximum capacity of a connection. It incrementally increases the congestion window size (the number of packets that can be sent before acknowledgment is required) as long as the connection remains stable. When a TCP connection goes through an Edge network, we can increase the congestion window size because we are confident in the stability of the network. In this case, the number of packets in flight is higher even at the beginning of the connection, allowing dynamic content to load faster.

Establishing persistent HTTP connections. By using the HTTP/2 protocol, our Edge network supports multiplexing, which allows multiple data streams to be transmitted over a single established TCP connection. This means we can reuse existing TCP connections for multiple HTTP requests, reducing the time spent on connection setup and speeding up delivery.

Figure 1. Optimized TCP connections within the Gcore Edge network

2. Optimized TLS handshakes

HTTPS connections use the TLS cryptographic protocol, which secures data transmission and protects it from unauthorized access. To establish a secure TLS connection, three handshakes must be performed between the client and the server, during which they exchange security certificate data and generate a session encryption key.

It takes a significant amount of time to establish a secure connection. If the RTT (round-trip time) between the origin server and the client is 150 milliseconds, the total connection time will be 450 ms (3 × 150 ms):

Figure 2. Three handshakes are required to establish a TLS connection

When the origin server is connected to the Gcore CDN, TLS handshakes are performed with the help of intermediaries: Edge servers located as close as possible to the user (client) and to the origin server.
Edge servers belong to the same trusted network, so there is no need to establish a connection between them each time; once is sufficient.

With this method, the connection is established in 190 ms (more than twice as fast). This time comprises three handshakes between the client and the nearest Edge server (3 × 10 ms), one handshake between servers in the Edge network (130 ms), and three handshakes between the nearest Edge server and the origin (3 × 10 ms):

Figure 3. Establishing a TLS connection with the Gcore Edge network

3. WebSockets support

WebSocket is a bidirectional protocol for transferring data between a client and a server over a persistent connection. It allows real-time message exchange without the need to break connections and send additional HTTP requests.

In the standard approach, the client sends regular requests to the server to check whether any new information has arrived. This increases the load on the origin server, reducing request processing speed. It also delays content delivery, because the browser polls at fixed intervals and the server cannot push a new message to the client immediately.

In comparison, WebSocket establishes and maintains a persistent connection without the extra load of re-establishing connections. When a new message appears, the server sends it to the client immediately.

Figure 4. The difference between content delivery without and with WebSocket

WebSocket support can be enabled in the Gcore interface in two clicks.

4. Intelligent routing

Dynamic content delivery can be accelerated by optimizing packet routing. In the Gcore CDN, a user’s request is routed to the closest Edge server, then passes within the network to the server closest to the origin.

Network connectivity is critical to achieving high-speed delivery, and Gcore has over 11,000 peering partners to ensure this.
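The handshake timing from the TLS section above reduces to simple arithmetic. This short sketch, using the RTT values from the example, makes the comparison explicit:

```python
def tls_setup_ms(rtt_ms, handshakes=3):
    """Time to establish a TLS connection: each handshake costs one round trip."""
    return handshakes * rtt_ms

# Direct client-to-origin connection at 150 ms RTT
direct = tls_setup_ms(150)

# Via the Edge network: client<->Edge handshakes at 10 ms RTT, a single
# inter-Edge handshake (130 ms, done once for the trusted network), then
# Edge<->origin handshakes at 10 ms RTT
via_edge = tls_setup_ms(10) + 130 + tls_setup_ms(10)

print(direct, via_edge)  # 450 190
```

The saving comes entirely from replacing three 150 ms round trips with short hops to nearby Edge servers plus one pre-warmed inter-Edge leg.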
Once inside the network, traffic can bypass the public internet and travel through ISP networks.

We constantly measure network congestion, check connection quality, and perform real user monitoring (RUM). This allows our system to intelligently calculate the best possible route for each request our Edge network receives and increases overall delivery speed, whether you’re serving static or dynamic content.

5. Content prefetching

Prefetching is a technique that speeds up content delivery by proactively loading content onto Edge servers before end users even request it. It is traditionally associated with static content delivery, but it can also accelerate dynamic content delivery by preloading static objects used in dynamically generated responses.

In this case, when an end user requests something, the web server generates the content with its linked objects already on the Edge servers. This reduces the number of requests to the origin server and improves overall web application performance.

How to enable dynamic content delivery in Gcore’s CDN

To enable dynamic content acceleration, you need to integrate the whole website with our CDN by following these step-by-step instructions. You also need to use our DNS service (it has a free plan) to connect your website’s domain to our DNS points of presence for better balancing.

What’s next?

Modern applications will become increasingly customized and tuned to user parameters. Providing users with the most relevant content could become a significant competitive advantage for online businesses.

In parallel with the constant need for lower latency, this tendency is pushing forward serverless computing, an emerging technology focused on running application code right on cloud Edges.
In addition to simplifying the app deployment process overall, serverless computing will open a wide range of opportunities for content customization.

We are developing serverless computing products to provide users with the best possible performance and improve their overall web experience. We will keep you informed about progress and significant updates.

Discover the possibilities of Gcore CDN, which gives your business access to a high-capacity network with hundreds of Edge servers worldwide. It can improve your web application performance and allow you to personalize the user experience.

Learn more about Gcore CDN


Cilium CNI is Now Available in Gcore Managed Kubernetes
We’re excited to announce that we now support Cilium in Gcore Managed Kubernetes. Cilium provides advanced networking and security capabilities, making it easier to manage large-scale Kubernetes deployments. It also offers flexible and robust network policy management, which is especially useful for organizations with strict security requirements. In this article, we’ll explore key Cilium features and benefits, compare it to Calico (another container network interface, or CNI, that we support), and explain how to enable Cilium in Gcore Managed Kubernetes.

What Is Cilium?

Cilium is a CNI that provides powerful networking, security, and observability capabilities for container orchestration systems like Kubernetes. It’s based on eBPF (extended Berkeley Packet Filter) technology, which allows it to handle networking functions at high speed with minimal overhead. eBPF allows programs to run directly in the Linux kernel and offers broad functionality beyond basic packet filtering. As a result, Cilium enables effortless management of clusters with larger numbers of pods and nodes than CNIs based on previous-generation technologies like iptables.

Cilium is an open-source CNCF (Cloud Native Computing Foundation) project that reached the “Graduated” maturity level in 2023, indicating its stability for production environments. It has increasingly been integrated into managed Kubernetes services.

Key Features of Cilium

Cilium offers three main sets of features, addressing networking, security, and observability. The most important elements of each are as follows.

Networking

- High performance: Enables the creation and removal of thousands of containers in seconds, allowing the management of large and dynamic container environments.
- L7 network policies: Supports OSI Layer 7 network policies for ingress and egress traffic based on application protocols such as HTTP and gRPC.
Traditional L3 and L4 policies are also supported.
- Layer 4 load balancing: Offers high-performance load balancing based on BGP, XDP, and eBPF.
- Gateway API: Enables advanced routing capabilities beyond the limitations of the Ingress API, such as header modification, traffic splitting, and URL rewriting. Gateway API also provides a fully functional, sidecar-free service mesh, eliminating the need for additional tools like Istio and their associated resource overhead.

Security

- Policy enforcement modes: Offers three levels of rule enforcement governing how endpoints accept traffic, from less to more restrictive. These suit organizations with varying security requirements.
- Inter-node traffic control: Supports cluster-wide, non-namespaced policies that let you specify nodes as source and destination, making it easy to filter traffic between different node groups.
- Transparent encryption: Enables pod-to-pod encryption, with options such as datapath encryption via in-kernel IPsec or WireGuard and automatic key rotation with overlapping keys.

Observability

- Service map: Integrates with Hubble, which provides real-time monitoring of traffic and service interactions, visually represented through a dynamic service connection diagram. Support for an out-of-the-box Hubble UI will be introduced in 2024.
- Metrics and tracing export: Empowers users to monitor and streamline their Kubernetes environments.

What Types of Workloads Can Benefit from Cilium?

Let’s look at some examples of workloads that can benefit significantly from Cilium.

- Microservices: Cilium’s L7 awareness and granular security policies are well suited to enforcing communication control between tightly coupled microservices that use API-level security for protocols like HTTP and gRPC.
Its eBPF-based performance helps maintain low latency and high throughput in highly dynamic microservice environments such as messaging systems and authentication/authorization services.
- Security-sensitive workloads: Cilium’s identity-based security and advanced network policies strengthen protection for workloads that require it, such as financial services, government applications, and healthcare.
- High-performance computing (HPC): Cilium’s efficient network processing and low latency benefit HPC workloads that require fast, reliable communication between nodes, such as analytical systems and database management systems.

Cilium vs. iptables-Based Calico

In Gcore Managed Kubernetes, we also provide another popular CNI: Calico, which is built on top of iptables. Calico, while simple and reliable, does not perform as well in large-scale clusters and lacks many of Cilium’s advanced features.

Calico adds complicated logic to container networking, such as the iptables PREROUTING, POSTROUTING, and FORWARD chains. In contrast, the eBPF implementation in Cilium adds no extra layers of network abstraction; it works in the Linux kernel itself, which makes it very fast. Here is a comparison between iptables-based networking and eBPF-based networking that shows the additional logic in Calico.

Figure 1: eBPF container networking compared to standard iptables-based networking (Source: cilium.io)

As a result, Cilium passes more traffic with less delay than Calico, given the same resources and conditions. This enhanced throughput is a particular advantage for applications that require access to extensive data, media streaming services, and upload/download services.

Until now, we couldn’t support deployments with more than 110 pods per node because of Calico’s technical limitations. With Cilium, we can support three times that number.
Given that we offer Gcore Bare Metal worker nodes, this is a huge benefit for customers who prefer to run large Kubernetes clusters on bare metal servers.

However, if Calico meets your needs, you can still use it in your Gcore Managed Kubernetes clusters.

How to Enable Cilium in Gcore Managed Kubernetes

Select Cilium as your CNI when creating a Kubernetes cluster. The process is as follows:

1. Log in to the Gcore Customer Portal. If you are not registered, sign up using your email, Google, or GitHub account.
2. From the vertical menu on the left, select Cloud, open the Kubernetes tab, and click Create Cluster.

Figure 2: Creating a Kubernetes cluster

3. In the “CNI Provider” section, select Cilium:

Figure 3: Choosing a CNI provider

4. Complete the cluster setup and click Create Cluster. If you need more information on how to configure a cluster, please refer to our Managed Kubernetes documentation.

Once you have connected to your cluster, you can configure the necessary Cilium policies and use them in your Gcore Managed Kubernetes installation. For example, here is a policy with a simple ingress rule that allows communication between endpoints labeled frontend and backend:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l3-rule"
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend
```

See the Cilium documentation and GitHub for more examples of policies that you can customize to your needs.

You can also use Network Policy Editor, which provides a simple, user-friendly interface for creating policies and generating the corresponding YAML for your Kubernetes clusters.

Future Plans: Hubble + Cilium

We plan to integrate out-of-the-box support for Hubble into Cilium later this year. Hubble, an open-source tool developed specifically for Cilium, automatically detects all services within a cluster and maps their interactions. This service map is accessible through any web browser.
Using Hubble’s visualizations, you can gain a deeper understanding of service interdependencies and behaviors within your cluster, enabling quicker identification and resolution of network interaction issues. We’ll keep you posted as the feature is released and explain its benefits in more detail.

Conclusion

We’re constantly working to enhance our offerings with the latest technologies to meet the evolving needs of our customers. Cilium represents one of these significant advancements. It integrates seamlessly into Gcore Managed Kubernetes, enabling our customers to use advanced networking and security capabilities without complex configuration or setup.

Gcore Managed Kubernetes takes care of setting up and maintaining a Kubernetes cluster for you. Our team manages the master nodes (control plane) while you maintain full control over your worker nodes. Choose from Virtual Instances and Bare Metal Servers as worker nodes, including those powered by GPU accelerators to boost your AI/ML workloads. We provide free, production-grade cluster management with a 99.9% SLA for your peace of mind.

Explore Gcore Managed Kubernetes
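If you manage policies programmatically rather than as YAML files, the same CiliumNetworkPolicy manifest can be built as a plain dict and submitted through the Kubernetes API. The helper below is an illustrative sketch, not part of Gcore’s tooling:

```python
def cilium_ingress_policy(name, to_role, from_role):
    """Build a CiliumNetworkPolicy manifest allowing ingress traffic from
    endpoints labeled `from_role` to endpoints labeled `to_role`."""
    return {
        "apiVersion": "cilium.io/v2",
        "kind": "CiliumNetworkPolicy",
        "metadata": {"name": name},
        "spec": {
            "endpointSelector": {"matchLabels": {"role": to_role}},
            "ingress": [{"fromEndpoints": [{"matchLabels": {"role": from_role}}]}],
        },
    }

# Same rule as the YAML example: frontend may talk to backend
policy = cilium_ingress_policy("l3-rule", "backend", "frontend")
```

The resulting dict can be dumped to YAML for kubectl or passed to the Kubernetes Python client’s CustomObjectsApi, both of which accept this structure as-is.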

Cilium CNI is Now Available in Gcore Managed Kubernetes
We’re excited to announce that we now support Cilium in Gcore Managed Kubernetes. Cilium provides advanced networking and security capabilities, making it easier to manage large-scale Kubernetes deployments. It also offers flexible and robust network policy management, which is especially useful for organizations with strict security requirements. In this article, we’ll explore key Cilium features and benefits, compare it to Calico (another container network interface, or CNI, that we support), and explain how to enable Cilium in Gcore Managed Kubernetes.

What Is Cilium?

Cilium is a CNI that provides powerful networking, security, and observability capabilities for container orchestration systems like Kubernetes. It’s based on eBPF (extended Berkeley Packet Filter) technology, which allows it to handle networking functions at high speed with minimal overhead. eBPF programs run directly in the Linux kernel and offer broad functionality beyond basic packet filtering. As a result, Cilium makes it practical to manage clusters with far more pods and nodes than CNIs based on previous-generation technologies like iptables.

Cilium is an open-source CNCF (Cloud Native Computing Foundation) project that reached the “Graduated” maturity level in 2023, indicating its stability for production environments. It has increasingly been integrated into managed Kubernetes services.

Key Features of Cilium

Cilium offers three main sets of features, addressing networking, security, and observability. The most important elements of each are as follows.

Networking

High performance: Enables the creation and removal of thousands of containers in seconds, allowing the management of large and dynamic container environments.
L7 network policies: Supports OSI Layer 7 network policies for ingress and egress traffic based on application protocols such as HTTP and gRPC. Traditional L3 and L4 policies are also supported.
Layer 4 load balancer: Offers high-performance load balancing based on BGP, XDP, and eBPF.
Gateway API: Enables advanced routing capabilities beyond the limitations of the Ingress API, such as header modification, traffic splitting, and URL rewriting. Cilium also provides a fully functional, sidecar-free service mesh, eliminating the need for additional tools like Istio and their associated resource overhead.

Security

Policy enforcement modes: Offers three levels of rule enforcement for how endpoints accept traffic, from less restrictive to more restrictive, suitable for organizations with varying security requirements.
Inter-node traffic control: Supports cluster-wide, non-namespaced policies that let you specify nodes as sources and destinations, making it easy to filter traffic between different node groups.
Transparent encryption: Enables pod-to-pod datapath encryption via in-kernel IPsec or WireGuard, with automatic key rotation using overlapping keys.

Observability

Service map: Integrates with Hubble, which provides real-time monitoring of traffic and service interactions, visually represented through a dynamic service connection diagram. Support for an out-of-the-box Hubble UI will be introduced in 2024.
Metrics and tracing export: Empowers users to monitor and streamline their Kubernetes environments.

What Types of Workloads Can Benefit from Cilium?

Let’s take a look at some examples of workloads that can benefit significantly from using Cilium.

Microservices: Cilium’s L7 awareness and granular security policies are well suited to enforcing communication control between tightly coupled microservices that use API-level security for protocols like HTTP and gRPC. Its eBPF-based performance helps maintain low latency and high throughput in highly dynamic microservice environments such as messaging systems and authentication/authorization services.
Security-sensitive workloads: Cilium’s identity-based security and advanced network policies strengthen protection for workloads that require it, such as financial services, government applications, and healthcare.
High-performance computing (HPC): Cilium’s efficient network processing and low latency benefit HPC workloads that require fast, reliable communication between nodes, such as analytical systems and database management systems.

Cilium vs. iptables-Based Calico

In Gcore Managed Kubernetes, we also provide another popular CNI: Calico, which is built on top of iptables. While simple and reliable, Calico does not perform as well in large-scale clusters and lacks many of Cilium’s advanced features.

Calico adds complicated logic to container networking, such as the iptables PREROUTING, POSTROUTING, and FORWARD chains. In contrast, the eBPF implementation in Cilium doesn’t add extra layers of network abstraction; it works in the Linux kernel itself, which makes it very fast. Here is a comparison between iptables-based and eBPF-based networking that shows the additional logic in Calico.

Figure 1: eBPF container networking compared to standard iptables-based networking (Source: cilium.io)

As a result, Cilium passes more traffic with less delay than Calico, given the same resources and conditions. This enhanced throughput is a particular advantage for applications that require access to extensive data, such as media streaming services and data upload/download services.

Until now, we couldn’t support deployments with more than 110 pods per node because of Calico’s technical limitations. With Cilium, we can support three times that number. Given that we offer Gcore Bare Metal worker nodes, this is a huge benefit for customers who prefer to run large Kubernetes clusters on bare metal servers.

However, if Calico meets your needs, you can still use it in your Gcore Managed Kubernetes clusters.

How to Enable Cilium in Gcore Managed Kubernetes

Select Cilium as your CNI when creating a Kubernetes cluster. The process is as follows:

1. Log in to the Gcore Customer Portal. If you are not registered, sign up using your email, Google, or GitHub account.
2. From the vertical menu on the left, select Cloud, open the Kubernetes tab, and click Create Cluster.

Figure 2: Creating a Kubernetes cluster

3. In the “CNI Provider” section, select Cilium:

Figure 3: Choosing a CNI provider

4. Complete the cluster setup and click Create Cluster. If you need more information on how to configure a cluster, please refer to our Managed Kubernetes documentation.

Once you have connected to your cluster, you can configure the necessary Cilium policies and use them in your Gcore Managed Kubernetes installation. For example, here is a policy with a simple ingress rule that allows communication between endpoints labeled frontend and backend:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l3-rule"
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend

See the Cilium documentation and GitHub for more examples of policies that you can customize to your needs. You can also use the Network Policy Editor, which provides a simple and user-friendly interface for creating policies and exporting the corresponding YAML for use in your Kubernetes clusters.

Future Plans: Hubble + Cilium

We plan to integrate out-of-the-box support for Hubble into Cilium later this year. Hubble, an open-source tool developed specifically for Cilium, automatically detects all services within a cluster and maps their interactions. This service map is accessible through any web browser. Using Hubble’s visualizations, you can gain a deeper understanding of service interdependencies and behaviors within your cluster, enabling quicker identification and resolution of network interaction issues. We’ll keep you posted when the feature is released and explain its benefits in more detail.

Conclusion

We’re constantly working to enhance our offerings with the latest technologies to meet the evolving needs of our customers. Cilium is one of these significant advancements. It integrates seamlessly into Gcore Managed Kubernetes, enabling our customers to use advanced networking and security capabilities without complex configuration or setup.

Gcore Managed Kubernetes takes care of setting up and maintaining a Kubernetes cluster for you. Our team manages the master nodes (control plane) while you maintain full control over your worker nodes. Choose from Virtual Instances and Bare Metal Servers as worker nodes, including those powered by GPU accelerators to boost your AI/ML workloads. We provide free, production-grade cluster management with a 99.9% SLA for your peace of mind.

Explore Gcore Managed Kubernetes
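As a closing illustration, the L3 policy shown earlier can be tightened with the L7 rules described in the features section. The following sketch, based on the CiliumNetworkPolicy schema in the Cilium documentation, reuses the same illustrative frontend/backend labels and additionally allows only HTTP GET requests on port 80; the port and path here are hypothetical placeholders, not values required by Gcore Managed Kubernetes:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l7-rule"
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend
      # L7 section: beyond allowing the frontend endpoints (L3), restrict
      # traffic to HTTP GET requests for /api/v1/* on TCP port 80.
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api/v1/.*"
```

Traffic from frontend pods that matches the HTTP rule is forwarded; any other request from them to the backend on port 80 is rejected at L7.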

How we solve issues of RTMP-to-HLS streaming on iOS and Android
Long launch times, video buffering, high delays, broadcast interruptions, and other lags are common issues when developing applications for streaming and live streaming. Anyone who has ever developed such services has come across at least one of them.

In previous articles, we talked about how to develop streaming apps for iOS and Android. Today, we will share the problems we encountered in the process and how we solved them.

Use of a modern streaming platform

All that is required from the mobile app is to capture video and audio from the camera, form a data stream, and send it to viewers. A streaming platform is needed for mass content distribution to a wide audience.

Streaming via the Gcore platform

The only drawback of a streaming platform is latency. Broadcasting is a rather complex, multi-stage process, and a certain amount of latency is added at each stage. Our developers were able to assemble a stable, functional, and fast solution that takes 5 seconds to launch all processes, with an end-to-end latency of 4 seconds when broadcasting in Low Latency mode.

The table below shows several platforms that solve the latency reduction problem in their own way. We compared several solutions, studied each one, and found the best approach.

It takes 5 minutes to start streaming on Gcore Streaming Platform:

1. Create a free account. You will need to specify your email and password.
2. Activate the service by selecting Free Live or any other suitable plan.
3. Create a stream and start broadcasting.

All the processes involved in streaming are inextricably linked; changes to one affect all subsequent ones. Therefore, it would be incorrect to divide them into separate blocks. Instead, we will consider what can be optimized and how.

Decreasing the GOP size to speed up stream delivery and reception

To start decoding and processing any video stream, you need a keyframe (I-frame). We ran tests and selected an optimal I-frame interval of 2 seconds for our apps. However, in some cases, it can be reduced to 1 second. By reducing the GOP length, decoding, and thus the beginning of stream processing, starts faster.

iOS: Set maxKeyFrameIntervalDuration = 2.
Android: Set iFrameIntervalInSeconds = 2.

Background streaming to keep broadcasts uninterrupted

If you need short pauses during streaming, for example, to switch to another app, you can continue streaming in the background and keep the video intact. This way, we don’t waste time reinitializing all processes and keep end-to-end latency minimal when returning to the air.

iOS

Apple forbids recording video while the app is minimized. Our initial solution was to disable the camera at the appropriate moment and reconnect it when returning to the air. To do this, we subscribed to a system notification informing us of entry to and exit from the background state. It didn’t work: the connection was not lost, but the library did not send the video portion of the RTMP stream. Therefore, we decided to make changes to the library itself.

Each time the system sends a buffer with audio to AVCaptureAudioDataOutputSampleBufferDelegate, it checks whether all devices except the microphone are disconnected from the session. If everything is correct, timingInfo is created; it contains the duration, dts, and pts of a fragment. After that, the pushPauseImageIntoVideoStream method of the AVMixer class is called, which checks for the presence of a pause image. Next, a CVPixelBuffer with the image data is created via the pixelBufferFromCGImage method, and the CMSampleBuffer itself is created via the createBuffer method and sent to AVCaptureVideoDataOutputSampleBufferDelegate.

Extension for AVMixer:

hasOnlyMicrophone checks whether all devices except the microphone are disconnected from the session.
pushPauseImageIntoVideoStream takes data from the audio buffer, creates a video buffer, and sends it to AVCaptureVideoDataOutputSampleBufferDelegate.
pixelBufferFromCGImage(image: CGImage) creates and returns a CVPixelBuffer from the image.
createBuffer(pixelBuffer: CVImageBuffer, timingInfo: inout CMSampleTimingInfo) creates and returns a CMSampleBuffer from timingInfo and the CVPixelBuffer.

Add the pauseImage property to the AVMixer class, and in AVAudioIOUnit, add the functionality to the captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) method.

Android

With Android, things turned out to be simpler. Looking into the source code of the library we used, it becomes clear that streaming actually runs in a separate thread. Considering the life cycle of the component where our streaming is initialized, we decided to initialize it in the ViewModel: it remains alive throughout the life cycle of the component to which it is bound (Activity or Fragment).

ViewModel life cycle

Nothing changes in the life cycle of the ViewModel, even on configuration changes, orientation changes, transitions to the background, and so on. But there is still a small problem. For streaming, we need to create an RtmpCamera2() object, which depends on an OpenGlView object. This is a UI element, which means it is destroyed when the app goes to the background, and the streaming process is interrupted.

The solution was found quickly. The library allows you to easily replace the View of the RtmpCamera2 object, and we can replace it with a Context object from our app, which lives until the app is killed by the system or closed by the user. We treat the destruction of the OpenGlView object as the indicator that the app has gone to the background, and the creation of this View as the signal of a return to the foreground. For this purpose, we need to implement the corresponding callback. Next, as mentioned before, we replace the OpenGlView object with a Context when going to the background and switch back when returning to the foreground. To do this, we define the required methods in the ViewModel. We also need to stop streaming when the ViewModel is destroyed.

If we need to pause streaming without going to the background, we just turn off the camera and microphone. In this mode, the bitrate is reduced to 70–80 Kbps, which saves traffic.

WebSocket and launching the player at the right time

Use a WebSocket connection to learn when the content is ready for playing and to start playback instantly.

Use of adaptive bitrate and resolution

When streaming from a mobile device, cellular networks are used for video transmission. This is the main problem in mobile streaming: the signal level and quality depend on many factors. Therefore, it is necessary to adapt the bitrate and resolution to the available bandwidth. This helps maintain a stable streaming process regardless of the viewers’ internet connection quality.

How adaptive bitrate works

iOS

Two RTMPStreamDelegate methods are used to implement adaptive bitrate. The adaptive resolution is adjusted according to the bitrate. We used the following resolution/bitrate ratio as a basis:

Resolution   Video bitrate
1920×1080    6 Mbps
1280×720     2 Mbps
854×480      0.8 Mbps
640×360      0.4 Mbps

If the bandwidth drops by more than half of the difference between two adjacent resolutions, switch to a lower resolution. To increase the bitrate, switch to a higher resolution.

Android

To use adaptive bitrate, change the implementation of the ConnectCheckerRtmp interface.

Summary

Streaming from mobile devices is not a difficult task. Using open-source code and our Streaming Platform, it can be done quickly and at minimal cost. Of course, you may still face problems during development. We hope that our solutions will help you simplify the process and complete your tasks faster.

Learn more about developing streaming apps for iOS and Android in our articles:

“How to create a mobile streaming app on Android”
“How to create a mobile streaming app on iOS”

Repositories with the source code of the mobile streaming apps can be found on GitHub: iOS, Android.

Seamlessly stream on mobile devices using our Streaming Platform.

More about Streaming Platform
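As a closing illustration, the resolution-switching rule built on the resolution/bitrate table above can be sketched in Kotlin. This is a minimal sketch under stated assumptions: the tier values come from the table, the midpoint threshold is one reading of “drops by more than half of the difference between two adjacent resolutions,” and all names (Tier, pickTier) are hypothetical, not part of any streaming library API:

```kotlin
// Resolution/bitrate ladder from the table above; names are illustrative.
data class Tier(val width: Int, val height: Int, val bitrateBps: Int)

val ladder = listOf(
    Tier(1920, 1080, 6_000_000),
    Tier(1280, 720, 2_000_000),
    Tier(854, 480, 800_000),
    Tier(640, 360, 400_000),
)

// Given the current tier index and the measured bandwidth, pick a new tier.
// Step down while bandwidth is below the midpoint between the current tier
// and the next lower one (a drop of more than half the difference); step up
// while bandwidth covers a higher tier's bitrate outright.
fun pickTier(current: Int, bandwidthBps: Int): Int {
    var idx = current
    while (idx < ladder.lastIndex &&
        bandwidthBps < (ladder[idx].bitrateBps + ladder[idx + 1].bitrateBps) / 2
    ) idx++
    while (idx > 0 && bandwidthBps >= ladder[idx - 1].bitrateBps) idx--
    return idx
}
```

For example, with 3 Mbps of measured bandwidth the sketch settles on 1280×720 (below the 4 Mbps midpoint between 1080p and 720p), and a recovery to 6 Mbps steps the stream back up to 1080p.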
