What the HESP protocol is and how it changes streaming for the better
- By Gcore
- November 9, 2021
- 4 min read

HESP is a new video streaming protocol developed by THEO Technologies to deliver video content with ultra-low latency. To standardize and promote the protocol, THEO Technologies and Synamedia created the HESP Alliance, an association of streaming providers and media companies.
In September 2021, Gcore joined the HESP Alliance. From now on, our CDN supports the HESP protocol, enabling our clients to broadcast video content with minimal delay to millions of viewers around the world.
In this article, we explain what the protocol is and what advantages it offers for video delivery.
What is HESP?
HESP (High Efficiency Stream Protocol) is an adaptive HTTP-based video streaming protocol developed by THEO Technologies that delivers video content with ultra-low latency while optimizing your costs.
The protocol delivers video very quickly, with delays of just 0.4–2 seconds, and it requires less bandwidth than comparable protocols.
Since HESP is an HTTP-based protocol, it can be delivered via a CDN. This allows Gcore CDN to deliver video content to all kinds of devices and to millions of viewers anywhere in the world while maintaining up to 8K quality. Broadcast costs stay minimal, in contrast to the alternative WebRTC protocol.
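Because HESP traffic consists of ordinary HTTP objects, any HTTP client can request it through a CDN hostname, and edge servers can cache the responses like any other asset. The minimal TypeScript sketch below illustrates this idea; the manifest URL is a hypothetical placeholder, not a real Gcore or HESP endpoint.

```typescript
// Minimal sketch: HESP objects are plain HTTP resources, so they can be
// fetched through a CDN hostname like any other asset.
// The URL below is a hypothetical placeholder for illustration only.
const MANIFEST_URL = "https://cdn.example.com/live/stream/manifest.hesp";

async function probeHespEndpoint(): Promise<void> {
  const response = await fetch(MANIFEST_URL);
  if (!response.ok) {
    throw new Error(`Manifest request failed: ${response.status}`);
  }
  // Standard HTTP caching semantics apply, which is what lets a CDN edge
  // serve the same stream objects to many viewers from cache.
  console.log("Cache-Control:", response.headers.get("cache-control"));
  console.log("Manifest bytes:", (await response.arrayBuffer()).byteLength);
}

probeHespEndpoint().catch(console.error);
```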
4 main differences between HESP and other protocols
The main protocols currently used for streaming are HLS, MPEG-DASH, RTMP, and WebRTC. Each of them has certain drawbacks.
- HESP advantages over other protocols
The main disadvantage of the HLS and MPEG-DASH protocols is that they can’t provide sufficiently low latency: their practical limit is about 4 seconds, which is too much for interactive events and real-time communication.
WebRTC is too expensive and doesn’t allow clients to easily scale their streams to a larger audience.
HESP combines the advantages of all these protocols and is free of their weak points. Let’s take a look at its main features.
1. High speed and scalability
Since HESP is an HTTP-based protocol, it allows clients to broadcast video over a CDN, just like HLS and MPEG-DASH. This means you can broadcast your video content to thousands and even millions of viewers around the world.
With HESP, video is transmitted with delays that don’t exceed 2 seconds, which is faster than other HTTP-based protocols.
- Comparison of HESP with other protocols in terms of speed
RTMP, RTSP/RTP, and WebRTC are not HTTP-based and can’t be delivered via a CDN. They provide ultra-low latency just like HESP, but they make it difficult to stream your video content to a larger audience.
Note that Gcore CDN supports not only HESP but also other HTTP-based protocols, and our Streaming Platform enables video streaming over RTMP, RTSP/RTP, and WebRTC as well. If you use our streaming services, you can choose any of the technologies mentioned in this article.
2. Cost-effectiveness
Since video content broadcast over RTMP, RTSP/RTP, and WebRTC can’t be scaled via a CDN, streaming to a larger audience becomes more difficult and resource-intensive. On average, streaming video via a CDN costs only one-half to one-fifth as much as streaming without one.
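As a rough illustration of the “one-half to one-fifth” figure, the sketch below compares hypothetical per-GB delivery prices. The traffic volume and both prices are illustrative assumptions, not Gcore rates.

```typescript
// Rough cost comparison sketch. All numbers are hypothetical and only
// illustrate the "CDN costs 1/2-1/5 as much" claim, not actual pricing.
const GB_DELIVERED = 50_000;        // total traffic for one event, in GB (assumed)
const PRICE_WITHOUT_CDN = 0.08;     // $/GB, assumed direct-delivery price
const PRICE_WITH_CDN = 0.02;        // $/GB, assumed CDN delivery price

const costWithoutCdn = GB_DELIVERED * PRICE_WITHOUT_CDN;
const costWithCdn = GB_DELIVERED * PRICE_WITH_CDN;

console.log(`Without CDN: $${costWithoutCdn.toFixed(2)}`);
console.log(`With CDN:    $${costWithCdn.toFixed(2)}`);
// With these assumed prices the CDN path costs 25% (i.e., one-quarter) of
// direct delivery, which falls inside the one-half to one-fifth range above.
console.log(`CDN share: ${((costWithCdn / costWithoutCdn) * 100).toFixed(0)}%`);
```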
3. Reduced bandwidth requirements
As mentioned above, the HESP protocol requires 10–20% less bandwidth than other low-latency protocols and technologies, including LL-HLS, chunked CMAF, and WebRTC.
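To put the 10–20% figure into concrete terms, here is a small sketch that estimates total egress for a live event. The bitrate, audience size, event duration, and the 15% savings factor are assumptions chosen for illustration.

```typescript
// Estimate egress savings from a 10-20% bandwidth reduction.
// Bitrate, viewer count, duration, and the 15% savings are assumed values.
const BITRATE_MBPS = 5;        // average video bitrate per viewer
const VIEWERS = 100_000;       // concurrent audience size
const DURATION_S = 2 * 3600;   // two-hour event
const SAVINGS = 0.15;          // midpoint of the cited 10-20% range

const totalBits = BITRATE_MBPS * 1e6 * VIEWERS * DURATION_S;
const totalGb = totalBits / 8 / 1e9;   // gigabytes delivered at baseline
const savedGb = totalGb * SAVINGS;

console.log(`Baseline delivery: ${totalGb.toFixed(0)} GB`);
console.log(`Saved with HESP:   ${savedGb.toFixed(0)} GB (${SAVINGS * 100}%)`);
```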
4. Adaptive bitrate (ABR) support
HESP supports adaptive bitrate streaming, which means your video content can be streamed to users on all kinds of devices without buffering, even over a poor internet connection (a minimal rendition-selection sketch follows the comparison table below). We’ve summarized the main differences between HESP and other streaming protocols and technologies in the table below:
- Comparison of HESP with other protocols
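As referenced above, adaptive bitrate streaming works by having the player pick a rendition that fits the measured network throughput. The TypeScript sketch below shows the core idea with a generic selection function; the rendition ladder and the safety margin are hypothetical, and real players (including HESP players) use more sophisticated heuristics.

```typescript
// Generic ABR sketch: pick the highest rendition the measured throughput can
// sustain, with a safety margin. The ladder and margin are illustrative.
interface Rendition {
  name: string;
  bitrateKbps: number;
}

const LADDER: Rendition[] = [
  { name: "240p", bitrateKbps: 400 },
  { name: "480p", bitrateKbps: 1200 },
  { name: "720p", bitrateKbps: 2800 },
  { name: "1080p", bitrateKbps: 5000 },
];

function pickRendition(measuredKbps: number, margin = 0.8): Rendition {
  // Only budget 80% of measured throughput to absorb fluctuations.
  const budget = measuredKbps * margin;
  const affordable = LADDER.filter((r) => r.bitrateKbps <= budget);
  // Fall back to the lowest rendition if even that doesn't fit.
  return affordable.length > 0 ? affordable[affordable.length - 1] : LADDER[0];
}

console.log(pickRendition(3500)); // -> 720p with these assumed values
```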
When you’ll benefit from using HESP
Almost any business aims to minimize latency during video broadcasts while optimizing costs. The new technology is especially useful for the following industries:
- Esports and gaming. HESP delivers video content to end users faster than the standard HLS or MPEG-DASH protocols.
- Online learning. You can hold interactive events for 1,000+ or even 1,000,000+ people without restrictions and without expensive external applications.
- Sports. Sports events are streamed almost in real time and can reach viewers on the internet even faster than on TV.
- Telemedicine. HESP lets doctors communicate with a large audience without resorting to third-party applications while significantly reducing costs.
- Auctions and online casinos. In these fields, it’s important not only to minimize delays in video delivery but also to maintain high video quality so viewers can take a close look at what’s happening on screen. HESP lets you achieve both at a lower cost.
- OTT and TV broadcasting. HESP allows you to combine IPTV and OTT solutions to create fully featured broadcasts of the highest quality.
We’ve listed only the main industries, but this is far from an exhaustive list. The HESP protocol is a genuinely worthwhile choice for any project where keeping delays to a minimum is of vital importance.
How to start using HESP for video content delivery
If you want to start streaming video using HESP, you will need to add a few components to your standard content production, processing, and delivery workflow (a minimal wiring sketch follows the list and diagram below):
- A special HESP packager that encodes and packages videos before transmission. You can get one from our partners.
- A CDN that supports the HESP protocol, such as Gcore CDN. We offer excellent coverage and connectivity, including 90+ points of presence on 5 continents and 6,000+ peering partners.
- A player that supports the HESP protocol, also available from our partners. For example, THEO Technologies offers its own player, THEOplayer, which has a wide range of features and integrates easily into your web resource.
- Video creation and delivery process using HESP
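To make the wiring concrete, here is a minimal TypeScript sketch of how the three components fit together. The `HespPlayer` interface, the URLs, and the MIME type string are hypothetical stand-ins rather than a real SDK or spec; consult your packager and player vendor (for example, THEOplayer’s documentation) for the actual integration API.

```typescript
// Hypothetical wiring sketch: packager output -> CDN hostname -> player.
// None of these names come from a real SDK; they only illustrate the flow.

interface HespPlayer {
  load(source: { src: string; type: string }): void;
}

// 1. The HESP packager publishes the stream to an origin, e.g.:
const ORIGIN_URL = "https://origin.example.com/live/event1/manifest.hesp";

// 2. The CDN fronts the origin, so viewers request a CDN hostname instead:
const CDN_URL = ORIGIN_URL.replace("origin.example.com", "cdn.example.com");

// 3. A HESP-capable player loads the CDN URL.
function startPlayback(player: HespPlayer): void {
  player.load({
    src: CDN_URL,
    type: "application/hesp", // placeholder MIME type; check your player's docs
  });
}

// A trivial mock player to show usage; a real player SDK replaces this.
const mockPlayer: HespPlayer = {
  load: (source) => console.log("Loading", source.src, "as", source.type),
};
startPlayback(mockPlayer);
```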
If you are already signed up for our CDN and eager to stream your video content using the HESP protocol, contact our technical support, and they will activate this feature for you.
Summary
- Our CDN now supports the new HESP protocol and can deliver video content to users even faster.
- HESP is an HTTP-based streaming protocol that provides ultra-low latency, with delays of just 0.4–2 seconds.
- HESP will help you enhance user experience and optimize costs in any project that requires high video delivery speed: sports, esports and gaming, online learning, telemedicine, auctions, online casinos, and more.
- What distinguishes HESP from other protocols is that it both provides ultra-low latency and can be easily scaled via CDN. It requires 10–20% less bandwidth than other low-latency technologies, and delivering it via CDN costs only one-half to one-fifth as much as delivery without a CDN.
- To start using HESP, you will need to integrate a HESP packager and a HESP-compatible player into your standard streaming process.
The HESP protocol combined with Gcore CDN will allow you to provide your users with the best streaming experience, no matter where in the world they are located.