
Evaluating Server Connectivity with Looking Glass: A Comprehensive Guide

  • By Gcore
  • November 1, 2023
  • 8 min read

When you’re considering purchasing a dedicated or virtual server, it can be challenging to assess how it will function before you buy. Gcore’s free Looking Glass network tool allows you to assess connectivity, giving you a clear picture of whether a server will meet your needs before you make a financial commitment. This article will explore what the Looking Glass network tool is, explain how to use it, and delve into its key functionalities like BGP, PING, and traceroute.

What Is Looking Glass?

Looking Glass is a dedicated network tool that examines the routing and connectivity of a specific AS (autonomous system), providing real-time insights into network performance, route paths, and potential bottlenecks. This makes it an invaluable resource for network administrators, ISPs, and end users considering purchasing a virtual or dedicated server. With Looking Glass, you can:

  • Test connections to different nodes and their response times. This enables users to initiate connection tests to various network nodes—crucial for evaluating response times, and thereby helpful in performance tuning or troubleshooting.
  • Trace the packet route from the router to a web resource. This is useful for identifying potential bottlenecks or failures in the network.
  • Display detailed BGP (Border Gateway Protocol) routes to any IPv4 or IPv6 destination. This feature is essential for ISPs to understand routing patterns and make informed peering decisions.
  • Visualize BGP maps for any IP address. By generating a graphical representation or map of the BGP routes, Looking Glass provides an intuitive way to understand the network architecture.

How to Work with Looking Glass

Let’s take a closer look at Looking Glass’s interface.

Below the header, introduction, and Gcore’s AS number (AS199524), you will see four fields to complete. Operating Looking Glass is straightforward:

  1. Diagnostic method: Pick from three available diagnostic methods (commands).
    • BGP: The BGP command in Looking Glass gives you information about all autonomous systems (AS) traversed to reach the specified IP address from the selected router. Gcore Looking Glass can present the BGP route as an SVG/PNG diagram.
    • Ping: The ping command lets you know the round-trip time (RTT) and Time to Live (TTL) for the route between the IP address and the router.
    • Traceroute: The traceroute command displays all responding router hops along the path between the selected router and the destination IP address. It also records the total time taken to fulfill the request and the intermediate times at each AS router the packet passes.
  2. Region: Choose a location from the list of locations where our hosting is available. You can filter server nodes by region to narrow your search.
  3. Router: Pick a router from the list of Gcore’s routers available in the specified region.
  4. IP address: Enter the IP address to which you want to connect. You can type in both IPv4 and IPv6 addresses.
  5. Click Run test.
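As an aside, the kind of validation the IP address field performs can be sketched with Python’s standard ipaddress module. This is purely illustrative (the function name and behavior are our own, not Gcore’s actual code), but it shows how both IPv4 and IPv6 input can be accepted with one check:

```python
import ipaddress

def classify_ip(text: str) -> str:
    """Return 'IPv4' or 'IPv6' for a valid address, or 'invalid'."""
    try:
        addr = ipaddress.ip_address(text)
    except ValueError:
        return "invalid"
    return f"IPv{addr.version}"

print(classify_ip("93.184.216.34"))                       # IPv4
print(classify_ip("2606:2800:220:1:248:1893:25c8:1946"))  # IPv6
print(classify_ip("not-an-ip"))                           # invalid
```

Because `ipaddress.ip_address` parses both address families, a single field can serve IPv4 and IPv6 targets without separate validation paths.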

After you launch the test, you will see the plain text output of the command and three additional buttons in orange, per the image below:

  • Copy to clipboard: Copies command output to your clipboard.
  • Open results page: Opens the output in a separate tab. You can share results via a link with third parties, as this view masks tested IP addresses. The link will remain live for three days.
  • Show BGP map: Provides a graphical representation of the BGP route and shows which autonomous systems the data has to go through on the way from Gcore’s node to the IP address. This option is only relevant for the BGP command.

In this article, we will examine each command (BGP, ping, and traceroute) using 93.184.216.34 (example.com) as a target IP address and Gcore’s router in Luxembourg (capital of the Grand Duchy of Luxembourg), where our HQ is located. So our settings for all three commands will be the following:

  • Region: Europe
  • Router: Luxembourg (Luxembourg)
  • IP address: 93.184.216.34. We picked example.com as a Looking Glass target server.

Now let’s dive deeper into each command: BGP, ping, and traceroute.

BGP

The BGP command in Looking Glass shows the best BGP route (or routes) to the destination point. BGP is a dynamic routing protocol that connects autonomous systems (AS)—systems of routers and IP networks with a shared online routing policy. BGP allows various sections of the internet to communicate with one another. Each section has its own set of IP addresses, like a unique ID. BGP captures and maintains these IDs in a database. When data has to be moved from one autonomous system to another over the internet, BGP consults this database to determine the best available path.

Based on this data, BGP selects the best route for the packets, and this is what the Looking Glass BGP command can show. Here’s an example output:

The data shows two possible paths for the route to travel. The first path has numbers attached; here’s what each of them means:

  1. 93.184.216.0/24: CIDR notation for the network range being routed. The “/24” specifies that the first 24 bits identify the network, leaving 8 bits for individual host addresses within that network. Thus it covers IP addresses from 93.184.216.0 to 93.184.216.255.
  2. via 92.223.88.1 on eno1: The next hop’s IP address and the network interface through which the data will be sent. In this example, packets will be forwarded to 92.223.88.1 via the eno1 interface.
  3. BGP.origin: IGP: Specifies the origin of the BGP route. The IGP (interior gateway protocol) value implies that this route originated within the same autonomous system (AS).
  4. BGP.as_path: 174 15133: The AS path shows which autonomous systems the data has passed through to reach this point. Here, the data traveled through AS 174 and then to AS 15133.
  5. BGP.next_hop: 92.223.112.66: The next router to which packets will be forwarded.
  6. BGP.med: 84040: The Multi-Exit Discriminator (MED) is a metric that influences how incoming traffic should be balanced over multiple entry points in an AS. Lower values are generally preferred; here, the MED value is 84040.
  7. BGP.local_pref: 80: Local preference, which is used to choose the exit point from the local AS. A higher value is preferred when determining the best path. The local preference of 80 in the route output indicates that this route is more preferred than other routes to the same destination with a lower local preference.
  8. BGP.community: These are tags or labels that can be attached to a route. Output (174,21001) consists of pairs of ASNs and custom values representing a specific routing policy or action to be taken. Routing policies can use these communities as conditions to apply specific actions. The meaning of these values depends on the internal configurations of the network and usually requires documentation from the network provider for interpretation.
  9. BGP.originator_id: 10.255.78.64: This indicates the router that initially advertised the route. In this context, the route originated from the router with IP 10.255.78.64.
  10. BGP.cluster_list: This is used in route reflection scenarios. It lists the identifiers of the route reflectors that have processed the route. Here, it shows that this route has passed through the reflector identified by 10.255.8.68 or 10.255.8.69 depending on the path.
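The /24 arithmetic from point 1 is easy to verify with Python’s standard ipaddress module. This is a quick sanity check on the CIDR math, not part of the Looking Glass output:

```python
import ipaddress

# The prefix from the BGP output: 24 network bits leave 8 host bits.
net = ipaddress.ip_network("93.184.216.0/24")

print(net.num_addresses)       # 256, i.e. 2**8 addresses
print(net.network_address)     # 93.184.216.0
print(net.broadcast_address)   # 93.184.216.255

# The target of our test falls inside this range:
print(ipaddress.ip_address("93.184.216.34") in net)  # True
```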

Both routes are part of AS 15133 and pass through AS 174, but they have different next hops (92.223.112.66 and 92.223.112.67). This allows for redundancy and load balancing.
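To make the role of local_pref and MED concrete, here is a heavily simplified sketch of BGP best-path selection: higher local preference wins, then a shorter AS path, then a lower MED. Real BGP implementations apply many more tie-breakers (origin, eBGP vs. iBGP, router ID, and so on); the values below are taken from the example output, and the `Route` class is our own illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Route:
    next_hop: str
    as_path: list = field(default_factory=list)
    local_pref: int = 100
    med: int = 0

def best_route(routes):
    # Sort key mirrors the simplified preference order:
    # higher local_pref first, then shorter AS path, then lower MED.
    return min(routes, key=lambda r: (-r.local_pref, len(r.as_path), r.med))

r1 = Route("92.223.112.66", [174, 15133], local_pref=80, med=84040)
r2 = Route("92.223.112.67", [174, 15133], local_pref=80, med=90000)

# Equal local_pref and AS-path length, so the lower MED wins.
print(best_route([r1, r2]).next_hop)  # 92.223.112.66
```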

BGP map

When you run the BGP command, the Show BGP map button will become active. Here’s what we will see for our IP address:

Let’s take this diagram point by point:

  • AS199524 | GCORE, LU: This is the autonomous system belonging to Gcore, based in Luxembourg. The IP 92.223.88.1 is part of this AS, functioning as a gateway or router.
  • AS174 | COGENT-174, US: This is Cogent Communications’ autonomous system, based in the United States. Cogent is a major ISP.
  • AS15133 | EDGECAST, US: This AS belongs to Edgecast, also based in the United States. Edgecast provides content delivery network (CDN) services.
  • 93.184.216.0/24: This CIDR notation indicates the network range where example.com (93.184.216.34) is located. It might be part of Edgecast’s CDN services or another network associated with one of the listed autonomous systems.

In summary, Gcore’s BGP Looking Glass command is an essential tool for understanding intricate network routes. By offering insights into autonomous systems, next hops, and metrics like MED and local preference, it allows for a nuanced approach to network management. Whether you’re an ISP peered with Gcore or a network administrator seeking to optimize performance, the data generated by this command offers a roadmap for strategic decision making.

Ping

The ping command is a basic, essential network troubleshooting tool that measures the round-trip time for sending a packet of data from the source to a destination and back. Ping shows the packet transfer speed and can also be used to check the node’s overall availability.

The command utilizes the ICMP protocol. It works as follows:

  • The router sends an ICMP echo request packet to the node at the target IP address.
  • The node sends an echo reply back.

In our case, this command shows how long it takes a packet to travel from the selected router to the specified IP address and back.

Let’s break down our output:

Main part:

  1. Target IP: You pinged 93.184.216.34, which is the example.com IP address we are testing.
  2. Packet Size: 56(84) bytes of data were sent. The packet consists of 56 bytes of data and 28 bytes of header, totaling 84 bytes.
  3. Individual pings: Each line indicates a single round trip of a packet, detailing:
    • icmp_seq: Sequence number of the packet.
    • ttl: Time-to-Live, showing how many more hops the packet could make before being dropped.
    • time: Round-trip time (RTT) in milliseconds.

Statistics:

  1. 5 packets transmitted, 5 received: All packets were successfully transmitted and received, indicating no packet loss
  2. 0% packet loss: No packets were lost during the transmission
  3. time 4005ms: Total time taken for these five pings
  4. rtt min/avg/max/mdev: Round-trip times in milliseconds:
    • min: minimum time
    • avg: average time
    • max: maximum time
    • mdev: mean deviation time
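The summary line can be reproduced from the per-packet RTT samples. The sketch below uses hypothetical sample values (not the exact output above); mdev is computed with the formula iputils ping uses, the square root of the mean of squares minus the square of the mean:

```python
import math

# Hypothetical per-packet RTTs in ms, as would appear on the icmp_seq lines.
rtts = [87.1, 86.9, 87.4, 87.0, 87.3]

rtt_min = min(rtts)
rtt_max = max(rtts)
rtt_avg = sum(rtts) / len(rtts)
# iputils ping's mdev: sqrt(mean of squares - square of mean).
rtt_mdev = math.sqrt(sum(r * r for r in rtts) / len(rtts) - rtt_avg ** 2)

print(f"rtt min/avg/max/mdev = "
      f"{rtt_min:.3f}/{rtt_avg:.3f}/{rtt_max:.3f}/{rtt_mdev:.3f} ms")
```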

To summarize, the average round-trip time here is 87.138 ms, and the TTL is 52. An RTT of less than 100 ms is generally considered acceptable for interactive applications, and a remaining TTL of 52 suggests the reply crossed about 12 hops, assuming a common initial TTL of 64. No packet loss suggests a stable connection to the IP address 93.184.216.34.

The ping function provides basic, vital metrics for assessing network health. By offering details on round-trip times, packet loss, and TTL, this command allows for a quick yet comprehensive evaluation of network connectivity. For any network stakeholder—whether ISP or end user—understanding these metrics is crucial for effective network management and troubleshooting.

Traceroute

The Looking Glass traceroute command is a diagnostic tool that maps out the path packets take from the source to the destination, enabling you to identify potential bottlenecks or network failures. Traceroute relies on the TTL (Time-to-Live) parameter, which limits how many hops a packet can traverse before being discarded. Every router along the packet’s path decrements the TTL by 1 and forwards the packet to the next router in the path. The process works as follows:

  1. The traceroute sends a packet to the destination host with a TTL value of 1.
  2. Each router that receives the packet decrements the TTL value by 1 before forwarding it.
  3. When the TTL reaches zero, the router drops the packet and sends an ICMP Time Exceeded message back to the source host.
  4. The traceroute records the IP address of the router that sent back the ICMP Time Exceeded message.
  5. The traceroute then sends another packet to the destination host with a TTL value of 2.
  6. Steps 2-4 are repeated until the traceroute routine reaches the destination host or until it exceeds the maximum number of hops.
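The loop above can be sketched without raw sockets by simulating a path of routers. In this illustration (the hop addresses are borrowed from the example output, and the simulation is ours, not how traceroute is actually implemented), each probe travels as far as its TTL allows, and the router where the TTL expires is the one that answers:

```python
# Simulated path of routers between the source and the destination.
PATH = ["92.223.88.2", "10.255.8.68", "154.54.61.129",
        "152.195.233.131", "93.184.216.34"]
DEST = "93.184.216.34"

def probe(ttl: int) -> str:
    """Return the address that answers a probe sent with the given TTL.

    The packet's TTL expires at hop number `ttl`, and that router replies
    with ICMP Time Exceeded (or the destination replies with Echo Reply).
    """
    return PATH[min(ttl, len(PATH)) - 1]

def traceroute(max_hops: int = 15):
    hops = []
    for ttl in range(1, max_hops + 1):  # step 5: raise TTL by 1 each round
        hop = probe(ttl)
        hops.append(hop)
        if hop == DEST:  # destination reached: stop probing
            break
    return hops

print(traceroute())
```

Raising the TTL by one per probe is what turns a chain of expiring packets into an ordered list of the routers along the path.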

Now let’s apply this command to the address we used earlier. The traceroute command will test our target IP address with 60-byte packets and a maximum of fifteen hops. Here’s what we get as output:

Apart from the header, each output line consists of the following information, labeled on the image below:

  1. IP and hostname: e.g., vrrp.gcore.lu (92.223.88.2)
  2. AS information: Provided in square brackets, e.g., [AS199524/AS202422]
  3. Latency: Time in milliseconds for the packet to reach the hop and return, e.g., 0.202 ms

In our example, traceroute passes through three different autonomous systems (AS):

  1. AS199524 (GCORE, LU): The first two hops are within this AS, representing the initial part of the route.
  2. Hops 3 and 4 fall under the private IPv4 address space (10.255.X.X), meaning the hops are within a private network. This could be an internal router or other networking device not directly accessible over the public Internet. Private addresses like this are often used for internal routing within an organization or service provider’s network.
  3. AS174 (COGENT, US): Hops 5 to 9 are within Cogent’s network.
  4. AS15133 (EDGECAST, US): The final hops are within EdgeCast’s network, where the destination IP resides.

Example Hop: ae-66.core1.bsb.edgecastcdn.net (152.195.233.131) [AS15133] 82.450 ms

To sum up, the traceroute command offers a comprehensive view of the packet journey across multiple autonomous systems. Providing latency data and AS information at each hop, it aids in identifying potential bottlenecks or issues in the network. This insight is invaluable for anyone looking to understand or troubleshoot a network path.

Conclusion

Looking Glass is a tool for pre-purchase network testing, covering node connectivity, response times, packet paths, and BGP routes. Its user-friendly interface requires just a few inputs—location, target IP address, and the command of your choice—to deliver immediate results.

Based on your specific needs, such as connectivity speeds and location, and the insights gained from Looking Glass test results, you can choose between Gcore Virtual or Dedicated servers, both boasting outstanding connectivity. Want to learn more? Contact our team.

Related articles

Edge AI is your next competitive advantage: highlights from Seva Vayner’s webinar

Edge AI isn’t just a technical milestone. It’s a strategic lever for businesses aiming to gain a competitive advantage with AI.As AI deployments grow more complex and more global, central cloud infrastructure is hitting real-world limits: compliance barriers, latency bottlenecks, and runaway operational costs. The question for businesses isn’t whether they’ll adopt edge AI, but how soon.In a recent webinar with Mobile World Live, Seva Vayner, Gcore’s Product Director of Edge Cloud and AI, made the business case for edge inference as a competitive differentiator. He outlined what it takes to stay ahead in a world where speed, locality, and control define AI success.Scroll on to watch Seva explain why your infrastructure choices now shape your market position later.Location is everything: edge over cloudAI is no longer something globally operating businesses can afford to run from a central location. Regional regulations and growing user expectations mean models must be served as close to the user as possible. This reduces latency, but perhaps more importantly is essential for compliance with local laws.Edge AI also keeps costs down by avoiding costly international traffic routes. When your users are global but your infrastructure isn’t, every request becomes an expensive, high-latency journey across the internet.Edge inference solves three problems at once in an increasingly regionally fragmented AI landscape:Keeps compute near users for low latencyCuts down on international transit for reduced costsHelps companies stay compliant with local lawsPrivate edge: control over convenienceMany businesses started their AI journey by experimenting with public APIs like OpenAI’s. But as companies and their AI use cases mature, that’s not good enough anymore. They need full control over data residency, model access, and deployment architecture, especially in regulated industries or high-sensitivity environments.That’s where private edge deployments come in. 
Instead of relying on public endpoints and shared infrastructure, organizations can fully isolate their AI environments, keeping data secure and models proprietary.This approach is ideal for healthcare, finance, government, and any sector where data sovereignty and operational security are critical.Optimizing edge AI: precision over powerDeploying AI at the edge requires right-sizing your infrastructure for the models and tasks at hand. That’s both technically smarter and far more cost-effective than throwing maximum power and size at every use case.Making smart trade-offs allows businesses to scale edge AI sustainably by using the right hardware for each use case.AI at the edge helps businesses deliver the experience without the excess. With the control that the edge brings, hardware costs can be cut by using exactly what each device or location requires, reducing financial waste.Final takeawayAs Seva put it, AI infrastructure decisions are no longer just financial; they’re part of serious business strategy. From regulatory compliance to operational cost to long-term scalability, edge inference is already a necessity for businesses that plan to serve AI at scale and get ahead in the market.Gcore offers a full suite of public and private edge deployment options across six continents, integrated with local telco infrastructure and optimized for real-time performance. Learn more about Everywhere Inference, our edge AI solution, or get in touch to see how we can help tailor a deployment model to your needs.Ready to get started? Deploy a model in just three clicks with Gcore Everywhere Inference.Discover Everywhere Inference

Smart caching and predictive streaming: the next generation of content delivery

As streaming demand surges worldwide, providers face mounting pressure to deliver high-quality video without buffering, lag, or quality dips, no matter where the viewer is or what device they're using. That pressure is only growing as audiences consume content across mobile, desktop, smart TVs, and edge-connected devices.Traditional content delivery networks (CDNs) were built to handle scale, but not prediction. They reacted to demand, but couldn’t anticipate it. That’s changing.Today, predictive streaming and AI-powered smart caching are enabling a proactive, intelligent approach to content delivery. These technologies go beyond delivering content by forecasting what users will need and making sure it's there before it's even requested. For network engineers, platform teams, and content providers, this marks a major evolution in performance, reliability, and cost control.What are predictive streaming and smart caching?Predictive streaming is a technology that uses AI to anticipate what a viewer will watch next, so the content can be ready before it's requested. That might mean preloading the next episode in a series, caching popular highlights from a live event, or delivering region-specific content based on localized viewing trends.Smart caching supports this by storing that predicted content on servers closer to the viewer, reducing delays and buffering. Together, they make streaming faster and smoother by preparing content in advance based on user behavior.Unlike traditional caching, which relies on static popularity metrics or simple geolocation, predictive streaming is dynamic. It adapts in real time to what’s happening on the platform: user actions, traffic spikes, network conditions, and content trends. 
This results in:Faster playback with minimal bufferingReduced bandwidth and server loadHigher quality of experience (QoE) scores across user segmentsFor example, during the 2024 UEFA European Championship, several broadcasters used predictive caching to preload high-traffic game segments and highlight reels based on past viewer drop-off points. This allowed for instant replay delivery in multiple languages without overloading central servers.Why predictive streaming matters for viewersGlobally, viewers tend to binge-watch new streaming platform releases. For example, sci-fi-action drama Fallout got 25% of its annual US viewing minutes (2.9 billion minutes) in its first few days of release. The South Korean series Queen of Tears became Netflix's most-watched Korean drama of all time in 2024, amassing over 682.6 million hours viewed globally, with more than half of those watch hours occurring during its six-week broadcast run.A predictive caching system can take advantage of this launch-day momentum by pre-positioning likely-to-be-watched episodes, trailers, or bonus content at the edge, customized by region, device, or time of day.The result is a seamless, high-performance experience that anticipates user behavior and scales intelligently to meet it.Benefits for streaming providersTraditional CDNs often waste resources caching content that may never be viewed. Predictive caching focuses only on content that is likely to be accessed, leading to:Lower egress costsReduced server loadMore efficient cache hit ratiosOne of the core benefits of predictive streaming is latency reduction. By caching content at the edge before it’s requested, platforms avoid the delay caused by round-trips to origin servers. 
This is especially critical for:Live sports and eventsInteractive or real-time formats (e.g., polls, chats, synchronized streams)Edge environments with unreliable last-mile connectivityFor instance, during the 2024 Copa América, mobile viewers in remote areas of Argentina were able to stream matches without delay thanks to proactive edge caching based on geo-temporal viewing predictions.How it worksAt the core of predictive streaming is smart caching: the process of storing data closer to the end user before it’s explicitly requested. Here’s how it works:Data ingestion: The system gathers data on user behavior, device types, content popularity, and location-based trends.Behavior modeling: AI models identify patterns (e.g., binge-watching behaviors, peak-hour traffic, or regional content spikes).Pre-positioning: Based on predictions, the system caches video segments, trailers, or interactive assets to edge servers closest to where demand is expected.Real-time adaptation: As user behavior changes, the system continuously updates its caching strategy.Use cases across streaming ecosystemsSmart caching and predictive delivery benefit nearly every vertical of streaming.Esports and gaming platforms: Live tournaments generate unpredictable traffic surges, especially when underdog teams advance. Predictive caching helps preload high-interest match content, post-game analysis, and multilingual commentary before traffic spikes hit. This helps provide global availability with minimal delay.Corporate webcasts and investor events: Virtual AGMs or earnings calls need to stream seamlessly to thousands of stakeholders, often under compliance pressure. Predictive systems can cache frequently accessed segments, like executive speeches or financial summaries, at regional nodes.Education platforms: In EdTech environments, predictive delivery ensures that recorded lectures, supplemental materials, and quizzes are ready for users based on their course progression. 
This reduces lag for remote learners on mobile connections.VOD platforms with regional licensing: Content availability differs across geographies. Predictive caching allows platforms to cache licensed material efficiently and avoid serving geo-blocked content by mistake, while also meeting local performance expectations.Government or emergency broadcasts: During public health updates or crisis communications, predictive streaming can support multi-language delivery, instant replay, and mobile-first optimization without overloading networks during peak alerts.Looking forward: Personalization and platform governanceWe predict that the next wave of predictive streaming will likely include innovations that help platforms scale faster while protecting performance and compliance:Viewer-personalized caching, where individual user profiles guide what’s cached locally (e.g., continuing series, genre preferences)Programmatic cache governance, giving DevOps and marketing teams finer control over how and when content is distributedCross-platform intelligence, allowing syndicated content across services to benefit from shared predictions and joint caching strategiesGcore’s role in the predictive futureAt Gcore, we’re building AI-powered delivery infrastructure that makes the future of streaming a practical reality. Our smart caching, real-time analytics, and global edge network work together to help reduce latency and cost, optimize resource usage, and improve user retention and stream stability.If you’re ready to unlock the next level of content delivery, Gcore’s team is here to help you assess your current setup and plan your predictive evolution.Discover how Gcore streaming technologies helped fan.at boost subscription revenue by 133%

From budget strain to AI gain: Watch how studios are building smarter with AI

Game development is in a pressure cooker. Budgets are ballooning, infrastructure and labor costs are rising, and players expect more complexity and polish with every release. All studios, from the major AAAs to smaller indies, are feeling the strain.But there is a way forward. In a recent webinar, Sean Hammond, Territory Manager for the UK and Nordics at Gcore, explained how AI is reshaping game development workflows and how the right infrastructure strategy can reduce costs, speed up production, and create better player experiences.Scroll on to watch key moments from Sean's talk and explore how studios can make AI work for them.Rising costs are threatening game developmentGame revenue has slowed, but development costs continue to rise. Some AAA titles now surpass $100 million in development budgets. The complexity of modern games demands more powerful servers, scalable infrastructure, and larger teams, making the industry increasingly unsustainable.Personnel and infrastructure costs are also climbing. Developers, artists, and QA testers with specialized skills are in high demand, as are technologies like VR, AR, and AI. Studios are also having to invest more in cybersecurity to protect player data, detect cheating, and safeguard in-game economies.AI is revolutionizing GameDev, even without a perfect use caseWhile the perfect use case for AI in gaming may not have been found yet, it’s already transforming how games are built, tested, and personalized.Sean highlighted emerging applications, including:Smarter QA testingAI-driven player personalizationReal-time motion and animationAccelerated environment and character designMultilingual localizationAdaptive game balancingStudios are already applying these technologies to reduce production timelines and improve immersion.The challenge of secure, scalable AI adoptionOf course, AI adoption doesn’t come without its challenges. Chief among them is security. 
Public models pose risks: no studio wants their proprietary assets to end up training a competitor’s model.The solution? Deploy AI models on infrastructure you trust so you’re in complete control. That’s where Gcore comes in.Gcore Everywhere Inference reduces compute costs and infrastructure bloat by allowing you to deploy only what you need, where you need it.The future of gaming is AI at scaleTo power real-time player experiences, your studio needs to deploy AI globally, close to your users.Gcore Everywhere Inference lets you deploy models worldwide at the edge with minimal latency because data is not routed back to central servers. This means fast, responsive gameplay and a new generation of real-time, AI-driven features.As a company originally built by gamers, we’ve developed AI solutions with gaming studios in mind. Here’s what we offer:Global edge inference for real-time gameplay: Deploy your AI models close to players worldwide, enabling fast, responsive player experiences without routing data to central servers.Full control over AI model deployment and IP protection: Avoid public APIs and retain full ownership of your assets with on-prem options, preventing your proprietary data from being available to competitors.Scalable, cost-efficient infrastructure tailored to gaming workloads: Deploy only what you need to avoid overprovisioning and reduce compute costs without sacrificing performance.Enhanced player retention through AI-driven personalization and matchmaking: Real-time inference powers smarter NPCs and dynamic matchmaking, improving engagement and keeping players coming back for more.Deploy models in 3 clicks and under 10 seconds: Our developer-friendly platform lets you go from trained model to global deployment in seconds. No complex DevOps setup required.Final takeawayAI is advancing game development fast, but only if it’s deployed right. 
Gcore offers scalable, secure, and cost-efficient AI infrastructure that helps studios create smarter, faster, and more immersive games.Want to see how it works? Deploy your first model in just a few clicks.Check out our blog on how AI is transforming gaming in 2025

No capacity = no defense: rethinking DDoS resilience at scale

DDoS attacks are growing so massive they are overwhelming the very infrastructure designed to stop them. Earlier this year, a peak attack exceeding 7 Tbps was recorded, while 1–2 Tbps attacks have become everyday occurrences. Such volumes were unimaginable just a few years ago.Yet many businesses still depend on mitigation systems that were not designed to scale alongside this rapid attack growth. While these systems may have smart detection, that advantage is moot if physical infrastructure cannot handle the load. Today, raw capacity is non-negotiable — intelligent filtering alone isn’t enough; you need vast, globally distributed throughput.Lukasz Karwacki, Gcore’s Security Solution Architect specializing in DDoS, explains why modern DDoS protection requires immense capacity, global distribution, and resilient routing. Scroll down to watch him describe why a globally distributed defense model is now the minimum standard for mitigating devastating DDoS attacks.DDoS is a capacity war, not just a traffic spikeThe central challenge in DDoS mitigation today is the total attack volume versus total available throughput.Attacks do not originate from a single location. Global botnets harness compromised devices across Asia, Africa, Europe, and the Americas. When all this traffic converges on a single data center, it creates a structural mismatch: a single site’s limited capacity pitted against the full bandwidth of the internet.Anycast is non-negotiable for global capacityTo counter today’s attack volumes, mitigation capacity must be distributed globally, and that’s where Anycast routing plays a critical role.Anycast routes incoming traffic to the nearest available scrubbing center. If one region is overwhelmed or offline, traffic is automatically redirected elsewhere. 
This eliminates single points of failure and enables the absorption of massive attacks without compromising service availability.By contrast, static mitigation pipelines create bottlenecks: all traffic funnels through a single point, making it easy for attackers to overwhelm that location. Centralized mitigation means centralized failure. The more distributed your infrastructure, the harder it is to take down — that’s resilient network design.Why always-on cloud defense outperforms on-demand protectionSome DDoS defenses activate only when an attack is detected. These on-demand models may save costs but introduce a brief delay while traffic is rerouted and protections come online.Even a few seconds of delay can allow a high-speed attack to inflict damage.Gcore’s cloud-native DDoS protection is always-on, continuously monitoring, filtering, and balancing traffic across all scrubbing centers. This means no activation lag and no dependency on manual triggers.Capacity is the new baseline for protectionModern DDoS attacks focus less on sophistication and more on sheer scale. Attackers simply overwhelm infrastructure by flooding it with more traffic than it can handle.True DDoS protection begins with capacity planning — not just signatures or rulesets. You need sufficient bandwidth, processing power, and geographic distribution to absorb attacks before they reach your core systems.At Gcore, we’ve built a globally distributed DDoS mitigation network with over 200 Tbps capacity, 40+ protected data centers, and thousands of peering partners. Using Anycast routing and always-on defense, our infrastructure withstands attacks that other systems simply can’t.Many customers turn to Gcore for DDoS protection after other providers fail to keep up with attack capacity.Find out why Fawkes Games turned to Gcore for DDoS protection

How AI-enhanced content moderation is powering safe and compliant streaming


As streaming experiences a global boom across platforms, regions, and industries, providers face a growing challenge: how to deliver safe, respectful, and compliant content at scale. Viewer expectations have never been higher, and neither have the regulatory demands and reputational risks.

Live content in particular leaves little room for error. A single offensive comment, inappropriate image, or segment of misinformation can cause long-term damage in seconds.

Moderation has always been part of the streaming conversation, but tools and strategies are evolving rapidly. AI-powered content moderation is helping providers meet their safety obligations while preserving viewer experience and platform performance.

In this article, we explore how AI content moderation works, where it delivers value, and why streaming platforms are adopting it to stay ahead of both audience expectations and regulatory pressures.

Real-time problems require real-time solutions

Human moderators can provide accuracy and context, but they can’t match the scale or speed of modern streaming environments. Live streams often involve thousands of viewers interacting at once, with content being generated every second through audio, video, chat, or on-screen graphics.

Manual review systems struggle to keep up with this pace. In some cases, content can go viral before it is flagged, like the deepfakes that circulated on Facebook in the run-up to the 2025 Canadian election. In others, delays in moderation result in regulatory penalties or customer churn, like X’s 2025 fine under the EU Digital Services Act for shortcomings in content moderation and algorithm transparency. This has created demand for scalable solutions that act instantly, with minimal human intervention.

AI-enhanced content moderation platforms address this gap. These systems are trained to identify and filter harmful or non-compliant material as it is being streamed or uploaded.
They operate across multiple modalities, including video frames, audio tracks, and text inputs, and can flag or remove content within milliseconds of detection. The result is a safer environment for end users.

How AI moderation systems work

Modern AI moderation platforms are powered by machine learning algorithms trained on extensive datasets. These datasets include a wide variety of content types, languages, accents, dialects, and contexts. By analyzing this data, the system learns to identify content that violates platform policies or legal regulations.

The process typically involves three stages:

  1. Input capture: The system monitors live or uploaded content across audio, video, and text layers.
  2. Pattern recognition: It uses models to identify offensive content, including nudity, violence, hate speech, misinformation, or abusive language.
  3. Contextual decision-making: Based on confidence thresholds and platform rules, the system flags, blocks, or escalates the content for review.

This process is continuous and self-improving. As the system receives more inputs and feedback, it adapts to new forms of expression, regional trends, and platform-specific norms.

What makes this especially valuable for streaming platforms is its low latency. Content can be flagged and removed in real time, often before viewers even notice. This is critical in high-stakes environments like esports, corporate webinars, or public broadcasts.

Multi-language moderation and global streaming

Streaming audiences today are truly global. Content crosses borders faster than ever, but moderation standards and cultural norms do not. What’s considered acceptable in one region may be flagged as offensive in another. A word that is considered inappropriate in one language might be completely neutral in another. Nudity in an educational context may be acceptable, while the same image in another setting may not be.
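The confidence-threshold decision step of such a pipeline can be sketched in a few lines. The category labels and threshold values below are invented for illustration; production systems use many more signals and per-category rules:

```python
# Minimal sketch of threshold-based moderation decisions, with invented
# category names and thresholds. Real systems are far more nuanced.
FLAG_THRESHOLD = 0.60   # escalate to a human reviewer above this confidence
BLOCK_THRESHOLD = 0.90  # block automatically above this confidence

def moderate(detections: dict) -> str:
    """Map per-category model confidences to an action."""
    top_score = max(detections.values(), default=0.0)
    if top_score >= BLOCK_THRESHOLD:
        return "block"       # remove the content immediately
    if top_score >= FLAG_THRESHOLD:
        return "escalate"    # queue for human review
    return "allow"

assert moderate({"hate_speech": 0.95}) == "block"
assert moderate({"nudity": 0.70}) == "escalate"
assert moderate({"violence": 0.10}) == "allow"
```

The middle band between the two thresholds is where automation hands off to human moderators, which is why tuning those thresholds per platform and per content type matters so much.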
Without the ability to understand nuance, AI systems risk either over-filtering or letting harmful content through.

That’s why high-quality moderation platforms are designed to incorporate context into their models. This includes:

  - Understanding tone, not just keywords
  - Recognizing culturally specific gestures or idioms
  - Adapting to evolving slang or coded language
  - Applying different standards depending on content type or target audience

This enables more accurate detection of harmful material and avoids false positives caused by mistranslation.

Training AI models for multi-language support involves:

  - Gathering large, representative datasets in each language
  - Teaching the model to detect content-specific risks (e.g., slurs or threats) in the right cultural context
  - Continuously updating the model as language evolves

This capability is especially important for platforms that operate in multiple markets or support user-generated content. It enables a more respectful experience for global audiences while providing consistent enforcement of safety standards.

Use cases across the streaming ecosystem

AI moderation isn’t just a concern for social platforms. It plays a growing role in nearly every streaming vertical, including the following:

  - Live sports: Real-time content scanning helps block offensive chants, gestures, or pitch-side incidents before they reach a wide audience. Fast filtering protects the viewer experience and helps meet broadcast standards.
  - Esports: With millions of viewers and high emotional stakes, esports platforms rely on AI to remove hate speech and adult content from chat, visuals, and commentary. This creates a more inclusive environment for fans and sponsors alike.
  - Corporate live events: From earnings calls to virtual town halls, organizations use AI moderation to help ensure compliance with internal communication guidelines and protect their reputation.
  - Online learning: EdTech platforms use AI to keep classrooms safe and focused. Moderation helps filter distractions, harassment, and inappropriate material in both live and recorded sessions.
  - On-demand entertainment: Even outside of live broadcasts, moderation helps streaming providers meet content standards and licensing obligations across global markets. It also ensures user-submitted content (like comments or video uploads) meets platform guidelines.

In each case, the shared goal is to provide a safe and trusted streaming environment for users, advertisers, and creators.

Balancing automation with human oversight

AI moderation is a powerful tool, but it shouldn’t be the only one. The best systems combine automation with clear review workflows, configurable thresholds, and human input.

False positives and edge cases are inevitable. Giving moderators the ability to review, override, or explain decisions is important for both quality control and user trust. Likewise, giving users a way to appeal moderation decisions or report issues ensures that moderation doesn’t become a black box. Transparency and user empowerment are increasingly seen as part of good platform governance.

Looking ahead: what’s next for AI moderation

As streaming becomes more interactive and immersive, moderation will need to evolve. AI systems will be expected to handle not only traditional video and chat, but also spatial audio, avatars, and real-time user inputs in virtual environments.

We can also expect increased demand for:

  - Personalization, where viewers can set their own content preferences
  - Integration with platform APIs for programmatic content governance
  - Cross-platform consistency to support syndicated content across partners

As these changes unfold, AI moderation will remain central to the success of modern streaming.
Platforms that adopt scalable, adaptive moderation systems now will be better positioned to meet the next generation of content challenges without compromising on speed, safety, or user experience.

Keep your streaming content safe and compliant with Gcore

Gcore Video Streaming offers AI Content Moderation that satisfies today’s digital safety concerns while streamlining the human moderation process.

To explore how Gcore AI Content Moderation can transform your digital platform, we invite you to contact our streaming team for a demonstration. Our docs provide guidance on using the intuitive Gcore Customer Portal to manage your streaming content. We also provide a clear pricing comparison so you can assess the value for yourself.

Embrace the future of content moderation and deliver a safer, more compliant digital space for all your users.

Try AI Content Moderation for free

Deploy GPT-OSS-120B privately on Gcore

OpenAI’s release of GPT-OSS-120B is a turning point for LLM developers. It’s a 120B-parameter model trained from scratch, licensed for commercial use, and available with open weights. This is a serious asset for serious builders.

Gcore now supports private GPT-OSS-120B deployments via our Everywhere Inference platform. That means you can stand up your own endpoint in minutes, run inference at scale, and control the full stack, without API limits, vendor lock-in, or hidden usage fees. Just fast, secure, controlled deployment on your terms. Deploy now in three clicks or read on to learn more.

Why GPT-OSS-120B is big news for builders

This model changes the game for anyone developing AI apps, platforms, or infrastructure. It brings GPT-3-level reasoning to the open-source ecosystem and frees developers from closed APIs.

With GPT-OSS-120B, you get:

  - Full access to model weights and architecture
  - Self-hosting for maximum data control and privacy
  - Support for fine-tuning and model editing
  - Offline deployment for secure or air-gapped use
  - Massive cost savings at scale

You can deploy in any Gcore region (or leverage Gcore’s three-click serverless inference on your own infrastructure), route traffic through your own stack, and fully control load, latency, and logs. This is LLM deployment for real-world apps, not just playground prompts.

How to deploy GPT-OSS-120B with Gcore Everywhere Inference

Gcore Everywhere Inference gives you a clean path from open model to production endpoint. Deploying GPT-OSS-120B takes just three clicks in the Gcore Customer Portal, and we offer configuration options to suit your business needs:

  - Choose your location (cloud or on-prem)
  - Integrate via standard, OpenAI-compatible APIs
  - Control usage, autoscaling, and costs

There are no shared endpoints.
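Because the endpoint speaks the standard OpenAI chat-completions format, calling a private deployment looks like calling any OpenAI-compatible API. A minimal sketch follows; the endpoint URL, API key, and model name below are placeholders, so substitute the values shown for your own deployment:

```python
# Sketch of calling a private GPT-OSS-120B deployment through its
# OpenAI-compatible API. ENDPOINT, API_KEY, and the model name are
# placeholders, not real values.
import json
import urllib.request

ENDPOINT = "https://YOUR-DEPLOYMENT.example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

def build_request(prompt: str, model: str = "gpt-oss-120b") -> dict:
    """Build a standard OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

def ask(prompt: str) -> str:
    """POST the payload to the endpoint and return the first reply."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the wire format is OpenAI-compatible, existing SDKs and tooling that accept a custom base URL should also work against the same endpoint without code changes.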
You get dedicated compute, low-latency routing, and full control and observability. You can also bring your own trained variant if you’ve fine-tuned GPT-OSS-120B elsewhere. We’ll help you host it reliably, close to your users.

Use cases: where GPT-OSS-120B fits best

Commercial GPTs still outperform OSS models on some general tasks, but GPT-OSS-120B gives you control, portability, and flexibility where it counts. Most importantly, it gives you the ability to build privacy-sensitive applications.

Great fits include:

  - Internal dev tools and copilots
  - Retrieval-augmented generation (RAG) pipelines
  - Secure, private enterprise assistants
  - Data-sensitive, on-prem AI workloads
  - Models requiring full customization or fine-tuning

It’s especially relevant for finance, healthcare, government, and legal teams operating under strict compliance rules.

Deploy GPT-OSS-120B today

Want to learn more about GPT-OSS-120B and why Gcore is an ideal provider for deployment? Get all the information you need on our dedicated page. And if you’re ready to deploy in just three clicks, head on over to the Gcore Customer Portal. GPT-OSS-120B is waiting for you in the Application Catalog.

Learn more about deploying GPT-OSS-120B on Gcore

Subscribe to our newsletter

Get the latest industry trends, exclusive insights, and Gcore updates delivered straight to your inbox.