
Evaluating Server Connectivity with Looking Glass: A Comprehensive Guide

  • By Gcore
  • November 1, 2023
  • 8 min read

When you’re considering purchasing a dedicated or virtual server, it can be challenging to assess how it will function before you buy. Gcore’s free Looking Glass network tool allows you to assess connectivity, giving you a clear picture of whether a server will meet your needs before you make a financial commitment. This article will explore what the Looking Glass network tool is, explain how to use it, and delve into its key functionalities like BGP, PING, and traceroute.

What Is Looking Glass?

Looking Glass is a dedicated network tool that examines the routing and connectivity of a specific AS (autonomous system) to provide real-time insight into network performance, route paths, and potential bottlenecks. This makes it an invaluable resource for network administrators, ISPs, and end users considering purchasing a virtual or dedicated server. With Looking Glass, you can:

  • Test connections to different nodes and their response times. This is helpful for performance tuning and troubleshooting.
  • Trace the packet route from the router to a web resource. This is useful for identifying potential bottlenecks or failures in the network.
  • Display detailed BGP (Border Gateway Protocol) routes to any IPv4 or IPv6 destination. This feature is essential for ISPs to understand routing patterns and make informed peering decisions.
  • Visualize BGP maps for any IP address. By generating a graphical representation or map of the BGP routes, Looking Glass provides an intuitive way to understand the network architecture.

How to Work with Looking Glass

Let’s take a closer look at Looking Glass’s interface.

Below the header, the introduction, and Gcore’s AS number (AS199524), you will see four fields to complete. Operating Looking Glass is straightforward:

  1. Diagnostic method: Pick from three available diagnostic methods (commands):
    • BGP: The BGP command in Looking Glass gives you information about all autonomous systems (AS) traversed to reach the specified IP address from the selected router. Gcore Looking Glass can present the BGP route as an SVG/PNG diagram.
    • Ping: The ping command lets you know the round-trip time (RTT) and Time-to-Live (TTL) for the route between the IP address and the router.
    • Traceroute: The traceroute command displays all enabled router hops encountered in the path between the selected router and the destination IP address. It also shows the total time taken to fulfill the request and the intermediate times for each AS router the packet passed through.
  2. Region: Choose a location from the list of regions where our hosting is available. You can filter server nodes by region to narrow your search.
  3. Router: Pick a router from the list of Gcore routers available in the specified region.
  4. IP address: Enter the IP address to which you want to connect. You can type in both IPv4 and IPv6 addresses.
  5. Click Run test.

After you launch the test, you will see the plain text output of the command and three additional buttons in orange, per the image below:

  • Copy to clipboard: Copies command output to your clipboard.
  • Open results page: Opens the output in a separate tab. You can share results via a link with third parties, as this view masks tested IP addresses. The link will remain live for three days.
  • Show BGP map: Provides a graphical representation of the BGP route and shows which autonomous systems the data has to go through on the way from the Gcore node to the IP address. This option is only relevant for the BGP command.

In this article, we will examine each command (BGP, ping, and traceroute) using 93.184.216.34 (example.com) as a target IP address and Gcore’s router in Luxembourg (the capital of the Grand Duchy of Luxembourg), where our HQ is located. So our settings for all three commands will be the following:

  • Region: Europe
  • Router: Luxembourg (Luxembourg)
  • IP address: 93.184.216.34. We picked example.com as a Looking Glass target server.

Now let’s dive deeper into each command: BGP, ping, and traceroute.

BGP

The BGP command in Looking Glass shows the best BGP route (or routes) to the destination point. BGP is a dynamic routing protocol that connects autonomous systems (AS)—systems of routers and IP networks with a shared routing policy. BGP allows various sections of the internet to communicate with one another. Each section has its own set of IP addresses, like a unique ID. BGP captures and maintains these IDs in a database. When data has to be moved from one autonomous system to another over the internet, BGP consults this database to determine the best available path.

Based on this data, the best route for the packets is built, and this is what the Looking Glass BGP command shows. Here’s an example output:

The data shows two possible paths for the route to travel. The first path has numbers attached; here’s what each of them means:

  1. 93.184.216.0/24: CIDR notation for the network range being routed. The “/24” specifies that the first 24 bits identify the network, leaving 8 bits for individual host addresses within that network. Thus it covers IP addresses from 93.184.216.0 to 93.184.216.255 (see the short sketch after this list).
  2. via 92.223.88.1 on eno1: The next hop’s IP address and the network interface through which the data will be sent. In this example, packets will be forwarded to 92.223.88.1 via the eno1 interface.
  3. BGP.origin: IGP: Specifies the origin of the BGP route. The IGP (interior gateway protocol) value implies that this route originated within the same autonomous system (AS).
  4. BGP.as_path: 174 15133: The AS path shows which autonomous systems the data has passed through to reach this point. Here, the data traveled through AS 174 and then to AS 15133.
  5. BGP.next_hop: 92.223.112.66: The next router to which packets will be forwarded.
  6. BGP.med: 84040: The Multi-Exit Discriminator (MED) is a metric that influences how incoming traffic should be balanced over multiple entry points in an AS. Lower values are generally preferred; here, the MED value is 84040.
  7. BGP.local_pref: 80: Local preference, which is used to choose the exit point from the local AS. A higher value is preferred when determining the best path. The local preference of 80 in the route output indicates that this route is more preferred than other routes to the same destination with a lower local preference.
  8. BGP.community: These are tags or labels that can be attached to a route. The output (174,21001) is a pair of an ASN and a custom value representing a specific routing policy or action to be taken. Routing policies can use these communities as conditions to apply specific actions. The meaning of these values depends on the internal configurations of the network and usually requires documentation from the network provider for interpretation.
  9. BGP.originator_id: 10.255.78.64: This indicates the router that initially advertised the route. In this context, the route originated from the router with IP 10.255.78.64.
  10. BGP.cluster_list: This is used in route reflection scenarios. It lists the identifiers of the route reflectors that have processed the route. Here, it shows that this route has passed through the reflector identified by 10.255.8.68 or 10.255.8.69 depending on the path.
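
To make the CIDR arithmetic above concrete, here is a minimal Python sketch using the standard ipaddress module; the prefix and address are taken from the route output and our test target.

```python
import ipaddress

# The /24 prefix from the BGP output and the target address from our test.
net = ipaddress.ip_network("93.184.216.0/24")
target = ipaddress.ip_address("93.184.216.34")

print(net.num_addresses)        # 256 addresses in a /24
print(net[0], "-", net[-1])     # 93.184.216.0 - 93.184.216.255
print(target in net)            # True: example.com's IP falls within the routed range
```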

Both routes are part of AS 15133 and pass through AS 174, but they have different next hops (92.223.112.66 and 92.223.112.67). This allows for redundancy and load balancing.
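
For readers who want to see how attributes like local preference, AS path length, and MED interact, here is a simplified sketch of the BGP best-path comparison in Python. It covers only three of the real decision steps (the full process also considers origin, eBGP versus iBGP, router ID, and more), and the Route values are illustrative, loosely modeled on the output above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Route:
    prefix: str
    next_hop: str
    local_pref: int
    as_path: List[int]
    med: int

def better(a: Route, b: Route) -> Route:
    # Higher local preference wins first.
    if a.local_pref != b.local_pref:
        return a if a.local_pref > b.local_pref else b
    # Then the shorter AS path.
    if len(a.as_path) != len(b.as_path):
        return a if len(a.as_path) < len(b.as_path) else b
    # Then the lower MED (in real BGP, only comparable for routes from the same neighboring AS).
    if a.med != b.med:
        return a if a.med < b.med else b
    # Real BGP continues with more tie-breakers; for this sketch, keep the first route.
    return a

# Illustrative values loosely based on the Looking Glass output above.
r1 = Route("93.184.216.0/24", "92.223.112.66", local_pref=80, as_path=[174, 15133], med=84040)
r2 = Route("93.184.216.0/24", "92.223.112.67", local_pref=80, as_path=[174, 15133], med=84040)
print(better(r1, r2).next_hop)   # falls through to the first route when all attributes tie
```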

BGP map

When you run the BGP command, the Show BGP map button will become active. Here’s what we will see for our IP address:

Let’s take this diagram point by point:

  • AS199524 | GCORE, LU: This is the autonomous system belonging to Gcore, based in Luxembourg. The IP 92.223.88.1 is part of this AS, functioning as a gateway or router.
  • AS174 | COGENT-174, US: This is Cogent Communications’ autonomous system, based in the United States. Cogent is a major ISP.
  • AS15133 | EDGECAST, US: This AS belongs to Edgecast, also based in the United States. Edgecast is generally involved in content delivery network (CDN) services.
  • 93.184.216.0/24: This CIDR notation indicates the network range where example.com (93.184.216.34) is located. It might be part of Edgecast’s CDN services or another network associated with one of the listed autonomous systems.

In summary, Gcore’s BGP Looking Glass command is an essential tool for understanding intricate network routes. By offering insights into autonomous systems, next hops, and metrics like MED and local preference, it allows for a nuanced approach to network management. Whether you’re an ISP peered with Gcore or a network administrator seeking to optimize performance, the data generated by this command offers a roadmap for strategic decision making.

Ping

The ping command is a basic, essential network troubleshooting tool that measures the round-trip time for sending a packet of data from the source to a destination and back. Ping shows how quickly packets make the round trip and can also be used to check the node’s overall availability.

The command utilizes the ICMP protocol. It works as follows:

  • The selected router sends an ICMP echo request to the specified IP address (the node).
  • The node replies with an ICMP echo response.

In our case, this command shows how much time it takes a packet to travel from the selected router to the specified IP address and back.
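
As an aside, you can reproduce a similar five-packet test from any Unix-like machine; the sketch below simply shells out to the system ping binary (assumed to be an iputils- or BSD-style ping on the PATH) and extracts the per-packet times.

```python
import subprocess

# Send five ICMP echo requests, mirroring the Looking Glass output.
result = subprocess.run(
    ["ping", "-c", "5", "93.184.216.34"],
    capture_output=True, text=True, check=False,
)
print(result.stdout)

# Pull the individual round-trip times (ms) out of lines like "... ttl=52 time=87.1 ms".
rtts = [float(line.split("time=")[1].split()[0])
        for line in result.stdout.splitlines() if "time=" in line]
print(rtts)
```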

Let’s break down our output:

Main part:

  1. Target IP: You pinged 93.184.216.34, which is the example.com IP address we are testing.
  2. Packet Size: 56(84) bytes of data were sent. The packet consists of 56 bytes of data and 28 bytes of header, totaling 84 bytes.
  3. Individual pings: Each line indicates a single round trip of a packet, detailing:
    • icmp_seq: Sequence number of the packet.
    • ttl: Time-to-Live, showing how many more hops the packet could make before being dropped.
    • time: Round-trip time (RTT) in milliseconds.

Statistics:

  1. 5 packets transmitted, 5 received: All packets were successfully transmitted and received, indicating no packet loss
  2. 0% packet loss: No packets were lost during the transmission
  3. time 4005ms: Total time taken for these five pings
  4. rtt min/avg/max/mdev: Round-trip times in milliseconds:
    • min: minimum time
    • avg: average time
    • max: maximum time
    • mdev: mean deviation time

To summarize, the average round-trip time here is 87.138 ms, and the TTL is 52. An RTT of less than 100 ms is generally considered acceptable for interactive applications, and a remaining TTL of 52 (counting down from a common initial value of 64) indicates a relatively short path of around a dozen hops. No packet loss suggests a stable connection to the IP address 93.184.216.34.
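
For illustration, here is a small Python sketch showing how such a summary line can be derived from individual RTT samples. The sample values are hypothetical, and mdev is computed the way Linux iputils ping derives it (square root of the mean of squares minus the square of the mean).

```python
import math

# Hypothetical per-packet RTTs in ms, standing in for the five "time=" values in the output.
rtts = [86.9, 87.0, 87.1, 87.3, 87.4]

rtt_min, rtt_max = min(rtts), max(rtts)
rtt_avg = sum(rtts) / len(rtts)
# mdev as reported by iputils ping: sqrt(mean of squares - square of mean).
rtt_mdev = math.sqrt(sum(t * t for t in rtts) / len(rtts) - rtt_avg ** 2)

print(f"rtt min/avg/max/mdev = {rtt_min:.3f}/{rtt_avg:.3f}/{rtt_max:.3f}/{rtt_mdev:.3f} ms")
```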

The ping function provides basic, vital metrics for assessing network health. By offering details on round-trip times, packet loss, and TTL, this command allows for a quick yet comprehensive evaluation of network connectivity. For any network stakeholder—whether ISP or end user—understanding these metrics is crucial for effective network management and troubleshooting.

Traceroute

The Looking Glass traceroute command is a diagnostic tool that maps out the path packets take from the source to the destination, enabling you to identify potential bottlenecks or network failures. Traceroute relies on the TTL (Time-to-Live) parameter, which limits how many router hops a packet can traverse before being discarded. Every router along the packet’s path decrements the TTL by 1 before forwarding the packet to the next router. The process, sketched in code after the list below, works as follows:

  1. The traceroute sends a packet to the destination host with a TTL value of 1.
  2. Each router that receives the packet decrements the TTL value by 1 and, if the TTL is still above zero, forwards the packet.
  3. When the TTL reaches zero, the router drops the packet and sends an ICMP Time Exceeded message back to the source host.
  4. The traceroute records the IP address of the router that sent back the ICMP Time Exceeded message.
  5. The traceroute then sends another packet to the destination host with a TTL value of 2.
  6. Steps 2-4 are repeated until the traceroute routine reaches the destination host or until it exceeds the maximum number of hops.
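
A minimal Python sketch of this incremental-TTL technique is shown below. It is a simplified UDP-probe traceroute, not Gcore’s implementation: it assumes a Unix-like host, requires root privileges for the raw ICMP socket, and uses 33434 only because it is the conventional traceroute default port.

```python
import socket
import time

def traceroute(dest_ip: str, max_hops: int = 15, port: int = 33434, timeout: float = 2.0):
    """Send UDP probes with increasing TTL and read the ICMP replies
    (Time Exceeded from intermediate routers, Port Unreachable from the destination)."""
    for ttl in range(1, max_hops + 1):
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                  socket.getprotobyname("icmp"))
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        recv_sock.settimeout(timeout)

        start = time.perf_counter()
        send_sock.sendto(b"", (dest_ip, port))        # probe with the current TTL
        try:
            # For simplicity, accept any ICMP packet; its source is the responding hop.
            _, addr = recv_sock.recvfrom(512)
            hop = addr[0]
            rtt_ms = (time.perf_counter() - start) * 1000
            print(f"{ttl:2d}  {hop}  {rtt_ms:.3f} ms")
        except socket.timeout:
            hop = None
            print(f"{ttl:2d}  *")
        finally:
            send_sock.close()
            recv_sock.close()

        if hop == dest_ip:                            # stop once the destination replies
            break

traceroute("93.184.216.34")
```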

Now let’s apply this command to the address we used earlier. The traceroute command will test our target IP address with 60-byte packets and a maximum of fifteen hops. Here’s what we get as output:

Apart from the header, each output line consists of the following information, labeled on the image below:

  1. IP and hostname: e.g., vrrp.gcore.lu (92.223.88.2)
  2. AS information: Provided in square brackets, e.g., [AS199524/AS202422]
  3. Latency: Time in milliseconds for the packet to reach the hop and return, e.g., 0.202 ms

In our example, traceroute traverses through three different autonomous systems (AS):

  1. AS199524 (GCORE, LU): The first two hops are within this AS, representing the initial part of the route.
  2. Hops 3 and 4 fall under the private IPv4 address space (10.255.X.X), meaning the hops are within a private network. This could be an internal router or other networking device not directly accessible over the public Internet. Private addresses like this are often used for internal routing within an organization or service provider’s network.
  3. AS174 (COGENT, US): Hops 5 to 9 are within Cogent’s network.
  4. AS15133 (EDGECAST, US): The final hops are within EdgeCast’s network, where the destination IP resides.
Example Hop: ae-66.core1.bsb.edgecastcdn.net (152.195.233.131) [AS15133] 82.450 ms
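
If you want to post-process such output programmatically, a small parsing sketch for a hop line of this shape might look as follows; the regular expression is an assumption based on the example hop above, not a documented Looking Glass format.

```python
import re

# Pattern for lines of the form: hostname (IP) [AS...] latency ms
HOP_RE = re.compile(r"(?P<host>\S+) \((?P<ip>[^)]+)\) \[(?P<asn>[^\]]+)\] (?P<ms>[\d.]+) ms")

line = "ae-66.core1.bsb.edgecastcdn.net (152.195.233.131) [AS15133] 82.450 ms"
m = HOP_RE.search(line)
if m:
    print(m.group("host"), m.group("ip"), m.group("asn"), float(m.group("ms")))
```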

To sum up, the traceroute command offers a comprehensive view of the packet’s journey across multiple autonomous systems. By providing latency data and AS information at each hop, it aids in identifying potential bottlenecks or issues in the network. This insight is invaluable for anyone looking to understand or troubleshoot a network path.

Conclusion

Looking Glass is a tool for pre-purchase network testing, covering node connectivity, response times, packet paths, and BGP routes. Its user-friendly interface requires just a few inputs—location, target IP address, and the command of your choice—to deliver immediate results.

Based on your specific needs, such as connectivity speeds and location, and the insights gained from Looking Glass test results, you can choose between Gcore Virtual or Dedicated servers, both boasting outstanding connectivity. Want to learn more? Contact our team.
