
Evaluating Server Connectivity with Looking Glass: A Comprehensive Guide

  • November 1, 2023
  • 8 min read

When you’re considering purchasing a dedicated or virtual server, it can be challenging to assess how it will function before you buy. Gcore’s free Looking Glass network tool allows you to assess connectivity, giving you a clear picture of whether a server will meet your needs before you make a financial commitment. This article will explore what the Looking Glass network tool is, explain how to use it, and delve into its key functionalities like BGP, PING, and traceroute.

What Is Looking Glass?

Looking Glass is a dedicated network tool that examines the routing and connectivity of a specific AS (autonomous system) to provide real-time insights into network performance and route paths, and show potential bottlenecks. This makes it an invaluable resource for network administrators, ISPs, and end users considering purchasing a virtual or dedicated server. With Looking Glass, you can:

  • Test connections to different nodes and their response times. This enables users to initiate connection tests to various network nodes—crucial for evaluating response times, and thereby helpful in performance tuning or troubleshooting.
  • Trace the packet route from the router to a web resource. This is useful for identifying potential bottlenecks or failures in the network.
  • Display detailed BGP (Border Gateway Protocol) routes to any IPv4 or IPv6 destination. This feature is essential for ISPs to understand routing patterns and make informed peering decisions.
  • Visualize BGP maps for any IP address. By generating a graphical representation or map of the BGP routes, Looking Glass provides an intuitive way to understand the network architecture.

How to Work with Looking Glass

Let’s take a closer look at Looking Glass’s interface.

Below Gcore’s AS number (AS199524), the header, and the introduction, you will see four fields to complete. Operating Looking Glass is straightforward:

  1. Diagnostic method: Pick from three available diagnostic methods (commands):
    • BGP: The BGP command in Looking Glass gives you information about all autonomous systems (AS) traversed to reach the specified IP address from the selected router. Gcore Looking Glass can present the BGP route as an SVG/PNG diagram.
    • Ping: The ping command lets you know the round-trip time (RTT) and Time to Live (TTL) for the route between the IP address and the router.
    • Traceroute: The traceroute command displays all enabled router hops encountered in the path between the selected router and the destination IP address. It also shows the total time taken to fulfill the request and the intermediate time for each router along the way.
  2. Region: Choose a location from the list of locations where our hosting is available. You can filter server nodes by region to narrow your search.
  3. Router: Pick a router from the list of Gcore routers available in the specified region.
  4. IP address: Enter the IP address to which you want to connect. You can type in both IPv4 and IPv6 addresses.
  5. Click Run test.

After you launch the test, you will see the plain text output of the command and three additional buttons in orange, per the image below:

  • Copy to clipboard: Copies command output to your clipboard.
  • Open results page: Opens the output in a separate tab. You can share results via a link with third parties, as this view masks tested IP addresses. The link will remain live for three days.
  • Show BGP map: Provides a graphical representation of the BGP route and shows which autonomous systems the data has to go through on the way from Gcore’s node to the IP address. This option is only relevant for the BGP command.

In this article, we will examine each command (BGP, ping, and traceroute) using 93.184.216.34 (example.com) as a target IP address and Gcore’s router in Luxembourg (the capital of the Grand Duchy of Luxembourg), where our HQ is located. So our settings for all three commands will be the following:

  • Region: Europe
  • Router: Luxembourg (Luxembourg)
  • IP address: 93.184.216.34. We picked example.com as a Looking Glass target server.

Now let’s dive deeper into each command: BGP, ping, and traceroute.

BGP

The BGP command in Looking Glass shows the best BGP route (or routes) to the destination point. BGP is a dynamic routing protocol that connects autonomous systems (AS)—systems of routers and IP networks with a shared routing policy. BGP allows various sections of the internet to communicate with one another. Each section has its own set of IP addresses, like a unique ID. BGP captures and maintains these IDs in a database. When data has to be moved from one autonomous system to another over the internet, BGP consults this database to determine the best available path.

Based on this data, the best route for the packets is built, and this is what the Looking Glass BGP command can show. Here’s an example output:
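The original screenshot isn’t reproduced here, so below is an approximate reconstruction of the first of the two routes, assembled from the field-by-field breakdown that follows (the formatting resembles output from the BIRD routing daemon; the second route differs mainly in its next hop and cluster list):

    93.184.216.0/24    via 92.223.88.1 on eno1
        BGP.origin: IGP
        BGP.as_path: 174 15133
        BGP.next_hop: 92.223.112.66
        BGP.med: 84040
        BGP.local_pref: 80
        BGP.community: (174,21001)
        BGP.originator_id: 10.255.78.64
        BGP.cluster_list: 10.255.8.68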

The data shows two possible paths for the route to travel. The first path has numbers attached; here’s what each of them means:

  1. 93.184.216.0/24: CIDR notation for the network range being routed. The “/24” specifies that the first 24 bits identify the network, leaving 8 bits for individual host addresses within that network. Thus it covers IP addresses from 93.184.216.0 to 93.184.216.255.
  2. via 92.223.88.1 on eno1: The next hop’s IP address and the network interface through which the data will be sent. In this example, packets will be forwarded to 92.223.88.1 via the eno1 interface.
  3. BGP.origin: IGP: Specifies the origin of the BGP route. The IGP (interior gateway protocol) value implies that this route originated within the same autonomous system (AS).
  4. BGP.as_path: 174 15133: The AS path shows which autonomous systems the data has passed through to reach this point. Here, the data traveled through AS 174 and then to AS 15133.
  5. BGP.next_hop: 92.223.112.66: The next router to which packets will be forwarded.
  6. BGP.med: 84040: The Multi-Exit Discriminator (MED) is a metric that influences how incoming traffic should be balanced over multiple entry points in an AS. Lower values are generally preferred; here, the MED value is 84040.
  7. BGP.local_pref: 80: Local preference, which is used to choose the exit point from the local AS. A higher value is preferred when determining the best path. The local preference of 80 in the route output indicates that this route is more preferred than other routes to the same destination with a lower local preference.
  8. BGP.community: These are tags or labels that can be attached to a route. The output (174,21001) is a pair consisting of an ASN and a custom value that represents a specific routing policy or action to be taken. Routing policies can use these communities as conditions to apply specific actions. The meaning of these values depends on the internal configuration of the network and usually requires documentation from the network provider for interpretation.
  9. BGP.originator_id: 10.255.78.64: This indicates the router that initially advertised the route. In this context, the route originated from the router with IP 10.255.78.64.
  10. BGP.cluster_list: This is used in route reflection scenarios. It lists the identifiers of the route reflectors that have processed the route. Here, it shows that this route has passed through the reflector identified by 10.255.8.68 or 10.255.8.69 depending on the path.

Both routes are part of AS 15133 and pass through AS 174, but they have different next hops (92.223.112.66 and 92.223.112.67). This allows for redundancy and load balancing.

BGP map

When you run the BGP command, the Show BGP map button will become active. Here’s what we will see for our IP address:
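The diagram itself isn’t reproduced here, but the path it depicts can be summarized roughly as follows, based on the points described below:

    AS199524 (GCORE, LU)  ->  AS174 (COGENT-174, US)  ->  AS15133 (EDGECAST, US)  ->  93.184.216.0/24 (example.com)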

Let’s take this diagram point by point:

  • AS199524 | GCORE, LU: This is the autonomous system belonging to Gcore, based in Luxembourg. The IP 92.223.88.1 is part of this AS, functioning as a gateway or router.
  • AS174 | COGENT-174, US: This is Cogent Communications’ autonomous system, based in the United States. Cogent is a major ISP.
  • AS15133 | EDGECAST, US: This AS belongs to Edgecast, also based in the United States. Edgecast is generally involved in content delivery network (CDN) services.
  • 93.184.216.0/24: This CIDR notation indicates the network range where example.com (93.184.216.34) is located. It might be part of Edgecast’s CDN services or another network associated with one of the listed autonomous systems.

In summary, Gcore’s BGP Looking Glass command is an essential tool for understanding intricate network routes. By offering insights into autonomous systems, next hops, and metrics like MED and local preference, it allows for a nuanced approach to network management. Whether you’re an ISP peered with Gcore or a network administrator seeking to optimize performance, the data generated by this command offers a roadmap for strategic decision making.

Ping

The ping command is a basic, essential network troubleshooting tool that measures the round-trip time for sending a packet of data from the source to a destination and back. Ping reflects the latency of the connection and can also be used to check the node’s overall availability.

The command utilizes the ICMP protocol. It works as follows:

  • The router sends an ICMP echo request to the specified IP address (the node).
  • The node sends an echo reply back.

In our case, this command shows how long it takes a packet to travel between the selected router and the specified IP address.

Let’s break down our output:
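The screenshot isn’t reproduced here, so the following is a plausible reconstruction of the output. The target, packet size, packet count, TTL, total time, and average RTT match the values discussed below; the individual per-packet times and the min/max/mdev figures are purely illustrative:

    $ ping -c 5 93.184.216.34
    PING 93.184.216.34 (93.184.216.34) 56(84) bytes of data.
    64 bytes from 93.184.216.34: icmp_seq=1 ttl=52 time=86.9 ms
    64 bytes from 93.184.216.34: icmp_seq=2 ttl=52 time=87.0 ms
    64 bytes from 93.184.216.34: icmp_seq=3 ttl=52 time=87.2 ms
    64 bytes from 93.184.216.34: icmp_seq=4 ttl=52 time=87.3 ms
    64 bytes from 93.184.216.34: icmp_seq=5 ttl=52 time=87.2 ms

    --- 93.184.216.34 ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 4005ms
    rtt min/avg/max/mdev = 86.912/87.138/87.315/0.145 ms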

Main part:

  1. Target IP: You pinged 93.184.216.34, which is the example.com IP address we are testing.
  2. Packet Size: 56(84) bytes of data were sent. The packet consists of 56 bytes of ICMP payload plus 28 bytes of headers (an 8-byte ICMP header and a 20-byte IPv4 header), totaling 84 bytes.
  3. Individual pings: Each line indicates a single round trip of a packet, detailing:
    • icmp_seq: Sequence number of the packet.
    • ttl: Time-to-Live, showing how many more hops the packet could make before being dropped.
    • time: Round-trip time (RTT) in milliseconds.

Statistics:

  1. 5 packets transmitted, 5 received: All packets were successfully transmitted and received, indicating no packet loss
  2. 0% packet loss: No packets were lost during the transmission
  3. time 4005ms: Total time taken for these five pings
  4. rtt min/avg/max/mdev: Round-trip times in milliseconds:
    • min: minimum time
    • avg: average time
    • max: maximum time
    • mdev: mean deviation time

To summarize, the average round-trip time here is 87.138 ms, and the TTL is 52. An RTT of less than 100 ms is generally considered acceptable for interactive applications, and since most servers send replies with an initial TTL of 64, a received TTL of 52 suggests the packets crossed roughly a dozen hops. No packet loss suggests a stable connection to the IP address 93.184.216.34.

The ping function provides basic, vital metrics for assessing network health. By offering details on round-trip times, packet loss, and TTL, this command allows for a quick yet comprehensive evaluation of network connectivity. For any network stakeholder—whether ISP or end user—understanding these metrics is crucial for effective network management and troubleshooting.

Traceroute

The Looking Glass traceroute command is a diagnostic tool that maps out the path packets take from the source to the destination, enabling you to identify potential bottlenecks or network failures. Traceroute relies on the TTL (Time-to-Live) parameter, which limits how many router hops a packet can make before it is discarded. Every router along the packet’s path decrements the TTL by 1 and forwards the packet to the next router in the path. The process works as follows:

  1. The traceroute sends a packet to the destination host with a TTL value of 1.
  2. The first router that receives the packet decrements the TTL value by 1 and forwards the packet.
  3. When the TTL reaches zero, the router drops the packet and sends an ICMP Time Exceeded message back to the source host.
  4. The traceroute records the IP address of the router that sent back the ICMP Time Exceeded message.
  5. The traceroute then sends another packet to the destination host with a TTL value of 2.
  6. Steps 2-4 are repeated until the traceroute routine reaches the destination host or until it exceeds the maximum number of hops.
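For reference, the same probing behavior can be reproduced from any Linux host with the standard traceroute utility. The exact options Looking Glass uses internally aren’t documented here, so treat this as an illustration only; -m caps the probe at fifteen hops, matching the limit mentioned in the next paragraph, and the default probe size on Linux is 60 bytes:

    $ traceroute -m 15 93.184.216.34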

Now let’s apply this command to the address we used earlier. The traceroute command will test our target IP address with 60-byte packets and a maximum of fifteen hops. Here’s what we get as output:

Apart from the header, each output line consists of the following information, labeled on the image below:

  1. IP and hostname: e.g., vrrp.gcore.lu (92.223.88.2)
  2. AS information: Provided in square brackets, e.g., [AS199524/AS202422]
  3. Latency: Time in milliseconds for the packet to reach the hop and return, e.g., 0.202 ms

In our example, the route traverses three different autonomous systems (AS):

  1. AS199524 (GCORE, LU): The first two hops are within this AS, representing the initial part of the route.
  2. Hops 3 and 4 fall under the private IPv4 address space (10.255.X.X), meaning the hops are within a private network. This could be an internal router or other networking device not directly accessible over the public Internet. Private addresses like this are often used for internal routing within an organization or service provider’s network.
  3. AS174 (COGENT, US): Hops 5 to 9 are within Cogent’s network.
  4. AS15133 (EDGECAST, US): The final hops are within EdgeCast’s network, where the destination IP resides.

Example Hop: ae-66.core1.bsb.edgecastcdn.net (152.195.233.131) [AS15133] 82.450 ms

To sum up, the traceroute command offers a comprehensive view of the packet journey across multiple autonomous systems. Providing latency data and AS information at each hop, it aids in identifying potential bottlenecks or issues in the network. This insight is invaluable for anyone looking to understand or troubleshoot a network path.

Conclusion

Looking Glass is a tool for pre-purchase network testing, covering node connectivity, response times, packet paths, and BGP routes. Its user-friendly interface requires just a few inputs—region, router, target IP address, and the diagnostic command of your choice—to deliver immediate results.

Based on your specific needs, such as connectivity speeds and location, and the insights gained from Looking Glass test results, you can choose between Gcore Virtual Dedicated Servers or Dedicated server hosting, both boasting outstanding connectivity.
