
Gcore Blog

Discover the latest industry trends, get ahead with cutting‑edge insights, and be in the know about the newest Gcore innovations.

Gcore recognized as a Leader in the 2025 GigaOm Radar for AI Infrastructure

We’re proud to share that Gcore has been named a Leader in the 2025 GigaOm Radar for AI Infrastructure, the only European provider to earn a top-tier spot. GigaOm’s rigorous evaluation highlights our leadership in platform capability and innovation, and our expertise in delivering secure, scalable AI infrastructure.

Inside the GigaOm Radar: what’s behind the Leader status

The GigaOm Radar report is a respected industry analysis that evaluates top vendors in critical technology spaces. In this year’s edition, GigaOm assessed 14 of the world’s leading AI infrastructure providers, measuring their strengths across key technical and business metrics. It ranks providers on factors such as scalability and performance, deployment flexibility, security and compliance, and interoperability.

Alongside the ranking, the report offers valuable insights into the evolving AI infrastructure landscape, including the rise of hybrid AI architectures, advances in accelerated computing, and the increasing adoption of edge deployment to bring AI closer to where data is generated. It also offers strategic takeaways for organizations seeking to build scalable, secure, and sovereign AI capabilities.

Why was Gcore named a top provider?

Gcore stood out and earned its Leader status in the following areas:

- A comprehensive AI platform offering Everywhere Inference and GPU Cloud solutions that support scalable AI from model development to production
- High performance powered by state-of-the-art NVIDIA A100, H100, H200, and GB200 GPUs and a global private network ensuring ultra-low latency
- An extensive model catalog with flexible deployment options across cloud, on-premises, hybrid, and edge environments, enabling tailored global AI solutions
- Extensive capacity of cutting-edge GPUs and technical support in Europe, supporting European sovereign AI initiatives

“Choosing Gcore AI is a strategic move for organizations prioritizing ultra-low latency, high performance, and flexible deployment options across cloud, on-premises, hybrid, and edge environments. Gcore’s global private network ensures low-latency processing for real-time AI applications, which is a key advantage for businesses with a global footprint.”
GigaOm Radar, 2025

Discover more about the AI infrastructure landscape

At Gcore, we’re dedicated to driving innovation in AI infrastructure. GPU Cloud and Everywhere Inference empower organizations to deploy AI efficiently and securely, on their terms.

If you’re planning your AI infrastructure roadmap or rethinking your current one, this report is a must-read. Explore the report to discover how Gcore can support high-performance AI at scale and help you stay ahead in an AI-driven world.

Download the full report

July 22, 2025 2 min read

Protecting networks at scale with AI security strategies

Network cyberattacks are no longer isolated incidents. They are a constant, relentless assault on network infrastructure, probing for vulnerabilities in routing, session handling, and authentication flows. With AI at their disposal, threat actors can move faster than ever, shifting tactics mid-attack to bypass static defenses.

Legacy systems, designed for simpler threats, cannot keep pace. Modern network security demands a new approach, combining real-time visibility, automated response, AI-driven adaptation, and decentralized protection to secure critical infrastructure without sacrificing speed or availability.

At Gcore, we believe security must move as fast as your network does. In this article, we explore how L3/L4 network security is evolving to meet new challenges and how AI strengthens defenses against today’s most advanced threats.

Smarter threat detection across complex network layers

Modern threats blend into legitimate traffic, using encrypted command-and-control, slow-drip API abuse, and DNS tunneling to evade detection. Attackers increasingly embed credential stuffing into regular login activity. Without deep flow analysis, these attempts bypass simple rate limits and avoid triggering alerts until major breaches occur.

Effective network defense today means inspection at Layer 3 and Layer 4, looking at:

- Traffic flow metadata (NetFlow, sFlow)
- SSL/TLS handshake anomalies
- DNS request irregularities
- Unexpected session persistence behaviors

Gcore Edge Security applies real-time traffic inspection across multiple layers, correlating flows and behaviors across routers, load balancers, proxies, and cloud edges. Even slight anomalies in NetFlow exports or unexpected east-west traffic inside a VPC can trigger early threat alerts.

By combining packet metadata analysis, flow telemetry, and historical modeling, Gcore helps organizations detect stealth attacks long before traditional security controls react.

Automated response to contain threats at network speed

Detection is only half the battle. Once an anomaly is identified, defenders must act within seconds to prevent damage.

Real-world example: DNS amplification attack

If a volumetric DNS amplification attack begins saturating a branch office’s upstream link, automated systems can:

- Apply ACL-based rate limits at the nearest edge router
- Filter malicious traffic upstream before WAN degradation
- Alert teams for manual inspection if thresholds escalate

Similarly, if lateral movement is detected inside a cloud deployment, dynamic firewall policies can isolate affected subnets before attackers pivot deeper.

Gcore’s network automation frameworks integrate real-time AI decision-making with response workflows, enabling selective throttling, forced reauthentication, or local isolation, all without disrupting legitimate users. Automation means threats are contained quickly, minimizing impact without crippling operations. A simplified sketch of this kind of flow-triggered response appears below.
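To make the flow-triggered response concrete, here is a minimal, illustrative Rust sketch: it aggregates per-source DNS response bytes over a sliding window and emits an ACL-style rate-limit action when a byte budget is exceeded. The thresholds, record format, and apply_rate_limit hook are hypothetical simplifications for this example, not Gcore’s production logic.

```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::time::{Duration, Instant};

/// One flow record, as might be parsed from a NetFlow/sFlow export.
struct FlowRecord {
    src: IpAddr,      // source of the DNS responses (potential amplifier)
    bytes: u64,       // bytes observed in this flow
    seen_at: Instant, // when the record was exported
}

/// Sliding-window byte counter per source address.
struct AmplificationDetector {
    window: Duration,
    threshold_bytes: u64,
    history: HashMap<IpAddr, Vec<(Instant, u64)>>,
}

impl AmplificationDetector {
    fn new(window: Duration, threshold_bytes: u64) -> Self {
        Self { window, threshold_bytes, history: HashMap::new() }
    }

    /// Ingest a flow record and return the source to rate-limit
    /// if it exceeded the byte budget inside the sliding window.
    fn observe(&mut self, rec: FlowRecord) -> Option<IpAddr> {
        let entries = self.history.entry(rec.src).or_default();
        entries.push((rec.seen_at, rec.bytes));
        // Drop samples that fell out of the window.
        if let Some(cutoff) = rec.seen_at.checked_sub(self.window) {
            entries.retain(|(t, _)| *t >= cutoff);
        }
        let total: u64 = entries.iter().map(|(_, b)| *b).sum();
        (total > self.threshold_bytes).then_some(rec.src)
    }
}

/// Hypothetical enforcement hook: a real deployment would push an
/// ACL or rate-limit rule to the nearest edge router here.
fn apply_rate_limit(src: IpAddr) {
    println!("rate-limiting DNS responses from {src}");
}

fn main() {
    // Flag any source sending more than 50 MB of DNS responses within 10 s.
    let mut detector = AmplificationDetector::new(Duration::from_secs(10), 50_000_000);
    let record = FlowRecord {
        src: "198.51.100.7".parse().unwrap(),
        bytes: 60_000_000,
        seen_at: Instant::now(),
    };
    if let Some(src) = detector.observe(record) {
        apply_rate_limit(src);
    }
}
```

In practice the detection signal would feed a policy engine rather than call the router directly, but the shape is the same: telemetry in, enforcement decision out, within seconds.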
Hardening DDoS mitigation against evolving attack patterns

DDoS attacks have moved beyond basic volumetric floods. Today, attackers combine multiple tactics in coordinated strikes. Common attack vectors in modern DDoS include the following:

- UDP floods targeting bandwidth exhaustion
- SSL handshake floods overwhelming load balancers
- HTTP floods simulating legitimate browser sessions
- Adaptive multi-vector shifts changing methods mid-attack

Real-world case study: ISP under hybrid DDoS attack

In recent years, ISPs and large enterprises have faced hybrid DDoS attacks blending hundreds of gigabits per second of L3/L4 UDP flood traffic with targeted SSL handshake floods. Attackers shift vectors every few minutes to overwhelm infrastructure at multiple layers simultaneously, which is exactly where static defenses fail.

Building resilient networks through self-healing capabilities

Even the best defenses can be breached. When that happens, resilient networks must recover automatically to maintain uptime.

If BGP route flapping is detected on a peering session, self-healing networks can:

- Suppress unstable prefixes
- Reroute traffic through backup transit providers
- Prevent packet loss and service degradation without manual intervention

Similarly, if a VPN concentrator faces resource exhaustion from targeted attack traffic, automated scaling can:

- Spin up additional concentrators
- Redistribute tunnel sessions dynamically
- Maintain stable access for remote users

Gcore’s infrastructure supports self-healing by combining telemetry analysis, automated failover, and rapid resource scaling across core and edge networks. This resilience prevents localized incidents from escalating into major outages.

Securing the edge against decentralized threats

The network perimeter is now everywhere. Branches, mobile endpoints, IoT devices, and multi-cloud services all represent potential entry points for attackers.

Real-world example: IoT malware infection at the branch

Malware-infected IoT devices at a branch office can initiate outbound C2 traffic during low-traffic periods. Without local inspection, this activity can go undetected until aggregated telemetry reaches the central SOC, often too late.

Modern edge security platforms deploy the following:

- Real-time traffic inspection at branch and edge routers
- Behavioral anomaly detection at local points of presence
- Automated enforcement policies blocking malicious flows immediately

Gcore’s edge nodes analyze flows and detect anomalies in near real time, enabling local containment before threats can propagate deeper into cloud or core systems. Decentralized defense shortens attacker dwell time, minimizes potential damage, and offloads pressure from centralized systems.

How Gcore is preparing networks for the next generation of threats

The threat landscape will only grow more complex. Attackers are investing in automation, AI, and adaptive tactics to stay one step ahead. Defending modern networks demands:

- Full-stack visibility from core to edge
- Adaptive defense that adjusts faster than attackers
- Automated recovery from disruption or compromise
- Decentralized detection and containment at every entry point

Gcore Edge Security delivers these capabilities, combining AI-enhanced traffic analysis, real-time mitigation, resilient failover systems, and edge-to-core defense. In a world where minutes of network downtime can cost millions, you can’t afford static defenses. We enable networks to protect critical infrastructure without sacrificing performance, agility, or resilience.

Move faster than attackers. Build AI-powered resilience into your network with Gcore.

Check out our docs to see how DDoS Protection protects your network

July 17, 2025 3 min read

Introducing Gcore for Startups: created for builders, by builders

Building a startup is tough. Every decision about your infrastructure can make or break your speed to market and burn rate. Your time, team, and budget are stretched thin. That’s why you need a partner that helps you scale without compromise.

At Gcore, we get it. We’ve been there ourselves, and we’ve helped thousands of engineering teams scale global applications under pressure. That’s why we created the Gcore Startups Program: to give early-stage founders the infrastructure, support, and pricing they actually need to launch and grow.

“At Gcore, we launched the Startups Program because we’ve been in their shoes. We know what it means to build under pressure, with limited resources, and big ambitions. We wanted to offer early-stage founders more than just short-term credits and fine print; our goal is to give them robust, long-term infrastructure they can rely on.”
Dmitry Maslennikov, Head of Gcore for Startups

What you get when you join

The program is open to startups across industries, whether you’re building in fintech, AI, gaming, media, or something entirely new. Here’s what founders receive:

- Startup-friendly pricing on Gcore’s cloud and edge services
- Cloud credits to help you get started without risk
- White-labeled dashboards to track usage across your team or customers
- Personalized onboarding and migration support
- Go-to-market resources to accelerate your launch

You also get direct access to all Gcore products, including Everywhere Inference, GPU Cloud, Managed Kubernetes, Object Storage, CDN, and security services. They’re available globally via our single, intuitive Gcore Customer Portal, and ready for your production workloads.

“When startups join the program, they get access to powerful cloud and edge infrastructure at startup-friendly pricing, personal migration support, white-labeled dashboards for tracking usage, and go-to-market resources. Everything we provide is tailored to the specific startup’s unique needs and designed to help them scale faster and smarter.”
Dmitry Maslennikov

Why startups are choosing Gcore

We understand that performance and flexibility are key for startups. From high-throughput AI inference to real-time media delivery, our infrastructure was designed to support demanding, distributed applications at scale.

But what sets us apart is how we work with founders. We don’t force startups into rigid plans or abstract SLAs. We build with you 24/7, because we know your hustle isn’t a 9–5. One recent success story: an AI startup that migrated from a major hyperscaler told us they cut their inference costs by over 40% and got actual human support for the first time.

“What truly sets us apart is our flexibility: we’re not a faceless hyperscaler. We tailor offers, support, and infrastructure to each startup’s stage and needs.”
Dmitry Maslennikov

We’re excited to support startups working on AI, machine learning, video, gaming, and real-time apps. Gcore for Startups is delivering serious value to founders in industries where performance, cost efficiency, and responsiveness make or break product experience.

Ready to scale smarter?

Apply today and get hands-on support from engineers who’ve been in your shoes. If you’re an early-stage startup with a working product and funding (pre-seed to Series A), we’ll review your application quickly and tailor infrastructure that matches your stage, stack, and goals.

To get started, head over to our Gcore for Startups page and book a demo.

Discover Gcore for Startups

July 9, 2025 2 min read

The cloud control gap: why EU companies are auditing jurisdiction in 2025

Europe’s cloud priorities are changing fast, and rightly so. With new regulations taking effect, concerns about jurisdictional control rising, and trust becoming a key differentiator, more companies are asking a simple question: who really controls our data?

For years, European companies have relied on global cloud giants headquartered outside the EU. These providers offered speed, scale, and a wide range of services. But 2025 is a different landscape.

Recent developments have shown that data location doesn’t always mean data protection. A service hosted in an EU data center may still be subject to laws from outside the EU, like the US CLOUD Act, which could require the provider to hand over customer data regardless of where it’s stored. For regulated industries, government contractors, and data-sensitive businesses, that’s a growing problem. Sovereignty today goes beyond compliance. It’s central to business trust, operational transparency, and long-term risk management.

Rising risks of non-EU cloud dependency

In 2025, the conversation has shifted from “Is this provider GDPR-compliant?” to “What happens if this provider is forced to act against our interests?” Here are three real concerns European companies now face:

- Foreign jurisdiction risk: cloud providers based outside Europe may be legally required to share customer data with foreign authorities, even if it’s stored in the EU.
- Operational disruption: geopolitical tensions or executive decisions abroad could affect service availability or create new barriers to access.
- Reputational and compliance exposure: customers and regulators increasingly expect companies to use providers aligned with European standards and legal protections.

European leaders are actively pushing for “full-stack European solutions” across cloud and AI infrastructure, citing sovereignty and legal clarity as top concerns. Leading European firms like Deutsche Telekom and Airbus have criticized proposals that would grant non-EU tech giants access to sensitive EU cloud data. This reinforces a broader industry consensus: jurisdictional control is a serious strategic issue for European businesses across industries. Relying on foreign cloud services introduces risks that no business can control, and that few can absorb.

What European companies must do next

European businesses can’t wait for disruption to happen. They must build resilience now, before potentially devastating problems occur:

- Audit the cloud stack to identify data locations and associated legal jurisdictions.
- Repatriate sensitive workloads to EU-based providers with clear legal accountability frameworks.
- Consider deploying hybrid or multi-cloud architectures, blending hyperscaler agility with EU sovereign assurance.

Over 80% of European firms using cloud infrastructure are actively exploring or migrating to sovereign solutions. This is a smart strategic maneuver in an increasingly complex and regulated cloud landscape.

Choosing a futureproof path

If your business depends on the cloud, sovereignty should be part of your planning. It’s not about political trends or buzzwords. It’s about control, continuity, and credibility. European cloud providers like Gcore support organizations in achieving key sovereignty milestones:

- EU legal jurisdiction over data
- Alignment with sectoral compliance requirements
- Resilience to legal and geopolitical disruption
- Trust with EU customers, partners, and regulators

In 2025, that’s a serious competitive edge, one that shows your customers you take their data protection seriously. A European provider is quickly becoming a non-negotiable for European businesses.

Want to explore what digital sovereignty looks like in practice?

Gcore’s infrastructure is fully self-owned, jurisdictionally transparent, and compliant with EU data laws. As a European provider, we understand the legal, operational, and reputational demands on EU businesses. Talk to us about sovereignty strategies for cloud, AI, network, and security that protect your data, your customers, and your business. We’re ready to provide a free, customized consultation to help your European business prepare for sovereignty challenges.

Auditing your cloud stack is the first step. Knowing what to look for in a provider comes next. Not all EU-based cloud providers guarantee sovereignty. Learn what to evaluate in infrastructure, ownership, and legal control to make the right decision.

Learn how to verify EU cloud control in our blog

July 2, 2025 3 min read

Outpacing cloud‑native threats: How to secure distributed workloads at scale

The cloud never stops. Neither do the threats. Every shift toward containers, microservices, and hybrid clouds creates new opportunities for innovation, and for attackers. Legacy security, built for static systems, crumbles under the speed, scale, and complexity of modern cloud-native environments. To survive, organizations need a new approach: one that’s dynamic, AI-driven, automated, and rooted in zero trust.

In this article, we break down the hidden risks of cloud-native architectures and show how intelligent, automated security can outpace threats, protect distributed workloads, and power secure growth at scale.

The challenges of cloud-native environments

Cloud-native architectures are designed for maximum flexibility and speed. Applications run in containers that can scale in seconds. Microservices split large applications into smaller, independent parts. Hybrid and multi-cloud deployments stretch workloads across public clouds, private clouds, and on-premises infrastructure.

But this agility comes at a cost. It expands the attack surface dramatically, and traditional perimeter-based security can’t keep up. Containers share host resources, which means that if one container is breached, attackers may gain access to others on the same system. Microservices rely heavily on APIs to communicate, and every exposed API is a potential attack vector. Hybrid cloud environments create inconsistent security controls across platforms, making gaps easier for attackers to exploit.

Legacy security tools, built for unchanging, centralized environments, lack the real-time visibility, scalability, and automated response needed to secure today’s dynamic systems. Organizations must rethink cloud security from the ground up, prioritizing speed, automation, and continuous monitoring.

Solution #1: AI-powered threat detection for smarter defenses

Modern threats evolve faster than any manual security process can track. Rule-based defenses simply can’t adapt fast enough. The solution? AI-driven threat detection.

Instead of relying on static rules, AI models monitor massive volumes of data in real time, spotting subtle anomalies that signal an attack before real damage is done. For example, an AI-based platform can detect an unauthorized process in a container trying to access confidential data, flag it as suspicious, and isolate the threat within milliseconds, before attackers can move laterally or exfiltrate information.

This proactive approach learns, adapts, and neutralizes new attack vectors before they become widespread. By continuously monitoring system behavior and automatically responding to abnormal activity, AI closes the gap between detection and action, which is critical in cloud-native, regulated environments where even milliseconds matter.

Solution #2: Zero trust as the new security baseline

“Trust but verify” no longer cuts it. In a cloud-native world, the new rule is “trust nothing, verify everything.” Zero-trust security assumes that threats exist both inside and outside the network perimeter. Every request, whether from a user, device, or application, must be authenticated, authorized, and validated.

In distributed architectures, zero trust isolates workloads, meaning that even if attackers breach one component, they can’t easily pivot across systems. Strict identity and access management controls limit the blast radius, minimizing potential damage. Combined with AI-driven monitoring, zero trust provides deep, continuous verification, blocking insider threats, compromised credentials, and advanced persistent threats before they escalate.

Solution #3: Automated security policies for scaling protection

Manual security management is impossible in dynamic environments where thousands of containers and microservices are spun up and down in real time. Automation is the way forward. AI-powered security policies can continuously analyze system behavior, detect deviations, and adjust defenses automatically, without human intervention.

This eliminates the lag between detection and response, shrinks the attack window, and drastically reduces the risk of human error. It also ensures consistent security enforcement across all environments: public cloud, private cloud, and on-premises. For example, if a system detects an unusual spike in API calls, an automated security policy can immediately apply rate limiting or restrict access, shutting down the threat without impacting overall performance.

Automation doesn’t just respond faster. It maintains resilience and operational continuity even in the face of complex, distributed threats. A simplified sketch of such a policy appears below.
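As an illustration of the API-spike example above, here is a minimal Rust sketch of an automated policy loop. An exponentially weighted moving average (EWMA) stands in for the behavioral model, and rate limiting is toggled when the observed request rate deviates far from the learned baseline. The constants and the enforce_rate_limit hook are hypothetical, chosen only for the example.

```rust
/// Simplified behavioral baseline: an EWMA of request rate plus a
/// deviation band. A real platform would use a learned model.
struct SpikePolicy {
    baseline: f64, // smoothed requests per second
    alpha: f64,    // EWMA smoothing factor
    factor: f64,   // how far above baseline counts as a spike
    limited: bool, // is rate limiting currently applied?
}

impl SpikePolicy {
    fn new(initial_rate: f64) -> Self {
        Self { baseline: initial_rate, alpha: 0.2, factor: 3.0, limited: false }
    }

    /// Feed the request rate observed in the latest interval and
    /// toggle enforcement when it crosses the deviation band.
    fn observe(&mut self, rate: f64) {
        let spike = rate > self.baseline * self.factor;
        if spike && !self.limited {
            self.limited = true;
            enforce_rate_limit(true);
        } else if !spike && self.limited {
            self.limited = false;
            enforce_rate_limit(false);
        }
        // Only fold calm traffic into the baseline, so a sustained
        // attack doesn't teach the model that the spike is normal.
        if !spike {
            self.baseline = self.alpha * rate + (1.0 - self.alpha) * self.baseline;
        }
    }
}

/// Hypothetical stand-in for pushing a policy change to an API gateway.
fn enforce_rate_limit(on: bool) {
    println!("rate limiting {}", if on { "enabled" } else { "disabled" });
}

fn main() {
    let mut policy = SpikePolicy::new(100.0);
    // Simulated per-interval request rates: normal, spike, recovery.
    for rate in [110.0, 95.0, 900.0, 950.0, 120.0] {
        policy.observe(rate);
    }
}
```

The design point is the closed loop: detection and enforcement live in the same automated cycle, so no human has to be in the path between an anomaly and the mitigation.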
Unifying security across cloud environments

Securing distributed workloads isn’t just about having smarter tools; it’s about making them work together. Different cloud platforms, technologies, and management protocols create fragmentation, opening cracks that attackers can exploit. Security gaps between systems are as dangerous as the threats themselves.

Modern cloud-native security demands a unified approach. Organizations need centralized platforms that pull real-time data from every endpoint, regardless of platform or location, and present it through a single management dashboard. This gives IT and security teams full, end-to-end visibility over threats, system health, and compliance posture. It also allows security policies to be deployed, updated, and enforced consistently across every environment, without relying on multiple, siloed tools.

Unification strengthens security, simplifies operations, and dramatically reduces overhead, which is critical for scaling securely at cloud-native speeds. That’s why at Gcore, our integrated suite of products includes security for cloud, network, and AI workloads, all managed in a single, intuitive interface.

Why choose Gcore for cloud-native security?

Securing cloud-native workloads requires more than legacy firewalls and patchwork solutions. It demands dynamic, intelligent protection that moves as fast as your business does. Gcore Edge Security delivers robust, AI-driven security built for the cloud-native era. By combining real-time AI threat detection, zero-trust enforcement, automated responses, and compliance-first design, Gcore security solutions protect distributed applications without slowing down development cycles.

Discover why WAAP is essential for cloud security in 2025

June 26, 2025 3 min read

Announcing a new AI-optimized data center in Southern Europe

Good news for businesses operating in Southern Europe! Our newest cloud regions in Sines, Portugal, give you faster, more local access to the infrastructure you need to run advanced AI, ML, and HPC workloads across the Iberian Peninsula and the wider region. Sines-2 marks the first region launched in partnership with Northern Data Group, signaling a new chapter in delivering powerful, workload-optimized infrastructure across Europe, and Sines-3 expands capacity and availability for the region.

Strategically positioned in Portugal, Sines-2 and Sines-3 enhance coverage in Southern Europe, providing a lower-latency option for customers operating in or targeting this region. With the explosive growth of AI, machine learning, and compute-intensive workloads, these new regions are designed to meet escalating demand with cutting-edge GPU and storage capabilities. You can activate Sines-2 and Sines-3 for GPU Cloud or Everywhere Inference today with just a few clicks.

Built for AI, designed to scale

Sines-2 and Sines-3 bring next-generation infrastructure features, purpose-built for today’s most demanding workloads:

- NVIDIA H100 GPUs: unlock the full potential of AI/ML training, high-performance computing (HPC), and rendering workloads with access to H100 GPUs.
- VAST NFS (file sharing protocol) support: benefit from scalable, high-throughput file storage ideal for data-intensive operations, research, and real-time AI workflows.
- IaaS portfolio: deploy Virtual Machines, manage storage, and scale infrastructure with the same consistency and reliability as in our flagship regions.

Organizations operating in Portugal, Spain, and nearby regions can now deploy workloads closer to end users, improving application performance. For finance, healthcare, public sector, and other organizations running sensitive workloads that must stay within a country or region, Sines-2 and Sines-3 are easy ways to access state-of-the-art GPUs with simplified compliance. Whether you’re building AI models, running simulations, or managing rendering pipelines, Sines-2 and Sines-3 offer the performance, capacity, availability, and proximity you need. And best of all, servers are available and ready to deploy today.

Run your AI workloads in Portugal today

With these new Sines regions and our partnership with Northern Data Group, we’re making it easier than ever for you to run AI workloads at scale. If you need speed, flexibility, and global reach, we’re ready to power your next AI breakthrough.

Unlock the power of Sines-2 and Sines-3 today

June 23, 2025 2 min read

GTC Europe 2025: watch Seva Vayner on European AI trends

Inference is becoming Europe’s core AI workload. Telcos are moving fast on low-latency infrastructure. Data sovereignty is shaping every deployment decision.

At GTC Europe, these trends were impossible to miss. The conversation has moved beyond experimentation to execution, with distinctly European priorities shaping the discussions. Gcore’s own Seva Vayner, Product Director of Edge Cloud and AI, shared his take on this year’s event during GTC. He sees a clear shift in what European enterprises are asking for and what the ecosystem is ready to deliver. Scroll on to watch the interview and see where AI in Europe is heading.

“It’s really a pleasure to see GTC in Europe”

After years of global AI strategy being shaped primarily by the US and China, Europe is carving its own path. Seva notes that this year’s GTC Europe wasn’t just a regional spin-off; it marked the emergence of a distinctly European voice in AI development.

“First of all, it’s really a pleasure to see that GTC in Europe happened, and that a lot of European companies came together to have the conversation and build the ecosystem.”

As Seva notes, the real excitement came from watching European players collaborate. The focus was less on following global trends and more on co-creating the region’s own AI trajectory.

“Inference workloads will grow significantly in Europe”

Inference was a throughline across nearly every session. As Seva points out, Europe is still at the early stages of adopting inference at scale, but the shift is happening fast.

“Europe is only just starting its journey into inference, but we already see the trend. Over the next 5 to 10 years, inference workloads will grow significantly. That’s why GTC Europe is becoming a permanent, yearly event.”

This growth won’t just be driven by startups. Enterprises, governments, and infrastructure providers are all waking up to the importance of real-time, regional inference capabilities.

“There’s real traction. Companies are more and more interested in how to deliver low-latency inference. In a few years, this will be one of the most crucial workloads for any GPU cloud in Europe.”

“Telcos are getting serious about AI”

One of the clearest signs of maturity at GTC Europe was that telcos and CSPs are actively looking to deploy AI. And they’re asking the hard questions about how to integrate it into their infrastructure at a vast scale.

“One of the most interesting things is how telcos are thinking about adopting AI workloads on their infrastructure to deliver low latency. Sovereignty is crucial, especially for customers looking to serve training or inference workloads inside their region. And also user experience: how can I get GPU capacity in clusters, or deliver inference in just a few clicks?”

This theme of fast, sovereign, self-service AI popped up again and again. Telcos and service providers want frictionless deployment and local control.

“Companies are struggling most with data”

While model deployment and infrastructure strategy took center stage, Seva reminds us that data processing and storage remain the bottleneck. Enterprises know they need to adopt AI, but they’re still navigating where and how to store and process the data that fuels it.

“One of the biggest struggles for end customers is the data: where it’s processed, where it’s stored, and what kind of capabilities are available. From a European perspective, we already see more and more companies looking for sovereign data privacy and simple, mature solutions for end users.”

That’s a familiar challenge for enterprises operating under GDPR, NIS2, and other compliance frameworks. The new wave of AI infrastructure has to be built for performance and for trust.

AI in Europe: responsible, scalable, and local

Seva’s key takeaway is that AI in Europe is no longer about catching up; it’s about doing it differently. The questions have changed from “Should we do AI?” to “How do we scale it responsibly, reliably, and locally?” From sovereign deployment to edge-first infrastructure, GTC Europe 2025 showed that inference is the foundation of how European businesses plan to run AI.

“The ecosystem is coming together,” explains Seva. “And the next five years will be crucial for defining how AI will work: not just in the cloud, but everywhere.”

If you’re looking to reduce latency, cut costs, and stay compliant while deploying AI in production, we invite you to download our free ebook, The inference optimization playbook.

Download our free inference optimization playbook

June 18, 2025 3 min read

Introducing FastEdge Triggers: real-time edge logic

When you’re building real-time applications, whether for streaming platforms, SaaS dashboards, or security-sensitive services, you need content that adapts on the fly. Blocking suspicious IPs, injecting personalized content, transforming media on the edge: these should be fast, scalable, and reliable. Until now, they weren’t.

Developers and technical teams often had to work across multiple departments to create brittle, hardcoded solutions. Each use case, like watermarking video or rewriting headers, required a custom integration. There was no easy way to run logic dynamically at the edge. That changes with FastEdge Triggers.

Real-time logic, built into the edge

FastEdge Triggers let you execute custom serverless logic at key moments in the HTTP lifecycle:

- on_request_headers
- on_request_body
- on_response_headers
- on_response_body

FastEdge is built on the proxy-wasm standard, making it easy to adapt existing proxy-wasm applications (e.g., for Envoy or Kong) for use with Gcore. These trigger types align directly with proxy-wasm conventions, meaning less friction for developers familiar with modern proxy architectures. This means that you can now:

- Authenticate users’ tokens, such as JWT
- Block access by IP, region, or user agent
- Inject CSS, HTML, or JavaScript into responses
- Transform images or convert markdown to HTML before delivery
- Add security tokens or watermarks to video content
- Rewrite or sanitize request headers and bodies

No backend round-trips. No manual routing. Just real-time, programmable edge behavior, backed by Gcore’s global infrastructure.

While FastEdge enables instant logic execution at the edge, response-stage triggers (on_response_headers and on_response_body) naturally depend on receiving data from the origin before acting. Even so, transformations happen at the edge, reducing backend load and improving overall efficiency. Our architecture means that FastEdge logic is executed in ultra-low-latency environments, tightly coupled with the CDN. Triggers can be layered across multiple stages of a request without performance degradation.

Built for developers

FastEdge Triggers were built to solve three core pain points for technical teams:

- Hard to scale: custom logic used to require bespoke, team-specific workarounds
- Hard to maintain: even single-team solutions became brittle without proper edge infrastructure
- Limited flexibility: legacy CDN logic couldn’t support complex, dynamic behavior

With FastEdge, developers have full control: no DevOps bottlenecks, no workarounds, no compromises. Logic runs at the edge, not your origin, minimizing backend exposure. FastEdge apps execute in isolated, sandboxed environments, reducing the risk of vulnerabilities that might otherwise be introduced when logic runs on central infrastructure.

How it works behind the scenes

Each FastEdge application is written in Rust or AssemblyScript and connected to the HTTP request lifecycle through Gcore’s configuration interface. A minimal example of what such an app can look like appears below.
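Because FastEdge follows the proxy-wasm standard, an on_request_headers trigger can be written against the standard proxy-wasm Rust SDK. The sketch below blocks requests by user agent; it is illustrative only, assumes the upstream proxy-wasm crate’s conventional API, and omits any Gcore-specific build or deployment configuration. The "badbot" rule is a hypothetical placeholder.

```rust
use proxy_wasm::traits::{Context, HttpContext};
use proxy_wasm::types::{Action, LogLevel};

// Register the HTTP context so the host calls our hooks for each request.
proxy_wasm::main! {{
    proxy_wasm::set_log_level(LogLevel::Info);
    proxy_wasm::set_http_context(|_, _| -> Box<dyn HttpContext> { Box::new(BlockBots) });
}}

struct BlockBots;

impl Context for BlockBots {}

impl HttpContext for BlockBots {
    // Runs at the on_request_headers stage, before cache or origin.
    fn on_http_request_headers(&mut self, _num_headers: usize, _end_of_stream: bool) -> Action {
        if let Some(ua) = self.get_http_request_header("user-agent") {
            if ua.to_ascii_lowercase().contains("badbot") { // hypothetical block rule
                // Respond directly from the edge; the origin is never contacted.
                self.send_http_response(403, vec![("content-type", "text/plain")], Some(b"Forbidden\n"));
                return Action::Pause; // stop further processing of this request
            }
        }
        Action::Continue
    }
}
```

Compiled to a Wasm target, an app of this shape is then attached to the matching trigger type in the configuration described next.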
Apps are linked to trigger types through the CDN resource settings page in the Gcore Customer Portal.

[Image: configuring FastEdge Triggers from the CDN resource settings screen in the Gcore Customer Portal]

Here’s what happens under the hood:

1. You assign a FastEdge app to a trigger point.
2. Our Core Proxy detects that trigger and automatically routes execution through your custom logic.
3. The result is returned before hitting cache or origin: modified, enriched, and secured.

This flow is deeply integrated with our CDN, delivering minimal latency with zero friction.

[Image: a sequence diagram showing how FastEdge Triggers work under the hood]

A real-life use case: markdown to HTML at the edge

Here’s a real-world example that shows how FastEdge Triggers can power multi-step content transformation without a single backend server. One customer wanted to serve Markdown-based documentation as styled HTML, without spinning up infrastructure. Using a FastEdge app written in Rust, they achieved just that:

- The app listens at three trigger points: on_request_headers, on_response_headers, and on_response_body
- It detects requests for .md files and converts them on the fly
- The HTML is served directly via CDN, no origin compute required

You can see it live here: README rendered, Terraform docs rendered.

This use case showcases FastEdge’s ability to orchestrate multi-stage logic at the edge: ideal for serverless documentation, lightweight rendering, or content transformation pipelines.

Ready to build smarter at the edge?

FastEdge Triggers are available now for all FastEdge customers. If you’re looking to modernize your edge logic, simplify architecture, and ship faster with fewer backend dependencies, FastEdge is built for you. Reach out to your account manager or contact us to activate FastEdge Triggers in your environment.

Try FastEdge Triggers

June 16, 2025 3 min read

Gcore and Orange Business launch innovation program piloting joint solution to deliver sovereign inference as a service

Gcore and Orange Business have kicked off a strategic co-innovation program with the mission to deliver a scalable, production-grade AI inference service that is sovereign by design. By combining Orange Business’ secure, trusted cloud infrastructure with Gcore’s AI inference private deployment service, the collaboration empowers European enterprises and public sector organizations to run inference workloads at scale, without compromising on latency, control, or compliance.

Gcore’s AI inference private deployment service is already live on Orange Business’ Cloud Avenue infrastructure. Selected enterprises across industries are actively testing it in real-world scenarios. These pilot customers are exploring how fast, secure, and compliant inference can accelerate their AI projects, cut deployment times, and reduce infrastructure overhead. The prototype will be demonstrated at NVIDIA GTC Paris, at the Taiga Cloud booth G26. Stop by any time to see it in action.

The inference supercycle is underway

By 2030, inference will comprise 70% of enterprise AI workloads. Telcos are well positioned to lead this shift due to their dense edge presence, licensed national data infrastructure, and long-standing trust relationships. Gcore’s inference solution provides a sovereign, edge-native inference layer. It enables users to serve real-time, GPU-intensive applications like agentic AI, trusted LLMs, computer vision, and predictive analytics, all while staying compliant with Europe’s evolving data and AI governance frameworks.

From complexity to three clicks

Enterprise AI doesn’t need to be hard. Deploying inference workloads at scale used to demand Kubernetes fluency, large MLOps teams, and costly trial and error. Now? It’s just three clicks:

- Pick a model: choose from NVIDIA NIMs, open source, or proprietary libraries.
- Choose a region: select one of Orange Business’ accredited EU data centers.
- Deploy: see your workloads go live in under 10 seconds.

Enterprises can launch inference projects faster, test ideas more quickly, and deliver production-ready AI services without spending months on ML plumbing. Explore our blog to watch a demo showing how enterprises can deploy inference workloads in just three clicks and ten seconds.

Sovereign by design

All model data, logs, and inference results are stored exclusively within Orange Business’ own data centers in France, Germany, Norway, and Sweden. Cross-border data transfer is opt-in only, helping ensure alignment with GDPR, sector-specific regulations, and the forthcoming EU AI Act. This platform is built for trust, transparency, and sovereignty by default. Customers maintain full control over their data, with governance baked into every layer of the deployment.

Performance without trade-offs

Gcore’s AI inference solution avoids the latency spikes, cold starts, and resource waste common in traditional cloud AI setups. Key design features include:

- Smart GPU routing: directs each request to the nearest in-region GPU, delivering real-time performance with sub-50ms latency.
- Pre-loaded models: reduces cold-start delays and improves response times.
- Secure multi-tenancy: isolates customer data while maximizing infrastructure efficiency.

The result is a production-ready inference platform optimized for both performance and compliance.

Powering the future of AI infrastructure

This partnership marks a step forward for Europe’s sovereign AI capabilities. It highlights how telcos can serve as the backbone of next-generation AI infrastructure, hosting, scaling, and securing workloads at the edge. With hundreds of edge POPs, trusted national networks, and deep ties across vertical industries, Orange Business is uniquely positioned to support a broad range of use cases, including real-time customer service AI, fraud detection, healthcare diagnostics, logistics automation, and public sector digital services.

What’s next: validating real-world performance

This phase of the Gcore and Orange Business program is focused on validating the solution through live customer deployments and performance benchmarks. Orange Business will gather feedback from early access customers to shape its future sovereign inference service offering. These insights will drive refinements and shape the roadmap ahead of a full commercial launch planned for later this year.

Gcore and Orange Business are committed to delivering a sovereign inference service that meets Europe’s highest standards for speed, simplicity, and trust. This co-innovation program lays the foundation for that future.

Ready to discover how Gcore and Orange Business can deliver sovereign inference as a service for your business?

Request a preview

June 10, 2025 3 min read

Why on-premises AI is making a comeback

In recent years, cloud AI infrastructure has soared in popularity. With its scalability and ease of deployment, it’s no surprise that organizations rushed to transfer their data to the cloud in a bid to become “cloud-first.” But now, the tide is turning. As AI workloads grow more complex and regulatory pressures increase, many companies are reconsidering their reliance on the cloud and turning back toward on-premises AI infrastructure.

Rather than doubling down on the cloud, organizations are diversifying: adopting multi-cloud models, sovereign cloud environments, and even hybrid or fully on-prem setups. The era of a single cloud provider handling everything is coming to an end. Why? Control, security, and performance are hard to find in the public cloud. Here’s why more businesses are bringing AI back in-house.

#1 Enhanced data security and control

Data security remains one of the most urgent concerns driving the return to on-prem infrastructure. For sensitive or high-priority workloads, common in sectors like finance, healthcare, and government, keeping data off the cloud is often non-negotiable. Cloud computing inherently increases risk by exposing data to shared environments, wider attack surfaces, and complex supply chains. Choosing a trusted cloud provider can mitigate some of those risks, but it can’t replace the peace of mind that comes from keeping sensitive data in-house.

With on-premises AI, organizations gain fine-grained access control. Encryption keys remain internal, and breach exposure shrinks dramatically. It’s also much easier to stay compliant with privacy laws when data never leaves your own secure perimeter. For industries where trust and confidentiality are everything, on-prem solutions offer full visibility into where and how data is stored and processed.

#2 Performance enhancement and latency reduction

Latency matters, especially in AI. On-premises AI systems excel in environments that require real-time performance and heavy compute loads. Processing data locally avoids the physical delays caused by transferring it across the internet to a cloud data center. By eliminating long-haul network hops, companies get near-instant access to computing resources. They also get to fine-tune their internal networks, using private fiber, low-hop switching, and other low-latency optimizations that cloud customers can’t control.

Unlike multi-tenant cloud platforms, on-prem resources aren’t shared, which means consistently low, predictable latency. This is vital for use cases where milliseconds, or even microseconds, make a difference: autonomous vehicles, real-time analytics, robotic control systems, and high-speed trading. Fast feedback loops and localized processing enable better outcomes, tighter control, and faster decision-making at the edge.

#3 Regulatory compliance and data sovereignty

Around the world, data privacy regulations are tightening. For most organizations, compliance isn’t optional. On-premises infrastructure helps keep data safely inside the organization’s network. This supports data sovereignty, ensuring that sensitive information remains subject only to local laws, not the policies of another country’s cloud provider.

It’s also a powerful hedge against geopolitical instability. While hyperscalers operate globally, they’re always headquartered somewhere. That makes their infrastructure vulnerable to political shifts, sanctions, or changes in international data law.
Governments may require them to restrict access, share data, or cut off services entirely, especially to organizations in sanctioned or adversarial jurisdictions. Businesses relying on these providers risk disruption when regulations change. On-premises infrastructure, by contrast, offers reliable continuity and greater control, especially in uncertain times.

#4 Cost control and operational benefits

Cloud pricing may look flexible, but costs can escalate quickly. Data transfers, storage, and compute spikes all add up fast. In contrast, on-premises infrastructure provides a predictable total cost of ownership (TCO). Although upfront CapEx is higher, OpEx remains more stable over time. Organizations can invest in high-performance hardware tailored to their specific needs and amortize those costs across years. That means no surprise bills, no sudden price hikes, and no dependence on vendor pricing models.

Of course, running on-prem infrastructure comes with its own challenges. It demands specialized teams for deployment, maintenance, and support. These experts are costly to recruit and retain, but they’re critical to ensure uptime, security, and performance. Still, for companies with relatively stable compute and storage needs, the long-term savings often outweigh the initial setup effort. On-prem also integrates more smoothly into existing IT workflows, without the need for internet access or additional network setup: another operational bonus.

#5 Proactive threat detection and automated responses

On-premises AI sometimes enables smarter, more customized security. Advanced platforms can continuously analyze live data streams using machine learning to detect anomalies and predict threats. When something suspicious is flagged, the system can respond instantly by quarantining data, blocking traffic, and alerting security teams. That kind of automation is essential for minimizing damage and downtime.

With full infrastructure control, organizations can deploy bespoke monitoring systems that align with their threat models. Deep packet inspection, real-time anomaly detection, and behavioral analytics can be easier to configure and maintain on-prem than in shared cloud environments. These systems can also work seamlessly with WAAP and DDoS tools to detect and neutralize threats before they spread. The key is flexibility: whether on-prem or cloud-based, AI-driven security should adapt to your architecture and threat landscape, not the other way around. End-to-end visibility can give security teams a clearer picture and faster response options than generic, one-size-fits-all public cloud security tools. A simplified sketch of this detect-and-respond loop appears below.
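To illustrate the detect-and-respond loop, here is a minimal Rust sketch: a running mean and variance (Welford’s algorithm) stand in for the ML model over a live metric stream, and a z-score threshold triggers a hypothetical quarantine action. The device name, thresholds, and quarantine hook are invented for the example; this is the pattern, not production detection logic.

```rust
/// Streaming mean/variance via Welford's algorithm: a lightweight
/// stand-in for an ML anomaly model over a live metric stream.
struct AnomalyDetector {
    count: u64,
    mean: f64,
    m2: f64, // running sum of squared deviations
    z_threshold: f64,
}

impl AnomalyDetector {
    fn new(z_threshold: f64) -> Self {
        Self { count: 0, mean: 0.0, m2: 0.0, z_threshold }
    }

    /// Returns true if the sample deviates from the learned baseline
    /// by more than z_threshold standard deviations.
    fn is_anomalous(&mut self, sample: f64) -> bool {
        let anomalous = if self.count >= 30 {
            let var = self.m2 / (self.count - 1) as f64;
            let std = var.sqrt().max(f64::EPSILON);
            ((sample - self.mean) / std).abs() > self.z_threshold
        } else {
            false // not enough history yet to judge
        };
        // Fold the sample into the baseline (Welford update).
        self.count += 1;
        let delta = sample - self.mean;
        self.mean += delta / self.count as f64;
        self.m2 += delta * (sample - self.mean);
        anomalous
    }
}

/// Hypothetical response hook: quarantine a host and alert the team.
fn quarantine_and_alert(host: &str, metric: f64) {
    println!("quarantining {host}: anomalous outbound rate {metric}");
}

fn main() {
    let mut detector = AnomalyDetector::new(4.0);
    // Simulated per-interval outbound byte counts from one device,
    // with a C2-like burst injected at interval 45.
    for i in 0..50u32 {
        let sample = if i == 45 { 250_000.0 } else { 1_000.0 };
        if detector.is_anomalous(sample) {
            quarantine_and_alert("iot-device-17", sample);
        }
    }
}
```

Running on-prem, a loop like this can act on local telemetry immediately instead of waiting for aggregated data to reach a central SOC.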
How to combine on-premises control with cloud scalability

Let’s be clear: on-premises AI isn’t perfect. It demands upfront investment. It requires skilled personnel to deploy and manage systems. And integrating AI into legacy environments takes thoughtful planning. But today’s tools are helping bridge those gaps. Modern platforms reduce the need for constant manual intervention. They support real-time updates to threat models and detection logic. As a result, security teams can spend more time on strategy and less on maintenance.

Meanwhile, the cloud still plays an important role. It offers faster access to new tools, software updates, and next-gen GPU hardware. That’s why many organizations are opting for a hybrid model. Our recommendation: keep your sensitive, high-priority workloads on-prem, and use the cloud for elastic scale and innovation. Together, they deliver the best of both worlds: performance, control, compliance, and flexibility.

Secure your digital infrastructure with Gcore on-premises AI inference

Securing sensitive data and managing high-demand workloads requires a level of control, performance, and predictability that only on-premises AI infrastructure delivers. Gcore Everywhere Inference Private Deployment makes it easier than ever to bring powerful serverless AI inference capabilities directly into your physical environment. Designed for scalable global performance, Everywhere Inference enables robust and secure multi-tenant AI inference deployments across on-prem and cloud environments, helping you meet data sovereignty requirements, reduce latency, and streamline deployment.

Talk to us about your on-prem AI plans

June 9, 2025 4 min read

3 clicks, 10 seconds: what real serverless AI inference should look like

Deploying a trained AI model should be the easiest part of the AI lifecycle. After the heavy lifting of data collection, training, and optimization, pushing a model into production is where the rubber hits the road: the business expects to see the benefits of invested time and resources. In reality, many AI projects fail in production because of poor performance stemming from suboptimal infrastructure conditions.

Broadly speaking, developers can take two paths when deploying inference: DIY, which is time- and resource-consuming and requires domain expertise from several teams within the business, or the ever-so-popular “serverless inference” solution. The latter is supposed to simplify the task at hand and deliver productivity, cutting effort down to seconds, not hours. Yet most platforms offering “serverless” AI inference still feel anything but effortless. They require containers, configs, and custom scripts. They bury users in infrastructure decisions. And they often assume your data scientists are also DevOps engineers. It’s a far cry from what serverless was meant to be.

At Gcore, we believe real serverless inference means this: three clicks and ten seconds to deploy a model. That’s not a tagline; it’s the experience we built. And it’s what infrastructure leaders like Mirantis are now enabling for enterprises through partnerships with Gcore.

Why deployment UX matters more than you think

Serverless inference isn’t just a backend architecture choice. It’s a business enabler, a go-to-market accelerator, an ROI optimizer, and a technology democratizer. Poorly executed, it’s a blocker.

The reality is that inference workloads are a key point of interface between your AI product or service and the customer. If deployment is clunky, you struggle to keep up with demand. If provisioning takes too long, latency spikes, performance is inconsistent, and ultimately your service doesn’t scale. And if the user experience is unclear or inconsistent, customers end up frustrated or, worse, churn.

“Developers and data scientists don’t want to manage infrastructure. They want to bring a model and get results without becoming cloud operators in the process.”
Dom Wilde, SVP Marketing, Mirantis

That’s why deployment UX is no longer a nice-to-have. It’s the core of your product.

The benchmark: 3 clicks, 10 seconds

We built Gcore Everywhere Inference to remove every unnecessary step between uploading a model and running it in production. That includes GPU provisioning, routing, scaling, isolation, and endpoint generation, all handled behind the scenes. The result is what we believe should be the default:

- Upload a model
- Confirm deployment parameters
- Click deploy

And within ten seconds, you’re serving live inference. For platform teams supporting AI workloads, this isn’t just a better workflow. It’s a transformation.

“With Gcore, our customers can deliver not just self-service infrastructure but also inference as a product. End users can deploy models in seconds, and customers don’t have to micromanage the backend to support that.”
Dom Wilde, Mirantis

Simple frontend, powerful backend

It’s worth saying: simplifying the frontend doesn’t mean weakening the backend. Gcore’s platform is built for scale and performance, offering the following:

- Multi-tenant GPU isolation
- Smart routing based on location and load
- Auto-scaling based on demand
- A unified API and UI for both automation and accessibility

What makes this meaningful isn’t just the tech; it’s the way it vanishes behind the scenes. With Gcore, Mirantis customers can deliver low-latency inference, maximize GPU efficiency, and meet data privacy requirements without touching low-level infrastructure.

“Many enterprises and cloud customers worry about underutilized GPUs. Now, every cycle is optimized. The platform handles the complexity so our customers can focus on building value.”
Dom Wilde, Mirantis

If it’s not 3 clicks and 10 seconds, it’s not really serverless

There’s a growing gap between what serverless inference promises and what most platforms deliver. Many cloud providers are focused on raw compute or orchestration but overlook the deployment layer. That’s a mistake, because when it comes to customer experience, ease of deployment is the product.

Mirantis saw that early on and partnered with Gcore to bring inference-as-a-service to CSP and enterprise customers, fast. Now, customers can launch new offerings more quickly, reduce operational overhead, and improve the user experience with a simple, elegant deployment path.

Redefine serverless AI with Gcore

If it takes a config file, a container, and a support ticket to deploy a model, it’s not serverless; it’s server-less-ish. With Gcore Everywhere Inference, we’ve set a new benchmark: three clicks and ten seconds to deploy AI. And our model catalog offers a variety of popular models so you can get started right away.

Whether you’re frustrated with slow, inefficient model deployments or looking for the most effective way to start using AI for your company, you need Gcore Everywhere Inference. Give our experts a call to discover how we can simplify your AI so you can focus on scaling and business logic.

Let’s talk about your AI project

June 5, 2025 3 min read

Run AI inference faster, smarter, and at scale

Training your AI models is only the beginning. The real challenge lies in running them efficiently, securely, and at scale. AI and reality meet in inference: the continuous process of generating predictions in real time. It is the driving force behind virtual assistants, fraud detection, product recommendations, and everything in between.

Unlike training, inference doesn’t happen once; it runs continuously. This means that inference is your operational engine rather than just technical infrastructure. And if you don’t manage it well, you’re looking at skyrocketing costs, compliance risks, and frustrating performance bottlenecks. That’s why it’s critical to rethink where and how inference runs in your infrastructure.

The hidden cost of AI inference

While training large models often dominates the AI conversation, it’s inference that carries the greatest operational burden. As more models move into production, teams are discovering that traditional, centralized infrastructure isn’t built to support inference at scale. This is particularly evident when:

- Real-time performance is critical to user experience
- Regulatory frameworks require region-specific data processing
- Compute demand fluctuates unpredictably across time zones and applications

If you don’t have a clear plan to manage inference, the performance and impact of your AI initiatives could be undermined. You risk increasing cloud costs, adding latency, and falling out of compliance.

The solution: optimize where and how you run inference

Optimizing AI inference isn’t just about adding more infrastructure; it’s about running models smarter and more strategically. In our new white paper, “How to Optimize AI Inference for Cost, Speed, and Compliance”, we break it down into three key decisions:

1. Choose the right stage of the AI lifecycle. Not every workload needs a massive training run. Inference is where value is delivered, so focus your resources where they matter most. Learn when to use pretrained models, when to fine-tune, and when simple inference will do the job.

2. Decide where your inference should run. From the public cloud to on-prem and edge locations, where your model runs impacts everything from latency to compliance. We show why edge inference is critical for regulated, real-time use cases, and how to deploy it efficiently.

3. Match your model and infrastructure to the task. Bigger models aren’t always better. We cover how to choose the right model size and infrastructure setup to reduce costs, maintain performance, and meet privacy and security requirements.

Who should read it

If you’re responsible for turning AI from proof of concept into production, this guide is for you. Inference is where your choices immediately impact performance, cost, and customer experience, whether you’re managing infrastructure, developing models, or building AI-powered solutions. This white paper will help you cut through complexity and focus on what matters most: running smarter, faster, and more scalable inference. It’s especially relevant if you’re:

- A machine learning engineer or AI architect deploying models across environments
- A product manager introducing real-time AI features
- A technical leader or decision-maker managing compute, cloud spend, or compliance
- Or simply trying to scale AI without sacrificing control

If inference is the next big challenge on your roadmap, this white paper is where to start.

Scale AI inference seamlessly with Gcore

Efficient, scalable inference is critical to making AI work in production. Whether you’re optimizing for performance, cost, or compliance, you need infrastructure that adapts to real-world demand. Gcore Everywhere Inference brings your models closer to users and data sources, reducing latency, minimizing costs, and supporting region-specific deployments.

Our latest white paper, “How to Optimize AI Inference for Cost, Speed, and Compliance”, breaks down the strategies and technologies that make this possible. From smart model selection to edge deployment and dynamic scaling, you’ll learn how to build an inference pipeline that delivers at scale. Ready to make AI inference faster, smarter, and easier to manage?

Download the white paper

June 2, 2025 2 min read
