
What Is a Smurf DDoS Attack?

  • By Gcore
  • July 4, 2023
  • 7 min read

Smurf attacks are a type of DDoS (distributed denial-of-service) attack that overwhelms a victim’s server with Internet Control Message Protocol (ICMP) traffic, rendering it inoperable. In this article, you’ll gain a thorough understanding of their characteristics, repercussions, and the sequence in which they are carried out. By the end of this article, you’ll be equipped with actionable knowledge to mitigate Smurf DDoS attacks with a combination of strategy, tools, and best practices.

What Are Smurf Attacks?

Smurf DDoS attacks are a specific type of online attack where a server is flooded with spoofed requests for information (called echo requests) and the replies they generate, causing it to slow down or even crash.

How ICMP Echo Requests Work

The Internet Control Message Protocol (ICMP) is used by network devices to report errors and exchange basic diagnostic information. For example, ICMP can signal when a requested service is not available or when a router could not be reached. When a device receives an ICMP echo request, it sends an echo reply back to the source IP address. This echo mechanism is fundamental to checking network connectivity.
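To make the echo mechanism concrete, here is a minimal sketch that sends a single echo request and waits for the reply. It assumes the Python Scapy library (not something the article itself relies on) and a host you control at the placeholder documentation address 192.0.2.10; crafting raw packets typically requires root or administrator privileges.

    from scapy.all import IP, ICMP, sr1  # Scapy assumed: pip install scapy

    # Build an ICMP echo request (type 8) addressed to a host you control.
    # 192.0.2.10 is a placeholder documentation address.
    echo_request = IP(dst="192.0.2.10") / ICMP(type="echo-request")

    # sr1() sends the packet and returns the first matching reply, if any.
    echo_reply = sr1(echo_request, timeout=2, verbose=False)

    if echo_reply is not None and echo_reply[ICMP].type == 0:  # type 0 = echo reply
        print(f"Echo reply received from {echo_reply[IP].src}")
    else:
        print("No echo reply (host down, ICMP filtered, or timeout)")

In a healthy exchange, every request produces exactly one reply to the requester, which is precisely the behavior a Smurf attack abuses.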

How ICMP Echo Requests Are Used in Smurf Attacks

In the context of a Smurf attack, the ICMP echo functionality is exploited in a malevolent manner. The attacker sends a large number of ICMP echo requests, potentially in the thousands, to the broadcast address of an intermediary network. These requests are not genuine; instead, they are “spoofed” to appear as if they were sent from the victim’s IP address. Because every device that receives the broadcast answers the request with an echo reply addressed to that spoofed source, the victim’s server is soon overwhelmed by the flood of replies it receives. The result is a volume of traffic that can cause the server to crash under the pressure, meaning that it can no longer respond to legitimate requests.
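The packet-level change that turns an ordinary ping into a Smurf request is small. The sketch below (Scapy again, with placeholder documentation addresses) only constructs and prints the forged packet rather than sending it; transmitting spoofed traffic on networks you don’t own is both harmful and, in most jurisdictions, illegal.

    from scapy.all import IP, ICMP  # Scapy assumed: pip install scapy

    VICTIM_IP = "203.0.113.50"      # forged source: all echo replies are directed here
    BROADCAST_ADDR = "192.0.2.255"  # directed-broadcast address of the amplifying network

    # A single spoofed echo request: the source is the victim, the destination is a
    # broadcast address, so every host that answers the broadcast replies to the victim.
    spoofed_request = IP(src=VICTIM_IP, dst=BROADCAST_ADDR) / ICMP(type="echo-request")

    print(spoofed_request.summary())  # e.g. "IP / ICMP 203.0.113.50 > 192.0.2.255 echo-request 0"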

How Do Smurf DDoS Attacks Work?

A Smurf attack can be broken down into five main steps.

  1. Target identification. The first step of a Smurf attack involves an attacker identifying a target. The main goal of the attacker in this step is to discover the target’s IP address.
  2. Spoofing. Attackers leverage Smurf malware to create a spoofed ICMP echo request. As we’ve established, this means that the spoofed echo request’s source IP address is the same as the target’s IP address.
  3. ICMP echo request deployment. In this step, the attacker sends the spoofed ICMP echo request packets to the broadcast address of an intermediary network. Every device on that network that receives the broadcast sends back an ICMP echo reply, addressed to the spoofed source: the target.
  4. ICMP echo reply flood. The influx of ICMP echo reply packets from those devices floods the target server, which results in legitimate users being unable to access it.
  5. Server overload. Every server has a breaking point. The objective of an attacker is to exceed this breaking point. When the number of ICMP echo replies surpasses a server’s capabilities, the resulting overload can cause significant harm.

While all Smurf DDoS attacks use this basic sequence of events, there are two different kinds of Smurf attacks that differ slightly.

What Are the Different Kinds of Smurf DDoS Attacks?

Smurf attacks are broadly categorized into basic and advanced attacks. Both basic and advanced Smurf attacks begin with the same fundamental process. What differentiates the two is a shift in source configuration that can vastly increase the scope of an attack. Let’s dive a little deeper.

  • Basic Smurf attack. These attacks involve attackers overwhelming a targeted system with a vast number of ICMP echo requests. The source IP of each echo request is set to the IP of the targeted system, so every device on the broadcast network is forced to reply to the target. This continuous flood of ICMP echo requests and replies renders the system inoperable.
  • Advanced Smurf attack. These attacks begin the same way as basic Smurf attacks. However, in advanced cases, attackers change the source configuration so that replies reach a broader set of third-party victims. This vastly increases the attack surface and the number of entities in the line of fire, enabling large-scale damage to networks well beyond the original target.

How Are Smurfing Attacks Different from Other DDoS Attacks?

There are dozens of DDoS attack techniques that can cripple a business at a network, connection, or application level. In this section, we will highlight and differentiate two DDoS techniques that are most similar to Smurf attacks: Fraggle attacks and ping floods.

  • Smurf attacks and Fraggle attacks. Fraggle attacks share the same objective as Smurf attacks: they are designed to overwhelm and collapse a server. The key difference is that Smurf attacks use ICMP echo requests, whereas Fraggle attacks use spoofed UDP (User Datagram Protocol) packets, typically aimed at the echo and chargen services (a short packet sketch follows this list). UDP is a connectionless communications protocol that speeds up the transfer of data between a sender and a receiver.
  • Smurf attacks and ping floods. A standard ping flood works much like a Smurf attack in that both rely on ICMP echo requests. The fundamental difference is that a ping flood sends its echo requests directly to the victim without spoofing or amplification, whereas Smurf attacks are typically initiated with malware and amplified through broadcast replies. Because traditional ping floods aren’t amplification attacks, the damage they cause is typically less than that of a Smurf attack.
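For illustration, the sketch below (Scapy assumed, placeholder addresses, construction only, no sending) shows that a Fraggle packet follows the same spoofed-source, broadcast-destination pattern as the Smurf request shown earlier, just carried over UDP to the echo service on port 7 instead of ICMP.

    from scapy.all import IP, UDP  # Scapy assumed: pip install scapy

    # Same spoofing pattern as the Smurf request, but as a UDP datagram to port 7 (echo);
    # hosts running the echo service answer the victim over UDP instead of ICMP.
    fraggle_packet = (IP(src="203.0.113.50", dst="192.0.2.255")
                      / UDP(dport=7) / b"ping")

    print(fraggle_packet.summary())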

What Are the Key Indicators that a Smurf DDoS Attack Is Happening?

General DDoS attacks and Smurf DDoS attacks have much in common, so it can be challenging to differentiate them. In general, the signs of a DDoS attack will be present. However, there are three key differences when it comes to Smurf DDoS (a simple packet-level detection sketch follows the list):

  • Generic DDoS attacks use a variety of methods, including TCP/IP-based, volume-based, and application-layer techniques.
  • Smurf attacks always involve ICMP echo requests.
  • Smurf attacks target how a network manages ICMP requests and responses.
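The third point is observable on the wire: during a Smurf flood, the victim receives large volumes of ICMP echo replies it never requested. The sketch below is a rough detection illustration, again assuming Scapy; the window and threshold are arbitrary placeholders to be tuned against your normal traffic baseline.

    from scapy.all import sniff, ICMP  # Scapy assumed; packet capture needs root privileges

    WINDOW_SECONDS = 10
    REPLY_THRESHOLD = 1000  # placeholder threshold; tune to your baseline ICMP traffic

    # Capture ICMP traffic for a short window and count echo replies (type 0).
    packets = sniff(filter="icmp", timeout=WINDOW_SECONDS)
    echo_replies = [p for p in packets if ICMP in p and p[ICMP].type == 0]

    rate = len(echo_replies) / WINDOW_SECONDS
    print(f"{len(echo_replies)} echo replies in {WINDOW_SECONDS}s (~{rate:.0f}/s)")

    if len(echo_replies) > REPLY_THRESHOLD:
        print("Surge of unsolicited echo replies: possible Smurf attack in progress")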

What Are the Repercussions of Smurf DDoS Attacks?

Smurf attacks fall under the broader category of DDoS attacks and, as such, have a similar impact to DDoS attacks in general:

  • Service disruption. The immediate consequence of a Smurf attack is a service disruption. Once the target system is overwhelmed with traffic, it leads to a denial of service, making the target website or server inaccessible to legitimate users.
  • Resource consumption. Responding to the significant traffic volumes generated by a Smurf DDoS attack demands considerable resources and drives up costs through the immediate need for additional bandwidth.
  • Performance degradation. Even if a threat actor can’t completely shut down the target system, the organization may still have to contend with significant performance issues. For example, the target system can quickly become unreliable or slow.
  • Inter-device communication challenges. Enterprise servers have many devices connected to them, and a Smurf attack makes communication between those devices difficult. Organizations will struggle to exchange information from one device to another.

The broader impact of Smurf attacks and other DDoS attacks is as follows:

  • User/customer complaints. An overwhelmed server typically results in web pages that can’t be accessed. Companies that are victims of Smurf attacks can expect a large number of complaints from users.
  • Brand damage. Modern businesses are expected to be digital experts. Effects from a Smurf attack such as unresponsive websites, slow loading speeds, and the unavailability of critical digital services can severely and permanently dent a brand’s reputation.
  • Compromised data. Smurf DDoS attacks can compromise an enterprise’s most valuable asset, data. The remediation of Smurf attacks can keep IT teams occupied and leave IT infrastructure vulnerable. This provides a window of opportunity for hackers to steal sensitive data and further handicap organizations.
  • Legal complications. Smurf attacks can have significant legal implications that include noncompliance with region- and industry-specific regulations, SLA breaches, and other fines that may stem from an enterprise system being unavailable.
  • Reduced revenue. Every hour of server uptime earns an organization money; every hour of downtime and disruption costs it. A sustained Smurf attack can therefore be severely detrimental in both the short term and the long term.

We’ve highlighted the different types, key indicators, and most significant repercussions of Smurf attacks. It’s time to investigate how to mitigate them.

How To Mitigate Smurf DDoS Attacks

Smurf attacks can be mitigated with security strategies, robust tools and technologies, best practices, and the guidance of a DDoS protection expert like Gcore.

Here are the most powerful ways to mitigate Smurf DDoS attacks:

  1. Enforce a robust security strategy. Smurf and other DDoS attacks are conducted with meticulous precision. Therefore, companies can’t expect to defend themselves from Smurf attacks with isolated measures that lack strategy. Protection against Smurf attacks requires a holistic security strategy that acknowledges the intricacies of a specific business including its objectives, sector, region, IT architecture, attack surface, and potential attackers.
  2. Commission anti-virus, anti-malware, and firewalls. Smurf attacks are executed with malware called DDoS.Smurf. Commissioning and regularly updating well-reputed and modern anti-virus and anti-malware solutions are strong ways for companies to defend themselves against Smurf DDoS attacks. Network firewalls should be installed to monitor incoming and outgoing server traffic to detect anomalies.
  3. Increase server redundancy. Server redundancy involves setting up backup or mirror servers to support a primary server. The logic behind increasing server redundancy is that web services can be replaced with minimal downtime in case of a disaster such as a Smurf DDoS attack. Backup servers should be spread across geographical environments (nationally or internationally) and feature different networks.
  4. Ensure high bandwidth. DDoS attack bandwidths are increasing with every passing year. Businesses should overprovision bandwidth to ensure that any dramatic increase in network traffic due to a Smurf attack can be managed without causing long-lasting system and business trauma.
  5. Disable IP-directed broadcasts. It’s important to identify all routers that are connected to an enterprise server and disable IP-directed broadcasts. This critical configuration ensures that there’s no option for attack amplification. Although disabling IP-directed broadcasts doesn’t prevent attackers from attempting an attack, it can significantly reduce the damage and enable quick remediation. A quick way to check your own network for this exposure is sketched after this list.
  6. Minimize ICMP traffic. ICMP echo requests and replies are the key ingredients of a Smurf attack, so disabling ICMP might seem like a logical option. In practice, doing so would do more harm than good and could trigger cascading network failures. ICMP traffic needs to be optimized, not eliminated: we highly recommend configuring network devices to rate-limit and filter incoming and outgoing ICMP packets rather than blocking them outright.
  7. Choose an expert provider. Gcore’s global DDoS protection service is an all-in-one solution to protect your websites, apps, and services. Our proven, best-in-class product offers protection against massive, complex DDoS attacks—including Smurf DDoS.
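To make point 5 testable, here is a hedged sketch (Scapy assumed; the broadcast address is a placeholder for your own subnet) that sends one echo request to your network’s directed-broadcast address from inside the network and counts the hosts that answer. If more than a handful reply, directed broadcasts are still enabled somewhere and the network could be abused as a Smurf amplifier. Run it only against networks you administer.

    from scapy.all import IP, ICMP, sr  # Scapy assumed; run from inside your own network

    BROADCAST_ADDR = "192.0.2.255"  # placeholder: your subnet's directed-broadcast address

    # Send one echo request to the broadcast address and keep collecting replies
    # for a few seconds (multi=True accepts more than one answer per probe).
    answered, _ = sr(IP(dst=BROADCAST_ADDR) / ICMP(type="echo-request"),
                     multi=True, timeout=5, verbose=False)

    responders = {reply[IP].src for _, reply in answered}
    print(f"{len(responders)} host(s) replied to the broadcast ping")

    if len(responders) > 1:
        print("Multiple hosts answer broadcast pings: potential Smurf amplification risk")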

Conclusion

Smurf DDoS attacks, which involve attackers overwhelming target servers with ICMP echo requests and replies, are becoming increasingly common. Basic and advanced Smurf attacks result in slow servers, communication breakdown of connected devices, and customer complaints. In the long run, companies can suffer reputational damage, data theft, legal complications, and a loss of revenue due to Smurf attacks. The good news is that experts like Gcore provide world-class security solutions to holistically protect businesses from modern threats like Smurf DDoS attacks.

Gcore’s global DDoS protection service tackles all types of multi-vector DDoS threats at the network, connection, and application levels. Gcore’s security platform features scrubbing centers worldwide that are connected to different servers and feature numerous backup systems to ensure zero performance degradation for a digital business. Gcore also offers customers the option of purchasing a secure server or protecting their current server located anywhere in the world through a GRE tunnel. With Gcore protection, neither you nor your customers will even notice a Smurf attack.

Under attack? Get immediate protection with expert implementation.
