
How to protect against DDoS attacks

  • By Gcore
  • May 14, 2021
  • 7 min read

DDoS attacks are happening more often. The number of “smart” attacks is also rising, and their duration and capacity keep growing. Protecting your online resources is becoming increasingly difficult.

In this article, we’ll provide tips on how to effectively protect against increasingly complex DDoS attacks.

What are DDoS attacks?

DDoS (distributed denial-of-service) attacks are actions by cybercriminals aimed at making your services inaccessible to clients. There are different ways to do this. The most common is to flood the server with so many requests that it stops coping with them, slows to a crawl, or crashes altogether. But there are other methods as well.

Attackers can attack a single site, application, or entire server.

There are many types of DDoS attacks. They can target different OSI layers and use different techniques.

A DDoS attack can also serve as cover: while your team is busy fighting it, criminals look for vulnerabilities and can, for example, plant malware on the website and steal your data or your customers’ data.

How do DDoS attacks harm businesses?

The primary harm of a DDoS attack is that your service becomes unavailable for a while. Customers can’t access a website or application and therefore can’t use your services. As a result, they become less loyal to your business.

Worst of all, attackers often strike at critical moments. For example, you’ve launched a promotion in your online store and expect it to bring substantial sales growth. But instead of “clean” traffic, a huge number of bot requests hits the server, and real people can’t access the website to make a purchase.

Aside from this, there are other negative aspects:

  • If bot requests make up a significant share of the traffic to your website, it’s difficult to estimate the amount of real traffic. That means you can’t tell how appealing and user-friendly your website or application is for real customers, or how often they visit.
  • Bot requests increase the bounce rate. This worsens the position of your website in the search engine results.
  • If you use paid traffic to attract customers, some of this traffic may not be “real”, causing you to waste part of your budget.

GitHub attack

On February 28, 2018, GitHub was hit by what was then the most powerful DDoS attack on record: 1.35 Tbps, or 126.9 million packets per second. The attackers exploited misconfigured Memcached servers for amplification, a technique that can magnify an attack by a factor of more than 50,000.

EVE Online attack

In February 2020, a powerful DDoS attack that lasted more than a week completely paralyzed EVE Online’s gameplay: chats, ship control, and market transactions were all impossible.

Takeaway.com attack

In March 2020, there was a major DDoS attack on the Takeaway.com food delivery network. Restaurants could receive orders, but couldn’t process them.

The attackers demanded 2 bitcoins from the company as payment to stop the DDoS attack. On the same day, the CEO tweeted a screenshot of their message.

Takeaway chose not to pay the ransom, but the DDoS attack itself caused serious damage. They had to provide refunds to all users whose orders were paid but not delivered.

Why do criminals carry out DDoS attacks?

The reasons vary.

Extortion

We’ve already given an example above. Events typically unfold in one of two ways:

  1. You’re warned in advance: the attackers threaten to attack your websites if you don’t pay a certain amount by a specified date.
  2. You’re attacked first, and then a message arrives demanding payment to stop the DDoS attack.

If a ransom is demanded of you, never pay. Criminals will conclude that you give in easily and will come back again and again.

Unscrupulous competition

You’re growing fast and starting to overtake your competitors, and one of them resents it. Or perhaps you’re about to enter new markets, and the companies already there don’t want extra competition.

In any market, there are those who don’t like to play fair. With the help of a DDoS attack, they can try to ruin your business and force you to abandon your plans.

What should you do in this case? Again, don’t give in to the attackers. If your competitors fear you and try to stop you, it means you are moving in the right direction.

In addition to intentional attacks, there are also unintentional ones:

  • You are collateral damage. This can happen if your website sits on shared hosting or a virtual server. Another website on the same machine may have been the criminals’ real target, but since a DDoS attack affects the entire server, everyone hosted on it suffers too.
  • It wasn’t an attack at all. You simply didn’t anticipate a natural surge in traffic, such as during a sale, and the system couldn’t cope with the influx.

How do you know if your resource has been attacked?

DDoS attacks are usually unexpected. You didn’t offer any promotions or sales. You did nothing to attract customers. And yet for no reason, a huge number of requests are sent to the server. A normal surge in traffic, as opposed to an attack, is usually predictable.

You can check if this is a DDoS attack by analyzing the logs. These are files that are stored on the server’s hard drive. They record information about visitors, transmitted data, and error messages.

Access to the logs is usually granted by the hosting provider via the control panel.

If your resource is under attack, you’ll probably see that a lot of identical requests and packets are coming from the same IP addresses.
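
To check this yourself, a minimal sketch like the one below tallies requests per IP address from a web server access log. The log path and the combined log format are assumptions; adjust them to match your own server.

```python
# Count requests per client IP in an access log to spot possible attack sources.
# Assumes a standard nginx/Apache "combined" log where the client IP is the
# first field; adjust the path and parsing to your own server's format.
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # example path; yours may differ

counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        ip = line.split(" ", 1)[0]  # first field is the client IP
        counts[ip] += 1

# During an attack, a handful of addresses usually show orders of magnitude
# more requests than everyone else.
for ip, total in counts.most_common(20):
    print(f"{total:>8}  {ip}")
```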

How can you protect yourself against DDoS attacks on your own?

Let’s be clear: you won’t be able to set up full-fledged protection on your own. No free technique is guaranteed to protect your website or application. New DDoS attacks appear all the time, and existing ones grow more sophisticated every day.

But you can still do something.

Prepare for the load

Say your website “crashed” at the most crucial moment of the New Year’s sale. Was it really a DDoS attack?

If your infrastructure is well designed, the load is distributed evenly, and possible traffic surges are accounted for, DDoS attacks won’t pose nearly as much of a threat. Invest in infrastructure: it’s better to make one good investment than to scrimp and suffer losses again and again.

If you have no resources to build your own infrastructure, consider purchasing a third-party IT solution. One option is to sign up for a CDN—a content delivery network.

The Gcore CDN delivers any heavy content around the world. It’s a fast and secure network with over 70 points of presence on all continents, as well as a spot in the Guinness World Records.

You are under attack right now. What do you do?

If you’re being attacked, and you haven’t set up any protection for your website, there are several actions that you can take.

1. Ban the IP addresses from which the attack is carried out. They can be found in the logs.

To avoid blocking each request manually, you can use grep. It’s a command-line tool that searches files for lines matching a pattern, letting you quickly pull the offending IP addresses or requests out of the logs and feed them into a block list.
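
As a rough illustration of that workflow, the sketch below filters the log for requests to one hammered endpoint, counts them per IP, and prints firewall commands for a human to review. The log path, the /login example path, the threshold, and the use of iptables are all assumptions, not a universal recipe.

```python
# Grep-style workflow in Python: isolate suspicious requests, count them per
# IP, and print (not run) block commands. All paths and thresholds here are
# illustrative assumptions; review the output before applying anything.
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # adjust to your server
PATTERN = "POST /login"                 # example: an endpoint being hammered
THRESHOLD = 1000                        # requests that look abnormal for one client

counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if PATTERN in line:             # the "grep" step
            counts[line.split(" ", 1)[0]] += 1

for ip, total in counts.items():
    if total >= THRESHOLD:
        print(f"iptables -A INPUT -s {ip} -j DROP  # {total} matching requests")
```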

You will be very lucky if the attack on your website is short. In this case, you can figure out right away where the “junk” traffic originated, allowing you to block it.

But such luck is rare. A DDoS attack can last for several days and stem from thousands of different IP addresses. It’s not possible to block them all, even using grep.

Besides, stopping smart attacks by blocking IP addresses isn’t a very effective tactic. If the perpetrators use dynamic IP addresses, then no block can save you.

2. Block requests by geolocation. This method works only if you see that a lot of requests to your website come from a specific area of the world. For example, your users live in Eastern Europe, but suddenly a huge amount of traffic comes from Africa.

But once again, this is rare. Most DDoS attacks these days are “smart”, and attackers most likely won’t make such a mistake.
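
If your audience really is confined to one region, a coarse geolocation check might look like the sketch below. It assumes the geoip2 Python package and a local MaxMind GeoLite2 Country database; the database path and the allow-list of country codes are examples only.

```python
# Coarse geo-filtering sketch: flag requests from countries where you have no
# real users. Assumes the geoip2 package and a MaxMind GeoLite2 Country
# database; the path and allow-list below are illustrative.
import geoip2.database
import geoip2.errors

ALLOWED_COUNTRIES = {"PL", "CZ", "SK", "HU", "RO"}  # example: an Eastern European audience
reader = geoip2.database.Reader("/usr/share/GeoIP/GeoLite2-Country.mmdb")

def should_block(ip: str) -> bool:
    """Return True if the address resolves to a country outside the allow-list."""
    try:
        country = reader.country(ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return False  # don't block addresses the database can't place
    return country not in ALLOWED_COUNTRIES

print(should_block("203.0.113.7"))  # sample lookup with a documentation-range address
```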

3. Block the “heavy” section of your website. The attack may be aimed not at the entire website but at its most resource-intensive part, such as the search feature. If that isn’t the most important element of your website, you can simply disable access to it for all users. Customers won’t be able to use search, but everything else will keep working.
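
For illustration, here is a minimal sketch of switching off one heavy endpoint while the rest of the site keeps serving. Flask and the /search path are assumptions; the same idea can be expressed as a web server rule or a check in whatever framework you use.

```python
# Minimal sketch: temporarily disable a resource-intensive feature (search)
# while everything else keeps working. Flask and the /search path are
# illustrative assumptions.
from flask import Flask, request

app = Flask(__name__)
SEARCH_DISABLED = True  # flip this during an attack, e.g., via config or an env var

@app.before_request
def shed_heavy_endpoints():
    # Answer requests to the expensive endpoint with 503 Service Unavailable.
    if SEARCH_DISABLED and request.path.startswith("/search"):
        return "Search is temporarily unavailable.", 503

@app.route("/")
def index():
    return "Everything else keeps working."
```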

The drawback to this method is that it’s useless for most attacks.

Why are these methods often ineffective?

These methods can help stop some simple types of DDoS attacks. What’s more, they are all aimed at repelling attacks on the server and do nothing about bots on the website itself, which can also cause big problems.

For instance, if you have a limited number of products, an attacker can launch bots that will add all the products to their carts, preventing real users from buying anything.

On top of that, even if you manage to repel the attack, you’ll have spent time solving the problem. That means your services will be unavailable for some time.

To avoid scrambling for emergency measures, it’s better to buy hosting with built-in DDoS protection from the very beginning, or to enable paid DDoS protection for your server.

Benefits of using a specialized service to protect against DDoS attacks

1. Protection at all layers. A DDoS attack can occur at the network (L3), transport (L4), or application (L7) layer. The methods listed above may help against an attack at a single layer, but attacks vary, and it’s extremely difficult to protect every layer on your own.

Professional protection is a well-designed filtering platform that all traffic passes through and that blocks suspicious requests. “Junk” data packets will be stopped on the way to the resource.

2. Load balancing. A good security system distributes traffic evenly between nodes, which makes it harder for criminals to “crash” your website. It also speeds up page loading and helps absorb natural traffic surges.

3. Protection of web application vulnerabilities. Any website or app has weak spots, and attackers don’t hesitate to find and exploit them to gain access to confidential user data.

A web application firewall (WAF) shields application vulnerabilities and blocks suspicious traffic.

When choosing a firewall, pay attention to how it works. It’s a good idea to choose a “smart” WAF with self-learning algorithms. Such firewalls can analyze the contents of packets and avoid blocking real customers along with the bots.

4. Refund guarantee. If you secure your website with whatever tools happen to be available, there’s no guarantee they will help. And even if your own protection has more or less coped so far, tomorrow attackers may invent a new type of DDoS attack that renders your methods useless.

On the other hand, if you purchase professional protection, good companies always provide a refund guarantee for their services. If the protection doesn’t work, you can get your money back.

At the same time, professional systems constantly evolve to account for new types of DDoS attacks.

How does Gcore protect customers against DDoS attacks?

We offer protection for websites and applications from bots and secure hosting on our servers. We can also enable server protection for your own infrastructure.

The protection solution is based on our own traffic filtering centers in Europe. The total filtering bandwidth is more than 1.5 Tbps.

How does it work?

  1. All traffic is routed through the filtering centers, which analyze it along the way.
  2. The system checks not only the packets themselves but also the behavior of whoever sent the request: how much time the user spent on the website, for example, and the intervals between requests and sub-requests.
  3. This data is compared against reference parameters to determine whether the request is legitimate. Simply put, the system works out whether a real person or a bot visited your website.
  4. If the request looks suspicious, it’s blocked.

The system blocks any bot traffic, including scraping and brute-force attempts.

It blocks sessions rather than IP addresses. Self-learning algorithms built into the platform remember “trustworthy” customers and don’t re-verify their subsequent requests. The false positive rate is less than 0.01%.
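
To make the idea of session-based behavioral filtering more concrete, here is a purely illustrative sketch: it scores each session by request rate and spacing, and stops re-checking sessions that behave normally. It is a toy model of the general approach, not Gcore’s actual algorithm, and every threshold in it is invented for the example.

```python
# Toy model of session-based behavioral filtering: score sessions by request
# rate and spacing, and stop re-checking sessions that behave normally.
# Thresholds are invented for illustration; this is not Gcore's algorithm.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 50   # more than this in 10 s looks automated
MIN_INTERVAL_SECONDS = 0.05    # sub-50 ms gaps between requests look automated

recent = defaultdict(deque)    # session_id -> timestamps of recent requests
trusted = set()                # sessions that have already passed the checks

def is_suspicious(session_id: str) -> bool:
    if session_id in trusted:
        return False           # trusted sessions are not re-verified
    now = time.monotonic()
    window = recent[session_id]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    too_many = len(window) > MAX_REQUESTS_PER_WINDOW
    too_regular = len(window) >= 2 and (now - window[-2]) < MIN_INTERVAL_SECONDS
    if not (too_many or too_regular) and len(window) > 5:
        trusted.add(session_id)  # remember well-behaved sessions
    return too_many or too_regular
```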

The advantages of our protection

  • We block DDoS attacks from the first request.
  • We ensure load balancing.
  • You pay only for legitimate traffic. We don’t charge for the top 5% of traffic surges, which means you won’t have to pay for natural spikes, such as during promotions.
  • We provide reports.
  • We guarantee 99.5% availability for your websites and will refund your money if the protection doesn’t work.
  • To enable protection, you just need to set up a DNS record.

In addition to protection, you can buy a smart firewall for your web application.

Protect your resources with a comprehensive solution and forget about DDoS attacks.

Get a free consultation

Enable protection

