- How to Fix Error 2006: MySQL Server Has Gone Away
Encountering the ‘Error 2006: MySQL Server Has Gone Away’ can be a disconcerting experience for many database administrators and developers. Often striking without warning, this error can disrupt database operations and bring your applications to a standstill. Understanding the underlying causes and knowing how to effectively address them is crucial. In this guide, we’ll delve deep into the root causes of this infamous MySQL error and provide actionable solutions to get your database running smoothly once again.
Fixing Error 2006: MySQL Server Has Gone Away
Here’s a step-by-step guide to fix this issue:
#1 Check Server Status
Verify that the MySQL server is running. If the server isn’t running, the client won’t be able to connect.
sudo systemctl status mysql
In the output, you’re looking for an “active (running)” status. If it’s “inactive” or “failed”, that’s a likely cause of the error.
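If the service isn’t running, starting it is often all that’s needed:
sudo systemctl start mysql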
#2 Increase ‘wait_timeout’ and ‘max_allowed_packet’ Values
These settings in the MySQL configuration determine, respectively, how long the server waits before closing an idle connection and the maximum size of a single packet that can be sent to the server.
- Edit the MySQL configuration file.
sudo nano /etc/mysql/my.cnf
- Add/Modify under [mysqld].
wait_timeout=28800
max_allowed_packet=128M
- Save and exit, then restart MySQL with this command.
sudo systemctl restart mysql
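One way to confirm the new values took effect after the restart is to query them from the MySQL client (you’ll be prompted for the root password):
mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN ('wait_timeout', 'max_allowed_packet');"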
#3 Check for Crashed Tables
Corrupt or crashed tables can cause connection issues. Run this command:
mysqlcheck -u root -p --all-databases
The output lists a status for each table; look for any marked “corrupt” or “crashed”.
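If any tables are flagged, mysqlcheck can also attempt repairs. It’s prudent to back up the affected databases before running this:
mysqlcheck -u root -p --auto-repair --all-databases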
#4 Review MySQL Error Logs
The logs can give a more in-depth look into any underlying issues causing the server to disconnect.
sudo tail -50 /var/log/mysql/error.log
In the output, look for any recent or recurring errors that might hint at the root cause.
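The log location varies by distribution. If the file above doesn’t exist on your system, you can ask MySQL where it writes its error log:
mysql -u root -p -e "SHOW VARIABLES LIKE 'log_error';"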
#5 Monitor Server Resources
Insufficient resources can cause the MySQL server to become unresponsive.
top
In the output, review the %CPU and %MEM columns, particularly for the mysqld process. Consistently high usage may indicate resource constraints.
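To watch only the MySQL process rather than the full process list, you can pass its PID list to top (a quick sketch, assuming pgrep is available):
top -p "$(pgrep -d, mysqld)"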
#6 Verify Disk Space
If the server’s disk is near or at capacity, MySQL might not operate correctly.
df -h
Review available space on the disk, especially for the partition where MySQL data is stored (typically /var/lib/mysql).
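To see how much of that space the MySQL data directory itself occupies:
sudo du -sh /var/lib/mysql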
#7 Confirm Stable Network Connectivity
For remote MySQL connections, ensure there’s no network interruption between the client and server.
ping -c 5 <MySQL_SERVER_IP>
You should see replies from the server IP with minimal or no packet loss.
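Since ping only tests ICMP reachability, it’s also worth confirming that the MySQL port itself (3306 by default) accepts connections—shown here with netcat, assuming it’s installed:
nc -zv <MySQL_SERVER_IP> 3306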
#8 Adjust Open Files Limit
MySQL can sometimes exceed the system’s limit on open file descriptors.
- Edit the MySQL configuration.
sudo nano /etc/mysql/my.cnf
- Add/Modify under [mysqld].
open_files_limit=5000
- Save, exit, and restart MySQL.
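Note that on systemd-based distributions, the service’s own file descriptor limit can cap open_files_limit regardless of what my.cnf requests. If the new value doesn’t take effect, a systemd override is worth trying (a sketch—adjust the limit to your needs):
- Open a systemd override for the service.
sudo systemctl edit mysql
- Add under [Service].
LimitNOFILE=10000
- Save, exit, then reload systemd and restart MySQL.
sudo systemctl daemon-reload
sudo systemctl restart mysql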
After following these steps, try your operation again. If the error persists, you may need to delve deeper, considering factors like firewall configurations, specific application queries, or even potential bugs in the MySQL version in use.
Conclusion
Error 2006 almost always traces back to one of the causes covered above: a stopped server, restrictive timeout or packet-size settings, corrupted tables, or exhausted resources. Working through the checklist methodically should get your database running smoothly again.
Searching for a managed database solution? Choose Gcore Managed Database for PostgreSQL so you can focus on your core business while we manage your database.
- 99.9% SLA for uninterrupted service with high-availability architecture
- Adjustable database resources for changing demands
- Currently in free public beta