
How to Check CPU Temperature on Linux

  • By Gcore
  • September 12, 2023
  • 2 min read

In today’s high-demand digital environments, keeping tabs on your computer’s health is crucial. One vital metric, often overlooked, is the CPU temperature, which can greatly influence performance and longevity. If you’re a Linux user, understanding how to monitor this temperature can help ensure your system operates efficiently and safely. This guide will walk you through the processes and tools you need to keep an eye on your Linux CPU’s thermal state.

Why is Monitoring CPU Temperature Important?

Overheating can cause thermal throttling (where the system deliberately slows down to prevent damage), unexpected shutdowns, and even long-term damage to hardware components. Here’s a closer look at why keeping the temperature in check matters:

  1. Performance. High temperatures can lead to throttling, which directly impacts system performance. By maintaining an optimal temperature, you ensure your system runs smoothly and efficiently.
  2. Longevity. Constant exposure to high temperatures can reduce the lifespan of your CPU and other system components.
  3. Safety. Extremely high temperatures can cause system components to fail or even result in physical damage. There have been rare cases where overheating has led to fires.

Checking the CPU Temperature on Linux

Here’s how to check the CPU temperature on Linux, step by step:

#1 Update Your System

Keeping your system updated ensures you have the latest drivers and software packages, which can be essential for accurate hardware readings. The commands in this guide use apt, so they apply to Debian- and Ubuntu-based distributions.

sudo apt update && sudo apt upgrade -y
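
If you’re on a distribution that doesn’t use apt, the equivalent update commands differ. For example (assuming the distribution’s default package manager):

sudo dnf upgrade --refresh   # Fedora and other RPM-based distributions
sudo pacman -Syu             # Arch-based distributions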

#2 Install lm-sensors

‘lm-sensors’ is a widely used tool in the Linux ecosystem for monitoring hardware temperatures, fan speeds, and voltages.

sudo apt install lm-sensors
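
On non-Debian distributions the package is usually named lm_sensors (with an underscore), so the install command would look roughly like this:

sudo dnf install lm_sensors    # Fedora and other RPM-based distributions
sudo pacman -S lm_sensors      # Arch-based distributions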

#3 Detect Sensors

After installation, run the detection command below. It scans your system for available sensors and configures ‘lm-sensors’ to read them.

sudo sensors-detect

Follow the on-screen prompts, answering ‘YES’ to most questions to ensure all sensors are detected.
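
If you’d rather not answer each prompt manually, most versions of ‘sensors-detect’ also offer an automatic mode that uses the default answer for every question (check man sensors-detect to confirm it’s available on your system):

sudo sensors-detect --auto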

#4 Check CPU Temperature

Now that ‘lm-sensors’ is set up, you can read the temperatures of your CPU and other components.

sensors

This command will display a readout of various system temperatures, fan speeds, and other readings. Look for entries labeled “Core” to see individual core temperatures for multi-core CPUs.
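
The exact output depends on your hardware and loaded drivers, but on an Intel CPU using the coretemp driver it typically looks something like this (values are illustrative only):

coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +45.0°C  (high = +80.0°C, crit = +100.0°C)
Core 0:        +42.0°C  (high = +80.0°C, crit = +100.0°C)
Core 1:        +44.0°C  (high = +80.0°C, crit = +100.0°C)

Here, “Package id 0” is the temperature of the CPU package as a whole, and each “Core N” line is an individual core.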

#5 Install Psensor

This is an optional step. If you prefer a graphical representation of temperature and other system metrics, ‘psensor’ provides a user-friendly interface.

sudo apt install psensor

After installation, launch ‘psensor’ from the application menu. It’ll display a graphical representation of your system’s temperatures, including the CPU.

#6 Regular Monitoring

It’s a good practice to check the CPU temperature periodically, especially if you’re performing heavy tasks or notice your system behaving strangely. Monitoring tools can help you identify patterns or spikes in temperature.
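
For quick periodic checks from the terminal, you can combine ‘sensors’ with watch to refresh the readings every couple of seconds. On many systems you can also read the raw values (in millidegrees Celsius) directly from sysfs:

watch -n 2 sensors
cat /sys/class/thermal/thermal_zone*/temp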

That’s all! Now that you know how to monitor the CPU temperature on Linux, you’re better equipped to keep your system healthy and performing at its best. Following these steps will help you understand your machine’s thermal behavior and make informed decisions.

Conclusion

Looking to deploy Linux in the cloud? With Gcore Cloud, you can choose from Basic VM, Virtual Instances, or VPS/VDS suitable for Linux:

Choose an instance
