How to Check CPU Temperature on Linux
In today’s high-demand digital environments, keeping tabs on your computer’s health is crucial. One vital metric, often overlooked, is the CPU temperature, which can greatly influence performance and longevity. If you’re a Linux user, understanding how to monitor this temperature can help ensure your system operates efficiently and safely. This guide will walk you through the processes and tools you need to keep an eye on your Linux CPU’s thermal state.
Why is Monitoring CPU Temperature Important?
Overheating can result in issues such as thermal throttling (where the system deliberately slows down to prevent damage), unexpected shutdowns, and even long-term damage to hardware components. More specifically, high temperatures affect three things:
- Performance. High temperatures can lead to throttling, which directly impacts system performance. By maintaining an optimal temperature, you ensure your system runs smoothly and efficiently.
- Longevity. Constant exposure to high temperatures can reduce the lifespan of your CPU and other system components.
- Safety. Extremely high temperatures can cause system components to fail or even cause physical damage. There have been cases where overheating has led to fires, though they are rare.
Checking the CPU Temperature on Linux
Here’s how to check the CPU temperature on Linux, step by step:
#1 Update Your System
Keeping your system updated ensures you have the latest drivers and software packages, which can be essential for accurate hardware readings.
sudo apt update && sudo apt upgrade -y
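The commands in this guide assume a Debian- or Ubuntu-based distribution. If you’re on Fedora or another RHEL-based system, the equivalent update command would be:
sudo dnf upgrade --refresh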
#2 Install lm-sensors
‘lm-sensors’ is a widely used tool in the Linux ecosystem for monitoring hardware temperatures, fan speeds, and voltages.
sudo apt install lm-sensors
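On Fedora and Arch Linux, note that the package is named ‘lm_sensors’, with an underscore. For example, on Fedora:
sudo dnf install lm_sensors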
#3 Detect Sensors
After installation, you need to run a detection command. This will detect the sensors on your system and configure ‘lm-sensors’ to read them.
sudo sensors-detect
Follow the on-screen prompts; pressing Enter accepts the (usually safe) default, and answering ‘YES’ to most questions ensures all sensors are detected.
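At the end of the scan, ‘sensors-detect’ may offer to load the detected kernel modules or add them to your boot configuration. If readings are missing afterward, you can load the relevant module manually and check again; for example, coretemp covers most Intel CPUs, while k10temp covers modern AMD CPUs:
sudo modprobe coretemp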
#4 Check CPU Temperature
Now that ‘lm-sensors’ is set up, you can read the temperatures of your CPU and other components.
sensors
This command will display a readout of various system temperatures, fan speeds, and voltages. On Intel CPUs, look for entries labeled “Core” to see individual core temperatures on multi-core CPUs; on AMD CPUs, the CPU temperature typically appears as “Tctl” or “Tdie” under the k10temp sensor.
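The exact output depends on your hardware; on an Intel machine it might look something like this (values are illustrative):
coretemp-isa-0000
Adapter: ISA adapter
Package id 0:  +46.0°C  (high = +80.0°C, crit = +100.0°C)
Core 0:        +43.0°C  (high = +80.0°C, crit = +100.0°C)
Core 1:        +45.0°C  (high = +80.0°C, crit = +100.0°C)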
#5 Install Psensor
This is an optional step. If you prefer a graphical representation of temperature and other system metrics, ‘psensor’ provides a user-friendly interface.
sudo apt install psensor
After installation, launch ‘psensor’ from the application menu. It’ll display a graphical representation of your system’s temperatures, including the CPU.
#6 Regular Monitoring
It’s a good practice to check the CPU temperature periodically, especially if you’re performing heavy tasks or notice your system behaving strangely. Monitoring tools can help you identify patterns or spikes in temperature.
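For lightweight, ad-hoc monitoring, you can refresh the ‘sensors’ readout on an interval, or read the kernel’s thermal zones directly from sysfs. Note that zone names and numbering vary by machine, and sysfs reports temperatures in thousandths of a degree Celsius:
watch -n 2 sensors
cat /sys/class/thermal/thermal_zone*/temp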
That’s all! Now that you know how to monitor the CPU temperature on Linux, you’re better equipped to understand your machine’s thermal behavior and keep your system performing at its best.
Conclusion
Looking to deploy Linux in the cloud? With Gcore Cloud, you can choose from Basic VM, Virtual Instances, or VPS/VDS suitable for Linux:
- Gcore Basic VM offers shared virtual machines from €3.2 per month
- Virtual Instances are virtual machines with a variety of configurations and an application marketplace
- Virtual Dedicated Servers provide outstanding speed of 200+ Mbps in 20+ global locations
Related articles
- 11 simple tips for securing your APIs
- What are zero-day attacks? Risks, prevention tips, and new trends
- What are virtual machines?
- Why do bad actors carry out Minecraft DDoS attacks?
- How to deploy DeepSeek 70B with Ollama and a Web UI on Gcore Everywhere Inference
- How do CDNs work?