How to Install LibreOffice on Ubuntu
When we talk about “LibreOffice on Ubuntu,” we mean installing and using this office suite on the Ubuntu operating system. Ubuntu, one of the most popular Linux distributions, often includes LibreOffice as part of its default software set, but users can also download and install newer or specific versions if they wish. This guide walks you through installing LibreOffice on Ubuntu, giving you a top-tier office solution without the hefty price tag.
What is LibreOffice?
LibreOffice is a free and open-source office suite, widely recognized as a powerful alternative to proprietary suites like Microsoft Office. It includes the following applications:
- Writer. A word processing tool comparable to Microsoft Word.
- Calc. A spreadsheet application similar to Microsoft Excel.
- Impress. A presentation software analogous to Microsoft PowerPoint.
- Draw. A vector graphics editor and diagramming tool.
- Base. A database management program, akin to Microsoft Access.
- Math. An application to create and edit mathematical formulas.
LibreOffice’s open-source nature, combined with its robust feature set, makes it an excellent choice for Ubuntu users looking for a comprehensive office solution without the licensing costs associated with commercial products. The next section shows how to install it.
Installing LibreOffice on Ubuntu
Here’s a step-by-step guide on how to install LibreOffice on Ubuntu:
1. Update the Package Index. Always start by ensuring your system’s package list and software are up to date.
sudo apt update && sudo apt upgrade -y
2. Install LibreOffice. Now, you can install LibreOffice using the apt package manager.
sudo apt install libreoffice
If that doesn’t work, you can also try one of the following commands:
sudo snap install libreoffice
sudo apt install libreoffice-common
3. Verify Installation. To confirm that LibreOffice was installed correctly, you can check its version.
libreoffice --version
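If you want to script this check, a minimal sketch could look like the following. It assumes only the standard `libreoffice` wrapper name used by the Ubuntu package:

```shell
# Print the installed LibreOffice version, or a hint if it is missing
if command -v libreoffice >/dev/null 2>&1; then
    libreoffice --version
else
    echo "LibreOffice is not installed - run: sudo apt install libreoffice"
fi
```

Either way the script prints one line, so it is convenient to drop into provisioning or setup scripts.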
4. Launch LibreOffice. You can start LibreOffice either from the terminal or through the Ubuntu application menu.
libreoffice
Expected Output: The LibreOffice start center will open, presenting various options like Writer, Calc, and Impress.
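Each application can also be launched directly from the terminal with its own flag (`--writer`, `--calc`, and so on, as documented by the `libreoffice` wrapper script). The sketch below prints the launch commands rather than executing them, so it works even on a machine without a display server:

```shell
# Print the direct launch command for each LibreOffice module
for module in writer calc impress draw base math; do
    echo "libreoffice --$module"
done
```

Running any one of the printed commands opens that module directly, skipping the start center.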
And that’s it! You have successfully installed LibreOffice on Ubuntu. This powerful office suite is now at your disposal, ready to cater to all your document editing, spreadsheet calculations, and presentation needs.
Conclusion
Want to run Ubuntu in a virtual environment? With Gcore Cloud, you can choose from Basic VM, Virtual Instances, or VPS/VDS suitable for Ubuntu:
- Gcore Basic VM offers shared virtual machines from €3.2 per month
- Virtual Instances are virtual machines with a variety of configurations and an application marketplace
- Virtual Dedicated Servers provide outstanding speed of 200+ Mbps in 20+ global locations
Related articles

Query your cloud with natural language: A developer’s guide to Gcore MCP
What if you could ask your infrastructure questions and get real answers? With Gcore’s open-source implementation of the Model Context Protocol (MCP), now you can. MCP turns generative AI into an agent that understands your infrastructure, responds to your queries, and takes action when you need it to.

In this post, we’ll demo how to use MCP to explore and inspect your Gcore environment just by prompting: to list resources, check audit logs, and generate cost reports. We’ll also walk through a fun bonus use case: provisioning infrastructure and exporting it to Terraform.

What is MCP and why do devs love it?
Originally developed by Anthropic, the Model Context Protocol (MCP) is an open standard that turns language models into agents that interact with structured tools: APIs, CLIs, or internal systems. Gcore’s implementation makes this protocol real for our customers. With MCP, you can:
- Ask questions about your infrastructure
- List, inspect, or filter cloud resources
- View cost data, audit logs, or deployment metadata
- Export configs to Terraform
- Chain multi-step operations via natural language

Gcore MCP removes friction from interacting with your infrastructure. Instead of wiring together scripts or context-switching across dashboards and CLIs, you can just…ask. That means:
- Faster debugging and audits
- More accessible infra visibility
- Fewer repetitive setup tasks
- Better team collaboration

Because it’s open source, backed by the Gcore Python SDK, you can plug it into other APIs, extend tool definitions, or even create internal agents tailored to your stack. Explore the GitHub repo for yourself.

What can you do with it?
This isn’t just a cute chatbot. Gcore MCP connects your cloud to real-time insights. Here are some practical prompts you can use right away.
Infrastructure inspection:
- “List all VMs running in the Frankfurt region”
- “Which projects have over 80% GPU utilization?”
- “Show all volumes not attached to any instance”
Audit and cost analysis:
- “Get me the API usage for the last 24 hours”
- “Which users deployed resources in the last 7 days?”
- “Give a cost breakdown by region for this month”
Security and governance:
- “Show me firewall rules with open ports”
- “List all active API tokens and their scopes”
Experimental automation:
- “Create a secure network in Tokyo, export to Terraform, then delete it”
We’ll walk through that last one in the full demo below.

Full video demo
Watch Gcore’s AI Software Engineer, Algis Dumbris, walk through setting up MCP on your machine and show off some use cases. If you prefer reading, we’ve broken down the process step-by-step below.

Step-by-step walkthrough
This section maps to the video and shows exactly how to replicate the workflow locally.
1. Install MCP locally (0:00–1:28). We use uv to isolate the environment and pull the project directly from GitHub.
curl -Ls https://astral.sh/uv/install.sh | sh
uvx add gcore-mcp-server https://github.com/G-Core/gcore-mcp-server
Requirements:
- Python
- Gcore account + API key
- Tool config file (from the repo)
2. Set up your environment (1:28–2:47). Configure two environment variables:
- GCORE_API_KEY for auth
- GCORE_TOOLS to define what the agent can access (e.g., regions, instances, costs, etc.)
Soon, tool selection will be automatic, but today you can define your toolset in YAML or JSON.
3. Run a basic query (3:19–4:11). Prompt: “Find the Gcore region closest to Antalya.” The agent maps this to a regions.list call and returns: Istanbul. No need to dig through docs or write an API request.
4. Provision, export, and clean up (4:19–5:32). This one’s powerful if you’re experimenting with CI/CD or infrastructure-as-code. Prompt: “Create a secure network in Tokyo. Export to Terraform. Then clean up.” The agent:
- Provisions the network
- Exports it to Terraform format
- Destroys the resources afterward
You get usable .tf output with no manual scripting. Perfect for testing, prototyping, or onboarding.

Gcore: always building for developers
Try it now:
- Clone the repo
- Install UVX + configure your environment
- Start prompting your infrastructure
- Open issues, contribute tools, or share your use cases
This is early-stage software, and we’re just getting started. Expect more tools, better UX, and deeper integrations soon.
Watch how easy it is to deploy an inference instance with Gcore
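To make step 2 of the walkthrough concrete, the environment setup might look like the sketch below. The variable names (GCORE_API_KEY, GCORE_TOOLS) come from the article; the key value and tool list are placeholders you’d replace with your own:

```shell
# Placeholder values - substitute your real Gcore API key and toolset
export GCORE_API_KEY="your-api-key-here"
export GCORE_TOOLS="regions,instances,costs"
echo "MCP tools enabled: $GCORE_TOOLS"
```

Add these lines to your shell profile (or a .env file your launcher sources) so the MCP server picks them up on every run.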

Cloud computing: types, deployment models, benefits, and how it works
Cloud computing is a model for enabling on-demand network access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction. According to research by Gartner (2024), the global cloud computing market size is projected to reach $1.25 trillion by 2025, reflecting the rapid growth and widespread adoption of these services.

The National Institute of Standards and Technology (NIST) defines five core characteristics that distinguish cloud computing from traditional IT infrastructure: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. Each characteristic addresses specific business needs while enabling organizations to access computing resources without maintaining physical hardware on-premises.

Cloud computing services are organized into three main categories that serve different business requirements and technical needs. Infrastructure as a Service (IaaS) provides basic computing resources, Platform as a Service (PaaS) offers development environments and tools, and Software as a Service (SaaS) delivers complete applications over the internet. Major cloud providers typically guarantee 99.9% or higher uptime in service level agreements to ensure reliable access to these services.

Organizations can choose from four primary deployment models based on their security, compliance, and operational requirements. Public cloud services are offered over the internet to anyone, private clouds are proprietary networks serving limited users, hybrid clouds combine public and private cloud features, and community clouds serve specific groups with shared concerns. Each model provides different levels of control, security, and cost structures.

Over 90% of enterprises use some form of cloud services as of 2024, according to Forrester Research (2024), making cloud computing knowledge important for modern business operations. This widespread adoption reflects how cloud computing has become a cornerstone of digital transformation and competitive advantage across industries.

What is cloud computing?
Cloud computing is a model that delivers computing resources like servers, storage, databases, and software over the internet on demand, allowing users to access and use these resources without owning or managing the physical infrastructure. Instead of buying and maintaining your own servers, you can rent computing power from cloud providers and scale resources up or down based on your needs. Over 90% of enterprises now use some form of cloud services, with providers typically guaranteeing 99.9% or higher uptime in their service agreements.
The three main service models offer different levels of control and management. Infrastructure as a Service (IaaS) provides basic computing resources like virtual machines and storage. Platform as a Service (PaaS) adds development tools and runtime environments, and Software as a Service (SaaS) delivers complete applications that are ready to use. Each model handles different aspects of the technology stack, so you only manage what you need while the provider handles the rest.
Cloud deployment models vary by ownership and access control. Public clouds serve multiple customers over the internet, private clouds operate exclusively for one organization, and hybrid clouds combine both approaches for flexibility. This variety lets organizations choose the right balance of cost, control, and security for their specific needs while maintaining the core benefits of cloud computing's flexible, elastic infrastructure.

What are the main types of cloud computing services?
The main types of cloud computing services refer to the different service models that provide computing resources over the internet with varying levels of management and control. The main types of cloud computing services are listed below.
- Infrastructure as a service (IaaS): This model provides basic computing infrastructure, including virtual machines, storage, and networking resources over the internet. Users can install and manage their own operating systems, applications, and development frameworks while the provider handles the physical hardware.
- Platform as a service (PaaS): This service offers a complete development and deployment environment in the cloud, including operating systems, programming languages, databases, and web servers. Developers can build, test, and deploy applications without managing the underlying infrastructure complexity.
- Software as a service (SaaS): This model delivers fully functional software applications over the internet through a web browser or mobile app. Users access the software on a subscription basis without needing to install, maintain, or update the applications locally.
- Function as a service (FaaS): Also known as serverless computing, this model allows developers to run individual functions or pieces of code in response to events. The cloud provider automatically manages server provisioning, scaling, and maintenance while charging only for actual compute time used.
- Database as a service (DBaaS): This service provides managed database solutions in the cloud, handling database administration tasks like backups, updates, and scaling. Organizations can access database functionality without maintaining physical database servers or hiring specialized database administrators.
- Storage as a service (STaaS): This model offers flexible cloud storage solutions for data backup, archiving, and file sharing needs. Users can store and retrieve data from anywhere with internet access while paying only for the storage space they actually use.

What are the different cloud deployment models?
Cloud deployment models refer to the different ways organizations can access and manage cloud computing resources based on ownership, location, and access control. The cloud deployment models are listed below.
- Public cloud: Services are delivered over the internet and shared among multiple organizations by third-party providers. Anyone can purchase and use these services on a pay-as-you-go basis, making them cost-effective for businesses without large upfront investments.
- Private cloud: Computing resources are dedicated to a single organization and can be hosted on-premises or by a third party. This model offers greater control, security, and customization options but requires higher costs and more management overhead.
- Hybrid cloud: Organizations combine public and private cloud environments, allowing data and applications to move between them as needed. This approach provides flexibility to keep sensitive data in private clouds while using public clouds for less critical workloads.
- Community cloud: Multiple organizations with similar requirements share cloud infrastructure and costs. Government agencies, healthcare organizations, or financial institutions often use this model to meet specific compliance and security standards.
- Multi-cloud: Organizations use services from multiple cloud providers to avoid vendor lock-in and improve redundancy. This strategy allows businesses to choose the best services from different providers while reducing dependency on any single vendor.

How does cloud computing work?
Cloud computing works by delivering computing resources like servers, storage, databases, and software over the internet on an on-demand basis. Instead of owning physical hardware, users access these resources through web browsers or applications, while cloud providers manage the underlying infrastructure in data centers worldwide.
The system operates through a front-end and back-end architecture. The front end includes your device, web browser, and network connection that you use to access cloud services. The back end consists of servers, storage systems, databases, and applications housed in the provider's data centers. When you request a service, the cloud infrastructure automatically allocates the necessary resources from its shared pool.
The technology achieves its flexibility through virtualization, which creates multiple virtual instances from single physical servers. Resource pooling allows providers to serve multiple customers from the same infrastructure, while rapid elasticity automatically scales resources up or down based on demand. This elastic scaling can reduce resource costs by up to 30% compared to fixed infrastructure, according to McKinsey (2024), making cloud computing both flexible and cost-effective for businesses of all sizes.

What are the key benefits of cloud computing?
The key benefits of cloud computing refer to the advantages organizations and individuals gain from using internet-based computing services instead of traditional on-premises infrastructure. The key benefits of cloud computing are listed below.
- Cost reduction: Organizations eliminate upfront hardware investments and reduce ongoing maintenance expenses by paying only for resources they actually use. Cloud providers handle infrastructure management, reducing IT staffing costs and operational overhead.
- Scalability and elasticity: Computing resources can expand or contract automatically based on demand, ensuring optimal performance during traffic spikes. This flexibility prevents over-provisioning during quiet periods and under-provisioning during peak usage.
- Improved accessibility: Users can access applications and data from any device with an internet connection, enabling remote work and global collaboration. This mobility supports modern work patterns and increases productivity across distributed teams.
- Enhanced reliability: Cloud providers maintain multiple data centers with redundant systems and backup infrastructure to ensure continuous service availability.
- Automatic updates and maintenance: Software updates, security patches, and system maintenance happen automatically without user intervention. This automation reduces downtime and ensures systems stay current with the latest features and security protections.
- Disaster recovery: Cloud services include built-in backup and recovery capabilities that protect against data loss from hardware failures or natural disasters. Recovery times are typically faster than traditional backup methods since data exists across multiple locations.
- Environmental efficiency: Shared cloud infrastructure uses resources more efficiently than individual company data centers, reducing overall energy consumption. Large cloud providers can achieve better energy efficiency through economies of scale and advanced cooling technologies.

What are the drawbacks and challenges of cloud computing?
The drawbacks and challenges of cloud computing refer to the potential problems and limitations organizations may face when adopting cloud-based services. They are listed below.
- Security concerns: Organizations lose direct control over their data when it's stored on third-party servers. Data breaches, unauthorized access, and compliance issues become shared responsibilities between the provider and customer. Sensitive information may be vulnerable to cyber attacks targeting cloud infrastructure.
- Internet dependency: Cloud services require stable internet connections to function properly. Poor connectivity or outages can completely disrupt business operations and prevent access to critical applications. Remote locations with limited bandwidth face particular challenges accessing cloud resources.
- Vendor lock-in: Switching between cloud providers can be difficult and expensive due to proprietary technologies and data formats. Organizations may become dependent on specific platforms, limiting their flexibility to negotiate pricing or change services. Migration costs and technical complexity often discourage switching providers.
- Limited customization: Cloud services offer standardized solutions that may not meet specific business requirements. Organizations can't modify underlying infrastructure or install custom software configurations. This restriction can force businesses to adapt their processes to fit the cloud platform's limitations.
- Ongoing costs: Monthly subscription fees can accumulate to exceed traditional on-premises infrastructure costs over time. Unexpected usage spikes or data transfer charges can lead to budget overruns. Organizations lose the asset value that comes with owning physical hardware.
- Performance variability: Shared cloud resources can experience slower performance during peak usage periods. Network latency affects applications requiring real-time processing or frequent data transfers. Organizations can't guarantee consistent performance levels for mission-critical applications.
- Compliance complexity: Meeting regulatory requirements becomes more challenging when data is stored across multiple locations. Organizations must verify that cloud providers meet industry-specific compliance standards. Audit trails and data governance become shared responsibilities that require careful coordination.

Gcore Edge Cloud
When building AI applications that require serious computational power, the infrastructure you choose can make or break your project's success. Whether you're training large language models, running complex inference workloads, or tackling high-performance computing challenges, having access to the latest GPU technology without performance bottlenecks becomes critical.
Gcore's AI GPU Cloud Infrastructure addresses these demanding requirements with bare metal NVIDIA H200, H100, A100, L40S, and GB200 GPUs, delivering zero virtualization overhead for maximum performance. The platform's ultra-fast InfiniBand networking and multi-GPU cluster support make it particularly well-suited for distributed training and large-scale AI workloads, starting from just €1.25/hour. Multi-instance GPU (MIG) support also allows you to improve resource allocation and costs for smaller inference tasks.
Discover how Gcore's bare metal GPU performance can accelerate your AI training and inference workloads at https://gcore.com/gpu-cloud.

Frequently asked questions
People often have questions about cloud computing basics, costs, and how it fits their specific needs. These answers cover the key service models, deployment options, and practical considerations that help clarify what cloud computing can do for your organization.
What's the difference between cloud computing and traditional hosting?
Cloud computing delivers resources over the internet on demand, while traditional hosting provides fixed server resources at dedicated locations. Cloud offers elastic scaling and pay-as-you-go pricing, whereas traditional hosting requires upfront capacity planning and fixed costs regardless of actual usage.
What is cloud computing security?
Cloud computing security protects data, applications, and infrastructure in cloud environments through shared responsibility models between providers and users. Cloud providers secure the underlying infrastructure while users protect their data, applications, and access controls.
What is virtualization in cloud computing?
Virtualization in cloud computing creates multiple virtual machines (VMs) on a single physical server using hypervisor software that separates computing resources. This technology allows cloud providers to increase hardware efficiency and offer flexible, isolated environments to multiple users simultaneously.
Is cloud computing secure for business data?
Yes, cloud computing is secure for business data when proper security measures are in place, with major providers offering encryption, access controls, and compliance certifications that often exceed what most businesses can achieve on-premises. Cloud service providers typically guarantee 99.9% or higher uptime in service level agreements while maintaining enterprise-grade security standards.
How much does cloud computing cost compared to on-premises infrastructure?
Cloud computing typically costs 20-40% less than on-premises infrastructure due to shared resources, reduced hardware purchases, and lower maintenance expenses, according to IDC (2024). However, costs vary primarily based on usage patterns, with predictable workloads sometimes being cheaper on-premises while variable workloads benefit more from cloud's pay-as-you-go model.
How do I choose between IaaS, PaaS, and SaaS?
Choose based on your control needs. IaaS gives you full infrastructure control, PaaS handles infrastructure so you focus on development, and SaaS provides ready-to-use applications with no technical management required.

How to protect login pages with Gcore WAAP
Exposed login pages are a common vulnerability across web applications. Attackers often use automated tools to guess credentials in brute-force or credential-stuffing attacks, probe for login behavior to exploit session or authentication logic, or overload your infrastructure with fake requests. Without specific rules for login-related traffic, your application might miss these threats or apply overly broad protections that disrupt real users. Fortunately, Gcore WAAP makes it easy to defend these sensitive endpoints without touching your application code.

In this guide, we’ll show you how to use WAAP’s custom rule engine to identify login traffic and apply protections like CAPTCHA to reduce risk, block automated abuse, and maintain a smooth experience for legitimate users. We’ve also included a complete video walkthrough from Gcore’s Security Presales Engineer, Michal Zalewski.

Video walkthrough
Here’s Gcore’s Michal Zalewski giving a full walkthrough of the steps in this article.

Step 1: Access your WAAP configuration
- Go to portal.gcore.com and log in.
- Navigate to WAAP in the sidebar. If you’re not yet a WAAP user, it costs just $26/month.
- Select the resource that hosts your login form; for example, gcore.zalewski.cloud.

Step 2: Create a custom rule
- In the main panel of your selected resource, go to WAAP Rules.
- Click Add Custom Rule in the upper-right corner.

Step 3: Define the login page URL
Identify the login endpoint you want to protect:
- Use tools like Burp Suite or the "Inspect" feature in your browser to verify the login page URL.
- In Burp Suite, use the Proxy tab, or in the browser, check the Network tab to inspect a login request.
- Look for the path (e.g., /login.php) and HTTP method (POST).
In the custom rule setup:
- Enter the URL (e.g., /login.php).
- Tag the request using a predefined tag. Select Login Page.

Step 4: Name and save the rule
Provide a name for the rule, such as “Login Page URL”, and save it.

Step 5: Add a CAPTCHA challenge rule
To protect the login page from automated abuse:
- Create a new custom rule.
- Name it something like “Login Page Challenge”.
- Under Conditions, select the previously created Login Page tag.
- Set the Action to CAPTCHA.
- Save the rule.

Step 6: Test the protection
- Return to your browser and turn off any proxy tools.
- Refresh the login page.
- You should now be challenged with a CAPTCHA each time the login page loads.
- Once the CAPTCHA is completed successfully, users can log in as usual.

Monitor, adapt, and alert
After deployment:
- Track rate limit trigger frequency
- Monitor WAAP logs for anomaly detection
- Rotate exemptions or thresholds based on live behavior
For analytics, refer to the WAAP analytics documentation.

Bonus tips for hardened protection
- Combine with bot protection: Enable WAAP’s bot mitigation to identify headless browsers and automation tools like Puppeteer or Selenium. See our bot protection docs for setup instructions.
- Customize 429 responses: Replace default error pages with branded messages or a fallback action. Consider including a support link or CAPTCHA challenge. Check out our response pages documentation for more details.
- Use geo or ASN exceptions: Whitelist trusted locations or block known bot-heavy ASNs if your audience is localized.

Automate it: optional API and Terraform support
Teams with IaC pipelines or security automation workflows might want to automate login page protection with rate limiting. This keeps your WAAP config version-controlled and repeatable. You can use the WAAP API or Terraform to:
- Create or update rules
- Rotate session keys or thresholds
- Export logs for auditing
Explore the WAAP API documentation and WAAP Terraform provider documentation for more details.

Stop abuse before it starts with Gcore
Login pages are high-value targets, but they don’t have to be high risk. With Gcore WAAP, setting up robust defenses takes just a few minutes. By tagging login traffic and applying challenge rules like CAPTCHA, you can reduce automated attack risk without sacrificing user experience. As your application grows, revisit your WAAP rules regularly to adapt to new threats, add behavior-based detection, and fine-tune your protective layers. For more advanced configurations, check out our documentation or reach out to Gcore support.
Get WAAP today for just $26/month

3 use cases for geo-aware routing with Gcore DNS
If your audience is global but you’re serving everyone the same content from the same place, you're likely sacrificing performance and resilience. Gcore DNS (which includes a free-forever plan and enterprise-grade option) offers a straightforward way to change that with geo-aware routing, a feature that lets you return different DNS responses based on where users are coming from.This article breaks down how Gcore's geo-routing works, how to set it up using the GeoDNS preset in dynamic response mode, where it shines, and when you might be better off with a different option. We’ll walk through three hands-on use cases with real config examples, highlight TTL trade-offs, and call out what developers need to know about edge cases like resolver mismatch and caching delays.What is geo-aware DNS routing?Gcore DNS lets you return different IP addresses based on the user’s geographic location. This is configured using dynamic response rules with the GeoDNS preset, which lets you match on continent, country, region, ASN, or IP/CIDR. When a user makes a DNS request, Gcore uses the resolver’s location to decide which record to return.You can control traffic to achieve outcomes like:Directing European users to an EU-based CDN endpointSending users in regions with known service degradation to a fallback instanceBehind the scenes, this is done by setting up metadata pickers and specifying fallback behavior.For step-by-step guidance, see the official docs: Configure geo-balancing with Dynamic response.How to configure GeoDNS in Gcore DNSTo use geo-aware routing in Gcore DNS, you'll configure a dynamic response record set with the GeoDNS preset. This lets you return different IPs based on region, country, ASN, or IP/CIDR metadata.Basic stepsGo to DNS → Zones in the Gcore Customer Portal. 
(If you don’t have an account, you can sign up free and use Gcore DNS in just a few clicks.)Create or edit a record set (e.g., for app.example.com).Switch to Advanced mode.Enable Dynamic response.Choose the GeoDNS preset.Add responses per region or country.Define a fallback record for unmatched queries.For detailed step-by-step instructions, check out our docs.Once you’ve set this up, your config should look like the examples shown in the use cases below.Common use casesEach use case below includes a real-world scenario and a breakdown of how to configure it in Gcore DNS. These examples assume you're working in the DNS advanced mode zone editor with dynamic response enabled and the GeoDNS preset selected.The term “DNS setup” refers to the configuration you’d enter for a specific hostname in the Gcore DNS UI under advanced mode.1. Content localizationScenario: You're running example.com and want to serve language-optimized infrastructure for European and Asian users. This use case is often used to reduce TTFB, apply region-specific UX, or comply with local UX norms. If you're also localizing content (e.g., currency, language), make sure your app handles that via subdomains or headers in addition to routing.Objective:EU users → eu.example.comAsia users → asia.example.comAll others → global.example.comDNS setup:Host: www.example.comType: A TTL: 120 Dynamic response: Enabled Preset: GeoDNS Europe → 185.22.33.44 # EU-based web server Asia → 103.55.66.77 # Asia-based web server Fallback → 198.18.0.1 # Global web server2. Regional CDN failoverScenario: You’re using two CDN clusters: one in North America, one in Europe. If one cluster is unavailable, you want traffic rerouted regionally without impacting users elsewhere. 
To make this work reliably, you must enable DNS Healthchecks for each origin so that Gcore DNS can automatically detect outages and filter out unhealthy IPs from responses.Objective:North America → na.cdn.example.comEurope → eu.cdn.example.comEach region has its own fallbackDNS setup:Host: cdn.example.comType: A TTL: 60 Dynamic response: Enabled Preset: GeoDNS North America → 203.0.113.10 # NA CDN IP Backup (NA region only) → 185.22.33.44 # EU CDN as backup for NA Health check → Enabled for 203.0.113.10 with HTTP/TCP probe settingsEurope → 185.22.33.44 # EU CDN IP Backup (EU region only) → 203.0.113.10 # NA CDN as backup for EU Health check → Enabled for 185.22.33.44Note: Multi-level fallback by region isn’t supported inside one rule set—you need to separate them to keep routing decisions clean.3. Traffic steering for complianceScenario: You need to keep EU user data inside the EU for GDPR compliance while routing the rest of the world to lower-cost infrastructure elsewhere. This approach is useful for fintech, healthcare, or regulated SaaS workloads where regulatory compliance is a challenge.Objective:EU users → EU-only backendAll other users → Global backendDNS setup:Host: transactions.example.com Type: A TTL: 300 Dynamic response: Enabled Preset: GeoDNS Europe → 185.10.10.10 # EU regional API node Fallback → 198.51.100.42 # Global API nodeEdge casesGeoDNS works well, but it’s worth keeping in mind a few edge cases and limitations when you get set up.Resolver location ≠ user locationBy default, Gcore uses ECS (EDNS Client Subnet) for precise client subnet geo-balancing. If ECS isn’t present, resolver IP is used, which may skew location (e.g., public resolvers, mobile carriers). ECS usage can be disabled in the ManagedDNS UI if needed.Caching slows failoverEven if your upstream fails, users may have cached the original IP for minutes. Fallback + TTL tuning are key.No sub-regional precisionYou can route by continent, country, or ASN—but not city. 
City-level precision isn’t currently supported.

Gcore delivers simple solutions to big problems

Geo-aware routing is one of those features that quietly solves big problems, especially when your app or CDN runs globally. With Gcore DNS, you don’t need complex infrastructure to start optimizing traffic flow.

Geo-aware routing with Gcore DNS is a lightweight way to optimize performance, localize content, or handle regional failover. If you need greater precision, consider pairing GeoDNS with in-app geolocation logic or CDN edge logic. But for many routing use cases, DNS is the simplest and fastest way to go.

Get free-forever Gcore DNS with just a few clicks
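The region-matching and fallback behavior from the use cases above reduces to a lookup with a default. The following is a conceptual sketch of the routing decision, not Gcore's resolver code; the region labels and IPs mirror the content localization example.

```python
def resolve(client_region: str, rules: dict[str, str], fallback: str) -> str:
    """Return the response for the client's region, or the fallback
    record when no region-specific rule matches."""
    return rules.get(client_region, fallback)

# Rules mirroring the content localization example above.
geo_rules = {
    "Europe": "185.22.33.44",  # EU-based web server
    "Asia": "103.55.66.77",    # Asia-based web server
}
fallback_ip = "198.18.0.1"     # Global web server
```

A European client resolves to the EU server, while a client in an unmatched region falls through to the global record, exactly as the fallback rule in the DNS setup prescribes.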

3 underestimated security risks of AI workloads and how to overcome them
Artificial intelligence workloads introduce a fundamentally different security landscape for engineering and security teams. Unlike traditional applications, AI systems must protect not just endpoints and networks, but also training data pipelines, feature stores, model repositories, and inference APIs. Each phase of the AI life cycle presents distinct attack vectors that adversaries can exploit to corrupt model behavior, extract proprietary logic, or manipulate downstream outputs.

In this article, we uncover three security vulnerabilities of AI workloads and explain how developers and MLOps teams can overcome them. We also look at how investing in your AI security can save time and money, explore the challenges that lie ahead for AI security, and offer a simplified way to protect your AI workloads with Gcore.

Risk #1: data poisoning

Data poisoning is a targeted attack on the integrity of AI systems, where malicious actors subtly inject corrupted or manipulated data into training pipelines. The result is a model that behaves unpredictably, generates biased or false outputs, or embeds hidden logic that can be triggered post-deployment. This can undermine business-critical applications—from fraud detection and medical diagnostics to content moderation and autonomous decision-making.

For developers, the stakes are high: poisoned models are hard to detect once deployed, and even small perturbations in training data can have system-wide consequences.
Luckily, you can take a few steps to mitigate data poisoning and then implement zero-trust AI to further protect your workloads.

Mitigation and hardening
- Restrict dataset access using IAM, RBAC, or identity-aware proxies.
- Store all datasets in versioned, signed, and hashed formats.
- Validate datasets with automated schema checks, label distribution scans, and statistical outlier detection before training.
- Track data provenance with metadata logs and checksums.
- Block training runs if datasets fail predefined data quality gates.
- Integrate data validation scripts into CI/CD pipelines pre-training.
- Enforce zero-trust access policies for data ingestion services.

Solution integration: zero-trust AI
- Implement continuous authentication and authorization for each component interacting with data (e.g., preprocessing scripts, training jobs).
- Enable real-time threat detection during training using runtime security tools.
- Automate incident response triggers for unexpected file access or data source changes.

Risk #2: adversarial attacks

Adversarial attacks manipulate model inputs in subtle ways that trick AI systems into making incorrect or dangerous decisions. These perturbations—often imperceptible to humans—can cause models to misclassify images, misinterpret speech, or misread sensor data. In high-stakes environments like facial recognition, autonomous vehicles, or fraud detection, these failures can result in security breaches, legal liabilities, or physical harm.

For developers, the threat is real: even state-of-the-art models can be easily fooled without adversarial hardening. The good news? You can make your models more robust by combining defensive training techniques, input sanitization, and secure API practices.
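One of those sanitization layers can be illustrated without any ML framework: quantizing pixel intensities onto a coarser grid wipes out the low-amplitude perturbations many adversarial examples depend on. This is a minimal sketch assuming 8-bit pixel values; production pipelines would more likely use JPEG re-encoding or blurring.

```python
def sanitize_pixels(pixels: list[int], levels: int = 32) -> list[int]:
    """Snap 0-255 pixel values onto a coarse grid of `levels` buckets.

    Low-amplitude adversarial noise (e.g., +/-2 per pixel) rounds away,
    at the cost of slight image degradation.
    """
    step = 256 // levels  # width of each quantization bucket
    return [(p // step) * step for p in pixels]
```

Two inputs that differ only by a small perturbation quantize to the same sanitized values, so the model sees identical data either way.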
While encrypted inference doesn’t directly block adversarial manipulation, it ensures that sensitive inference data stays protected even if attackers attempt to probe the system.

Mitigation and hardening
- Use adversarial training frameworks like CleverHans or IBM ART to expose models to perturbed inputs during training.
- Apply input sanitization layers (e.g., JPEG re-encoding, blurring, or noise filters) before data reaches the model.
- Implement rate limiting and authentication on inference APIs to block automated adversarial probing.
- Use model ensembles or randomized smoothing to improve resilience to small input perturbations.
- Log and analyze input-output patterns to detect high-variance or abnormal responses.
- Test models regularly against known attack vectors using robustness evaluation tools.

Solution integration: encrypted inference

While encryption doesn't prevent adversarial inputs, it does mean that input data and model responses remain confidential and protected from observation or tampering during inference.
- Run inference in trusted environments like Intel SGX or AWS Nitro Enclaves to protect model and data integrity.
- Use homomorphic encryption or SMPC to process encrypted data without exposing sensitive input.
- Ensure that all intermediate and output data is encrypted at rest and in transit.
- Deploy access policies that restrict inference to verified users and approved applications.

Risk #3: model leakage of intellectual assets

Model leakage—or model extraction—happens when an attacker interacts with a deployed model in ways that allow them to reverse-engineer its structure, logic, or parameters. Once leaked, a model can be cloned, monetized, or used to bypass the very defenses it was meant to enforce. For businesses, this means losing competitive IP, compromising user privacy, or enabling downstream attacks.

For developers and MLOps teams, the challenge is securing deployed models in a way that balances performance and privacy.
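A first line of defense against model extraction is throttling, commonly implemented as a token bucket per API key. The sketch below is generic and illustrative, not Gcore's WAAP implementation; real deployments would tune the rate and capacity per model and client.

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second with bursts up to `capacity`,
    a common way to throttle model-extraction sweeps on inference APIs."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With rate=1 and capacity=2, a client can burst two requests and is then held to one per second, which drastically slows the high-volume querying that extraction attacks rely on.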
If you're exposing inference APIs, you’re exposing potential entry points—but with the right controls and architecture, you can drastically reduce the risk of model theft.

Mitigation and hardening
- Enforce rate limits and usage quotas on all inference endpoints.
- Monitor for suspicious or repeated queries that indicate model extraction attempts.
- Implement model watermarking or fingerprinting to trace unauthorized model use.
- Obfuscate models before deployment using quantization, pruning, or graph rewriting.
- Disable or tightly control any model export functionality in your platform.
- Sign and verify inference requests and responses to ensure authenticity.
- Integrate security checks into CI/CD pipelines to detect risky configurations—such as public model endpoints, export-enabled containers, or missing inference authentication—before they reach production.

Solution integration: native security integration
- Integrate model validation, packaging, and signing into CI/CD pipelines.
- Serve models from encrypted containers or TEEs, with minimal runtime exposure.
- Use container and image scanning tools to catch misconfigurations before deployment.
- Centralize monitoring and protection with tools like Gcore WAAP for real-time anomaly detection and automated response.

How investing in AI security can save your business money

From a financial point of view, the use of AI and machine learning in cybersecurity can lead to massive cost savings. Organizations that utilize AI and automation in cybersecurity have saved an average of $2.22 million per data breach compared to organizations without these protections in place. This is because the need for manual oversight is reduced, lowering the total cost of ownership and averting costly security breaches.
The initial investment in advanced security technologies yields returns through decreased downtime, fewer false positives, and an enhanced overall security posture.

Challenges ahead

While securing the AI lifecycle is essential, it’s still difficult to balance robust security with a positive user experience. Rigid scrutiny can add latency or produce false positives that stall operations, but AI-powered security can avoid such incidents.

Another concern organizations must contend with is how to keep AI models current. With threats changing so rapidly, today's newest model could easily be outdated by tomorrow’s. Solutions must be capable of ongoing learning so that security detection parameters can be revised.

Operational maturity is also a concern, especially for companies that operate in multiple geographies. Well-thought-out strategies and sound governance processes must accompany the integration of complex AI/ML tools with existing infrastructure, but automation still offers the most benefits by reducing the overhead on security teams and helping ensure consistent deployment of security policies.

Get ahead of AI security with Gcore

AI workloads introduce new and often overlooked security risks that can compromise data integrity, model behavior, and intellectual property. By implementing practices like zero-trust architecture, encrypted inference, and native security integration, developers can build more resilient and trustworthy AI systems. As threats evolve, staying ahead means embedding security at every phase of the AI lifecycle.

Gcore helps teams apply these principles at scale, offering native support for zero-trust AI, encrypted inference, and intelligent API protection. As an experienced AI and security solutions provider, our DDoS Protection and AI-enabled WAAP solutions integrate natively with Everywhere Inference and GPU Cloud across 210+ global points of presence. That means low latency, high performance, and proven, robust security, no matter where your customers are located.

Talk with our AI security experts and secure your workloads today

Flexible DDoS mitigation with BGP Flowspec
For customers who understand their own network traffic patterns, rigid DDoS protection can be more of a limitation than a safeguard. That’s why Gcore supports BGP Flowspec: a flexible, standards-based method for defining granular filters that block or rate-limit malicious traffic in real time, before it reaches your infrastructure.

In this article, we’ll walk through:
- What Flowspec is and how it works
- The specific filters and actions Gcore supports
- Common use cases, with example rule definitions
- How to activate and monitor Flowspec in your environment

What is BGP Flowspec?

BGP Flowspec (RFC 8955) extends Border Gateway Protocol to distribute traffic filtering rules alongside routing updates. Instead of static ACLs or reactive blackholing, Flowspec enables near-instantaneous propagation of mitigation rules across networks.

BGP tells routers how to reach IP prefixes across the internet. With Flowspec, those same BGP announcements can now carry rules, not just routes. Each rule describes a pattern of traffic (e.g., TCP SYN packets >1000 bytes from a specific subnet) and what action to take (drop, rate-limit, mark, or redirect).

What are the benefits of BGP Flowspec?

Most traditional DDoS protection services react to threats after they start, whether by blackholing traffic to a target IP, redirecting flows to a scrubbing center, or applying rigid, static filters. These approaches can block legitimate traffic, introduce latency, or be too slow to respond to fast-evolving attacks. Flowspec offers a more flexible alternative.

- Proactive mitigation: Instead of waiting for attacks, you can define known-bad traffic patterns ahead of time and block them instantly. Flowspec lets experienced operators prevent incidents before they start.
- Granular filtering: You’re not limited to blocking by IP or port. With Flowspec, you can match on packet size, TCP flags, ICMP codes, and more, enabling fine-tuned control that traditional ACLs or RTBH don’t support.
- Edge offloading: Filtering happens directly on Gcore’s routers, offloading your infrastructure and avoiding scrubbing latency.
- Real-time updates: Changes to rules are distributed across the network via BGP and take effect immediately, faster than manual intervention or standard blackholing.

You still have the option to block traffic during an active attack, but with Flowspec, you gain the flexibility to protect services with minimal disruption and greater precision than conventional tools allow.

Which parts of Flowspec does Gcore implement?

Gcore supports twelve Flowspec filter types and four actions.

Supported filter types

Gcore supports all 12 standard Flowspec match components.

| Filter field | Description |
| --- | --- |
| Destination prefix | Target subnet (usually your service or app) |
| Source prefix | Source of traffic (e.g., attacker IP range) |
| IP protocol | TCP, UDP, ICMP, etc. |
| Port / Source port | Match specific client or server ports |
| Destination port | Match destination-side service ports |
| ICMP type/code | Filter echo requests, errors, etc. |
| TCP flags | Filter packets by SYN, ACK, RST, FIN, or combinations |
| Packet length | Filter based on payload size |
| DSCP | Quality of service code point |
| Fragment | Match on packet fragmentation characteristics |

Supported actions

Gcore DDoS Protection supports the following Flowspec actions, which can be triggered when traffic matches a specific filter:

| Action | Description |
| --- | --- |
| traffic-rate (0x8006) | Throttle/rate-limit traffic by a bytes-per-second rate |
| redirect | Redirect traffic to an alternate location (e.g., scrubbing) |
| traffic-marking | Apply DSCP marks for downstream classification |
| no-action (drop) | Drop packets (rate-limit 0) |

Rule ordering

RFC 5575 defines the implicit order of Flowspec rules. The crucial point is that more specific announcements take precedence, not the order in which the rules are propagated. Gcore respects Flowspec rule ordering per RFC 5575.
More specific filters override broader ones. Future support for Flowspec v2 (with explicit ordering) is under consideration, pending vendor adoption.

Blackholing and extended blackholing (eBH)

Remote-triggered blackhole (RTBH) is a standardized protection method that the client manages via BGP: the client analyzes traffic and identifies the target of the attack (i.e., the destination IP address). This method protects against volumetric attacks. Customers using Gcore IP Transit can trigger immediate blackholing for attacked prefixes via BGP, using the well-known blackhole community tag 65000:666. All traffic to that destination IP is dropped at Gcore’s edge. The list of supported BGP communities is available here.

BGP extended blackhole

Extended blackhole (eBH) allows for more granular blackholing that does not affect legitimate traffic. For customers unable to implement Flowspec directly, Gcore supports eBH: you announce target prefixes with pre-agreed BGP communities, and Gcore translates them into Flowspec mitigations. To configure this option, contact our NOC at noc@gcore.lu.

Monitoring and limitations

Gcore can support several logging transports, including email and Slack.

If the number of Flowspec prefixes exceeds the configured limit, Gcore DDoS Protection stops accepting new announcements, but BGP sessions and existing prefixes will stay active. A notification is sent when the limit is reached.

How to activate

Activation takes just two steps:

1. Define rules on your edge router using the Flowspec NLRI format.
2. Announce the rules via BGP to Gcore’s intermediate control plane.

Gcore then validates and propagates the filters to border routers. Filters are installed on edge devices and take effect immediately. If attack patterns are unknown, you’ll first need to detect anomalies using your existing monitoring stack, then define the appropriate Flowspec rules.

Need help activating Flowspec?
Get in touch via our 24/7 support channels and our experts will be glad to assist.

Set up GRE and benefit from Flowspec today
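To make the rule definitions concrete, here is roughly what a filter for the pattern mentioned earlier (TCP SYN packets larger than 1000 bytes toward a protected address) might look like in Junos-style flow route syntax. Exact syntax varies by router vendor, so treat this as an illustrative sketch rather than a Gcore-specific format; the destination address is a documentation example.

```
routing-options {
    flow {
        route drop-large-syn {
            match {
                destination 203.0.113.10/32;   # protected service IP
                protocol tcp;
                tcp-flags syn;
                packet-length 1000-65535;      # oversized SYN packets only
            }
            then discard;                      # drop at the edge
        }
    }
}
```

Once announced via BGP, a rule like this propagates to the border routers and takes effect immediately, as described in the activation steps above.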