
Looking for a virtual server in Spain? Order it in Madrid

  • By Gcore
  • 1 min read

We have launched the 21st location of our hosting infrastructure in Madrid, a city with a population of 3 million.

Virtual servers in Madrid from €3.25 per month

Leasing a virtual server in Madrid with a basic configuration (200 Mbps connection speed and unlimited traffic) costs just €3.25 per month.

Advantages of our hosting in Madrid

Reliable data center

Virtual servers are installed in a certified Tier III data center located in Madrid.

Permanent free access to IPMI

We understand the importance of support in emergency situations. That is why we provide our customers with permanent free access to IPMI.

System crashed? No worries: you can always reinstall it remotely, on the fly, via IPMI.

Automatic installation of the operating system

When ordering any server from us, you save significant time on operating system installation: most Unix and Windows systems are installed automatically after the server is ordered. You can also provide your own ISO image to install your preferred operating system.

Instant server activation

We have automated the purchasing process—the server is activated immediately after payment, so there is no delay.

24/7 technical support

Our competent and responsive technical support team will answer most technical questions in English or Chinese.

In case of emergency, we involve technical experts who promptly solve the problem.

Don’t miss the opportunity to lease optimal virtual server configurations in Madrid while they are still available.

Order a virtual server in Madrid


Related articles

Cilium CNI is Now Available in Gcore Managed Kubernetes

We’re excited to announce that we now support Cilium in Gcore Managed Kubernetes. Cilium provides advanced networking and security capabilities, making it easier to manage large-scale Kubernetes deployments. It also offers flexible and robust network policy management, which is especially useful for organizations with strict security requirements. In this article, we’ll explore key Cilium features and benefits, compare it to Calico—another container network interface (CNI) that we support—and explain how to enable Cilium in Gcore Managed Kubernetes.

What Is Cilium?

Cilium is a CNI that provides powerful networking, security, and observability capabilities for container orchestration systems like Kubernetes. It’s based on eBPF (extended Berkeley Packet Filter) technology, which allows it to handle networking functions at high speed with minimal overhead. eBPF allows programs to run directly in the Linux kernel and offers broad functionality beyond basic filtering. As a result, Cilium enables the effortless management of clusters with a larger number of pods and nodes than CNIs based on previous-generation technologies like iptables.

Cilium CNI is an open-source CNCF (Cloud Native Computing Foundation) project that reached the “Graduated” maturity level in 2023, indicating its stability for production environments. It has increasingly been integrated into managed Kubernetes services.

Key Features of Cilium

Cilium offers three main sets of features, addressing networking, security, and observability. The most important elements of each are as follows.

Networking

  • High performance: Enables the creation and removal of thousands of containers in seconds, allowing the management of large and dynamic container environments.
  • L7 network policies: Supports OSI Layer 7 network policies for ingress and egress traffic based on application protocols such as HTTP and gRPC. Traditional L3 and L4 policies are also supported.
  • Layer 4 load balancer: Offers high-performance load balancing based on BGP, XDP, and eBPF.
  • Gateway API: Enables advanced routing capabilities beyond the limitations of the Ingress API, such as header modification, traffic splitting, and URL rewriting. Gateway API also provides a fully functional, no-sidecar service mesh, eliminating the need for additional tools like Istio and their associated resource overhead.

Security

  • Policy enforcement modes: Offers three levels of rule enforcement for how endpoints accept traffic, from less restrictive to more restrictive. These suit organizations with varying security requirements.
  • Inter-node traffic control: Supports cluster-wide, non-namespaced policies that let you specify nodes as source and destination, making it easy to filter traffic between different node groups.
  • Transparent encryption: Enables pod-to-pod encryption, with optional features such as datapath encryption via in-kernel IPsec or WireGuard and automatic key rotation with overlapping keys.

Observability

  • Service map: Integrates with Hubble, which provides real-time monitoring of traffic and service interactions, visually represented through a dynamic service connection diagram. Support for an out-of-the-box Hubble UI will be introduced in 2024.
  • Metrics and tracing export: Exports metrics and traces so users can monitor and streamline their Kubernetes environments.

What Types of Workloads Can Benefit from Cilium?

Let’s take a look at some examples of workloads that can benefit significantly from using Cilium CNI.

  • Microservices: Cilium’s L7 awareness and granular security policies are well suited to enforcing communication control between tightly coupled microservices that use API-level security for protocols like HTTP and gRPC. Its eBPF-based performance helps maintain low latency and high throughput in highly dynamic microservice environments such as messaging systems and authentication and authorization services.
  • Security-sensitive workloads: Cilium’s identity-based security and advanced network policies strengthen security for workloads that require robust protection, such as financial services, government applications, and healthcare.
  • High-performance computing (HPC): Cilium’s efficient network processing and low latency benefit HPC workloads that require fast and trusted communication between nodes, such as analytical systems and database management systems.

Cilium vs. iptables-Based Calico

In Gcore Managed Kubernetes, we also provide another popular CNI: Calico, which is built on top of iptables. Calico, while simple and reliable, does not perform as well in large-scale clusters and lacks many of Cilium’s advanced features.

Calico adds complicated logic to container networking, such as the iptables PREROUTING, POSTROUTING, and FORWARD chains. In contrast, the eBPF implementation in Cilium has no extra layers of network abstraction; it works in the Linux kernel itself, which makes it very fast. Here is a comparison between iptables-based networking and eBPF-based networking that shows the additional logic in Calico.

Figure 1: eBPF container networking compared to standard iptables-based networking (Source: cilium.io)

As a result, Cilium passes more traffic with less delay than Calico, given the same resources and conditions. This enhanced throughput is a particular advantage for applications that require access to extensive data, media streaming services, and data upload/download services.

Until now, we couldn’t support deployments with more than 110 pods per node because of Calico’s technical limitations. With Cilium, we can support three times that number. Given that we offer Gcore Bare Metal worker nodes, this is a huge benefit for customers who prefer to run large Kubernetes clusters on bare metal servers. However, if Calico meets your needs, you can still use it in your Gcore Managed Kubernetes clusters.

How to Enable Cilium in Gcore Managed Kubernetes

Select Cilium as your CNI when creating a Kubernetes cluster. The process is as follows:

  1. Log in to the Gcore Customer Portal. If you are not registered, sign up using your email, Google, or GitHub account.
  2. From the vertical menu on the left, select Cloud, open the Kubernetes tab, and click Create Cluster.

Figure 2: Creating a Kubernetes cluster

  3. In the “CNI Provider” section, select Cilium.

Figure 3: Choosing a CNI provider

  4. Complete the cluster setup and click Create Cluster. If you need more information on how to configure a cluster, please refer to our Managed Kubernetes documentation.

Once you have connected to your cluster, you can configure the necessary Cilium policies and use them in your Gcore Managed Kubernetes installation. For example, here is a policy with a simple ingress rule that allows communication between endpoints labeled frontend and backend:

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "l3-rule"
    spec:
      endpointSelector:
        matchLabels:
          role: backend
      ingress:
        - fromEndpoints:
            - matchLabels:
                role: frontend

See the Cilium documentation and GitHub for more examples of policies that you can customize to your needs. You can also use Network Policy Editor, which provides a simple and user-friendly interface. It allows you to create policies and use the corresponding YAML in your Kubernetes clusters.

Future Plans: Hubble + Cilium

We plan to integrate out-of-the-box support for Hubble into Cilium later this year. Hubble, an open-source tool developed specifically for Cilium, automatically detects all services within a cluster and maps their interactions. This service map is accessible through any web browser. Using Hubble’s visualizations, you can gain a deeper understanding of service interdependencies and behaviors within your cluster, enabling quicker identification and resolution of network interaction issues. We’ll keep you posted as the feature is released and explain its benefits in more detail.

Conclusion

We’re constantly working to enhance our offerings with the latest technologies to meet the evolving needs of our customers. Cilium represents one of these significant advancements. It integrates seamlessly into Gcore Managed Kubernetes, enabling our customers to use advanced networking and security capabilities without complex configuration or setup.

Gcore Managed Kubernetes takes care of setting up and maintaining Kubernetes clusters for you. Our team manages the master nodes (control plane) while you maintain full control over your worker nodes. Choose from Virtual Instances and Bare Metal Servers as worker nodes, including those powered by GPU accelerators to boost your AI/ML workloads. We provide free, production-grade cluster management with a 99.9% SLA for your peace of mind.

Explore Gcore Managed Kubernetes
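
If you generate policies programmatically rather than writing YAML by hand, the same manifest can be built from plain data structures; kubectl accepts JSON as well as YAML. Below is a minimal, standard-library-only Python sketch. The policy name and role labels mirror the l3-rule example from the article; the helper function itself is illustrative and not part of Cilium or Gcore tooling.

```python
import json

def cilium_l3_policy(name: str, to_role: str, from_role: str) -> dict:
    """Build a CiliumNetworkPolicy manifest allowing ingress from
    endpoints labeled `from_role` to endpoints labeled `to_role`."""
    return {
        "apiVersion": "cilium.io/v2",
        "kind": "CiliumNetworkPolicy",
        "metadata": {"name": name},
        "spec": {
            "endpointSelector": {"matchLabels": {"role": to_role}},
            "ingress": [
                {"fromEndpoints": [{"matchLabels": {"role": from_role}}]}
            ],
        },
    }

if __name__ == "__main__":
    # Same rule as the YAML above: allow frontend -> backend.
    policy = cilium_l3_policy("l3-rule", "backend", "frontend")
    print(json.dumps(policy, indent=2))  # can be piped to `kubectl apply -f -`
```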


Run an OpenVPN Server on Ubuntu Using a Gcore OpenVPN Instance

In this tutorial, we will explain how to run an OpenVPN server on Ubuntu using a preconfigured Gcore Cloud OpenVPN instance. The main advantage of this approach is that it saves you time and effort: your VPN server is ready for use in just 2–5 minutes. You simply need to create an instance, download the OpenVPN configuration, and install the OpenVPN client on your device. No manual work on the command line is required.

What Is OpenVPN?

OpenVPN is an open-source Virtual Private Network (VPN) application. It is a powerful tool that allows you to connect securely to your server from anywhere in the world and use that server as a VPN.

How to Run and Use OpenVPN

Step 1. Create a Virtual Machine with an OpenVPN Server

First, let’s create a virtual machine with an OpenVPN server:

  1. Log in to your Gcore Cloud account. If you don’t have a Gcore Cloud account yet, sign up.
  2. Go to Cloud and select Projects.
  3. Click Create project and fill in the Name field. Projects are groups of separate Cloud resources, and these groups are isolated from one another. The isolation gives you the ability to set user rights for each project.
  4. In your project, click Create Instance. Here’s what you’ll see:

Figure 1: Create an instance

  5. Select one of the available regions.
  6. In the Image section, select Marketplace. Click the Openvpn Latest image.

Figure 2: Select “Openvpn Latest”

  7. Set the following parameters:
    • App Template Configuration: external URL, if you have a domain.
    • Type: we recommend 1 vCPU / 2 GB RAM.
    • Volume: choose any volume type with 10 GB.
    • Network: set by default to a public IP.
    • Firewall: select “Default” with the “Add application ports to firewall group” flag.
    • SSH Key: choose your public SSH key or generate a new one.
    • Instance name: openvpn-server (or whatever you want).

Once you’ve completed the setup steps, click Create Instance. The virtual machine will appear in the “Instances” list. Wait until the virtual machine’s status changes from “Creating” to “Power On”.

Figure 3: The process of creating the virtual machine

Step 2. Download the OpenVPN Configuration to Your Device

Wait five minutes after the virtual machine is powered on. You can then download the OpenVPN configuration, which will allow you to connect to the OpenVPN server.

On your device, type the virtual machine’s public IP address into the browser’s address bar, as follows:

    http://<your_public_ip_address>

For example:

    http://202.78.166.105

Press “Enter.” Here’s what you’ll see:

Figure 4: Download the OpenVPN configuration

Confirm by clicking “Allow.” The configuration file will be downloaded.

Step 3. Download an OpenVPN Client

Download the OpenVPN client to your device. Navigate to the OpenVPN Community Downloads page, select the appropriate installer for your operating system, and follow the installation instructions. Once the client is installed, you can use the OpenVPN configuration that you downloaded in Step 2 to connect to your OpenVPN server.

Step 4. Apply the OpenVPN Configuration and Check How It Works

Next, we can apply the OpenVPN configuration. For this example, we will use screenshots from the macOS installer.

  1. Open the OpenVPN client.
  2. Import the configuration file you downloaded in Step 2.

Figure 5: Import the configuration file

  3. Once imported, connect to the OpenVPN server.

Figure 6: Connect to the OpenVPN server

Congratulations! You have successfully connected to the OpenVPN server. You can now use it to make secure connections from anywhere in the world.

Conclusion

In this tutorial, we explained how to run an OpenVPN server on Ubuntu. Check out our other articles dedicated to setting up different types of software on Gcore Cloud instances:

  • How to Run Grafana on Ubuntu Server
  • How to Set Up macOS on Bare Metal with Ubuntu Using Docker
  • How to Set Up Odoo on Ubuntu Using Docker
  • How to Install nginx on Kubernetes Using Helm
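
Before importing the downloaded file into a client, it can be useful to sanity-check that it looks like a complete OpenVPN client profile. The sketch below is illustrative and not part of the Gcore setup: the directive names checked (`client`, `remote`, an inline `<ca>` block) are standard OpenVPN client-profile elements, and the sample profile is a hypothetical, abbreviated stand-in for a real downloaded file.

```python
def looks_like_client_profile(text: str) -> bool:
    """Rough check that an .ovpn file contains the directives a client
    profile normally needs: the `client` mode flag, at least one
    `remote <host> [port]` line, and an inline CA certificate block."""
    lines = [ln.strip() for ln in text.splitlines()]
    has_client = "client" in lines
    has_remote = any(ln.startswith("remote ") for ln in lines)
    has_ca = "<ca>" in lines and "</ca>" in lines
    return has_client and has_remote and has_ca

# Hypothetical, abbreviated profile for illustration only.
sample = """\
client
dev tun
proto udp
remote 202.78.166.105 1194
<ca>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
</ca>
"""
print(looks_like_client_profile(sample))  # → True
```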

Training in the Sovereign Cloud, Deploying at the Edge: Part 2

In part one of this article, we explained the critical importance of training AI models in the sovereign cloud and the two options available for doing so. In this part, we move on to deploying trained models at the edge.

What Are the Benefits of Deploying AI Models at the Edge?

Edge computing helps meet compliance with data sovereignty and residency laws, but its benefits go far beyond regulatory obligations. Deploying AI models at the edge introduces several advantages that enhance both operational efficiency and user experience. Here are the key benefits of considering an edge approach when deploying AI models within a sovereign cloud environment.

Simplified Adherence to Regional AI Regulations

Edge deployments offer significant advantages in tailoring AI models to meet local or regional standards. This is particularly beneficial in multi-jurisdictional environments, like global businesses, where data is subject to different regulatory regimes. Many countries have unique regulations, cultural preferences, and operational requirements that must be addressed, and edge computing allows organizations to customize AI deployments for each location. For example, an AI model deployed in the healthcare sector in Europe may need to comply with GDPR, while a similar model in the United States may need to follow HIPAA regulations.

By deploying models locally, organizations can ensure that each model is optimized for the legal, regulatory, and technical demands of the region where it operates. This level of customization also allows organizations to fine-tune models to better align with regional preferences, language, and behavior, creating a more tailored and relevant user experience.

Enhanced Privacy and Security

The regulations mentioned above are designed to improve the privacy and security of those whose data is used in training and of end users who engage in inference, so it’s logical that edge computing offers a privacy advantage. Here’s how it works.

By processing data locally at the edge, sensitive information spends less time traveling across public networks, reducing the risk of interception or cyberattacks. With edge computing, data can be processed within secure, geographically bound environments, ensuring that it stays within specific regulatory jurisdictions. In contrast to a centralized system where all data is pooled together—potentially creating a single point of failure—edge computing decentralizes data processing, making it easier to isolate and protect individual models and data sets. This approach not only minimizes the exposure of sensitive data but also helps organizations comply with local security standards and privacy regulations.

Reduced Latency and Improved Performance

Keeping data local means reduced latency for end users. Instead of sending data back and forth to a central server that could be hundreds or thousands of kilometers away, edge-deployed models operate in close proximity to where the data is produced. This proximity dramatically reduces response times, allowing AI models to make real-time predictions and decisions more efficiently. For applications that require near-instantaneous feedback, such as chatbots, autonomous vehicles, real-time video analytics, or industrial automation, deploying AI at the edge can significantly improve performance and user experience, like getting rid of those pesky lags in ChatGPT or AI image generation.

Bandwidth Efficiency and Cost Savings

Another advantage of edge computing is its ability to optimize bandwidth usage and reduce overall network costs. Centralized cloud architectures often require vast amounts of data to be transmitted back and forth between the user and a remote data center, consuming significant bandwidth and generating high network costs. Edge computing reduces this burden by processing data closer to where it is generated, minimizing the amount of data that needs to be transmitted over long distances.

For AI applications that involve large data sets, such as real-time video streaming or IoT sensor data, processing and analyzing this information at the edge reduces the need for excessive network traffic, lowering both costs and the strain on network infrastructure. Organizations can save on data transfer fees while also freeing up bandwidth for other critical processes.

Increased Scalability and Flexibility

Edge computing offers flexibility by distributing workloads across multiple geographic locations, enabling organizations to scale their AI deployments more easily. As business needs evolve, edge infrastructure can be expanded incrementally by adding more nodes at specific locations, without the need to overhaul an entire centralized data center. This scalability is particularly valuable for organizations operating across multiple regions, as it allows for seamless adaptation to local demand. Whether handling a surge in user activity or deploying a new AI model in a different region, edge computing provides the agility to adjust quickly to changing conditions.

Model Drift Detection

Edge computing also helps detect model drift faster by continuously comparing real-time data at the edge against the original training data. This allows organizations to quickly identify performance issues and keep models compliant with regulations, ensuring better overall accuracy.

Improved Reliability and Business Continuity

Finally, edge computing enhances the reliability and resiliency of AI operations. In a centralized cloud model, disruptions at a single data center can lead to widespread service outages. Edge computing’s distributed architecture ensures that even if one node or location experiences an issue, other edge locations continue to function independently, minimizing downtime. This decentralized structure is particularly beneficial for critical applications that require constant availability, such as healthcare systems, financial services, or industrial automation. By deploying AI models at the edge, organizations can ensure greater continuity of service and improve their disaster recovery capabilities.

Train in the Sovereign Cloud and Deploy at the Edge with Gcore

Deploying AI models in a sovereign cloud and utilizing edge computing can help secure compliance with regional data laws, enhance performance, and provide greater flexibility and scalability. By localizing data processing and training, organizations can meet multi-jurisdictional regulations, reduce latency, improve security, and achieve cost savings, making edge and sovereign cloud solutions essential for modern AI deployments.

Gcore Edge AI offers complete AI lifecycle infrastructure, including sovereign cloud training in multiple locations, including the EU, and inference at the edge on best-in-class NVIDIA L40S GPUs across 180+ globally distributed edge points of presence. Simplify your AI training and deployment with our integrated approach.

Discover how to deploy your AI models globally with Gcore Inference at the Edge
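
The latency argument above is easy to put in rough numbers. The sketch below is a back-of-envelope illustration under stated assumptions: signals in optical fiber propagate at roughly 200,000 km/s (about two-thirds of the speed of light in vacuum), and routing, queuing, and processing delays are ignored, so real round-trip times will be higher than these lower bounds.

```python
FIBER_SPEED_KM_PER_MS = 200.0  # ~2/3 of c; assumption for this estimate

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber: there and back,
    propagation delay only."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# Propagation delay alone, nearby edge node vs. distant central region:
print(min_rtt_ms(100))   # edge PoP ~100 km away: prints 1.0 (ms)
print(min_rtt_ms(5000))  # remote region ~5,000 km away: prints 50.0 (ms)
```

Even before congestion or server load, distance alone puts a hard floor under response times, which is why proximity matters for real-time inference.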

Introducing Gcore for Startups: created for builders, by builders

Building a startup is tough. Every decision about your infrastructure can make or break your speed to market and burn rate. Your time, team, and budget are stretched thin. That’s why you need a partner that helps you scale without compromise.

At Gcore, we get it. We’ve been there ourselves, and we’ve helped thousands of engineering teams scale global applications under pressure. That’s why we created the Gcore Startups Program: to give early-stage founders the infrastructure, support, and pricing they actually need to launch and grow.

“At Gcore, we launched the Startups Program because we’ve been in their shoes. We know what it means to build under pressure, with limited resources and big ambitions. We wanted to offer early-stage founders more than just short-term credits and fine print; our goal is to give them robust, long-term infrastructure they can rely on.”
Dmitry Maslennikov, Head of Gcore for Startups

What you get when you join

The program is open to startups across industries, whether you’re building in fintech, AI, gaming, media, or something entirely new. Here’s what founders receive:

  • Startup-friendly pricing on Gcore’s cloud and edge services
  • Cloud credits to help you get started without risk
  • White-labeled dashboards to track usage across your team or customers
  • Personalized onboarding and migration support
  • Go-to-market resources to accelerate your launch

You also get direct access to all Gcore products, including Everywhere Inference, GPU Cloud, Managed Kubernetes, Object Storage, CDN, and security services. They’re available globally via our single, intuitive Gcore Customer Portal and ready for your production workloads.

“When startups join the program, they get access to powerful cloud and edge infrastructure at startup-friendly pricing, personal migration support, white-labeled dashboards for tracking usage, and go-to-market resources. Everything we provide is tailored to the specific startup’s unique needs and designed to help them scale faster and smarter.”
Dmitry Maslennikov

Why startups are choosing Gcore

We understand that performance and flexibility are key for startups. From high-throughput AI inference to real-time media delivery, our infrastructure was designed to support demanding, distributed applications at scale. But what sets us apart is how we work with founders. We don’t force startups into rigid plans or abstract SLAs. We build with you 24/7, because we know your hustle isn’t a 9–5.

One recent success story: an AI startup that migrated from a major hyperscaler told us they cut their inference costs by over 40%…and got actual human support for the first time.

“What truly sets us apart is our flexibility: we’re not a faceless hyperscaler. We tailor offers, support, and infrastructure to each startup’s stage and needs.”
Dmitry Maslennikov

We’re excited to support startups working on AI, machine learning, video, gaming, and real-time apps. Gcore for Startups is delivering serious value to founders in industries where performance, cost efficiency, and responsiveness make or break the product experience.

Ready to scale smarter?

Apply today and get hands-on support from engineers who’ve been in your shoes. If you’re an early-stage startup with a working product and funding (pre-seed to Series A), we’ll review your application quickly and tailor infrastructure that matches your stage, stack, and goals. To get started, head over to our Gcore for Startups page and book a demo.

Discover Gcore for Startups

The cloud control gap: why EU companies are auditing jurisdiction in 2025

Europe’s cloud priorities are changing fast, and rightly so. With new regulations taking effect, concerns about jurisdictional control rising, and trust becoming a key differentiator, more companies are asking a simple question: who really controls our data?

For years, European companies have relied on global cloud giants headquartered outside the EU. These providers offered speed, scale, and a wide range of services. But 2025 is a different landscape.

Recent developments have shown that data location doesn’t always mean data protection. A service hosted in an EU data center may still be subject to laws from outside the EU, such as the US CLOUD Act, which can require the provider to hand over customer data regardless of where it’s stored. For regulated industries, government contractors, and data-sensitive businesses, that’s a growing problem. Sovereignty today goes beyond compliance: it’s central to business trust, operational transparency, and long-term risk management.

Rising risks of non-EU cloud dependency

In 2025, the conversation has shifted from “is this provider GDPR-compliant?” to “what happens if this provider is forced to act against our interests?” Here are three real concerns European companies now face:

- Foreign jurisdiction risk: cloud providers based outside Europe may be legally required to share customer data with foreign authorities, even if it’s stored in the EU.
- Operational disruption: geopolitical tensions or executive decisions abroad could affect service availability or create new barriers to access.
- Reputational and compliance exposure: customers and regulators increasingly expect companies to use providers aligned with European standards and legal protections.

European leaders are actively pushing for “full-stack European solutions” across cloud and AI infrastructure, citing sovereignty and legal clarity as top concerns. Leading European firms like Deutsche Telekom and Airbus have criticized proposals that would grant non-EU tech giants access to sensitive EU cloud data. This reinforces a broader industry consensus: jurisdictional control is a serious strategic issue for European businesses across industries. Relying on foreign cloud services introduces risks that no business can control, and that few can absorb.

What European companies must do next

European businesses can’t wait for disruption to happen. To build resilience now, before problems occur, they should:

- Audit their cloud stack to identify where data is located and which legal jurisdictions apply to it.
- Repatriate sensitive workloads to EU-based providers with clear legal accountability frameworks.
- Consider hybrid or multi-cloud architectures that blend hyperscaler agility with EU sovereign assurance.

Over 80% of European firms using cloud infrastructure are actively exploring or migrating to sovereign solutions. This is a smart strategic move in an increasingly complex and regulated cloud landscape.

Choosing a futureproof path

If your business depends on the cloud, sovereignty should be part of your planning. It’s not about political trends or buzzwords. It’s about control, continuity, and credibility. European cloud providers like Gcore support organizations in achieving key sovereignty milestones:

- EU legal jurisdiction over data
- Alignment with sectoral compliance requirements
- Resilience to legal and geopolitical disruption
- Trust with EU customers, partners, and regulators

In 2025, that’s a serious competitive edge, one that shows your customers you take their data protection seriously. A European provider is quickly becoming non-negotiable for European businesses.

Want to explore what digital sovereignty looks like in practice? Gcore’s infrastructure is fully self-owned, jurisdictionally transparent, and compliant with EU data laws. As a European provider, we understand the legal, operational, and reputational demands on EU businesses. Talk to us about sovereignty strategies for cloud, AI, network, and security that protect your data, your customers, and your business. We offer a free, customized consultation to help your European business prepare for sovereignty challenges.

Auditing your cloud stack is the first step. Knowing what to look for in a provider comes next: not all EU-based cloud providers guarantee sovereignty. Learn what to evaluate in infrastructure, ownership, and legal control to make the right decision.

Learn how to verify EU cloud control in our blog
