
Deploying bulletproof PAM with Gravitational Teleport

  • By Gcore
  • March 20, 2023
  • 6 min read

Introduction

According to one study, a cybersecurity attack occurs every 39 seconds. Defending against such attacks has become of paramount importance to how businesses operate today. The traditional approach to securing IT infrastructure relies on perimeter-oriented solutions such as firewalls, VPNs, and password policies. While these are all good and necessary, they don’t address the attack vectors that are built into an organization at the code or infrastructure level.

Many companies are addressing these issues with less traditional cloud security approaches that span compliance, DevOps, and engineering. At the DevOps level in particular, many have found that protecting infrastructure starts and ends with the Secure Shell (SSH) protocol.

In this tutorial, we are going to take you through the steps of deploying a privileged access management (PAM) system with Gravitational’s Teleport. Teleport is a compliance and SSH gateway for multi-cloud privileged access. It allows you to isolate secure infrastructure and enforce security best practices such as 2FA.

Install Teleport

To get started, head over to the Teleport Downloads page and grab the latest binary. Next, open up a terminal and run:

$ tar -xzf teleport-binary-release.tar.gz
$ sudo make install

Another option is installing from source. Teleport is written in Go and requires Go v1.8.3 or newer to build. To install from source, run these commands:

# get the source & build:
$ mkdir -p $GOPATH/src/github.com/gravitational
$ cd $GOPATH/src/github.com/gravitational
$ git clone https://github.com/gravitational/teleport.git
$ cd teleport
$ make full
# create the default data directory before starting:
$ sudo mkdir -p /var/lib/teleport

Start Teleport

Next, create a directory for Teleport to keep its data. By default it’s /var/lib/teleport/. Then start the teleport daemon.

$ mkdir -p /var/lib/teleport

$ teleport start # if you are not `root` you may need `sudo`

By default, Teleport services bind to 0.0.0.0. If you ran teleport without any configuration or flags, you should see this output in your console or logfile:

[AUTH]  Auth service is starting on 0.0.0.0:3025
[PROXY] Reverse tunnel service is starting on 0.0.0.0:3024
[PROXY] Web proxy service is starting on 0.0.0.0:3080
[PROXY] SSH proxy service is starting on 0.0.0.0:3023
[SSH]   Service is starting on 0.0.0.0:3022
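If you want to confirm those listeners from a script, a quick probe of the default ports with bash’s built-in /dev/tcp works. This is a sketch, not part of Teleport itself, and it assumes you are running it on the same host as the Teleport daemon:

```shell
# Probe each default Teleport port on localhost.
# /dev/tcp is a bash built-in; a refused connection means the port is closed.
for port in 3022 3023 3024 3025 3080; do
  if timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/$port" 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```

If any port reports closed, check the daemon’s log output before moving on.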

Congratulations – you are now running Teleport!

Create a user signup token

We now have Teleport running, but there aren’t any users recognized by Teleport Auth yet. We need to create one for your OS user. In this example the OS user is teleport and the hostname of the node is grav-00.

# A new Teleport user will be assigned a
# mapping to an OS user of the same name
# This is the same as running `tctl users add teleport teleport`
[teleport@grav-00 ~]$ tctl users add teleport
Signup token has been created and is valid for 1 hours. Share this URL with the user:
https://grav-00:3080/web/newuser/3a8e9fb6a5093a47b547c0f32e3a98d4
NOTE: Make sure grav-00:3080 points at a Teleport proxy which users can access.

If you want to map to a different OS user, james for instance, you can specify it like so: tctl users add teleport james. You can also assign multiple mappings like this: tctl users add teleport james,joe,root.

You now have a signup token for the Teleport User teleport and will need to open this URL in a web browser to complete the registration process.

Register a user

  • If the machine where you ran these commands has a web browser installed, you should be able to open the browser and connect to the Teleport Proxy via the URL right away.
  • If you are working on a remote machine, you may need to reach the Teleport Proxy via the host machine on port 3080 in a web browser. One simple way to do this is to temporarily append [HOST_IP] grav-00 to /etc/hosts.
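A cautious way to make that temporary /etc/hosts change is to rehearse it on a copy first. In this sketch, HOST_IP is a placeholder for the address of the machine running Teleport; substitute your own:

```shell
# HOST_IP is a placeholder; substitute the address of the Teleport host.
HOST_IP=10.0.0.5
# Rehearse the change on a copy of /etc/hosts before touching the real file.
cp /etc/hosts hosts.tmp
echo "$HOST_IP grav-00" >> hosts.tmp
grep 'grav-00' hosts.tmp
rm hosts.tmp
# The real (temporary) edit, to be reverted after registration:
# echo "$HOST_IP grav-00" | sudo tee -a /etc/hosts
```

Remember to remove the entry once registration is complete, or replace it with proper DNS.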

Teleport enforces two-factor authentication by default. If you do not already have Google Authenticator, Authy or another 2FA client installed, you will need to install it on your smartphone. Then you can scan the QR code on the Teleport login web page, pick a password and enter the two-factor token.

After completing the registration you will be logged in automatically.
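The two-factor token you just entered is standard TOTP (RFC 6238), which is HOTP (RFC 4226) keyed to a 30-second time step. As a minimal illustration of what your authenticator app computes, here is a sketch in shell. It assumes openssl and xxd are available; the hex key below is the RFC test key, not anything Teleport generates:

```shell
# HOTP (RFC 4226): derive a 6-digit code from a hex key and a counter.
hotp() {
  local hexkey=$1 counter=$2
  # 8-byte big-endian counter, HMAC-SHA1'd with the key
  local mac
  mac=$(printf '%016x' "$counter" | xxd -r -p \
        | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$hexkey" \
        | awk '{print $NF}')
  # dynamic truncation: low 4 bits of the last byte pick a 4-byte window
  local offset=$(( 16#${mac:39:1} ))
  local dbc=$(( 16#${mac:$((offset * 2)):8} & 0x7fffffff ))
  printf '%06d\n' $(( dbc % 1000000 ))
}

# TOTP (RFC 6238) = HOTP with counter = unix time / 30
totp() { hotp "$1" $(( $(date +%s) / 30 )); }

# RFC test key "12345678901234567890" in hex; counter 1 yields 287082
hotp 3132333435363738393031323334353637383930 1   # prints 287082
```

The server accepts a token if it matches the code for the current (or an adjacent) time step, which is why your phone’s clock needs to be roughly in sync.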

Log in via CLI

Let’s log in using the tsh command-line tool. Just as in the previous step, you will need to be able to resolve the hostname of the cluster to a network-accessible IP.

# here grav-00 is a resolvable hostname on the same network
# --proxy can be an IP, hostname, or URL
[teleport@grav-00 ~]$ tsh --proxy=grav-00 --insecure login
WARNING: You are using insecure connection to SSH proxy https://grav-00:3080
Enter password for Teleport user teleport:
Enter your OTP token:
XXXXXX
WARNING: You are using insecure connection to SSH proxy https://grav-00:3080
> Profile URL:  https://grav-00:3080
  Logged in as: teleport
  Cluster:      grav-00
  Roles:        admin*
  Logins:       teleport
  Valid until:  2019-10-05 02:01:36 +0000 UTC [valid for 12h0m0s]
  Extensions:   permit-agent-forwarding, permit-port-forwarding, permit-pty
* RBAC is only available in Teleport Enterprise
  https://gravitational.com/teleport/docs/enterprise

Start a recorded session

At this point, you have authenticated with Teleport Auth and can start a recorded SSH session. Since you logged in as the teleport user in the last step, --user defaults to teleport.

$ tsh ssh --proxy=grav-00 grav-00
$ echo 'howdy'
howdy
# run whatever you want here, this is a regular SSH session.

Note: the tsh client always requires the --proxy flag.

Your command prompt may not look different, but you are now in a new SSH session which has been authenticated by Teleport!

Try a few things to get familiar with recorded sessions:

  1. Navigate to https://[HOST]:3080/web/sessions in your web browser to see the list of current and past sessions on the cluster. The session you just created should be listed.
  2. After you end a session, replay it in your browser.
  3. Join the session in your web browser.

Here we’ve started two recorded sessions on the node grav-00: one via the web browser and one in the command line. Notice that these are distinct SSH sessions even though we logged in with the same user. In the next step, you’ll learn how to join a shared session.

Join a session on the CLI

One of the most important features of Teleport is the ability to share a session between users. If you joined your active session in your browser in the previous step you will see the complete session history of the recorded session in the web terminal.

Joining a session via a browser is often the easiest way to see what another user is up to, but if you have access to the proxy server from your local machine (or any machine) you can also join a session on the CLI.

# This is the recorded session you started in Step 6
$ tsh ssh --proxy=grav-00 grav-00
$ echo 'howdy'
howdy
# you might have run more stuff here...
$ teleport status
Cluster Name: grav-00
Host UUID   : a3f67090-99cc-45cf-8f70-478d176b970e
Session ID  : cd908432-950a-4493-a561-9c272b0e0ea6
Session URL : https://grav-00:3080/web/cluster/grav-00/node/a3f67090-99cc-45cf-8f70-478d176b970e/teleport/cd908432-950a-4493-a561-9c272b0e0ea6

Copy the Session ID and open a new SSH session.

%~$ tsh join -d --proxy grav-00 --insecure cd908432-950a-4493-a561-9c272b0e0ea6
# you will be asked to re-authenticate your user
$ echo 'howdy'
howdy
# you might have run more stuff here...
$ teleport status
Cluster Name: grav-00
Host UUID   : a3f67090-99cc-45cf-8f70-478d176b970e
Session ID  : cd908432-950a-4493-a561-9c272b0e0ea6
Session URL : https://grav-00:3080/web/cluster/grav-00/node/a3f67090-99cc-45cf-8f70-478d176b970e/teleport/cd908432-950a-4493-a561-9c272b0e0ea6
$ echo "Awesome!"
# check out your shared ssh session between two CLI windows
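If you script around sessions, you don’t have to copy the Session ID by hand; it can be pulled out of the teleport status output with awk. This sketch parses a captured sample of the output above:

```shell
# Sample `teleport status` output from the session above, captured as text.
status='Cluster Name: grav-00
Host UUID   : a3f67090-99cc-45cf-8f70-478d176b970e
Session ID  : cd908432-950a-4493-a561-9c272b0e0ea6'
# Pull the value after "Session ID"; the field separator eats the colon and padding.
session_id=$(printf '%s\n' "$status" | awk -F': *' '/^Session ID/ {print $2}')
echo "$session_id"   # cd908432-950a-4493-a561-9c272b0e0ea6
# Hand the ID straight to tsh (needs a live cluster, so commented out here):
# tsh join --proxy=grav-00 "$session_id"
```

In a live session you would capture the output of teleport status directly instead of a hard-coded sample.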

Authentication

Teleport uses the concept of “authentication connectors” to authenticate users when they execute the tsh login command. There are four types of authentication connectors:

LOCAL CONNECTOR

Local authentication is used to authenticate against a local Teleport user database, which is managed by the tctl users command. Teleport also supports two-factor authentication (2FA) for the local connector. There are three possible values (types) of 2FA:

  • otp is the default. It implements the TOTP standard. You can use Google Authenticator, Authy, or any other TOTP client.
  • u2f implements the U2F standard for using hardware (USB) keys as a second factor.
  • off turns off second-factor authentication.

Here is an example of this setting in teleport.yaml:

auth_service:
  authentication:
    type: local
    second_factor: off

GITHUB OAUTH 2.0 CONNECTOR

This connector implements the GitHub OAuth 2.0 authentication flow. Refer to the GitHub documentation on Creating an OAuth App to learn how to create and register an OAuth app. This is the only SSO authentication flow supported by the Community edition of Teleport.

Here is an example of this setting in teleport.yaml:

auth_service:
  authentication:
    type: github

See Github OAuth 2.0 for details on how to configure it.

SAML

This connector type implements SAML authentication. It can be configured against any external identity manager like Okta or Auth0. This feature is only available for Teleport Enterprise.

Here is an example of this setting in teleport.yaml:

auth_service:
  authentication:
    type: saml

OIDC

Teleport implements OpenID Connect (OIDC) authentication, which is similar to SAML in principle. This feature is only available for Teleport Enterprise.

Here is an example of this setting in teleport.yaml:

auth_service:
  authentication:
    type: oidc

FIDO U2F

Teleport supports FIDO U2F hardware keys as a second authentication factor. By default U2F is disabled. To start using U2F:

  • Enable U2F in the Teleport configuration, /etc/teleport.yaml.
  • For CLI-based logins, install the u2f-host utility.
  • For web-based logins, use Google Chrome, as it is the only browser supporting U2F at this time.
# snippet from /etc/teleport.yaml to show an example configuration of U2F:
auth_service:
  authentication:
    type: local
    second_factor: u2f
    # this section is needed only if second_factor is set to 'u2f'
    u2f:
      # app_id must point to the URL of the Teleport Web UI (proxy) accessible
      # by the end users
      app_id: https://localhost:3080
      # facets must list all proxy servers if there are more than one deployed
      facets:
      - https://localhost:3080

For single-proxy setups, the app_id setting can be equal to the domain name of the proxy, but this will prevent you from adding more proxies without changing the app_id. For multi-proxy setups, the app_id should be an HTTPS URL pointing to a JSON file that mirrors facets in the auth config.
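For the multi-proxy case, the JSON file served at the app_id URL follows the FIDO trusted-facets format. This is a sketch with two hypothetical proxy hostnames; check it parses before serving it over HTTPS:

```shell
# Hypothetical facets file for two proxies; the hostnames are placeholders.
cat > appid.json <<'EOF'
{
  "trustedFacets": [{
    "version": { "major": 1, "minor": 0 },
    "ids": [
      "https://proxy-a.example.com:3080",
      "https://proxy-b.example.com:3080"
    ]
  }]
}
EOF
# Sanity-check the file parses before serving it at the app_id HTTPS URL.
python3 -m json.tool appid.json > /dev/null && echo 'appid.json: valid JSON'
```

The ids list should mirror the facets entries in your auth config exactly, one HTTPS URL per proxy.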

Logging in with U2F

For logging in via the CLI, you must first install u2f-host:

# OSX:
$ brew install libu2f-host
# Ubuntu 16.04 LTS:
$ apt-get install u2f-host

Then invoke tsh ssh as usual to authenticate:

tsh --proxy <proxy-addr> ssh <hostname>

Next steps

Congratulations! You now know the basic principles of working with Teleport and should be ready to secure your SSH infrastructure in a more practical manner. A good next step is to check out the Admin Guide in the Teleport docs and take a deep dive into Teleport’s configuration. I hope you enjoyed this tutorial!

Related articles

What is a SYN flood attack?

A SYN flood is a type of distributed denial-of-service (DDoS) attack that exploits the TCP three-way handshake process to overwhelm a target server, making it inaccessible to legitimate traffic. Over 60% of DDoS attacks in 2024 involve SYN flood vectors as a primary or secondary method.The attack works by interrupting the normal TCP connection process. During a standard handshake, the client sends a SYN packet, the server replies with SYN-ACK, and the client responds with ACK to establish a connection.SYN flood attacks break this process by sending thousands of SYN packets, often with spoofed IP addresses, and never sending the final ACK.This interruption targets the server's connection state rather than bandwidth. The server maintains a backlog queue of half-open connections waiting for the final ACK, typically holding between 128 and 1024 connections depending on the OS and configuration. When attackers flood this queue with fake requests, they exhaust server resources, such as CPU, memory, and connection slots. This makes the system unable to accept legitimate connections.Recognizing a SYN flood early is critical. Typical attack rates can exceed tens of thousands of SYN packets per second targeting a single server. Signs include sudden spikes in half-open connections, server slowdowns, and connection timeouts for legitimate users. Attackers also use different types of SYN floods, ranging from direct attacks using real source IPs to more complex spoofed and distributed variants. Each requires specific detection and response methods.What is a SYN flood attack?A SYN flood attack is a type of DDoS attack that exploits the TCP three-way handshake to overwhelm a target server. 
The attacker sends a large number of SYN packets, often with spoofed IP addresses, causing the server to allocate resources and wait for final ACK packets that never arrive.During a standard TCP handshake, the client sends a SYN, the server replies with SYN-ACK, and the client responds with ACK to establish a connection. SYN flood attacks interrupt this process by never sending the final ACK.The server maintains a backlog queue of half-open connections waiting for completion. SYN floods fill this queue, exhausting critical server resources, including CPU, memory, and connection slots.How does a SYN flood attack work?A SYN flood attack exploits the TCP handshake to exhaust server resources and block legitimate connections. The attacker sends a massive volume of SYN packets to the target server, typically with spoofed IP addresses, forcing the server to allocate resources for connections that never complete.In a typical TCP handshake, the computer sends a SYN packet, the server responds with SYN-ACK, and the client sends back an ACK to establish the connection. SYN flood attacks break this process by flooding the server with SYN requests but never sending the final ACK.The server keeps each half-open connection in a backlog queue, usually holding 128 to 1024 connections, depending on the system. It waits about 60 seconds for the ACK that never arrives.This attack doesn't require high bandwidth. Instead of overwhelming network capacity like volumetric DDoS attacks, SYN floods target the server's connection state table. When the backlog queue fills up, the server cannot accept new connections, causing legitimate users to experience connection timeouts and errors.The use of spoofed IP addresses makes the attack harder to stop. The server sends SYN-ACK responses to fake addresses, wasting resources and complicating traceability. 
Attack rates can exceed tens of thousands of SYN packets per second, quickly exhausting even well-configured servers.What are the signs of a SYN flood attack?Signs of a SYN flood attack are observable indicators that show a server is being targeted by malicious SYN packets designed to exhaust connection resources. These signs include:Sudden SYN packet spike: Network monitoring tools show unusual increases in incoming SYN requests, jumping from normal levels to thousands or tens of thousands per second within minutes.High half-open connections: The server's connection table fills with incomplete TCP handshakes waiting for ACKs that never arrive. Most systems maintain backlog queues of 128 to 1,024 connections.Elevated resource usage: CPU and memory consumption rise sharply as the server tracks thousands of pending connections, even when actual data transfer is low.Failed legitimate connections: Users cannot establish new connections because the backlog queue is full, causing timeouts or error messages.Increased TCP retransmissions: The server repeatedly sends SYN-ACK packets in an attempt to complete handshakes that never complete, wasting bandwidth and processing power.Spoofed source addresses: Log analysis shows SYN packets arriving from random or non-existent IPs, masking the attacker's true location.Connection timeout patterns: Half-open connections remain in the queue for extended periods, typically around 60 seconds, preventing new legitimate requests.What are the different types of SYN flood attacks?Types of SYN flood attacks refer to the different methods attackers use to exploit the TCP handshake process and overwhelm target servers with connection requests. The types of SYN flood attacks are listed below.Direct attacks: The attacker sends SYN packets from their real IP address to the target server without spoofing. 
This method is simple but exposes the attacker's location, making it easier to trace and block.Spoofed IP attacks: The attacker sends SYN packets with forged source IP addresses, making it difficult to trace the attack origin. The server responds with SYN-ACK packets to these fake addresses, wasting resources. This is the most common variant because it protects the attacker's identity.Distributed SYN floods: Multiple compromised devices (botnet) send SYN packets simultaneously to a single target from different IP addresses. This increases attack volume and makes blocking more difficult.Pulsed attacks: The attacker sends bursts of SYN packets in waves rather than a constant stream, creating periodic spikes that can evade traditional rate-limiting systems.Low-rate attacks: The attacker sends SYN packets at a slow, steady rate to stay below detection thresholds while exhausting connection resources over time. These attacks are effective against servers with smaller connection backlogs.Reflection attacks: The attacker spoofs the victim's IP address and sends SYN packets to multiple servers, causing those servers to send SYN-ACK responses to the victim. This amplifies the attack.Hybrid volumetric attacks: The attacker combines SYN floods with other DDoS methods, such as UDP amplification or HTTP floods, to overwhelm multiple network layers simultaneously.What is the impact of SYN flood attacks on networks?SYN flood attacks severely exhaust network resources, making servers inaccessible to legitimate users by filling connection queues with incomplete TCP handshakes. Attackers send thousands of SYN packets per second without completing the handshake, causing the server to allocate memory and CPU resources for connections that remain active for about 60 seconds.The impact can reduce legitimate connection success rates by over 90% during peak periods, even though traffic volume is relatively low. 
The server's backlog queue (typically 128-1024 half-open connections) fills rapidly, preventing new connections and causing service outages until defenses are activated.How to detect SYN flood attacksDetection involves monitoring network traffic, analyzing connection states, and tracking server resource usage for anomalies. Key steps include:Monitor incoming SYN packet rates and compare to baseline traffic. Sudden spikes to thousands of packets per second, especially from diverse IPs, indicate a potential attack.Check half-open connection counts in the TCP backlog queue. Counts approaching or exceeding limits indicate resource exhaustion.Analyze the ratio of SYN packets to completed connections (SYN-ACK followed by ACK). A normal ratio is close to 1; during an attack, it may exceed 10:1.Monitor CPU and memory usage for sudden spikes without legitimate traffic growth. SYN floods consume resources by maintaining state for half-open connections.Monitor TCP retransmissions and connection timeout errors. Sharp increases indicate the backlog queue is full.Examine source IP addresses for spoofing. Unallocated, geographically impossible, or sequential addresses suggest attacker evasion.Set automated alerts that trigger when multiple indicators occur: high SYN rates, elevated half-open connections, high CPU, and rising retransmissions.How to prevent and mitigate SYN flood attacksPrevention and mitigation require multiple defense layers that detect abnormal connection patterns, filter malicious traffic, and optimize server configurations for incomplete handshakes. 
Key strategies include:Enable SYN cookies: Handle connection requests without maintaining state for half-open connections.Configure rate limiting: Restrict the number of SYN packets accepted from individual IPs per time frame, based on normal traffic patterns.Reduce timeout periods: Shorten half-open connection timeouts from 60 to 10-20 seconds to free resources faster.Deploy network monitoring: Track SYN rates, half-open counts, and retransmissions in real time. Set alerts when thresholds are exceeded.Filter spoofed IPs: Enable reverse path filtering (RPF) to block packets from invalid sources.Increase backlog queue size: Expand from defaults (128-512) to 1024 or higher and adjust memory to support it.Use ISP or DDoS protection services: Filter SYN flood traffic upstream before it reaches your network.Test defenses: Run controlled SYN flood simulations to verify rate limits, timeouts, and monitoring alerts.Best practices for protecting against SYN floodsBest practices include implementing multiple layers of defense and optimizing server configurations. Key practices are:SYN cookies: Avoid storing connection state until handshake completes. 
Encode connection info in SYN-ACK sequence numbers.Rate limiting: Restrict SYN packets from a single source to prevent rapid-fire attacks, typically 10-50 packets/sec/IP.Backlog queue expansion: Increase TCP backlog queue beyond defaults to handle spikes.Connection timeout reduction: Reduce half-open connection timeout to 10-20 seconds while balancing legitimate slow clients.Traffic filtering: Drop packets with spoofed or reserved IP addresses using ingress/egress filtering.Load balancing: Distribute SYN packets across servers and validate connections before forwarding.Anomaly detection: Monitor metrics for spikes in SYN packets, half-open connections, and CPU usage.Proxy protection: Use reverse proxies or scrubbing services to absorb and validate SYN requests.How has SYN flood attack methodology evolved?SYN flood attacks have evolved significantly. What started as simple single-source attacks has transformed into sophisticated multi-vector campaigns combining IP spoofing, distributed botnets, and low-rate pulsed techniques designed to evade modern detection systems.Early SYN floods were straightforward, with a single attacker sending large volumes of SYN packets from easily traceable sources. Modern attacks use thousands of compromised IoT devices and randomized spoofed addresses to hide origin and distribute traffic.Attackers have adapted to bypass defenses such as SYN cookies by combining SYN floods with application-layer attacks or sending timed bursts that stay below rate-limiting thresholds while still exhausting server resources. 
This reflects a shift from brute-force volume attacks to intelligent, evasive techniques targeting TCP connection weaknesses and DDoS mitigation systems.What are the legal and ethical considerations of SYN flood attacks?Legal and ethical considerations include laws, regulations, and moral principles that govern execution, impact, and response to these attacks:Criminal prosecution: SYN flood attacks violate computer crime laws, such as the US Computer Fraud and Abuse Act (CFAA). Penalties include fines up to $500,000 and prison sentences of 5-20 years. International treaties, like the Budapest Convention on Cybercrime, enable cross-border prosecution.Civil liability: Attackers can face lawsuits for lost revenue, recovery costs, and reputational harm. Courts may award damages for negligence, intentional interference, or breach of contract.Unauthorized access: Attacks constitute unauthorized access to systems. Even testing without explicit permission is illegal; researchers must obtain written authorization.Collateral damage: Attacks often affect third parties, such as shared hosting or ISPs, raising ethical concerns about disproportionate harm.Attribution challenges: Spoofed IPs complicate enforcement. Innocent parties may be misattributed, requiring careful verification.Defense legality: Organizations defending against attacks must ensure countermeasures comply with laws. Aggressive filtering can unintentionally affect legitimate users.Research ethics: Security research must avoid unauthorized testing. Academic standards require informed consent, review board approval, and responsible disclosure.State-sponsored attacks: Government-conducted attacks raise questions under international law and rules of armed conflict. Attacks on critical infrastructure may violate humanitarian principles.How do SYN flood attacks compare to other DDoS attacks?SYN flood attacks differ from other DDoS attacks by targeting connection state rather than bandwidth. 
Volumetric attacks, like UDP floods, overwhelm network capacity with massive data, while SYN floods exhaust server resources through half-open connections at lower traffic volumes.SYN floods attack at the transport layer, filling connection queues before requests reach applications, unlike application-layer attacks such as HTTP floods. Detection differs as well; volumetric attacks show clear bandwidth spikes, whereas SYN floods produce elevated SYN packet rates and half-open connection counts with normal total bandwidth.Mitigation strategies also differ. Rate limiting works against volumetric floods but is less effective against distributed SYN floods. SYN cookies and connection timeout adjustments specifically counter SYN floods.Frequently asked questionsWhat's the difference between a SYN flood and a regular DDoS attack?A SYN flood is a specific DDoS attack exploiting the TCP handshake. Attackers send thousands of SYN requests without completing the connection, quickly exhausting server resources, even with lower traffic volumes than volumetric DDoS attacks.How much bandwidth is needed to launch a SYN flood attack?Minimal bandwidth is needed—just 1-5 Mbps can exhaust a server's connection table by sending thousands of small SYN packets per second.Can a firewall alone stop SYN flood attacks?No. Standard firewalls lack mechanisms to manage half-open connection states and distinguish legitimate SYN packets from attack traffic. Additional protections like SYN cookies, rate limiting, and connection tracking are required.What is the cost of SYN flood mitigation services?Costs range from $50 to over $10,000 per month depending on traffic volume, attack frequency, and protection features. Pricing is usually based on bandwidth protected or tiered monthly plans.How long does a typical SYN flood attack last?Attacks typically last a few minutes to several hours. 
Some persist for days if resources and objectives are sustained.Are cloud-hosted applications vulnerable to SYN floods?Yes. Cloud-hosted applications rely on TCP connections that attackers can exhaust with thousands of incomplete handshake requests per second.What tools can be used to test SYN flood defenses?Tools like hPing3, LOIC (Low Orbit Ion Cannon), and Metasploit simulate controlled SYN flood traffic to test protection mechanisms.

What are volumetric DDoS attacks?

A volumetric attack is a Distributed Denial of Service (DDoS) attack that floods a server or network with massive amounts of traffic to overwhelm its bandwidth and cause service disruption.Volumetric attacks target Layers 3 (Network) and 4 (Transport) of the OSI model. Attackers use botnets (networks of compromised devices) to generate the high volume of malicious traffic required to exhaust bandwidth.Traffic volume is measured in bits per second (bps), packets per second (pps), or connections per second (cps). The largest attacks now exceed three terabits per second (Tbps).The main types include DNS amplification, NTP amplification, and UDP flood attacks. Reflection and amplification techniques are common, where attackers send small requests to vulnerable servers with a spoofed source IP (the target), causing the server to respond with much larger packets to the victim. This amplification can increase attack traffic by 50 to 100 times the original request size.Recognizing the signs of a volumetric attack is critical for a fast response.Network performance drops sharply when bandwidth is exhausted. You will see slow connectivity, timeouts, and complete service outages. These attacks typically last from minutes to hours, though some persist for days without proper defenses in place.Understanding volumetric attacks is crucial because they can bring down services in minutes and result in organizations losing thousands of dollars in revenue per hour.Modern attacks regularly reach multi-terabits per second, overwhelming even well-provisioned networks without proper DDoS protection.What are volumetric attacks?Volumetric attacks are Distributed Denial of Service (DDoS) attacks that flood a target's network or server with massive amounts of traffic. The goal? Overwhelm bandwidth and disrupt service.These attacks work at Layers 3 (Network) and 4 (Transport) of the OSI model. They focus on bandwidth exhaustion rather than exploiting application vulnerabilities. 
Attackers typically use botnets (networks of compromised devices) to generate the high volume of malicious traffic needed.Here's how it works. Attackers often employ reflection and amplification techniques, sending small requests to vulnerable servers, such as DNS or NTP, with a spoofed source IP address. This causes these servers to respond with much larger packets to the victim, multiplying the attack's impact.Attack volume is measured in bits per second (bps), packets per second (pps), or connections per second (cps). The largest attacks now exceed multiple terabits per second.How do volumetric attacks work?Volumetric attacks flood a target's network or server with massive amounts of traffic to exhaust bandwidth and make services unavailable to legitimate users. Attackers use botnets (networks of compromised devices) to generate enough traffic volume to overwhelm the target's capacity, typically measured in bits per second (bps), packets per second (pps), or connections per second (cps).The attack targets Layers 3 (Network) and 4 (Transport) of the OSI model. Attackers commonly use reflection and amplification techniques to multiply their attack power.Here's how it works: They send small requests to vulnerable servers, such as DNS, NTP, or memcached, with a spoofed source IP address (the victim's address). The servers respond with much larger packets directed at the target, amplifying the attack traffic by 10 times to 100 times or more.The sheer volume of malicious traffic, combined with legitimate requests, makes detection difficult. When the flood of packets arrives, it consumes all available bandwidth and network resources.Routers, firewalls, and servers can't process the volume. This causes service disruption or complete outages. 
Common attack types include DNS amplification, UDP floods, and ICMP floods (also known as ping floods), each targeting different protocols to maximize bandwidth consumption.Modern volumetric attacks regularly exceed multiple terabits per second in size. IoT devices comprise a significant portion of botnets due to their often weak security and always-on internet connections.Attacks typically last minutes to hours but can persist for days without proper protection.What are the main types of volumetric attacks?The main types of volumetric attacks refer to the specific methods attackers use to flood a target with massive amounts of traffic and exhaust its bandwidth. The main types of volumetric attacks are listed below.DNS amplification: Attackers send small DNS queries to open resolvers with a spoofed source IP address (the victim's). The DNS servers respond with much larger replies to the target, creating traffic volumes 28–54 times the original request size. This method remains one of the most effective amplification techniques.UDP flood: The attacker sends a high volume of UDP packets to random ports on the target system. The target checks for applications listening on those ports and responds with ICMP "Destination Unreachable" packets, exhausting network resources. These attacks are simple to execute but highly effective at consuming bandwidth.ICMP flood: Also called a ping flood, this attack bombards the target with ICMP Echo Request packets. The target attempts to respond to each request with ICMP Echo Reply packets. This consumes both bandwidth and processing power. The sheer volume of requests can bring down network infrastructure.NTP amplification: Attackers exploit Network Time Protocol servers by sending small requests with spoofed source addresses. The NTP servers respond with much larger packets to the victim, creating amplification factors up to 556 times the original request. 
This makes NTP one of the most dangerous protocols for reflection attacks.

SSDP amplification: Simple Service Discovery Protocol, used by Universal Plug and Play devices, can amplify attack traffic by 30–40 times. Attackers send discovery requests to IoT devices with spoofed source IPs, causing these devices to flood the victim with response packets. The proliferation of unsecured IoT devices makes this attack increasingly common.

Memcached amplification: Attackers target misconfigured memcached servers with small requests that trigger massive responses. This protocol can achieve amplification factors exceeding 50,000 times, making it capable of generating multi-terabits-per-second attacks. Several record-breaking attacks in recent years have used this method.

SYN flood: The attacker sends a rapid succession of SYN requests to initiate TCP connections without completing the handshake. The target allocates resources for each half-open connection, quickly exhausting its connection table. While technically targeting connection resources, large-scale SYN floods can also consume a significant amount of bandwidth.

What are the signs of a volumetric attack?

Signs of a volumetric attack are the observable indicators that a network or server is experiencing a DDoS attack designed to exhaust bandwidth through massive traffic floods. Here are the key signs to watch for.

Sudden traffic spikes: Network monitoring tools show an abrupt increase in traffic volume, often reaching gigabits or terabits per second. These spikes happen without any corresponding increase in legitimate user activity.

Network congestion: Bandwidth becomes saturated, causing legitimate traffic to slow or stop entirely. Users experience timeouts, failed connections, and complete service unavailability.

Unusual protocol activity: Monitoring reveals abnormal levels of specific protocols, such as DNS, NTP, ICMP, or UDP traffic.
Attackers commonly exploit these protocols in reflection and amplification attacks.

High packet rates: The network receives an extreme number of packets per second (pps), overwhelming routers and firewalls. This flood exhausts processing capacity even when individual packets are small.

Traffic from multiple sources: Logs show incoming connections from thousands or millions of different IP addresses simultaneously. This pattern indicates botnet activity rather than legitimate user behavior.

Asymmetric traffic patterns: Inbound traffic dramatically exceeds outbound traffic, creating an imbalanced flow. Normal operations typically show more balanced bidirectional communication.

Repeated connection attempts: Systems log massive numbers of connection requests to random or non-existent ports. These requests aim to exhaust server resources through sheer volume.

Geographic anomalies: Traffic originates from unexpected regions or countries where the service has few legitimate users. This geographic mismatch suggests coordinated attack traffic rather than organic usage.

What impact do volumetric attacks have on businesses?

Volumetric attacks hit businesses hard by flooding network bandwidth with massive traffic surges, causing complete service outages, revenue loss, and damaged customer trust. When these attacks overwhelm a network with hundreds of gigabits or even terabits per second of malicious traffic, legitimate users can't access your services. This results in direct revenue loss during downtime and potential long-term customer attrition.

The financial damage doesn't stop when the attack ends.
Beyond immediate outages, you'll face costs from emergency mitigation services, increased infrastructure investments, and reputational damage that can persist for months or years after the incident.

How to protect against volumetric attacks

You can protect against volumetric attacks by deploying traffic filtering, increasing bandwidth capacity, and using specialized DDoS mitigation services that can absorb and filter malicious traffic before it reaches your network.

First, deploy traffic filtering at your network edge to identify and block malicious packets. Configure your routers and firewalls to drop traffic from known malicious sources and apply rate-limiting rules to suspicious IP addresses. This stops basic attacks before they consume your bandwidth.

Next, increase your bandwidth capacity to absorb traffic spikes without service degradation. While this won't stop an attack, having 2 to 3 times your normal bandwidth gives you buffer time to apply other defenses. Major attacks regularly exceed multiple terabits per second, so plan capacity accordingly.

Then, set up real-time traffic monitoring to detect unusual patterns early. Configure alerts for sudden spikes in bits per second, packets per second, or connections per second. Early detection lets you respond within minutes instead of hours.

After that, work with your ISP to implement upstream filtering when attacks exceed your capacity. ISPs can drop malicious traffic at their network edge before it reaches you. Establish this relationship before an attack happens because response time matters.

Deploy anti-spoofing measures to prevent your network from being used in reflection attacks. Enable ingress filtering (BCP 38) to verify source IP addresses and reject packets with spoofed origins. This protects both your network and potential victims.

Finally, consider using a DDoS protection service that can handle multi-terabit attacks through global scrubbing centers.
These services route your traffic through their infrastructure, filtering out malicious packets while allowing legitimate requests to pass through. This is essential since volumetric attacks account for over 75% of all DDoS incidents.

Test your defenses regularly with simulated attacks to verify your response procedures and identify weak points before real attackers do.

What are the best practices for volumetric attack mitigation?

Best practices for volumetric attack mitigation refer to the proven strategies and techniques organizations use to defend against bandwidth exhaustion attacks. The best practices for mitigating volumetric attacks are listed below.

Deploy traffic scrubbing: Traffic scrubbing centers filter malicious packets before they reach your network infrastructure. These specialized facilities can absorb multi-Tbps attacks by analyzing traffic patterns in real time and blocking suspicious requests while allowing legitimate users through.

Use anycast network routing: Anycast routing distributes incoming traffic across multiple data centers instead of directing it to a single location. This distribution prevents attackers from overwhelming a single point of failure and spreads the attack load across your infrastructure.

Implement rate limiting: Rate limiting controls restrict the number of requests a single source can send within a specific timeframe. You can configure these limits at your network edge to drop excessive traffic from suspicious IP addresses before it consumes bandwidth.

Monitor baseline traffic patterns: Establish normal traffic baselines for your network to detect anomalies quickly. When traffic volume suddenly spikes by 300% or more, automated systems can trigger mitigation protocols within seconds rather than minutes.

Configure upstream filtering: Work with your ISP to filter attack traffic before it reaches your network perimeter.
ISPs can block malicious packets at their backbone level, preventing bandwidth saturation on your connection and preserving service availability.

Enable connection tracking: Connection tracking systems maintain state information about active network connections to identify suspicious patterns. These systems can detect when a single source opens thousands of connections simultaneously (a common sign of volumetric attacks).

Maintain excess bandwidth capacity: Keep at least 50% more bandwidth capacity than your peak legitimate traffic requires. This buffer won't stop large attacks, but it gives you time to activate other defenses before services degrade.

How to respond during an active volumetric attack

When a volumetric attack occurs, you need to act quickly: activate your DDoS mitigation service, reroute traffic through scrubbing centers, and isolate affected network segments while maintaining service availability.

First, confirm you're facing a volumetric attack. Check your network monitoring tools for sudden traffic spikes measured in gigabits per second (Gbps) or packets per second (pps). Look for patterns such as UDP floods, ICMP floods, or DNS amplification attacks that target your bandwidth rather than your application logic.

Next, activate your DDoS mitigation service immediately or contact your provider to reroute traffic through scrubbing centers. These centers filter out malicious packets before they reach your infrastructure. You'll typically see attack traffic reduced by 90–95% within 3–5 minutes of activation.

Then, implement rate limiting on your edge routers to cap incoming traffic from suspicious sources. Set thresholds based on your normal traffic baseline. If you typically handle 10 Gbps, limit individual source IPs so no single origin consumes more than 1–2% of capacity.

After that, enable geo-blocking or IP blacklisting for regions where you don't operate if attack sources concentrate in specific countries.
This immediately cuts off large portions of botnet traffic while preserving access for legitimate users.

Isolate critical services by redirecting less important traffic to secondary servers or temporarily turning off non-essential services. This preserves bandwidth for your core business functions during the attack.

Finally, document the attack details. Record start time, peak traffic volume, attack vectors used, and source IP ranges for post-incident analysis. This data helps you strengthen defenses and may be required for law enforcement or insurance claims.

Monitor your traffic continuously for 24 to 48 hours after the attack subsides. Attackers often launch follow-up waves to test your defenses or exhaust your mitigation resources.

Frequently asked questions

What's the difference between volumetric attacks and application-layer attacks?

Volumetric attacks flood your network with massive traffic to exhaust bandwidth at Layers 3 and 4. Application-layer attacks work differently: they target specific software vulnerabilities at Layer 7 using low-volume, sophisticated requests that are harder to detect.

How large can volumetric attacks get?

Volumetric attacks regularly reach multiple terabits per second (Tbps). The largest recorded attacks exceeded 3 Tbps in 2024.

Can small businesses be targeted by volumetric attacks?

Yes, small businesses are frequently targeted by volumetric attacks. Attackers often view them as easier targets with weaker defenses and less sophisticated DDoS protection than enterprises.

How quickly can volumetric attack mitigation be deployed?

Modern DDoS protection platforms activate automatically when they detect attack patterns. Once traffic reaches the protection service, volumetric attack mitigation deploys in under 60 seconds, routing malicious traffic away from your network. Initial setup of the protection infrastructure takes longer.
You'll need hours to days to configure your defenses properly before you're fully protected.

What is the cost of volumetric DDoS protection?

Volumetric DDoS protection costs vary widely. Basic services start at $50 to $500+ per month, while enterprise solutions can run $10,000+ monthly. The price depends on three main factors: bandwidth capacity, attack size limits, and response times. Most providers use a tiered pricing model: you'll pay based on your clean bandwidth needs (measured in Gbps) and the maximum attack mitigation capacity you need (measured in Tbps).

Do volumetric attacks always target specific organizations?

No, volumetric attacks don't target specific organizations. They flood any available bandwidth indiscriminately and often hit unintended victims through reflection and amplification techniques. Here's how it works: attackers spoof the target's IP address when sending requests to vulnerable servers, which causes those servers to overwhelm the victim with massive response traffic.

How does Gcore detect volumetric attacks in real time?

The system automatically flags suspicious traffic when it exceeds your baseline thresholds, measured in bits per second (bps) or packets per second (pps).
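Baseline-threshold detection of the kind described in this article can be illustrated with a minimal sketch. The class, window size, and traffic figures below are hypothetical teaching values, not any provider's actual detection logic:

```python
from collections import deque
from statistics import mean

# Hypothetical sketch of baseline-threshold spike detection.
# Flags a sample when it exceeds the rolling baseline by 300%,
# the spike figure cited in this article's best practices.
class SpikeDetector:
    def __init__(self, window: int = 60, spike_ratio: float = 3.0):
        self.samples = deque(maxlen=window)  # recent bps readings
        self.spike_ratio = spike_ratio

    def observe(self, bps: float) -> bool:
        """Record a traffic sample; return True if it looks like an attack."""
        baseline = mean(self.samples) if self.samples else bps
        self.samples.append(bps)
        return bps > baseline * self.spike_ratio

detector = SpikeDetector()
for sample in [1e9, 1.1e9, 0.9e9, 1.0e9]:  # ~1 Gbps of normal traffic
    assert not detector.observe(sample)
print(detector.observe(500e9))  # a 500 Gbps flood → True
```

Real systems track pps and cps alongside bps and account for daily traffic cycles, but the principle is the same: compare live traffic against a learned baseline and alert on large deviations.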

What is cloud security? Definition, challenges, and best practices

Cloud security is the discipline of protecting cloud-based infrastructure, applications, and data from internal and external threats, ensuring confidentiality, integrity, and availability of cloud resources. This protection model has become important as organizations increasingly move their operations to cloud environments.

Cloud security operates under a shared responsibility model where providers secure the infrastructure while customers secure their deployed applications, data, and access policies. This responsibility distribution varies by service model, with Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) each requiring different levels of customer involvement. The model creates clear boundaries between provider and customer security obligations.

Cloud security protects resources and data individually rather than relying on a traditional perimeter defense approach. This protection method uses granular controls like cloud security posture management (CSPM), network segmentation, and encryption to secure specific assets. The approach addresses the distributed nature of cloud computing, where resources exist across multiple locations and services.

Organizations face several cloud security challenges, including misconfigurations, account hijacking, data breaches, and insider threats. Cloud security matters because the average cost of a cloud data breach has reached $5 million according to IBM, making effective security controls essential for protecting both financial assets and organizational reputation.

What is cloud security?

Cloud security is the practice of protecting cloud-based infrastructure, applications, and data from cyber threats through specialized technologies, policies, and controls designed for cloud environments.
This protection operates under a shared responsibility model where cloud providers secure the underlying infrastructure while customers protect their applications, data, and access configurations. Cloud security includes identity and access management (IAM), data encryption, continuous monitoring, workload protection, and automated threat detection to address the unique challenges of distributed cloud resources. The approach differs from traditional security by focusing on individual resource protection rather than perimeter defense, as cloud environments require granular controls and real-time visibility across flexible infrastructure.

How does cloud security work?

Cloud security works by using a multi-layered defense system that protects data, applications, and infrastructure hosted in cloud environments through shared responsibility models, identity controls, and continuous monitoring. Unlike traditional perimeter-based security, cloud security operates on a distributed model where protection is applied at multiple levels across the cloud stack.

The foundation of cloud security rests on the shared responsibility model, where cloud providers secure the underlying infrastructure while customers protect their applications, data, and access policies. This division varies by service type: in Infrastructure as a Service (IaaS), customers handle more security responsibilities, including operating systems and network controls. In contrast, Software as a Service (SaaS) shifts most security duties to the provider.

Identity and Access Management (IAM) serves as the primary gatekeeper, controlling who can access cloud resources and what actions they can perform. IAM systems use role-based access control (RBAC) and multi-factor authentication (MFA) to verify user identities and enforce least-privilege principles.
These controls prevent unauthorized access even if credentials are compromised.

Data protection operates through encryption both at rest and in transit, ensuring information remains unreadable to unauthorized parties. Cloud security platforms also employ workload protection agents that monitor running applications for suspicious behavior. At the same time, Security Information and Event Management (SIEM) systems collect and analyze logs from across the cloud environment to detect potential threats.

Continuous monitoring addresses the flexible nature of cloud environments, where resources are constantly created, modified, and destroyed. Cloud Security Posture Management (CSPM) tools automatically scan configurations against security best practices, identifying misconfigurations that could expose data.

What are the main cloud security challenges?

Cloud security challenges refer to the obstacles and risks that organizations face when protecting their cloud-based infrastructure, applications, and data from threats. The main cloud security challenges are listed below.

Misconfigurations: According to Zscaler research, improper cloud settings create the most common security vulnerabilities, with 98.6% of organizations having misconfigurations that cause critical risks to data and infrastructure. These include exposed storage buckets, overly permissive access controls, and incorrect network settings.

Shared responsibility confusion: Organizations struggle to understand which security tasks belong to the cloud provider versus their own responsibilities. This confusion leads to security gaps where critical protections are assumed to be handled by the other party.

Identity and access management complexity: Managing user permissions across multiple cloud services and environments becomes difficult as organizations scale.
Weak authentication, excessive privileges, and poor access controls create entry points for attackers.

Data protection across environments: Securing sensitive data as it moves between on-premises systems, multiple cloud platforms, and edge locations requires consistent encryption and monitoring. Organizations often lack visibility into where their data resides and how it's protected.

Compliance and regulatory requirements: Meeting industry standards like GDPR, HIPAA, or SOC 2 becomes more complex in cloud environments where data location and processing methods may change flexibly. Organizations must maintain compliance across multiple jurisdictions and service models.

Limited visibility and monitoring: Traditional security tools often can't provide complete visibility into cloud workloads, containers, and serverless functions. This blind spot makes it difficult to detect threats, track user activities, and respond to incidents quickly.

Insider threats and privileged access: Cloud environments often grant broad administrative privileges that can be misused by malicious insiders or compromised accounts. The distributed nature of cloud access makes it harder to monitor and control privileged user activities.

What are the essential cloud security technologies and tools?

Essential cloud security technologies and tools refer to the specialized software, platforms, and systems designed to protect cloud-based infrastructure, applications, and data from cyber threats and operational risks. The essential cloud security technologies and tools are listed below.

Identity and access management (IAM): IAM systems control who can access cloud resources and what actions they can perform through role-based permissions and multi-factor authentication.
These platforms prevent unauthorized access by requiring users to verify their identity through multiple methods before granting system entry.

Cloud security posture management (CSPM): CSPM tools continuously scan cloud environments to identify misconfigurations, compliance violations, and security gaps across multiple cloud platforms. They provide automated remediation suggestions and real-time alerts when security policies are violated or resources are improperly configured.

Data encryption services: Encryption technologies protect sensitive information both at rest in storage systems and in transit between cloud services using advanced cryptographic algorithms. These tools ensure that even if data is intercepted or accessed without authorization, it remains unreadable without proper decryption keys.

Cloud workload protection platforms (CWPP): CWPP solutions monitor and secure applications, containers, and virtual machines running in cloud environments against malware, vulnerabilities, and suspicious activities. They provide real-time threat detection and automated response capabilities specifically designed for flexible cloud workloads.

Security information and event management (SIEM): Cloud-based SIEM platforms collect, analyze, and correlate security events from across cloud infrastructure to detect potential threats and compliance violations. These systems use machine learning and behavioral analysis to identify unusual patterns that may indicate security incidents.

Cloud access security brokers (CASB): CASB solutions act as intermediaries between users and cloud applications, enforcing security policies and providing visibility into cloud usage across the organization.
They monitor data movement, detect risky behaviors, and ensure compliance with regulatory requirements for cloud-based activities.

Network security tools: Cloud-native firewalls and network segmentation tools control traffic flow between cloud resources and external networks using intelligent filtering rules. These technologies create secure network boundaries and prevent lateral movement of threats within cloud environments.

What are the key benefits of cloud security?

The key benefits of cloud security refer to the advantages organizations gain from protecting their cloud-based infrastructure, applications, and data from threats. The key benefits of cloud security are listed below.

Cost reduction: Cloud security eliminates the need for expensive on-premises security hardware and reduces staffing requirements. Organizations can access enterprise-grade security tools through subscription models rather than large capital investments.

Improved threat detection: Cloud security platforms use machine learning and AI to identify suspicious activities in real time across distributed environments. These systems can detect anomalies that traditional security tools might miss.

Automatic compliance: Cloud security solutions help organizations meet regulatory requirements like GDPR, HIPAA, and SOC 2 through built-in compliance frameworks. Automated reporting and audit trails simplify compliance management and reduce manual oversight.

Reduced misconfiguration risks: Cloud security posture management tools automatically scan for misconfigurations and provide remediation guidance.

Enhanced data protection: Cloud security provides multiple layers of encryption for data at rest, in transit, and in use. Advanced key management systems ensure that sensitive information remains protected even if other security measures fail.

Flexible security coverage: Cloud security solutions automatically scale with business growth without requiring additional infrastructure investments.
Organizations can protect new workloads and applications instantly as they deploy them.

Centralized security management: Cloud security platforms provide unified visibility across multiple cloud environments and hybrid infrastructures. Security teams can monitor, manage, and respond to threats from a single dashboard rather than juggling multiple tools.

What are the challenges of cloud security?

Cloud security challenges refer to the obstacles and risks organizations face when protecting their cloud-based infrastructure, applications, and data from threats. These challenges are listed below.

Misconfigurations: Cloud environments are complex, and improper settings create security gaps that attackers can exploit. These errors include exposed storage buckets, overly permissive access controls, and incorrect network settings.

Shared responsibility confusion: Organizations often misunderstand which security tasks belong to them versus their cloud provider. This confusion leads to gaps where critical security measures aren't implemented by either party. The division of responsibilities varies between IaaS, PaaS, and SaaS models, adding to the complexity.

Identity and access management complexity: As organizations scale, managing user permissions across multiple cloud services and environments becomes difficult. Weak authentication methods and excessive privileges create entry points for unauthorized access. Multi-factor authentication and role-based access controls require careful planning and ongoing maintenance.

Data protection across environments: Ensuring data remains encrypted and secure as it moves between on-premises systems and cloud platforms presents ongoing challenges. Organizations must track data location, apply appropriate encryption, and maintain compliance across different jurisdictions.
Data residency requirements add another layer of complexity.

Visibility and monitoring gaps: Traditional security tools often can't provide complete visibility into cloud environments and workloads. The flexible nature of cloud resources makes it hard to track all assets and their security status. Real-time monitoring becomes critical but technically challenging to implement effectively.

Compliance and regulatory requirements: Meeting industry standards and regulations in cloud environments requires continuous effort and specialized knowledge. Different regions have varying data protection laws that affect cloud deployments. Organizations must prove compliance while maintaining operational effectiveness.

Insider threats and privileged access: Cloud environments often grant broad access to administrators and developers, creating risks from malicious or careless insiders. Monitoring privileged user activities without impacting productivity requires advanced tools and processes. The remote nature of cloud access makes traditional oversight methods less effective.

How to implement cloud security best practices

You implement cloud security best practices by establishing a complete security framework that covers identity management, data protection, monitoring, and compliance across your cloud environment.

First, configure identity and access management (IAM) with role-based access control (RBAC) and multi-factor authentication (MFA). Create specific roles for different job functions and require MFA for all administrative accounts to prevent unauthorized access.

Next, encrypt all data both at rest and in transit using industry-standard encryption protocols like AES-256. Enable encryption for databases, storage buckets, and communication channels between services to protect sensitive information from interception.

Then, use continuous security monitoring with automated threat detection tools.
Set up real-time alerts for suspicious activities, failed login attempts, and unusual data access patterns to identify potential security incidents quickly.

After that, establish cloud security posture management (CSPM) to scan for misconfigurations automatically. Configure automated remediation for common issues like open security groups, unencrypted storage, and overly permissive access policies.

Create network segmentation using virtual private clouds (VPCs) and security groups to isolate different workloads. Limit communication between services to only what's necessary and use zero-trust network principles.

Set up regular security audits and compliance monitoring to meet industry standards like SOC 2, HIPAA, or GDPR. Document all security controls and maintain audit trails for regulatory requirements.

Finally, develop an incident response plan specifically for cloud environments. Include procedures for isolating compromised resources, preserving forensic evidence, and coordinating with your cloud provider's security team.

Start with IAM and encryption as your foundation, then build additional security layers progressively to avoid overwhelming your team while maintaining strong protection.

Gcore cloud security

When implementing cloud security measures, the underlying infrastructure becomes just as important as the security tools themselves. Gcore's cloud security solutions address this need with a global network of 180+ points of presence and 30ms latency, ensuring your security monitoring and threat detection systems perform consistently across all regions.
Our edge cloud infrastructure supports real-time security analytics and automated threat response without the performance bottlenecks that can leave your systems vulnerable during critical moments.

What sets our approach apart is the integration of security directly into the infrastructure layer, eliminating the complexity of managing separate security vendors while providing enterprise-grade DDoS protection and encrypted data transmission as standard features. This unified approach typically reduces security management overhead by 40–60% compared to multi-vendor solutions, while maintaining continuous monitoring capabilities.

Explore how Gcore's integrated cloud security infrastructure can strengthen your defense strategy at gcore.com/cloud.

Frequently asked questions

What's the difference between cloud security and traditional approaches?

Cloud security differs from traditional approaches by protecting distributed resources through shared responsibility models and cloud-native tools, while traditional security relies on perimeter-based defenses around centralized infrastructure. Traditional security assumes a clear network boundary with firewalls and intrusion detection systems protecting internal resources. In contrast, cloud security secures individual workloads, data, and identities across multiple environments without relying on network perimeters.

What is cloud security posture management?

Cloud security posture management (CSPM) is a set of tools and processes that continuously monitor cloud environments to identify misconfigurations, compliance violations, and security risks across cloud infrastructure. CSPM platforms automatically scan cloud resources, assess security policies, and provide remediation guidance to maintain proper security configurations.

How does Zero Trust apply to cloud security?

Zero Trust applies to cloud security by treating every user, device, and connection as untrusted and requiring verification before granting access to cloud resources.
This approach replaces traditional perimeter-based security with continuous authentication, micro-segmentation, and least-privilege access controls across cloud environments.

What compliance standards apply?

Cloud security must comply with industry-specific regulations like SOC 2, ISO 27001, GDPR, HIPAA, PCI DSS, and FedRAMP, depending on your business sector and geographic location. Organizations typically need to meet multiple standards simultaneously, with financial services requiring PCI DSS compliance, healthcare needing HIPAA certification, and EU operations mandating GDPR adherence.

What happens during a cloud security breach?

During a cloud security breach, attackers gain unauthorized access to cloud resources, potentially exposing sensitive data, disrupting services, and causing financial damage averaging $5 million per incident, according to IBM. The breach typically involves exploiting misconfigurations, compromised credentials, or vulnerabilities to access cloud infrastructure, applications, or data stores.
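The CSPM-style configuration scanning discussed in this article can be illustrated with a minimal sketch. The resource records, field names, and rules below are hypothetical, not any vendor's actual policy schema:

```python
# Hypothetical sketch of a CSPM-style misconfiguration scan.
# Resource records and rules are illustrative, not a real provider's schema.

def check_resource(resource: dict) -> list[str]:
    """Return a list of findings for one cloud resource record."""
    findings = []
    if resource.get("type") == "storage_bucket" and resource.get("public_read"):
        findings.append("exposed storage bucket: public read access enabled")
    if resource.get("type") == "security_group":
        for rule in resource.get("ingress", []):
            # Flag non-HTTPS ports open to the whole internet.
            if rule.get("cidr") == "0.0.0.0/0" and rule.get("port") != 443:
                findings.append(f"open port {rule['port']} to the internet")
    if not resource.get("encrypted", True):
        findings.append("encryption at rest disabled")
    return findings

inventory = [
    {"type": "storage_bucket", "public_read": True, "encrypted": False},
    {"type": "security_group", "ingress": [{"cidr": "0.0.0.0/0", "port": 22}]},
]
for res in inventory:
    for finding in check_resource(res):
        print(f"[{res['type']}] {finding}")
```

Production CSPM tools work the same way at a larger scale: pull the live resource inventory via provider APIs, evaluate each record against a policy library, and alert or auto-remediate on findings.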

Query your cloud with natural language: A developer’s guide to Gcore MCP

What if you could ask your infrastructure questions and get real answers? With Gcore’s open-source implementation of the Model Context Protocol (MCP), now you can. MCP turns generative AI into an agent that understands your infrastructure, responds to your queries, and takes action when you need it to.

In this post, we’ll demo how to use MCP to explore and inspect your Gcore environment just by prompting: to list resources, check audit logs, and generate cost reports. We’ll also walk through a fun bonus use case: provisioning infrastructure and exporting it to Terraform.

What is MCP and why do devs love it?

Originally developed by Anthropic, the Model Context Protocol (MCP) is an open standard that turns language models into agents that interact with structured tools: APIs, CLIs, or internal systems. Gcore’s implementation makes this protocol real for our customers.

With MCP, you can:

Ask questions about your infrastructure
List, inspect, or filter cloud resources
View cost data, audit logs, or deployment metadata
Export configs to Terraform
Chain multi-step operations via natural language

Gcore MCP removes friction from interacting with your infrastructure. Instead of wiring together scripts or context-switching across dashboards and CLIs, you can just…ask.

That means:

Faster debugging and audits
More accessible infra visibility
Fewer repetitive setup tasks
Better team collaboration

Because it’s open source and backed by the Gcore Python SDK, you can plug it into other APIs, extend tool definitions, or even create internal agents tailored to your stack. Explore the GitHub repo for yourself.

What can you do with it?

This isn’t just a cute chatbot. Gcore MCP connects your cloud to real-time insights.
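Conceptually, an MCP server exposes named tools and the model decides which one satisfies a prompt. The toy sketch below illustrates that dispatch idea only; the tool registry, keyword matching, and region list are hypothetical stand-ins, not Gcore's implementation (which lets the LLM choose the tool):

```python
# Toy illustration of MCP-style tool dispatch.
# The tool registry and keyword matching are hypothetical stand-ins;
# a real MCP agent lets the language model select and invoke tools.
TOOLS = {
    "regions.list": lambda: ["Frankfurt", "Istanbul", "Tokyo"],
}

def handle(prompt: str) -> list[str]:
    """Map a natural-language prompt to a registered tool and run it."""
    if "region" in prompt.lower():
        return TOOLS["regions.list"]()
    raise ValueError("no matching tool")

print(handle("Find the Gcore region closest to Antalya"))
```

The real protocol replaces the keyword stub with the model's own tool-selection step, but the shape is the same: structured tools in, natural language out.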
Here are some practical prompts you can use right away.

Infrastructure inspection

- “List all VMs running in the Frankfurt region”
- “Which projects have over 80% GPU utilization?”
- “Show all volumes not attached to any instance”

Audit and cost analysis

- “Get me the API usage for the last 24 hours”
- “Which users deployed resources in the last 7 days?”
- “Give a cost breakdown by region for this month”

Security and governance

- “Show me firewall rules with open ports”
- “List all active API tokens and their scopes”

Experimental automation

- “Create a secure network in Tokyo, export to Terraform, then delete it”

We’ll walk through that last one in the full demo below.

Full video demo

Watch Gcore’s AI Software Engineer, Algis Dumbris, walk through setting up MCP on your machine and show off some use cases. If you prefer reading, we’ve broken down the process step by step below.

Step-by-step walkthrough

This section maps to the video and shows exactly how to replicate the workflow locally.

1. Install MCP locally (0:00–1:28)

We use uv to isolate the environment and pull the project directly from GitHub.

$ curl -Ls https://astral.sh/uv/install.sh | sh
$ uvx add gcore-mcp-server https://github.com/G-Core/gcore-mcp-server

Requirements:

- Python
- Gcore account + API key
- Tool config file (from the repo)

2. Set up your environment (1:28–2:47)

Configure two environment variables:

- GCORE_API_KEY for auth
- GCORE_TOOLS to define what the agent can access (e.g., regions, instances, costs)

Soon, tool selection will be automatic, but today you can define your toolset in YAML or JSON.

3. Run a basic query (3:19–4:11)

Prompt: “Find the Gcore region closest to Antalya.”

The agent maps this to a regions.list call and returns: Istanbul. No need to dig through docs or write an API request.

4. Provision, export, and clean up (4:19–5:32)

This one’s powerful if you’re experimenting with CI/CD or infrastructure-as-code.

Prompt: “Create a secure network in Tokyo. Export to Terraform.
Then clean up.”

The agent:

- Provisions the network
- Exports it to Terraform format
- Destroys the resources afterward

You get usable .tf output with no manual scripting. Perfect for testing, prototyping, or onboarding.

Gcore: always building for developers

Try it now:

- Clone the repo
- Install uv + configure your environment
- Start prompting your infrastructure
- Open issues, contribute tools, or share your use cases

This is early-stage software, and we’re just getting started. Expect more tools, better UX, and deeper integrations soon.

Watch how easy it is to deploy an inference instance with Gcore
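Since the toolset can be declared in YAML or JSON, the environment setup can be scripted. The sketch below is a minimal illustration only: the tool names and config schema are assumptions for demonstration, not the exact format from the gcore-mcp-server repo, which documents the real schema.

```python
import json
import os

# Hypothetical toolset definition: the exact schema and tool names come
# from the gcore-mcp-server repo; these values are illustrative only.
toolset = {
    "tools": ["regions", "instances", "costs", "audit_logs"],
}

# Write the config to disk and point the agent at it via the two
# environment variables described above.
with open("gcore_tools.json", "w") as f:
    json.dump(toolset, f, indent=2)

os.environ["GCORE_API_KEY"] = "<your-api-key>"  # auth
os.environ["GCORE_TOOLS"] = os.path.abspath("gcore_tools.json")

print(os.environ["GCORE_TOOLS"].endswith("gcore_tools.json"))  # True
```

Keeping the toolset in a versioned file rather than an inline string makes it easy to review which parts of your infrastructure the agent is allowed to touch.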

How to protect login pages with Gcore WAAP

Exposed login pages are a common vulnerability across web applications. Attackers often use automated tools to guess credentials in brute-force or credential-stuffing attacks, probe login behavior to exploit session or authentication logic, or overload your infrastructure with fake requests.

Without specific rules for login-related traffic, your application might miss these threats or apply overly broad protections that disrupt real users. Fortunately, Gcore WAAP makes it easy to defend these sensitive endpoints without touching your application code.

In this guide, we’ll show you how to use WAAP’s custom rule engine to identify login traffic and apply protections like CAPTCHA to reduce risk, block automated abuse, and maintain a smooth experience for legitimate users. We’ve also included a complete video walkthrough from Gcore’s Security Presales Engineer, Michal Zalewski.

Video walkthrough

Here’s Gcore’s Michal Zalewski giving a full walkthrough of the steps in this article.

Step 1: Access your WAAP configuration

- Go to portal.gcore.com and log in.
- Navigate to WAAP in the sidebar. If you’re not yet a WAAP user, it costs just $26/month.
- Select the resource that hosts your login form; for example, gcore.zalewski.cloud.

Step 2: Create a custom rule

- In the main panel of your selected resource, go to WAAP Rules.
- Click Add Custom Rule in the upper-right corner.

Step 3: Define the login page URL

Identify the login endpoint you want to protect:

- Use tools like Burp Suite or your browser’s “Inspect” feature to verify the login page URL. In Burp Suite, use the Proxy tab; in the browser, check the Network tab to inspect a login request.
- Look for the path (e.g., /login.php) and HTTP method (POST).

In the custom rule setup:

- Enter the URL (e.g., /login.php).
- Tag the request using a predefined tag.
In the tag dropdown, select Login Page.

Step 4: Name and save the rule

Provide a name for the rule, such as “Login Page URL”, and save it.

Step 5: Add a CAPTCHA challenge rule

To protect the login page from automated abuse:

- Create a new custom rule.
- Name it something like “Login Page Challenge”.
- Under Conditions, select the previously created Login Page tag.
- Set the Action to CAPTCHA.
- Save the rule.

Step 6: Test the protection

- Return to your browser and turn off any proxy tools.
- Refresh the login page. You should now be challenged with a CAPTCHA each time the login page loads.

Once the CAPTCHA is completed successfully, users can log in as usual.

Monitor, adapt, and alert

After deployment:

- Track rate-limit trigger frequency
- Monitor WAAP logs for anomaly detection
- Rotate exemptions or thresholds based on live behavior

For analytics, refer to the WAAP analytics documentation.

Bonus tips for hardened protection

- Combine with bot protection: Enable WAAP’s bot mitigation to identify headless browsers and automation tools like Puppeteer or Selenium. See our bot protection docs for setup instructions.
- Customize 429 responses: Replace default error pages with branded messages or a fallback action. Consider including a support link or CAPTCHA challenge. Check out our response pages documentation for more details.
- Use geo or ASN exceptions: Whitelist trusted locations or block known bot-heavy ASNs if your audience is localized.

Automate it: optional API and Terraform support

Teams with IaC pipelines or security automation workflows might want to automate login page protection with rate limiting. This keeps your WAAP config version-controlled and repeatable.

You can use the WAAP API or Terraform to:

- Create or update rules
- Rotate session keys or thresholds
- Export logs for auditing

Explore the WAAP API documentation and WAAP Terraform provider documentation for more details.

Stop abuse before it starts with Gcore

Login pages are high-value targets, but they don’t have to be high risk.
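To give a feel for what automating this looks like, here is a hedged sketch that builds a request body for a CAPTCHA challenge rule on tagged login traffic. The endpoint URL, field names, and payload shape are illustrative assumptions, not the real WAAP schema; consult the WAAP API documentation for the actual contract.

```python
import json

# Hypothetical endpoint: check the WAAP API documentation for the real path.
WAAP_RULES_ENDPOINT = "https://api.gcore.com/waap/v1/domains/{domain_id}/custom-rules"

def build_login_captcha_rule(tag: str = "login-page") -> dict:
    """Assemble an illustrative rule body: challenge tagged login traffic
    with a CAPTCHA, mirroring Steps 3-5 from the walkthrough above."""
    return {
        "name": "Login Page Challenge",
        "enabled": True,
        "conditions": [{"tags": [tag]}],  # match the predefined Login Page tag
        "action": {"captcha": {}},        # serve a CAPTCHA instead of the page
    }

payload = build_login_captcha_rule()
print(json.dumps(payload, indent=2))
```

Version-controlling a payload builder like this (or the equivalent Terraform resource) keeps the rule reviewable and repeatable across environments.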
With Gcore WAAP, setting up robust defenses takes just a few minutes. By tagging login traffic and applying challenge rules like CAPTCHA, you can reduce the risk of automated attacks without sacrificing user experience.

As your application grows, revisit your WAAP rules regularly to adapt to new threats, add behavior-based detection, and fine-tune your protective layers. For more advanced configurations, check out our documentation or reach out to Gcore support.

Get WAAP today for just $26/month

3 underestimated security risks of AI workloads and how to overcome them

Artificial intelligence workloads introduce a fundamentally different security landscape for engineering and security teams. Unlike traditional applications, AI systems must protect not just endpoints and networks, but also training data pipelines, feature stores, model repositories, and inference APIs. Each phase of the AI life cycle presents distinct attack vectors that adversaries can exploit to corrupt model behavior, extract proprietary logic, or manipulate downstream outputs.

In this article, we uncover three security vulnerabilities of AI workloads and explain how developers and MLOps teams can overcome them. We also look at how investing in AI security can save time and money, explore the challenges that lie ahead, and offer a simplified way to protect your AI workloads with Gcore.

Risk #1: data poisoning

Data poisoning is a targeted attack on the integrity of AI systems, in which malicious actors subtly inject corrupted or manipulated data into training pipelines. The result is a model that behaves unpredictably, generates biased or false outputs, or embeds hidden logic that can be triggered post-deployment. This can undermine business-critical applications, from fraud detection and medical diagnostics to content moderation and autonomous decision-making.

For developers, the stakes are high: poisoned models are hard to detect once deployed, and even small perturbations in training data can have system-wide consequences.
Luckily, you can take a few steps to mitigate data poisoning and then implement zero-trust AI to further protect your workloads.

Mitigation and hardening

- Restrict dataset access using IAM, RBAC, or identity-aware proxies.
- Store all datasets in versioned, signed, and hashed formats.
- Validate datasets with automated schema checks, label distribution scans, and statistical outlier detection before training.
- Track data provenance with metadata logs and checksums.
- Block training runs if datasets fail predefined data quality gates.
- Integrate data validation scripts into CI/CD pipelines pre-training.
- Enforce zero-trust access policies for data ingestion services.

Solution integration: zero-trust AI

- Implement continuous authentication and authorization for each component interacting with data (e.g., preprocessing scripts, training jobs).
- Enable real-time threat detection during training using runtime security tools.
- Automate incident response triggers for unexpected file access or data source changes.

Risk #2: adversarial attacks

Adversarial attacks manipulate model inputs in subtle ways that trick AI systems into making incorrect or dangerous decisions. These perturbations, often imperceptible to humans, can cause models to misclassify images, misinterpret speech, or misread sensor data. In high-stakes environments like facial recognition, autonomous vehicles, or fraud detection, these failures can result in security breaches, legal liabilities, or physical harm.

For developers, the threat is real: even state-of-the-art models can be fooled without adversarial hardening. The good news? You can make your models more robust by combining defensive training techniques, input sanitization, and secure API practices.
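The checksum and quality-gate steps above can be sketched in a few lines of Python. This is a minimal illustration assuming a simple labeled dataset; the dominance threshold is a placeholder, not a recommended value.

```python
import hashlib
from collections import Counter

def sha256_of_file(path: str) -> str:
    """Hash a dataset file so its provenance can be recorded and later verified."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def passes_quality_gate(labels, max_class_share: float = 0.8) -> bool:
    """Block a training run if one label dominates the dataset.

    A sudden skew in label distribution is a crude but cheap signal of
    poisoning or a broken ingestion pipeline."""
    counts = Counter(labels)
    top_share = max(counts.values()) / len(labels)
    return top_share <= max_class_share

# A suspiciously skewed label set fails the gate; a balanced one passes.
print(passes_quality_gate(["spam"] * 95 + ["ham"] * 5))    # False
print(passes_quality_gate(["spam", "ham", "spam", "ham"]))  # True
```

In a CI/CD pipeline, you would run checks like these as a pre-training stage and fail the job when the recorded hash or the distribution check does not match expectations.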
While encrypted inference doesn’t directly block adversarial manipulation, it ensures that sensitive inference data stays protected even if attackers attempt to probe the system.

Mitigation and hardening

- Use adversarial training frameworks like CleverHans or IBM ART to expose models to perturbed inputs during training.
- Apply input sanitization layers (e.g., JPEG re-encoding, blurring, or noise filters) before data reaches the model.
- Implement rate limiting and authentication on inference APIs to block automated adversarial probing.
- Use model ensembles or randomized smoothing to improve resilience to small input perturbations.
- Log and analyze input-output patterns to detect high-variance or abnormal responses.
- Test models regularly against known attack vectors using robustness evaluation tools.

Solution integration: encrypted inference

While encryption doesn’t prevent adversarial inputs, it does mean that input data and model responses remain confidential and protected from observation or tampering during inference.

- Run inference in trusted environments like Intel SGX or AWS Nitro Enclaves to protect model and data integrity.
- Use homomorphic encryption or secure multi-party computation (SMPC) to process encrypted data without exposing sensitive input.
- Ensure that all intermediate and output data is encrypted at rest and in transit.
- Deploy access policies that restrict inference to verified users and approved applications.

Risk #3: model leakage of intellectual assets

Model leakage, also called model extraction, happens when an attacker interacts with a deployed model in ways that allow them to reverse-engineer its structure, logic, or parameters. Once leaked, a model can be cloned, monetized, or used to bypass the very defenses it was meant to enforce. For businesses, this means losing competitive IP, compromising user privacy, or enabling downstream attacks.

For developers and MLOps teams, the challenge is securing deployed models in a way that balances performance and privacy.
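Randomized smoothing, mentioned above, can be illustrated with a toy example: instead of trusting a single prediction, you take a majority vote over many noise-perturbed copies of the input, which makes the decision stable under small input perturbations. The threshold model and noise scale here are placeholder stand-ins, not a production defense.

```python
import random
from collections import Counter

def toy_classifier(x: float) -> str:
    # Stand-in model: a brittle threshold classifier an attacker could
    # nudge across the 0.5 boundary with a tiny perturbation.
    return "positive" if x > 0.5 else "negative"

def smoothed_predict(x: float, sigma: float = 0.1, n: int = 500, seed: int = 0) -> str:
    """Majority vote over Gaussian-perturbed copies of the input."""
    rng = random.Random(seed)
    votes = Counter(toy_classifier(x + rng.gauss(0, sigma)) for _ in range(n))
    return votes.most_common(1)[0][0]

# Inputs well inside a class keep their label under the smoothed vote,
# even though individual noisy copies occasionally cross the boundary.
print(smoothed_predict(0.2))  # negative
print(smoothed_predict(0.9))  # positive
```

Real randomized smoothing (as used in certified-robustness work) additionally derives a provable radius around each input within which the prediction cannot change; this sketch shows only the voting mechanism.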
If you’re exposing inference APIs, you’re exposing potential entry points. But with the right controls and architecture, you can drastically reduce the risk of model theft.

Mitigation and hardening

- Enforce rate limits and usage quotas on all inference endpoints.
- Monitor for suspicious or repeated queries that indicate model extraction attempts.
- Implement model watermarking or fingerprinting to trace unauthorized model use.
- Obfuscate models before deployment using quantization, pruning, or graph rewriting.
- Disable or tightly control any model export functionality in your platform.
- Sign and verify inference requests and responses to ensure authenticity.
- Integrate security checks into CI/CD pipelines to detect risky configurations, such as public model endpoints, export-enabled containers, or missing inference authentication, before they reach production.

Solution integration: native security integration

- Integrate model validation, packaging, and signing into CI/CD pipelines.
- Serve models from encrypted containers or TEEs, with minimal runtime exposure.
- Use container and image scanning tools to catch misconfigurations before deployment.
- Centralize monitoring and protection with tools like Gcore WAAP for real-time anomaly detection and automated response.

How investing in AI security can save your business money

From a financial point of view, the use of AI and machine learning in cybersecurity can lead to massive cost savings. Organizations that use AI and automation in cybersecurity have saved an average of $2.22 million per data breach compared to organizations without these protections. This is because the need for manual oversight is reduced, lowering the total cost of ownership and averting costly security breaches.
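The rate-limit and quota controls above can be prototyped with a small sliding-window limiter in front of an inference endpoint. This is a single-process sketch with hypothetical client IDs; a production gateway would keep the counters in a shared store such as Redis and log rejected bursts as possible extraction attempts.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per client within `window` seconds."""

    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.calls = defaultdict(deque)  # client_id -> recent request timestamps

    def allow(self, client_id: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.calls[client_id]
        while q and now - q[0] >= self.window:  # drop expired timestamps
            q.popleft()
        if len(q) >= self.limit:
            return False  # over quota: reject and flag for extraction monitoring
        q.append(now)
        return True

limiter = SlidingWindowLimiter(limit=3, window=60.0)
results = [limiter.allow("client-a", now=t) for t in (0, 1, 2, 3)]
print(results)  # [True, True, True, False]
```

The fourth request within the window is rejected; once old timestamps age out, the client can query again, which is the behavior you want for legitimate bursty users while still throttling sustained scraping.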
The initial investment in advanced security technologies yields returns through decreased downtime, fewer false positives, and an enhanced overall security posture.

Challenges ahead

While securing the AI lifecycle is essential, it’s still difficult to balance robust security with a positive user experience. Rigid scrutiny can add latency or produce false positives that halt operations; AI-powered security can help avoid such incidents.

Another concern organizations must contend with is keeping AI models current. With threats changing so rapidly, today’s newest model can be outdated by tomorrow. Solutions must be able to learn continuously so that security detection parameters can be revised.

Operational maturity is also a concern, especially for companies that operate in multiple geographies. Well-thought-out strategies and sound governance processes must accompany the integration of complex AI/ML tools with existing infrastructure. Even so, automation offers the biggest wins, reducing the overhead on security teams and helping ensure consistent deployment of security policies.

Get ahead of AI security with Gcore

AI workloads introduce new and often overlooked security risks that can compromise data integrity, model behavior, and intellectual property. By implementing practices like zero-trust architecture, encrypted inference, and native security integration, developers can build more resilient and trustworthy AI systems. As threats evolve, staying ahead means embedding security at every phase of the AI lifecycle.

Gcore helps teams apply these principles at scale, offering native support for zero-trust AI, encrypted inference, and intelligent API protection. As an experienced AI and security solutions provider, we integrate our DDoS Protection and AI-enabled WAAP solutions natively with Everywhere Inference and GPU Cloud across 210+ global points of presence.
That means low latency, high performance, and proven, robust security, no matter where your customers are located.Talk with our AI security experts and secure your workloads today
