A Load Balancer is a tool that distributes incoming requests across your Virtual Machines to improve your infrastructure's fault tolerance.
Go to your project, navigate to "Load Balancers" in the "Networking" section, and click Create Load Balancer. A new page opens; perform the remaining steps there.
Select a region for balancing.
You can balance traffic only within a single data center.
Select a suitable computing configuration for your Load Balancer: memory (GiB) and vCPUs. We create all Load Balancers in high-availability mode with active-standby instances: if the active instance fails, the standby one seamlessly takes over the load-balancing functions.
Select a network, public or private, and enable additional features:
You can choose between L2 (Layer 2) and L3 (Layer 3) connectivity. This setting determines the preferred connectivity method the Load Balancer uses to connect to backend pool members. If the preferred connectivity is not feasible, traffic will automatically route via the alternative method:
L2 (preferred) → (if not possible) L3 → (if not possible) Validation Error
L3 (preferred) → (if not possible) L2 → (if not possible) Validation Error
The Load Balancer determines available routes only by evaluating subnet host routes. Due to current system limitations, it does not take router host_routes into account.
L2 connectivity offers better performance because traffic flows directly between the Load Balancer and pool members without passing through a router. This reduces network hops and minimizes latency.
However, this approach requires more IP addresses. In networks with many /24 subnets, each Load Balancer must create ports in every subnet where its members are located. This can lead to high IP utilization and reduced efficiency in large-scale deployments.
L3 connectivity routes traffic through a router or gateway, introducing additional network hops that may slightly impact performance.
It also optimizes IP address utilization by reducing the number of required IPs per Load Balancer. Instead of allocating a separate IP in every subnet, the Load Balancer communicates with pool members across subnets using routing mechanisms. This approach improves scalability and efficiency, especially in environments with multiple subnets.
For most cases, such as a single subnet setup, use L2 connectivity for optimal performance. If your deployment involves multiple subnets or complex networking requirements, contact support to determine the best configuration.
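As a rough mental model, L2 connectivity is possible when a pool member sits in a subnet the Load Balancer already has a port in, while L3 relies on destinations reachable through subnet host routes. The sketch below only illustrates the fallback order described above; the subnet and route values are made up, and this is not Gcore's actual selection logic.

```python
import ipaddress

# Subnets the Load Balancer has ports in (hypothetical values).
lb_subnets = [ipaddress.ip_network("10.0.1.0/24"), ipaddress.ip_network("10.0.2.0/24")]

# Destination prefixes reachable via subnet host routes (hypothetical values).
host_route_prefixes = [ipaddress.ip_network("10.0.3.0/24")]

def pick_connectivity(member_ip: str, preferred: str = "L2") -> str:
    """Illustrative fallback logic: try the preferred method, then the alternative."""
    ip = ipaddress.ip_address(member_ip)
    l2_possible = any(ip in subnet for subnet in lb_subnets)
    l3_possible = any(ip in prefix for prefix in host_route_prefixes)

    order = ["L2", "L3"] if preferred == "L2" else ["L3", "L2"]
    for method in order:
        if (method == "L2" and l2_possible) or (method == "L3" and l3_possible):
            return method
    raise ValueError("Validation error: member is unreachable via L2 or L3")

print(pick_connectivity("10.0.1.15"))  # L2: same subnet as a Load Balancer port
print(pick_connectivity("10.0.3.7"))   # L3: reachable only via a host route
```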
Add listeners that will check for connection requests using the protocol and port that you specify. You can add multiple listeners to a Load Balancer.
To configure a listener:
1. In the Listeners section, click Add listener.
2. Enter the listener’s name.
3. Select the required protocol: TCP, UDP, HTTP, Terminated HTTPS, or Prometheus. You can configure multiple TLS certificates for Terminated HTTPS and Prometheus.
4. Specify a port that the Load Balancer will listen on for incoming traffic. For some protocols you can keep the default port, or you can specify any port from 1 to 65535.
5. (Optional) To identify the original IP address of the user connecting to a web server via the load balancer, enable the Add headers X-Forwarded-For, X-Forwarded-Port, X-Forwarded-Proto to requests toggle (see the backend example after this list).
6. If you select the Terminated HTTPS or Prometheus protocol, you can configure TLS certificates. Follow the instructions from our dedicated guide.
7. Set the connection limit: the maximum number of simultaneous connections that this listener can handle.
8. (Optional) Add allowed CIDR ranges to define which IP addresses can access your content. All IP addresses that don’t belong to the specified range will be denied access.
To ensure correct operation and avoid misconfigurations, the IP version of Allowed CIDRs must match the IP version of the Load Balancer's Virtual IPs (VIPs).
Basic Rules:
VIPs with both IPv4 and IPv6 support CIDRs of both versions. For example:
VIPs: 62.112.222.52 (IPv4), 2a03:90c0:4d6:1::2e8 (IPv6)
Allowed CIDRs: 10.0.0.0/8, fe00::/7
If the provided CIDRs do not match the VIP's IP version, the API will return a validation error. You can check the versions locally before saving; see the sketch after this list.
9. (Optional) For HTTP-based listeners, you can configure basic user authentication to protect your resource from unauthorized access. Click Add users to enable authentication:
Enter username: specify a username.
Password: specify a password, or choose the Encrypted password option to store the password as a hash. For instructions, check out the create an encrypted password guide.
A password must contain at least one lowercase character, one uppercase character, one number, and one special character. The maximum password length is 128 characters.
10. Click Create Listener.
Listener configuration options differ depending on the selected protocol. For example, HTTP and Terminated HTTPS protocols allow additional settings, such as enabling headers and authentication, while TCP and UDP listeners focus on specifying ports and connection limits.
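When the X-Forwarded headers are enabled (step 5), the backend sees the Load Balancer as the TCP peer and has to read the original client details from the request headers instead. The minimal backend below illustrates this with Python's standard library; the port and response format are arbitrary choices, not anything the Load Balancer requires.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The TCP peer is the Load Balancer; the real client details are in the
        # headers only when the "Add headers" toggle is enabled on the listener.
        xff = self.headers.get("X-Forwarded-For", "")
        client_ip = xff.split(",")[0].strip() if xff else self.client_address[0]
        client_port = self.headers.get("X-Forwarded-Port", "unknown")
        client_proto = self.headers.get("X-Forwarded-Proto", "http")

        body = f"client={client_ip} port={client_port} proto={client_proto}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on the port you configured as the pool member port (8080 is arbitrary).
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```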
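The IP-version rule from step 8 can also be checked locally before you save the listener. The sketch below uses Python's standard ipaddress module and the example VIPs and CIDRs shown above; it mirrors the validation the API performs, but it is not the API itself.

```python
import ipaddress

vips = ["62.112.222.52", "2a03:90c0:4d6:1::2e8"]   # example VIPs from this article
allowed_cidrs = ["10.0.0.0/8", "fe00::/7"]         # example Allowed CIDRs from this article

# Collect the IP versions the Load Balancer actually listens on.
vip_versions = {ipaddress.ip_address(v).version for v in vips}

for cidr in allowed_cidrs:
    version = ipaddress.ip_network(cidr, strict=False).version
    if version not in vip_versions:
        raise ValueError(f"{cidr} is IPv{version}, but the Load Balancer has no IPv{version} VIP")

print("All allowed CIDRs match the VIP IP versions")
```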
Configure a pool—a list of VMs to which the listener will redirect incoming traffic. Click Add new pool in the "Listeners" block to start configuring.
1. Specify the pool name.
2. Select the balancing algorithm:
3. The protocol is selected automatically based on the listener's settings: for example, an HTTP listener communicates with servers via the HTTP protocol.
4. Select App Cookie and fill in the "Cookie" field. A special module creates a cookie and then uses it to forward requests to the same server.
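With App Cookie persistence, the first response sets a cookie and every later request that carries it is routed to the same member. The snippet below models that behaviour in plain Python with a made-up cookie name and member list; it is a conceptual illustration, not the module the Load Balancer actually runs.

```python
import secrets

members = ["10.0.1.10:80", "10.0.1.11:80"]   # hypothetical pool members
sessions: dict[str, str] = {}                # cookie value -> pinned member
COOKIE_NAME = "APP_SESSION"                  # made-up cookie name

def route(request_cookies: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Return (member, cookies to set). Requests with a known cookie stay pinned."""
    value = request_cookies.get(COOKIE_NAME)
    if value in sessions:
        return sessions[value], {}                    # existing session: same member

    member = members[len(sessions) % len(members)]    # choose a member for the new session
    value = secrets.token_hex(8)
    sessions[value] = member
    return member, {COOKIE_NAME: value}               # first response sets the cookie

# The first request has no cookie, so a member is chosen and a cookie is issued.
member, set_cookies = route({})
# The follow-up request presents the cookie and lands on the same member.
assert route(set_cookies)[0] == member
```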
Click Add Instance to add Virtual Machines that will participate in the traffic distribution for the configured listener.
1. Select Custom IP, Virtual Machine, or Bare Metal and appropriate configurations.
2. Specify its port and weight in the distribution.
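A member's weight determines its share of new connections relative to the other members of the pool. The short simulation below assumes a weighted selection scheme and hypothetical members; the Load Balancer's real scheduler is not exposed, so treat this only as an illustration of what the weight value means.

```python
import random
from collections import Counter

# Hypothetical pool members: (address, port, weight).
members = [("10.0.1.10", 80, 4), ("10.0.1.11", 80, 2), ("10.0.1.12", 80, 1)]

def pick_member(pool):
    """Weighted choice: a member with weight 4 receives roughly 4x the traffic of weight 1."""
    addresses = [(ip, port) for ip, port, _ in pool]
    weights = [w for _, _, w in pool]
    return random.choices(addresses, weights=weights, k=1)[0]

# Simulate 7000 new connections and show the resulting split.
counts = Counter(pick_member(members) for _ in range(7000))
for member, hits in counts.most_common():
    print(member, hits)
```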
1. Select the protocol for checking: TCP, Ping, or HTTP, and set the appropriate configuration.
2. Specify check interval (sec)—the time between sent requests.
3. Specify response time (sec)—the time to wait for a response from a server.
4. Specify unhealthy threshold—the number of failed requests after which traffic will no longer be sent to the Virtual Machine.
5. Specify healthy threshold—the number of successful requests after which the Virtual Machine will be considered ready to receive traffic.
Specify the client data, member connect, and member data timeouts in milliseconds (msec).
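Taken together, the check interval, response time, and the two thresholds decide when a member stops receiving traffic and when it is brought back. The loop below is a simplified, hypothetical model of a TCP health check using those parameters; it is not the Load Balancer's actual health monitor.

```python
import socket
import time

CHECK_INTERVAL = 10      # seconds between checks ("check interval")
RESPONSE_TIME = 2        # seconds to wait for an answer ("response time")
UNHEALTHY_THRESHOLD = 3  # failed checks before traffic stops
HEALTHY_THRESHOLD = 2    # successful checks before traffic resumes

def tcp_check(host: str, port: int) -> bool:
    """A TCP check succeeds if the member accepts a connection within the response time."""
    try:
        with socket.create_connection((host, port), timeout=RESPONSE_TIME):
            return True
    except OSError:
        return False

def monitor(host: str, port: int) -> None:
    healthy, failures, successes = True, 0, 0
    while True:
        if tcp_check(host, port):
            failures, successes = 0, successes + 1
            if not healthy and successes >= HEALTHY_THRESHOLD:
                healthy = True      # member starts receiving traffic again
        else:
            successes, failures = 0, failures + 1
            if healthy and failures >= UNHEALTHY_THRESHOLD:
                healthy = False     # traffic is no longer sent to the member
        print(f"{host}:{port} healthy={healthy}")
        time.sleep(CHECK_INTERVAL)

# monitor("10.0.1.10", 80)  # hypothetical pool member
```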
Each terminated HTTPS listener requires a default TLS certificate. Additional SNI certificates can also be configured, allowing multiple certificates to be assigned to the same listener.
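SNI lets the listener choose a certificate based on the hostname the client sends during the TLS handshake, falling back to the default certificate otherwise. The snippet below sketches that selection with Python's standard ssl module and made-up certificate file names; it only illustrates the concept, not how the Load Balancer stores or serves your certificates.

```python
import ssl

# Default certificate: used when the client sends no SNI or an unknown hostname.
default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default_ctx.load_cert_chain("default.crt", "default.key")           # hypothetical files

# Additional SNI certificate for a specific hostname.
example_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
example_ctx.load_cert_chain("example.com.crt", "example.com.key")   # hypothetical files

def choose_certificate(ssl_socket, server_name, initial_ctx):
    """Swap in the SNI certificate when the hostname matches, else keep the default."""
    if server_name == "example.com":
        ssl_socket.context = example_ctx

default_ctx.sni_callback = choose_certificate
# default_ctx is then used to wrap the listening socket; every TLS handshake
# runs choose_certificate before a certificate is sent to the client.
```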
Enter a name for the Load Balancer. This name will be displayed in the Gcore Customer Portal.
The Logging service will be activated to store your logs. To learn how it works and how to configure it, refer to the article about Logging.
Create tags for your load balancer by entering "Key" and "Value".
Check the settings and click Create Load Balancer on the right.
Configure firewalls for Virtual Machines in the pool according to the separate guide.
Make sure their ports are open for the Load Balancer traffic.