After you create a Load Balancer, it will appear on the Load Balancers page in the Customer Portal. To access the page:
1. Navigate to Cloud > Networking.
2. Open the Load Balancers page.
On this page, you can find all the necessary information about the created resource:
You can explore these settings in more detail, monitor the Load Balancer's performance and health metrics, and adjust configurations as needed. Refer to the following sections for instructions.
When managing a significant number of Load Balancers, use the Search by name feature to find the exact one you need.
A Load Balancer and its components (listener, pool, and pool member) can have two types of statuses:
Operating status: The current observed operational state of a resource. The status reflects whether the resource is functioning as expected or if there are any issues with its operation.
Provisioning status: The lifecycle stage of a resource, from creation through any changes to potential deletion.
The operating status of a resource can change to:
During the resource’s lifetime, its provisioning status can signal the following:
Updating the name, description, or tags of a Load Balancer won't cause any connection interruptions or downtime.
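If you automate checks against these statuses, a simple approach is to poll the resource until provisioning settles. The sketch below assumes a Gcore-style REST endpoint and JSON field names (provisioning_status, operating_status) purely for illustration; verify them against the Gcore API reference before relying on them.

```python
import time
import requests

# Illustrative sketch only: the endpoint layout, header, and field names below
# are assumptions and should be checked against the Gcore API reference.
API_URL = "https://api.gcore.com/cloud/v1/loadbalancers/<project_id>/<region_id>/<loadbalancer_id>"
HEADERS = {"Authorization": "APIKey <your_api_key>"}

def wait_until_active(timeout_s: int = 600, interval_s: int = 10) -> dict:
    """Poll the Load Balancer until its provisioning status becomes Active."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(API_URL, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        lb = resp.json()
        # Assumed field names; the portal shows these as Provisioning/Operating status.
        if str(lb.get("provisioning_status", "")).upper() == "ACTIVE":
            return lb
        time.sleep(interval_s)
    raise TimeoutError("Load Balancer did not become Active in time")

lb = wait_until_active()
print(lb.get("provisioning_status"), lb.get("operating_status"))
```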
You can inspect the configuration details of a created Load Balancer, view and copy its routing scheme, add and remove listeners, or enable logging on the Overview page. To open the page:
1. Navigate to Cloud > Networking.
2. On the Load Balancers page, find the balancer you want to configure and click its name to open the settings.
The Overview page features the following information:
The firewall offers flexible network configuration, allowing you to control incoming and outgoing traffic.
After creating the balancer, update the firewall settings and open the required VM ports so that traffic can pass between the balancer and your instances in both directions.
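As a rough illustration of the kind of rule to add, the snippet below builds an inbound rule description for a pool member; the field names and subnet are hypothetical and do not represent a specific Gcore schema.

```python
# Hypothetical helper: describes the inbound rule you would add to a pool
# member's firewall or security group. Field names are illustrative, not a
# specific Gcore API schema.
def inbound_rule(port: int, protocol: str = "tcp", source_cidr: str = "0.0.0.0/0") -> dict:
    """Build a dict describing an inbound rule that lets listener traffic reach the VM."""
    return {
        "direction": "ingress",
        "protocol": protocol,
        "port_range_min": port,
        "port_range_max": port,
        # Restrict the source to the balancer's subnet where possible.
        "remote_ip_prefix": source_cidr,
    }

# Example: allow HTTP traffic from the balancer's (placeholder) subnet to the VM.
print(inbound_rule(80, source_cidr="10.0.0.0/24"))
```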
The Load Balancer settings are organized into tabs, each dedicated to a specific functionality. The following sections provide a detailed description of these settings.
This tab features a summary of the Load Balancer's configuration and its resources.
This tab provides an overview of the Load Balancer's performance metrics that are crucial for tracking its health and performance.
By checking the metrics, you can understand whether the current resources are sufficient or whether you need to scale up to optimize the Load Balancer's performance.
Monitoring data is available in hourly and weekly views:
You can also set the auto-refresh interval to 1, 5, 15, or 30 minutes, or to one hour.
CPU Utilization, %: The percentage of the Load Balancer's CPU currently in use. It's a key performance indicator that can help you understand if the Load Balancer is nearing its capacity limits. High CPU usage is typically observed with a large number of connections or high volumes of traffic.
RAM Utilization, %: The percentage of total available memory currently used by the Load Balancer. High RAM utilization indicates that the Load Balancer is handling a large number of connections or running memory-intensive processes, which is common with terminated HTTPS traffic. Other types of traffic generally consume less RAM.
Network traffic: The rate of incoming (ingress) and outgoing (egress) network traffic measured in kilobits per second (kbit/s). The metrics can help you understand the network load on the Load Balancer.
Network packets: The rate of network packets received (ingress) and sent (egress) by the balancer, measured in packets per second (PPS).
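If you export these metrics for automated checks, even simple thresholds can tell you when to consider a larger flavor. The snippet below is an illustrative sketch with made-up threshold values, not official Gcore recommendations.

```python
# Illustrative thresholds only; pick values that match your own workload.
CPU_ALERT_PCT = 80.0
RAM_ALERT_PCT = 80.0

def needs_scaling(cpu_pct: float, ram_pct: float, ingress_kbit_s: float, egress_kbit_s: float) -> list[str]:
    """Return human-readable reasons why the Load Balancer may need a larger flavor."""
    reasons = []
    if cpu_pct >= CPU_ALERT_PCT:
        reasons.append(f"CPU at {cpu_pct:.0f}% (many connections or high traffic volume)")
    if ram_pct >= RAM_ALERT_PCT:
        reasons.append(f"RAM at {ram_pct:.0f}% (common with terminated HTTPS traffic)")
    # Network traffic is reported in kbit/s; convert to Mbit/s for readability.
    total_mbit_s = (ingress_kbit_s + egress_kbit_s) / 1000
    if total_mbit_s > 900:  # example ceiling for a hypothetical 1 Gbit/s flavor
        reasons.append(f"network traffic at {total_mbit_s:.0f} Mbit/s, close to the link capacity")
    return reasons

print(needs_scaling(cpu_pct=85, ram_pct=40, ingress_kbit_s=300_000, egress_kbit_s=650_000))
```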
A listener is a process that checks for connection requests using the configured protocol and port. A listener contains a pool—a list of virtual machines or Bare Metal servers that will receive traffic from the listener.
Listener configuration:
You can't add multiple listeners with the same L4 protocol and port number. For example, you can create listeners on TCP 80 and UDP 80, but not two listeners on TCP 80.
It's also not possible to combine TCP 80 and HTTP 80. HTTP is an L7 protocol that runs on top of L4 TCP, so this combination is treated as TCP 80 + TCP 80.
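The rule above boils down to comparing listeners by their effective L4 slot. The following sketch is a minimal illustration of that check (not part of any Gcore SDK); mapping HTTP to TCP reflects the rule described above.

```python
# Map L7 protocols to the L4 protocol they occupy. HTTP rides on TCP, so
# "HTTP 80" and "TCP 80" compete for the same slot.
L4_OF = {"TCP": "TCP", "UDP": "UDP", "HTTP": "TCP"}

def listener_conflicts(existing: list[tuple[str, int]], new: tuple[str, int]) -> bool:
    """Return True if the new (protocol, port) pair clashes with an existing listener."""
    new_slot = (L4_OF[new[0].upper()], new[1])
    return any((L4_OF[p.upper()], port) == new_slot for p, port in existing)

listeners = [("TCP", 80), ("UDP", 80)]              # allowed together
print(listener_conflicts(listeners, ("HTTP", 80)))  # True: HTTP 80 is effectively TCP 80
print(listener_conflicts(listeners, ("TCP", 443)))  # False: different port
```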
For optimal performance, we recommend updating pool settings in batches. To make comprehensive, simultaneous updates, use the Gcore API (an illustrative batch payload is sketched after the pool settings below).
A pool is the list of VMs or Bare Metal servers to which the listener will redirect incoming traffic. Each pool in the listener has the following configuration:
Name: The name of the pool.
Algorithm: The load-balancing method used to distribute traffic among instances in the pool. Check available algorithms in pool settings.
Instance count: The number of virtual machines in the pool whose traffic is managed by the Load Balancer.
Health Check: The method used to determine the health of the instances in the pool.
Operating status: The current operational state of the pool. "Online" indicates that the pool can handle traffic.
Provisioning status: The status of the configuration process for the pool. "Active" means the pool is fully set up and ready for use.
Protocol: The network protocol that the pool uses to communicate with instances and members.
Instances:
Health check:
Timeouts:
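If you update pools via the API, as recommended above, sending one comprehensive payload keeps the number of reconfigurations low. The sketch below shows what such a batch update could look like; the endpoint path, header, and field names are assumptions for illustration, so verify them against the Gcore API reference.

```python
import requests

# Assumed endpoint layout and field names, shown only to illustrate a single
# batch update of pool settings; consult the Gcore API reference for the
# actual schema before using anything like this.
POOL_URL = "https://api.gcore.com/cloud/v1/lbpools/<project_id>/<region_id>/<pool_id>"
HEADERS = {"Authorization": "APIKey <your_api_key>"}

batch_payload = {
    "name": "web-pool",
    "lb_algorithm": "ROUND_ROBIN",    # load-balancing method
    "protocol": "HTTP",               # protocol used to reach pool members
    "members": [                      # the VMs / Bare Metal servers behind the listener
        {"address": "10.0.0.11", "protocol_port": 80},
        {"address": "10.0.0.12", "protocol_port": 80},
    ],
    "healthmonitor": {                # health check settings
        "type": "HTTP",
        "delay": 10,
        "timeout": 5,
        "max_retries": 3,
    },
    "timeout_client_data": 50000,     # timeouts in milliseconds (illustrative values)
    "timeout_member_data": 50000,
}

resp = requests.patch(POOL_URL, json=batch_payload, headers=HEADERS, timeout=30)
resp.raise_for_status()
```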
An L7 policy specifies how to manage and route incoming traffic at the application layer (Layer 7 of the OSI model).
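For example, an L7 policy could send all requests whose path starts with /api to a dedicated pool. The structure below is a hedged illustration of such a rule; the field names are assumptions rather than the platform's exact schema.

```python
# Illustrative only: a path-based L7 policy that sends API traffic to a
# separate pool. Field names are assumptions, not the platform's exact schema.
l7_policy = {
    "name": "route-api-traffic",
    "action": "REDIRECT_TO_POOL",
    "redirect_pool": "api-pool",
    "rules": [
        {"type": "PATH", "compare_type": "STARTS_WITH", "value": "/api"},
    ],
}
print(l7_policy["name"], "->", l7_policy["redirect_pool"])
```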
This tab allows you to change the flavor of your Load Balancer based on your workload requirements, expected traffic, required computing resources, and other factors.
This tab lets you record detailed information about the traffic processed by a Load Balancer. This data is useful for debugging, monitoring, and analyzing the behavior of your applications.
Our Logging for Load Balancers guide covers all available settings and configuration steps.
On this tab, you can review and assign metadata to a Load Balancer.
You can use tags to group multiple resources that belong to the same functionality. For instance, if you have a single project with different services—such as an authentication service, a shop service, and a cart service—each service can have its own Load Balancer. By applying tags, you can easily organize and manage these resources as a group.
When you add the "test" custom tag to your Load Balancer, you essentially label that resource with "test:LB". This can help you quickly identify the resource's purpose, environment, owner, or any other characteristic that is relevant to your organization.
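Once resources are tagged consistently, grouping them is straightforward on the client side. The snippet below filters an illustrative, made-up list of resources by a tag key and value.

```python
# Purely illustrative resource list; in practice you would read tags from the
# Customer Portal or the API.
resources = [
    {"name": "auth-lb", "tags": {"service": "authentication", "env": "test"}},
    {"name": "shop-lb", "tags": {"service": "shop", "env": "production"}},
    {"name": "cart-lb", "tags": {"service": "cart", "env": "test"}},
]

def with_tag(items: list[dict], key: str, value: str) -> list[str]:
    """Return the names of resources carrying the given tag key/value pair."""
    return [r["name"] for r in items if r["tags"].get(key) == value]

print(with_tag(resources, "env", "test"))  # ['auth-lb', 'cart-lb']
```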
The failover mechanism automatically detects Load Balancers that have failed or demonstrate degraded performance.
When you initiate a failover, the traffic is redirected to an alternate Load Balancer within your network infrastructure. The second balancer immediately takes over the duties of the failed one without interrupting the availability of your application.
To initiate a failover:
1. Navigate to Cloud > Networking.
2. Open the Load Balancers page.
3. Find the needed balancer and click its name to open it.
4. In the top-right corner of the screen, click Actions > Failover.
5. (Optional) Enable force failover. To fail over the primary Load Balancer regardless of its provisioning status, select the Ignore Load Balancer provisioning status checkbox. After the failover completes, traffic will be handled by a standby balancer.
6. Click Initiate Failover.
The failover might take a few minutes to complete. During this time, the Load Balancer provisioning status will change to Updating.
During failover or resize operations, all current connections will be terminated, causing temporary disconnects.
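Because open connections are dropped during these operations, clients should be ready to reconnect. The sketch below shows a generic retry-with-exponential-backoff pattern against a placeholder endpoint behind the balancer.

```python
import time
import requests

def get_with_retry(url: str, attempts: int = 5, base_delay_s: float = 1.0) -> requests.Response:
    """Retry a GET request with exponential backoff to ride out brief disconnects."""
    for attempt in range(attempts):
        try:
            return requests.get(url, timeout=10)
        except requests.ConnectionError:
            if attempt == attempts - 1:
                raise
            # Back off 1s, 2s, 4s, ... while the standby balancer takes over.
            time.sleep(base_delay_s * (2 ** attempt))
    raise RuntimeError("unreachable")  # keeps the function total for attempts <= 0

# Placeholder endpoint behind the Load Balancer (documentation IP range).
response = get_with_retry("http://203.0.113.10/health")
print(response.status_code)
```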
1. Navigate to Cloud > Networking.
2. Open the Load Balancers page.
3. Find the Load Balancer you want to update and click the three-dot icon next to it.
4. Select Rename.
5. Enter a new name and click Save to apply the changes.
To update metadata assigned to a Load Balancer:
1. Navigate to Cloud > Networking.
2. Open the Load Balancers page.
3. Find the Load Balancer you want to update and click its name to open the settings.
4. Navigate to the Tags tab, where you can view and modify the balancer’s metadata.
5. Update tags as you see fit and click Save.
1. Navigate to Cloud > Networking.
2. Open the Load Balancers page.
3. Find the Load Balancer you want to update and click the three-dot icon next to it.
4. Select Delete.
5. Click Delete to verify your action.
Your Load Balancer has been successfully removed.