A load balancer is a tool that distributes incoming requests across your virtual machines to improve the fault tolerance of your infrastructure.
1. Go to your project -> Networking -> Load Balancers -> Create Load Balancer.
A new window opens. Complete the remaining steps in it.
2. Select a region for balancing. Please note that you can balance traffic only within a single data center.
3. Select a network. If you want to use a private network for load balancing, enable the Use private network option. For more information, see the article "Create and manage a network".
4. Add one or more listeners. A listener is a process that checks for connection requests using the protocol and port that you configure.
In the drop-down window, specify the listener name, protocol (TCP or HTTP), and port in the range from 1 to 65535.
We also support adding an X-Forwarded-For header to identify the originating IP address of a client connecting to a web server through the load balancer.
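For example, a backend behind the balancer could recover the client address from this header. A minimal sketch (the function name and addresses are illustrative):

```python
def client_ip(headers, remote_addr):
    # The balancer appends the client address to X-Forwarded-For,
    # so the left-most entry is the original client. Trust this
    # header only when traffic can reach the server solely through
    # the balancer, since clients can forge it otherwise.
    xff = headers.get("X-Forwarded-For")
    if xff:
        return xff.split(",")[0].strip()
    return remote_addr  # direct connection: use the socket address

print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.2"}, "10.0.0.2"))
# 203.0.113.7
```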
5. Configure a pool. A pool is a list of virtual machines to which the listener will redirect incoming traffic.
Click Add pool to start configuring.
5.1. Specify the pool name.
5.2. Select the balancing algorithm:
Round robin — requests are distributed across servers within a cluster one by one: the first request is sent to the first server, the second request is sent to the second server, and so on in a circle.
Least Connection — new requests are sent to a server with the fewest active connections.
Source IP — a client's IP address is used to determine which server receives the request.
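The three algorithms can be sketched in a few lines (illustrative only, not the balancer's actual implementation):

```python
import itertools
import zlib

servers = ["vm-1", "vm-2", "vm-3"]

# Round robin: walk the pool in order, wrapping around.
rr = itertools.cycle(servers)
print([next(rr) for _ in range(4)])  # ['vm-1', 'vm-2', 'vm-3', 'vm-1']

# Least Connection: pick the server with the fewest active connections.
active = {"vm-1": 4, "vm-2": 1, "vm-3": 2}
print(min(active, key=active.get))  # vm-2

# Source IP: hash the client address onto the pool, so the same
# client consistently lands on the same server.
client = "203.0.113.7"
print(servers[zlib.crc32(client.encode()) % len(servers)])
```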
5.3. Select a protocol. The system offers an option based on the listener's settings: an HTTP listener communicates with servers via HTTP, a TCP listener via TCP.
5.4. If you need to route all requests within a session to the machine that served the first request of that session, select App Cookie and fill in the Cookie field. A special module creates a cookie that makes each browser unique and then uses it to forward requests to the same server.
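The idea behind cookie-based stickiness can be sketched as follows (a simplified model; the real module runs inside the balancer, and the names are illustrative):

```python
sessions = {}  # cookie value -> server assigned on the first request
pool = ["vm-1", "vm-2"]

def route(cookie, next_server):
    # First request of a session: assign a server and remember it.
    # Later requests carrying the same cookie return to that server.
    if cookie not in sessions:
        sessions[cookie] = next_server()
    return sessions[cookie]

first = route("session-abc", lambda: pool[0])
again = route("session-abc", lambda: pool[1])  # ignored: session is pinned
print(first, again)  # vm-1 vm-1
```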
5.5. Add the virtual machines that will participate in traffic distribution for the configured listener. To add an instance, specify its port and its weight in the distribution.
5.6. In the Health check section, select the protocol for the check: TCP, Ping, or HTTP.
For the HTTP protocol, also select the HTTP method and add the URL path.
Specify the following settings for the protocols:
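On the instance side, an HTTP health check only needs an endpoint that answers the probe with a 200 status. A minimal sketch (the /healthz path and port are illustrative; use whatever URL path you configured above):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer the balancer's probe with 200 so the instance is
        # marked healthy; any other path gets 404.
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep probe traffic out of the access log

def serve(port=8080):
    # Run the endpoint in a background thread; return the server
    # so the caller can shut it down later.
    srv = HTTPServer(("0.0.0.0", port), HealthHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```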
6. Enter a name for the load balancer and click Create Load Balancer.
7. Configure firewalls for the instances in the pool.
Make sure their ports are open to traffic from the load balancer.
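To verify from another host that an instance port is actually reachable, a quick TCP probe is enough. A sketch (the host and port in the example are illustrative):

```python
import socket

def port_open(host, port, timeout=2.0):
    # Attempt a TCP handshake; success means the firewall lets
    # traffic through to this port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check a backend's HTTP port (address is illustrative)
# port_open("10.0.0.5", 80)
```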
8. Set up the balancer's firewall (optional).
Create a custom security group (this acts as the firewall) and edit it: configure the rules for inbound and outbound traffic.
The list of created balancers is located in your project -> Networking -> Load balancers.
There you can manage each balancer: select the necessary action on the selector to the right of the balancer.
Go to your project -> Networking -> Load balancers -> select the Overview option on the selector to the right of the chosen balancer.
In the drop-down window, you can edit existing listeners in the load balancer and also add new ones.
You can edit and delete listeners: select the appropriate option on the selector to the right of the listener.
In the editor, you can:
Status (UI) | Status (API) | Value
---|---|---
Healthy | Online | The balancer is working. All virtual machines in the pool accept requests.
Unhealthy | Draining | A virtual machine from the pool does not accept new requests.
Degraded | | One or more balancer components have the "Error" status.
Error | | The balancer doesn't work. Virtual machines do not pass check requests. All virtual machines in the pool have the "Error" status.
We have tested our load balancers to determine the performance of different flavors. The test results show the throughput: the number of requests per second (RPS) a load balancer can handle under a given number of simultaneous requests to the worker nodes behind it, with 95 percent of requests served within 1 ms.
Flavor | Throughput (RPS) | Simultaneous requests | 95th percentile response time
---|---|---|---
1 vCPU - 2 GiB | 5k | 1 | 1 ms
2 vCPU - 4 GiB | 5k | 16 | 1 ms
4 vCPU - 8 GiB | 5k | 512 | 1 ms
8 vCPU - 16 GiB | 5k | 2048 | 1 ms
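The percentile figure means that 95% of requests complete at or under the stated latency. A nearest-rank computation over measured latencies looks like this (the sample numbers are made up for illustration):

```python
import math

def percentile(samples, p):
    # Nearest-rank method: the smallest sample such that at least
    # p percent of all samples are less than or equal to it.
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[rank - 1]

latencies_ms = [0.5] * 10 + [1.0] * 9 + [3.0]  # 20 fabricated samples
print(percentile(latencies_ms, 95))  # 1.0 -> "95th percentile is 1 ms"
```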