A Load Balancer is a tool that distributes incoming requests across Virtual Machines and Bare Metal servers to improve your infrastructure's fault tolerance.
Gcore Load Balancers come with various configuration options to fit different network requirements. We’ve also conducted multiple performance tests on available flavors to help you make an informed decision and select the most effective solution for your infrastructure.
Our Load Balancers also support long-lived (keepalive) connections through Server-Sent Events (SSE), long polling, and WebSockets. To keep these connections stable, adjust the data timeout to values appropriate for your application's requirements.
We’ve tested our Load Balancers to determine the performance of different flavors.
For each flavor, we deployed the client in multithreaded mode with 36 concurrent threads and 400 connections over a 30-second test run.
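A setup like this can be reproduced with an HTTP load-testing tool such as wrk (the tool choice and target URL here are our assumptions; the article doesn't name the tool used):

```shell
# 36 threads, 400 open connections, 30-second run, with latency stats.
# Replace the URL with your Load Balancer's address.
wrk -t36 -c400 -d30s --latency https://203.0.113.10/
```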
The results show:
Throughput: The number of requests per second (RPS) a Load Balancer can handle under a given number of simultaneous user requests.
Latency: Response times for both HTTP and HTTPS traffic across different Load Balancer flavors.
Flavor | HTTP Throughput (RPS) | HTTP Latency (ms) | HTTPS Throughput (RPS) | HTTPS Latency (ms)
---|---|---|---|---
1 vCPU - 2 GiB | 21k | 4 | 20k | 20
2 vCPU - 2 GiB | 45k | 3 | 34k | 12
4 vCPU - 8 GiB | 91k | 5 | 51k | 8
8 vCPU - 16 GiB | 142k | 3 | 117k | 4
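One way to read the table is to compare HTTP and HTTPS throughput per flavor, which shows the relative cost of TLS termination. The snippet below computes that overhead from the figures above (the helper name is ours):

```python
# Throughput figures from the table above (requests per second).
flavors = {
    "1 vCPU - 2 GiB": (21_000, 20_000),
    "2 vCPU - 2 GiB": (45_000, 34_000),
    "4 vCPU - 8 GiB": (91_000, 51_000),
    "8 vCPU - 16 GiB": (142_000, 117_000),
}

def tls_overhead(http_rps: int, https_rps: int) -> float:
    """Relative throughput lost to TLS termination, as a percentage."""
    return round((1 - https_rps / http_rps) * 100, 1)

for name, (http_rps, https_rps) in flavors.items():
    print(f"{name}: {tls_overhead(http_rps, https_rps)}% lower RPS over HTTPS")
```

Note that the penalty is not uniform across flavors, so it's worth benchmarking with the protocol mix your application actually serves.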