Load Balancer
A load balancer is a server, service, or software component that distributes incoming network traffic across multiple backend servers to improve availability, performance, and scalability. By routing requests based on health checks, algorithms, and session rules, it prevents any single server from becoming a bottleneck. Load balancing can operate at the transport layer (TCP/UDP) or the application layer (HTTP/HTTPS).
How It Works
A load balancer sits in front of a group of backend servers (often called a pool, cluster, or target group) and exposes a single public endpoint such as an IP address or domain name. When a request arrives, it selects a healthy backend based on a routing method like round robin, least connections, weighted distribution, or hash-based routing. It can also terminate TLS/SSL, reuse connections to backends, and apply timeouts and limits to protect servers from slow or abusive clients.
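The routing methods above can be sketched in a few lines of Python. This is an illustrative model, not the implementation of any particular load balancer; the backend addresses are placeholders:

```python
import hashlib
from itertools import cycle

# Hypothetical backend pool; addresses are illustrative only.
backends = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

# Round robin: rotate through backends in a fixed order.
_rr = cycle(backends)
def round_robin():
    return next(_rr)

# Least connections: pick the backend with the fewest active connections.
# A real balancer would update these counts as requests start and finish.
active_connections = {b: 0 for b in backends}
def least_connections():
    return min(active_connections, key=active_connections.get)

# Hash-based routing: the same client IP always maps to the same backend,
# which gives a crude form of session affinity without server-side state.
def ip_hash(client_ip):
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]
```

Hash-based routing is often used when stickiness is needed but the balancer cannot inspect cookies, since it keys only on connection metadata.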
Health checks are central to load balancing. The load balancer periodically probes each backend (for example, an HTTP path or a TCP port) and removes unhealthy nodes from rotation until they recover. Many setups also use session persistence (sticky sessions) to keep a user bound to the same backend when an application stores session state locally. More modern architectures avoid stickiness by storing sessions in shared systems (Redis, databases) so any backend can serve any request.
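A minimal health-check loop might look like the sketch below, using a TCP probe and a set of healthy backends. The addresses, timeout, and probe interval are assumptions for illustration:

```python
import socket

# Illustrative backend addresses; a real pool would come from configuration.
backends = [("10.0.0.1", 8080), ("10.0.0.2", 8080)]
healthy = set(backends)

def tcp_probe(host, port, timeout=1.0):
    """Return True if a TCP connection succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_health_checks():
    """Probe every backend; drop failures from rotation, restore recoveries."""
    for backend in backends:
        if tcp_probe(*backend):
            healthy.add(backend)       # back in rotation once it recovers
        else:
            healthy.discard(backend)   # excluded until a later probe succeeds
```

An HTTP health check works the same way but requests a specific path (such as `/healthz`) and treats non-2xx responses as failures, which catches application-level problems a bare TCP probe would miss.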
Why It Matters for Web Hosting
Load balancing affects which hosting plan fits your traffic and uptime goals. Shared hosting rarely includes it, while VPS, dedicated, and cloud plans may offer built-in or add-on load balancers. When comparing providers, look for support for health checks, HTTPS termination, WebSocket/HTTP/2 compatibility, configurable algorithms, and easy scaling of backend nodes. Also consider whether your application needs sticky sessions or can run statelessly for simpler scaling.
Common Use Cases
- Scaling a website across multiple web servers (Nginx/Apache) to handle traffic spikes
- High-availability setups where failed servers are automatically removed from service
- Separating responsibilities by routing to different backends (API vs static content)
- TLS/SSL offloading to reduce CPU load on application servers
- Blue-green or canary deployments by shifting a percentage of traffic to new versions
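The canary pattern in the last bullet amounts to a weighted random split between pools. A minimal sketch, assuming hypothetical pool names and a 5% canary weight:

```python
import random

# Hypothetical weighted split for a canary rollout; pool names and
# weights are illustrative. Weights should sum to 1.0.
pools = {"stable": 0.95, "canary": 0.05}

def pick_pool(rng=random.random):
    """Choose a pool with probability proportional to its weight."""
    r = rng()
    cumulative = 0.0
    for name, weight in pools.items():
        cumulative += weight
        if r < cumulative:
            return name
    return "stable"  # fallback for floating-point rounding at the boundary
```

Shifting more traffic to the new version is then just a matter of adjusting the weights, and a bad release can be rolled back by setting the canary weight to zero.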
Load Balancer vs Reverse Proxy
A reverse proxy forwards client requests to one or more backend servers and often focuses on features like caching, compression, security headers, and request filtering. A load balancer specifically emphasizes distributing traffic across multiple backends and maintaining availability through health checks and failover. In practice, many tools (for example, Nginx or HAProxy) can act as both, but the key difference is intent: optimization and mediation (reverse proxy) versus distribution and resilience (load balancer).