Nginx Load Balancing Configuration
Set up Nginx load balancing to distribute traffic across multiple backend servers. Covers round-robin, least connections, IP hash, and weighted methods.
Detailed Explanation
Load balancing distributes incoming traffic across multiple backend servers to improve availability, reliability, and overall performance. Nginx supports several load balancing algorithms out of the box.
Upstream Block
Define your backend servers in an upstream block, then reference that group name in proxy_pass. Nginx will automatically distribute requests across the defined servers.
upstream backend {
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
Load Balancing Methods
Round-robin (the default) distributes requests sequentially across servers in order. Least connections (least_conn) sends each request to the server with the fewest active connections at that moment, making it ideal for applications with varying request durations. IP hash (ip_hash) computes a hash of the client IP address to ensure the same client always reaches the same backend server, which is useful for maintaining session persistence without external session storage.
upstream backend {
    least_conn;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}
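Switching the same group to IP-hash persistence only requires changing the method directive; the addresses below are the same example backends used above.

upstream backend {
    ip_hash;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}

Note that ip_hash can distribute unevenly when many clients sit behind a shared proxy or NAT, since they all hash to the same backend.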
Weighted Distribution
Assign weights to servers when they have different processing capacities. A server with weight=3 receives three times as many requests as a server with weight=1, allowing you to take advantage of more powerful hardware in your pool.
upstream backend {
    server 10.0.0.1:3000 weight=3;
    server 10.0.0.2:3000 weight=1;
}
Health Checks and Failover
Mark a server as a backup so it receives traffic only when all primary servers are unavailable. Use max_fails and fail_timeout to control passive health checking: after max_fails failed attempts within fail_timeout, Nginx temporarily removes the server from rotation for the fail_timeout period.
upstream backend {
    server 10.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:3000 backup;
}
Best Practices
- Monitor backend server health and response times with logging and external monitoring tools.
- Use keepalive connections between Nginx and backends to significantly reduce TCP handshake overhead.
- Combine load balancing with SSL termination at the Nginx layer for simplified certificate management across your entire fleet.
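The keepalive recommendation above can be sketched as follows; the connection-pool size of 32 is an arbitrary example value. Upstream keepalive requires HTTP/1.1 and a cleared Connection header on the proxied request.

upstream backend {
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;

    # Keep up to 32 idle connections open to the backends.
    keepalive 32;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;

        # Required for upstream keepalive to take effect.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}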
Use Case
You are scaling a web application horizontally by adding multiple application servers and need Nginx to distribute traffic evenly while providing automatic failover for downed servers.
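A minimal configuration combining these pieces for that scenario might look like the sketch below; the addresses, weights, and failure thresholds are illustrative values, not recommendations.

upstream app_servers {
    least_conn;
    server 10.0.0.1:3000 weight=2 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:3000 weight=1 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:3000 backup;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;

        # Retry the next server on connection errors or timeouts.
        proxy_next_upstream error timeout;
    }
}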