Nginx Upstream Configuration
Configure Nginx upstream blocks to define backend server groups with keepalive connections, weighted servers, hash-based routing, and failover setup.
Detailed Explanation
The upstream block in Nginx defines a named group of backend servers that can be referenced by proxy_pass, fastcgi_pass, or other proxy directives. It forms the foundation of load balancing and high availability configurations.
Basic Upstream Definition
upstream api_servers {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
}

server {
    location /api/ {
        proxy_pass http://api_servers;
    }
}
Server Parameters
Each server entry in an upstream block supports several parameters for fine-grained control over traffic distribution and health detection:
upstream backend {
    server 10.0.1.10:8080 weight=5;
    server 10.0.1.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.12:8080 backup;
    server 10.0.1.13:8080 down;
}
- weight: Controls the traffic distribution ratio between servers with different capacities; a server with weight=5 receives five times as many requests as one with the default weight of 1.
- max_fails / fail_timeout: After max_fails failed attempts within fail_timeout seconds, the server is marked unavailable for the remainder of the fail_timeout period before being retried.
- backup: This server only receives traffic when all non-backup servers in the group are unavailable.
- down: Marks a server as permanently unavailable, useful during planned maintenance windows.
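What counts as a failed attempt toward max_fails is governed by the proxy_next_upstream family of directives in the proxying location. As a sketch (the values here are illustrative, not prescriptive):

```nginx
location /api/ {
    proxy_pass http://backend;

    # Count connection errors, timeouts, and these 5xx responses
    # as failed attempts, and retry them on the next server.
    proxy_next_upstream error timeout http_500 http_502 http_503;

    # Bound retries so one bad request cannot cascade through
    # every server in the group.
    proxy_next_upstream_tries 2;
    proxy_next_upstream_timeout 10s;
}
```

Without these bounds, a request that fails everywhere is retried against each server in turn, which can amplify load during an outage.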
Keepalive Connections
Establishing a new TCP connection for every proxied request adds measurable latency. The keepalive directive maintains a pool of idle persistent connections to upstream servers:
upstream backend {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    keepalive 32;
}

location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
Setting proxy_http_version 1.1 and clearing the Connection header are both required for keepalive connections to function correctly with upstream servers. Without these directives, Nginx falls back to HTTP/1.0 which closes connections after each response.
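The connection pool can be tuned further with keepalive_requests and keepalive_timeout, both supported in the upstream context since Nginx 1.15.3. A sketch with illustrative values:

```nginx
upstream backend {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;

    keepalive 32;             # idle connections cached per worker process
    keepalive_requests 1000;  # recycle a connection after this many requests
    keepalive_timeout 60s;    # close idle upstream connections after 60s
}
```

Note that the keepalive value caps idle connections per worker, not total connections; under load Nginx still opens as many connections as needed.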
Hash-Based Routing
Use the hash directive for consistent routing based on a request attribute, ensuring the same request key always reaches the same backend server:
upstream backend {
    hash $request_uri consistent;
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
}
The consistent parameter implements a consistent hashing ring algorithm that minimizes key redistribution when servers are added to or removed from the upstream group, which is important for maintaining cache efficiency.
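The hash key can be any variable or combination of variables, not just $request_uri. For example, to pin each API consumer to one backend, you could hash on a client-supplied header; the header name here is an illustrative assumption, adjust it to your setup:

```nginx
upstream backend {
    # Pin each consumer to one backend. $http_x_api_key is an
    # assumed client header; any stable per-client variable works.
    hash $http_x_api_key consistent;
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
}
```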
Slow Start
The slow_start parameter gradually ramps a recovered server back up to its full traffic share, preventing it from being immediately overwhelmed before its caches and connection pools are warmed up. Note that slow_start is available only in NGINX Plus, the commercial distribution; open-source Nginx does not recognize the parameter.
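On NGINX Plus, the syntax looks like the following sketch (the 30s ramp is an illustrative value):

```nginx
upstream backend {
    # Ramp this server from zero to its full weight over 30 seconds
    # after it recovers from a failure (NGINX Plus only).
    server 10.0.1.10:8080 slow_start=30s;
    server 10.0.1.11:8080;
}
```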
Use Case
You are managing a pool of backend application servers and need fine-grained control over how Nginx distributes traffic, handles failures, and maintains persistent connections.
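A combined sketch of such a pool, tying together weighting, failure detection, failover, and keepalive connections (addresses and values are illustrative):

```nginx
upstream app_pool {
    server 10.0.1.10:8080 weight=3;                           # larger machine
    server 10.0.1.11:8080 weight=1 max_fails=3 fail_timeout=30s;
    server 10.0.1.20:8080 backup;                             # failover only

    keepalive 16;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_pool;

        # Required for upstream keepalive to take effect.
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # Retry connection errors, timeouts, and bad-gateway
        # responses on the next server in the pool.
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```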