Load Balancing Algorithms used in NGINX

In Distributed Systems, Performance, Software Development, web application by Prabhu Missier

NGINX can be configured as a reverse proxy to balance load across application servers, to distribute requests over protocols other than HTTP, or in its original role as a web server. When it comes to load balancing, the following algorithms are the most commonly used configurations:

Round Robin
This is the default option, where client requests are distributed evenly among the servers in the cluster. The disadvantage of this method is that the actual load on each server is not taken into consideration.
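
A minimal upstream block using round robin could look like the sketch below (the server names are placeholders). Since round robin is the default, no algorithm directive needs to be specified:

upstream backend {
    # no algorithm directive: NGINX defaults to round robin
    server server1.example.com;
    server server2.example.com;
}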

Least Connections
This algorithm takes into consideration the number of active connections maintained by each application server. A client request is routed to the server with the fewest active connections.
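
Enabling this behaviour only requires the least_conn directive at the top of the upstream block (hostnames here are placeholders):

upstream backend {
    least_conn;    # route each request to the server with the fewest active connections
    server server1.example.com;
    server server2.example.com;
}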

IP Hash
A hash of the client’s IP address is used to determine which server will service the client’s request. This algorithm can be used when a persistent ("sticky") session has to be maintained between the client and server, so it can serve as an alternative to cookies for preserving session state.
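
A sketch of an IP-hash configuration (again with placeholder hostnames):

upstream backend {
    ip_hash;    # requests from the same client IP are consistently routed to the same server
    server server1.example.com;
    server server2.example.com;
}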

Generic Hash
The server to which a client’s request is directed is determined by hashing a user-defined key, which can be a variable, a string, or a combination of both.
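
As an example, the key could be the request URI. The optional consistent parameter of the hash directive enables ketama consistent hashing, which minimizes remapping of keys when servers are added or removed:

upstream backend {
    hash $request_uri consistent;    # hash on the request URI; 'consistent' reduces remapping on cluster changes
    server server1.example.com;
    server server2.example.com;
}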

Least Time
The server with the lowest average response time and the fewest active connections is chosen by NGINX when this algorithm is used. Note that Least Time is available only in the commercial NGINX Plus.
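
In NGINX Plus this is configured with the least_time directive, whose parameter controls how response time is measured: header for the time to receive the response header, or last_byte for the time to receive the full response.

upstream backend {
    least_time header;    # pick the server with the lowest average time-to-first-byte
    server server1.example.com;
    server server2.example.com;
}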

Random
As the name suggests, NGINX can be configured to choose a server at random. If the ‘two’ parameter is specified, two servers are first selected at random taking server weights into account, and then one of the two is chosen using the specified method, which can be Least Connections (the default) or Least Time.

upstream backend {
    random two least_conn;
    server server1.com weight=3;
    server server2.com;
    server server3.com;
    server server4.com;
}

Additional attributes
Every server in the cluster can be qualified further using the following parameters:

Weight – By default a server in the cluster is given a weight of 1. Increasing the weight ensures that requests are directed to that server in proportion to its weight.
For example, if Server A has a weight of 2 and Server B has a weight of 1, then for every 3 client requests, 2 will be directed to A and 1 to B.

Down – A server in the cluster can be taken out of consideration by marking it as down. This ensures that NGINX skips the server when routing client requests.

Backup – One server in the cluster can be marked as ‘backup’, ensuring that if all the other servers in the cluster are down, the backup server can still handle the client load.

Here’s a code snippet that demonstrates all of the above concepts:

http {
    upstream backend {
        least_conn;
        server 192.168.1.110 weight=3;    # note: no spaces around '='; this server gets 3x its share of traffic
        server 192.168.1.111;
        server 192.168.1.112 down;        # temporarily removed from rotation
        server 192.168.1.114;
        server 192.168.1.113 backup;      # used only when all other servers are unavailable
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}

Reference
https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/