PHP
dake1231, 2017-10-09 06:35:38

How to organize load balancing correctly?

Hello! The servers are organized like this: there is a main server running an nginx balancer, with an upstream of three web servers; one of them also hosts the database. Here are the balancer settings:

upstream servers {
        server ip1;
        server ip2;
        server ip3;
        keepalive 16;
}

server {
        listen 80;

        location / {
                proxy_pass http://servers;
                proxy_http_version 1.1;
                proxy_set_header Connection "";
        }
}

All three web servers have the same settings:

location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:7777;
        fastcgi_read_timeout 150;
}

During peak hours I get:
upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: domain_name, request: "GET /api/endpoint HTTP/1.1", upstream: "fastcgi://127.0.0.1:7777", host: "domain_name"


1 answer
Mikhail Grigoriev, 2017-10-09
@Sleuthhound

If you are sure that the web application itself can handle the load, it may be worth increasing the timeouts on the load balancer; add to location /:

proxy_connect_timeout 120s;
proxy_send_timeout 120s;
proxy_read_timeout 120s;

On the three web servers, if you use php-fpm, it is better to communicate over unix sockets rather than TCP; sockets are faster.
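A minimal sketch of the socket variant, on the assumption that php-fpm listens on a socket at a typical path (the socket path and pool file location are illustrative; adjust them to your distribution):

```nginx
# php-fpm pool config (INI syntax, e.g. /etc/php/fpm/pool.d/www.conf):
#   listen = /run/php/php-fpm.sock
#   listen.owner = www-data
#   listen.group = www-data

# nginx side, replacing the TCP fastcgi_pass:
location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_read_timeout 150;
}
```

Make sure the socket's owner/group match the user nginx runs as, otherwise you will get "permission denied" errors instead of timeouts.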
It may also be worth looking at the least_conn directive in the upstream block: requests are then sent first to the backend with the fewest active connections (taking weights into account).
If one backend is more powerful than the others, give it a higher weight via the weight parameter.
Also set the max_fails and fail_timeout directives in the upstream block (example values are shown in the config below).
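For instance, a weighted upstream might look like this (the weights are illustrative; tune them to the actual capacity of each backend):

```nginx
upstream servers {
        least_conn;
        server ip1 weight=2;   # ip1 receives roughly twice as many requests
        server ip2 weight=1;
        server ip3 weight=1;
        keepalive 16;
}
```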
Also enable logs on the balancer; this will greatly simplify debugging:
http {
    ...
    log_format upstream_log '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';

    upstream servers {
        least_conn;
        server ip1;
        server ip2 max_fails=3 fail_timeout=30s;
        server ip3 max_fails=5 fail_timeout=30s;
        keepalive 16;
    }

    server {
        listen 80;
        access_log /var/log/nginx/servers-access.log upstream_log;
        error_log /var/log/nginx/servers-error.log debug;

        location / {
            proxy_pass http://servers;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_connect_timeout 120s;
            proxy_send_timeout 120s;
            proxy_read_timeout 120s;
        }
    }
}
