Nginx
Alexey Krupsky, 2018-08-15 05:55:19

How to configure nginx for 800 requests per second?

The site frontend is on Nuxt and the API is on PHP.
Nuxt is launched via pm2 in cluster mode with 10 instances.
The API runs on php-fpm and is proxied through nginx.
Right now, at some point during server-side rendering, some requests slow down and nginx drops the connection, so part of the requests to the site fail.
How do I configure nginx for such a load? MySQL copes fine.
htop shows system load at 30-50%, so there is still headroom, but it is not clear how to use it.
/etc/nginx/nginx.conf

user www-data;
worker_processes 2;
pid /run/nginx.pid;

events {
  use epoll;
  worker_connections 25000;
  multi_accept on;
}

http {

  ##
  # Basic Settings
  ##

  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 65;
  keepalive_requests 100;
  types_hash_max_size 2048;
  reset_timedout_connection on;
  client_body_timeout 10;
  # server_tokens off;

  # server_names_hash_bucket_size 64;
  # server_name_in_redirect off;

  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  ##
  # SSL Settings
  ##

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
  ssl_prefer_server_ciphers on;

  ##
  # Logging Settings
  ##

  access_log off;
  error_log /var/log/nginx/error.log;

  ##
  # Gzip Settings
  ##

  gzip on;
  gzip_disable "msie6";

  # gzip_vary on;
  # gzip_proxied any;
  # gzip_comp_level 6;
  # gzip_buffers 16 8k;
  # gzip_http_version 1.1;
  # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  ##
  # Virtual Host Configs
  ##

  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
}

/etc/php/5.6/fpm/pool.d
listen = /run/php/php5.6-fpm.sock
pm = ondemand
pm.max_children = 5000
pm.start_servers = 2
pm.max_spare_servers = 10
pm.min_spare_servers = 2
pm.process_idle_timeout = 100s
pm.max_requests = 5000
rlimit_files = 1024
rlimit_core = 5

nginx frontend config
server {

  listen 80;
  listen [::]:80;


  root /home/nuxt;

  access_log /home/nuxt/faucetAccess.log;
  error_log /home/nuxt/faucetErrors.log;

  location / {
    gzip off;
    proxy_pass http://localhost:3007;
  }
}

pm2
{
  "apps": [
    {
      "name": "app",
      "script": "../app/node_modules/nuxt/bin/nuxt-start",
      "instances": "10",
      "exec_mode": "cluster",
    "cwd": "./",
      "env": {
        "PORT": 3007,
        "NODE_ENV": "production"
      }
    }
  ]
}

nginx API server config
server {
  listen 2052;
  server_name api.localhost;

  root /home/public_html/;
  index index.php;

  access_log /home/nuxt/nginxAccess.log;
  error_log /home/nuxt/nginxErrors.log;

  location / {
    try_files $uri $uri/ /index.php?$args;
  }

  location ~ \.php$ {
    try_files $uri =404;
    fastcgi_pass unix:/run/php/php5.6-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
  }
}


4 answers
Pavel Stepanov, 2018-08-15
@pinkskin

So, first of all: how many cores does the machine have? Why are there 10 pm2 instances but only 2 nginx worker processes (both values should match the number of cores)? Secondly, instead of pm2's cluster mode you can use an upstream block in nginx, and you can add other servers to it if this one does not cope. 800 requests per second is not much, but it already calls for caching, so configure caching at the upstream/proxy level and for the rendered output. Beyond that, review the code and read the logs with the manual at hand. Good luck.
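For illustration, a minimal sketch of such an upstream with proxy caching might look like the following. The ports, cache path and zone name are made up for the example, not taken from the question, and proxy_cache_path belongs in the http block:

# Hypothetical setup: several Nuxt processes on separate ports behind an nginx upstream,
# with a small proxy cache for rendered pages. All names and values are illustrative.
upstream nuxt_backend {
    server 127.0.0.1:3007;
    server 127.0.0.1:3008;
    server 127.0.0.1:3009;
    keepalive 32;
}

# Goes in the http context.
proxy_cache_path /var/cache/nginx/nuxt levels=1:2 keys_zone=nuxt_cache:10m max_size=256m inactive=10m;

server {
    listen 80;

    location / {
        proxy_pass http://nuxt_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        # Cache successful rendered pages briefly and serve stale ones if the backend stalls.
        proxy_cache nuxt_cache;
        proxy_cache_valid 200 1m;
        proxy_cache_use_stale error timeout updating;
    }
}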

hckn, 2018-08-15
@hckn

now at some point during server rendering, some requests sag

What does nginx have to do with it, then? It may be the Express server that renders your Nuxt pages that is choking.
Have you set up caching anywhere?
How are you load testing it?
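For reference, a quick throughput check could look something like this (the URL and the numbers are placeholders):

# Hypothetical load test with ApacheBench: 10000 requests, 200 concurrent connections.
ab -n 10000 -c 200 http://localhost/
# Or with wrk: 4 threads, 200 connections, for 30 seconds.
wrk -t4 -c200 -d30s http://localhost/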

Night, 2018-08-17
@maxtm

now, at some point during server rendering, some requests slow down and nginx drops the connection, so part of the requests to the site fail.
Even a bare nginx instance is hard to kill with only 800 rps, even on a single-core piece of junk.
It seems to me it is too early to think about tuning nginx itself; from experience, 4-5k RPS per nginx node causes no problems even for a regular 2-core $15 DigitalOcean server.
The problem is not the web server, the problem is the backend behind it.
Whose connection is being broken? On the client side, is it clear what response nginx returns (probably a 504)?
Most likely nginx drops the client connection because the backend has fallen off. Which backend is falling off?
If it is PHP, dig into PHP.
If it is the Node process, dig into Node.
Nginx can drop connections for a number of reasons:
- a timeout while talking to the backend
- the backend closing the connection
- an invalid (malformed) response from the backend
- assorted errors when communicating with the client (we will skip those)
Nginx writes all of this to its logs, so look at what shows up there.
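If the error log shows upstream timeouts, these are the directives to look at first (the values below are only illustrative, not a recommendation):

# Illustrative timeout settings for the proxied Nuxt location and the PHP location.
# The exact values are assumptions; set them to however long your backend really needs.
location / {
    proxy_pass http://localhost:3007;
    proxy_connect_timeout 5s;
    proxy_read_timeout 30s;
    proxy_send_timeout 30s;
}

location ~ \.php$ {
    fastcgi_pass unix:/run/php/php5.6-fpm.sock;
    fastcgi_connect_timeout 5s;
    fastcgi_read_timeout 30s;
    fastcgi_send_timeout 30s;
    include fastcgi_params;
}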
Also, both PHP and the Node process can be switched to debug logging.
php-fpm has a slow log, per-request logging, and more.
Arm yourself with the manuals and some trial and error. Good luck!
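For example, the php-fpm slow log mentioned above can be enabled in the pool config roughly like this (the path and threshold are assumptions):

; Hypothetical pool settings to capture slow requests; adjust the path and threshold to taste.
slowlog = /var/log/php5.6-fpm.slow.log
request_slowlog_timeout = 5s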

Denis, 2018-08-15
@ttys

Maybe the question should rather be: "how do I set up/tune php-fpm so that it does not die?"
Out of the box the engine handles quite a lot of requests.
The number of cores certainly matters, but so does response time: if there is one core and a response takes 10 seconds to generate, what is there to talk about?
P.S. You have to understand that PHP is single-threaded and rather underwhelming, so do not expect much from it unless you use something like Facebook's HackLang to compile the PHP code into a binary.
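As an illustration only, a more predictable pool setup than pm = ondemand with pm.max_children = 5000 might look like the following. The numbers are assumptions; size pm.max_children from available RAM divided by the memory footprint of one PHP worker:

; Hypothetical static pool for a busy API; all values are examples, not a recommendation.
pm = static
pm.max_children = 50
pm.max_requests = 1000
; Raise the per-worker open-file limit beyond the 1024 from the question if needed.
rlimit_files = 65536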
