Nginx, 2020-11-19 02:27:47

NGINX + LUA = Benchmark. Who has experience?

The question is a bit rhetorical...

There is such a stack:

NGINX + LUA
In LUA, a key is looked up in Redis: if it exists, the static file is served... otherwise a 404...

Everything is fine - everything works :)

The question is this:
Before "bolting on" the LUA check in NGINX, a load test via WRK showed about 370 requests/sec and about 10.5 MB/s of traffic...
After adding the check, it shows roughly the same: about 360 requests/sec and 10.3 MB/s...

I.e. my LUA script didn't really change anything in terms of performance...
The static file the test was run against is about ~30 KB...
My Internet speed, measured via speedtest.net, is about 90 Mbit/s...

The server is in Germany (Hetzner). I ran the test from my computer in Russia... with 8 threads and 200 connections for 15 seconds...

wrk -t8 -c200 -d15s --latency http://example.com/file.jpg


Is this normal throughput?
What can be improved? Given that without any scripts the benchmark is almost the same (the difference could even be written off as measurement noise)...
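A quick back-of-the-envelope check of those figures (the ~30 KB response size and ~370 requests/sec are from the test above; the arithmetic itself is mine) suggests the client uplink, not nginx, is the ceiling. As a tiny Lua snippet:

-- Sanity check: 370 responses/sec at ~30 KB each already fills
-- roughly the whole 90 Mbit/s client uplink.
local req_per_sec = 370
local body_kb     = 30
local mbit_per_s  = req_per_sec * body_kb * 8 / 1000
print(("~%.1f Mbit/s"):format(mbit_per_s)) -- prints "~88.8 Mbit/s"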

Below are the main parameters of the NGINX config:

nginx.conf :
user www-data;
worker_processes auto;
worker_rlimit_nofile 65535;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
  worker_connections 65535;
  use epoll;
  multi_accept on;
}

http {

  ##
  # Basic Settings
  ##

  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;

  keepalive_timeout 30;
  keepalive_requests 100;

  client_body_timeout 10;
  client_header_timeout 15;

  reset_timedout_connection on;

  send_timeout 2;
  types_hash_max_size 4096;
  server_tokens off;

  # server_names_hash_bucket_size 64;
  # server_name_in_redirect off;

  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  ##
  # SSL Settings
  ##

  ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
  ssl_prefer_server_ciphers on;

  ##
  # Logging Settings
  ##

  #access_log /var/log/nginx/access.log;
  #error_log /var/log/nginx/error.log;

  access_log off;
  error_log /var/log/nginx/error.log crit;

  ##
  # Gzip Settings
  ##

  gzip on;
  gzip_disable "msie6";

  # gzip_vary on;
  # gzip_proxied any;
  # gzip_comp_level 6;
  # gzip_buffers 16 8k;
  # gzip_http_version 1.1;
  # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

  open_file_cache max=200000 inactive=20s;
  open_file_cache_valid 30s;
  open_file_cache_min_uses 2;
  open_file_cache_errors on;

  ##
  # Virtual Host Configs
  ##

  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
}


Here is the "host" part with the LUA script:
#.. standard boilerplate here...

location ~* ^ХАЛИ_ГАЛИ_ТЫРЫМ_ПЫРЫМ$ {
    #set $images_dir "PATH_TO_FOLDER";
    # $4 comes from the location regex...

    # All the Lua code lives in a separate file
    content_by_lua_file /etc/nginx/lua/img_data.lua;
    #lua_code_cache off; #dev

}

# This location is invoked from LUA on "success"; otherwise a 404 is returned straight from LUA via ngx.exit(ngx.HTTP_NOT_FOUND)
location @imagedata {

    try_files /$images_dir/$4 =404;

}

#...nothing else of significance further on...
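The img_data.lua script itself is not shown above. For reference, a minimal sketch of what such a check might look like, assuming lua-resty-redis, a local Redis on 127.0.0.1:6379 and a hypothetical "img:" key prefix (only the handoff to @imagedata and ngx.exit(ngx.HTTP_NOT_FOUND) come from the question itself):

-- /etc/nginx/lua/img_data.lua (sketch; key scheme and timeouts are assumptions)
local redis = require "resty.redis"

local red = redis:new()
red:set_timeout(100) -- ms

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.log(ngx.ERR, "redis connect failed: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end

-- Hypothetical key derived from the capture group of the location regex ($4)
local exists, err = red:exists("img:" .. ngx.var[4])
red:set_keepalive(10000, 100) -- return the connection to the pool

if exists == 1 then
    -- key found: hand the request over to the internal static location
    return ngx.exec("@imagedata")
end

return ngx.exit(ngx.HTTP_NOT_FOUND)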


Hardware:
Intel Xeon E3-1275v5
2x HDD SATA (RAID 1)
4x RAM 16384 MB DDR4 ECC
OS:
Debian 9

It seems to me that it can be better ... :)

UPD:
Just now I took this picture from a Yandex (Market) server:
https://avatars.mds.yandex.net/get-mpic/1883514/im...

I ran a similar test and got about 200 requests per second... and the traffic was again about 10 MB/s, with this picture weighing 35 KB...

So it looks like the performance is actually fine... if you take Yandex as the reference (but that's not certain :))...
Or I'm simply hitting the limit of my data channel...


1 answer
@Fernus, 2020-11-19

We managed to increase performance by about 5% (tears, of course, but it may not be possible to squeeze more out of the current stack)...

In nginx.conf, add to the events section:
accept_mutex off;

And in the "host" (server) blocks:
aio threads;

For this, nginx has to be built with the --with-file-aio option. Read more here: https://habr.com/ru/post/260669/
But this will not be suitable in every case... read, google, dig into it...
After each change to the config I ran the test several times... BUT in any case, I'm still waiting for people who have also experimented; I'd be grateful if you suggest something else...

UPD:
Compared to the previous tests, it now stably serves about 100 more requests in total over the 15 seconds and 15-20 more per second... The "traffic" throughput is about 1 MB/s higher...

UPD2:
A text file containing "hello" is served at about 2800 requests per second...
In general, as people wrote, the next bottleneck is the network... for real tests it has to be excluded...
