PHP
VitaliyBorys, 2019-10-03 02:30:40

How to optimize PHP-FPM?

Friends, I have a VPS with the following specifications:
Ubuntu 18.04
RAM: 6144 MB
Disk SSD: 70000 MB
CPU: 2 x 2.8 GHz
I run a stress load using JMeter.
I send 100 threads to the main page (the page makes very few requests, even to the database).
With a hundred threads, the responses range from Load time: 1160 to Load time: 6136.
If I send just one thread, the response comes in 230 ms.
I thought the problem might be in the database or in the code, so I created a bare index.php that only outputs phpinfo().
That is, there is no database and nothing else at all, just the phpinfo() output.
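The whole test file is nothing more than this:
<?php
// Bare test page: no database, no framework, nothing but phpinfo().
phpinfo();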
Sending 100 threads to it, I get:
from Load time: 124 to Load time: 3135.
Why such a big spread? Why does it take so long at all?
I assume the problem is in PHP or nginx.
Everything is set to defaults (nginx 1.4 + php-fpm 7.2), and I changed only the following settings.
/etc/php/7.2/fpm/pool.d/www.conf
pm.max_children = 140
pm.start_servers = 20
pm.min_spare_servers = 20
pm.max_spare_servers = 60
pm.max_requests = 500
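For reference, pm.max_children is usually sized from the RAM left for PHP divided by the average memory of one worker. A rough sanity-check sketch (the php-fpm7.2 process name and the 1 GB reserve for the OS, nginx and the database are assumptions, not measurements):
<?php
// Rough pm.max_children estimate: (total RAM - reserve) / average worker RSS.
// Run on the VPS itself; the process name and reserve below are assumptions.
$out = shell_exec('ps -C php-fpm7.2 -o rss=');
$rssKb = array_filter(array_map('intval', explode("\n", trim((string) $out))));
if (!$rssKb) {
    exit("no php-fpm7.2 workers found\n");
}
$avgMb = array_sum($rssKb) / count($rssKb) / 1024;
$totalMb = 6144;    // RAM of this VPS
$reserveMb = 1024;  // room for the OS, nginx and the database (assumption)
printf("avg worker: %.1f MB, suggested pm.max_children: %d\n",
    $avgMb, (int) floor(($totalMb - $reserveMb) / $avgMb));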
Here is the nginx config, nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
use epoll;
worker_connections 2048;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log off;
error_log /var/log/nginx/error.log crit;
keepalive_timeout 30;
keepalive_requests 100;
client_max_body_size 1m;
client_body_timeout 10;
reset_timedout_connection on;
send_timeout 2;
sendfile on;
tcp_nodelay on;
tcp_nopush on;
gzip on;
gzip_disable "msie6";
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# PHP-FPM Configuration Nginx
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/run/php/php7.2-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
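Since access_log is switched off, one way to see where the time actually goes (nginx itself vs waiting for PHP-FPM) is to temporarily enable a timing log in the http block; the format name and log path below are arbitrary examples:
# $request_time = whole request, $upstream_response_time = time spent waiting for PHP-FPM
log_format timing '$remote_addr "$request" req=$request_time upstream=$upstream_response_time';
access_log /var/log/nginx/timing.log timing;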
Friends, what's wrong? Where should I dig?
Desired result:
send 100 threads to an empty file and get responses at a consistent speed, no more than 200-300 ms.

3 answers
Vitaly Karasik, 2019-10-03
@vitaly_il1

- What does 'top' show?
- I would check at what load it starts to slow down.

deadem, 2019-10-03
@deadem

You are not sending the requests from the same machine, I hope? JMeter is hungry for memory and CPU, which does not help speed at all. The number of threads has to be chosen from the number of processor cores and the memory that JMeter plus your application will consume; otherwise resources simply drain away on thread switching and swapping. Quite likely it is not the site that is slow, but the load tester itself. The correct way is to run a multi-threaded load from a remote server, so the test environment does not influence the system under test. And if you need to test with 100 threads, you will have to run several remote testing machines at the same time, or find a 100-core monster of a computer.
I recommend checking out jmeter.apache.org/usermanual/best-practices.html
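For example, a non-GUI, distributed run looks roughly like this (the host names and file names are placeholders):
# on each load-generator machine: start the JMeter agent
jmeter-server
# on the controller: run the plan in non-GUI mode against the remote agents
jmeter -n -t plan.jmx -R loadgen1,loadgen2 -l results.jtl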

Evgeny Koryakin, 2019-10-05
@zettend

Try another hoster, or run the test with exactly the same config on your own PC.
I have come across unscrupulous hosters who heavily throttled a VPS under load.
