Nikolai Alexandrov, 2011-06-14 11:10:56

nginx + php-fpm

I have read many articles recommending a unix socket for nginx + php-fpm. Then I came across this article and decided to check for myself which is better. For the tests I used ab and two files served by nginx:

  1. index.html with the default content: It Works!
  2. phpinfo.php with content: <?php phpinfo();
Setup: Debian 6.0 with the latest updates, nginx 1.0.2 + php-fpm 5.3.6 (dotdeb). Server: HP DL140 (2 x Intel Xeon L5320, 16 GB DDR3). php-fpm runs in static mode with max_children = 1000.
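For reference, the relevant part of a php-fpm pool config for such a setup would look roughly like this (a sketch only; the file path, pool name and socket address are my assumptions, not taken from the post):

    ; /etc/php5/fpm/pool.d/www.conf (path assumed)
    [www]
    ; unix-socket variant:
    listen = /var/run/php5-fpm.sock
    ; tcp/ip variant (use instead of the line above):
    ;listen = 127.0.0.1:9000
    pm = static
    pm.max_children = 1000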
Testing:
  1. ab -c 20000 -n 100000 - Result: Failed requests: 0 (static index.html)
  2. ab -c 4000 -n 10000 - Result: Failed requests: 5294 (phpinfo.php via unix socket)
  3. ab -c 4000 -n 10000 - Result: Failed requests: 0 (phpinfo.php via tcp/ip)
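The only difference between tests 2 and 3 is where nginx's fastcgi_pass points. A minimal sketch of the location block inside the server config (socket path and port are assumptions):

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # test 2: unix socket
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        # test 3: tcp/ip (swap with the line above)
        #fastcgi_pass 127.0.0.1:9000;
    }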
Thus the first test shows that nginx itself handles 20,000 concurrent requests correctly and is not the problem. The issue therefore lies with php-fpm and the type of its connection. Of course, some kernel tuning might change the picture, but with default settings, why does php-fpm behind a unix socket lose by so much?
UPD1:
Increasing the net.core.somaxconn kernel parameter from 128 to 1024 (and even to 102400) only slightly reduced the failures, to 4000-4500. Still far from the reference zero...
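For anyone repeating this, the change boils down to something like the following (the values are from the post; the exact commands and the init script name are my assumptions):

    # apply at runtime
    sysctl -w net.core.somaxconn=1024
    # persist across reboots
    echo 'net.core.somaxconn = 1024' >> /etc/sysctl.conf
    # restart php-fpm so its listen() call picks up the new cap
    /etc/init.d/php5-fpm restart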
UPD2:
Changing the backlog both in the kernel and in php-fpm gave no result. I left it at -1, but added 8 php-fpm pools (one per CPU core) and set max_children to 500 for each (a config sketch of this layout is at the end of the post). The results are impressive:
ab -c 7000 -n 7000 - Result: Failed requests: 0 (unix socket)
If I push -c above 7000, the test itself fails with an error; apparently the limit is now somewhere at the kernel level, even though somaxconn and the backlog are > 10000. Then I decided to hammer tcp/ip:
  1. ab -c 7000 -n 7000 - Result: Failed requests: 0 (tcp/ip)
  2. ab -c 10000 -n 10000 - Result: Failed requests: 225 (tcp/ip)
  3. ab -c 20000 -n 20000 - Result: Failed requests: 733 (tcp/ip)
Conclusion: so far tcp/ip comes out ahead, since it withstands > 7000 concurrent requests.
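A sketch of what the UPD2 layout might look like: 8 pools, each on its own socket, with nginx spreading requests over them through an upstream block (pool names, socket paths and the upstream name are all assumptions):

    ; php-fpm side: 8 pool files, e.g. /etc/php5/fpm/pool.d/www1.conf ... www8.conf
    [www1]
    listen = /var/run/php5-fpm-1.sock
    listen.backlog = -1
    pm = static
    pm.max_children = 500
    ; ...and the same for www2..www8, each with its own socket

    # nginx side (inside the server block): distribute php requests across the pools
    upstream php_pools {
        server unix:/var/run/php5-fpm-1.sock;
        server unix:/var/run/php5-fpm-2.sock;
        # ...up to /var/run/php5-fpm-8.sock
    }
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_pools;
    }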

5 answers
Sergey, 2011-06-14
@bondbig

Increase the number of php workers, for starters.

homm, 2011-06-14
@homm

What exactly did the Failed requests consist of? Bear in mind that ab counts even a response length mismatch as a failure.

shagguboy, 2011-06-14
@shagguboy

Usually everyone trips over the limit on pending connections for unix sockets: it is 128 there, unlike tcp.
))
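One way to check which backlog is actually in effect for the php-fpm unix socket (assuming iproute's ss is available and the socket path matches; for a listening socket the Send-Q column is the backlog):

    # listening unix sockets and their backlog (Send-Q column)
    ss -lx | grep php5-fpm
    # kernel-wide cap on listen backlogs
    cat /proc/sys/net/core/somaxconn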

shagguboy, 2011-06-14
@shagguboy

I recommend reading about this classic pitfall:
www.sql.ru/forum/actualthread.aspx?tid=856742

agentru, 2012-07-02
@agentru

So no answer was found as to why unix sockets lose to tcp/ip? I was puzzled by the same question myself, but alas, I don't have enough experience.
