Server equipment
Viktor, 2015-06-14 11:22:06

Why does an increase in the number of simultaneous requests increase the response time?

I tested this in a number of different ways and see the same situation:
when the number of simultaneous requests grows by a factor of 10, the response time grows by the same factor.
The average number of requests per second stays roughly the same, and no process is bottlenecked on CPU or memory.
I don't understand what limit this test is running into:
- a web server limit (for nginx this load should be trivial)?
- a limit in the testing tool (ab should also handle this easily)?
- an OS limit (number of connections, etc.)?
For example, I tested an unconfigured nginx (under Windows 7) using Apache Benchmark at 10/100/1000 simultaneous requests, with 10,000 requests of 1 KB each.
Here is a summary:
"Time per request" grows by an order of magnitude each run.
"Requests per second" is the same for the first two runs and drops by more than half in the third.

Concurrency Level:    10      100      1000
Requests per second:  885.61  878.22   390.39
Time per request:     11.292  113.867  2561.546
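For reference, the three runs in this summary correspond to ab invocations along these lines (the URL and file name are assumptions, not taken from the original test):

```shell
# Hypothetical commands reproducing the three runs above:
# -n = total number of requests, -c = concurrency level
ab -n 10000 -c 10   http://127.0.0.1/1kb.html
ab -n 10000 -c 100  http://127.0.0.1/1kb.html
ab -n 10000 -c 1000 http://127.0.0.1/1kb.html
```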

The same situation occurs whether:
- the server under test is nginx or the simplest Node.js server with cluster
- the load is generated with ab or with a Node.js script (the http or request module)
- the test runs on Windows or on Debian
- any of a number of other variations (web server and testing utility on the same machine)
In every case the request time grows in proportion to the number of concurrent connections.
Here are more detailed ab logs:
Concurrency Level:      10
Time taken for tests:   11.292 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      12570000 bytes
HTML transferred:       10240000 bytes
Requests per second:    885.61 [#/sec] (mean)
Time per request:       11.292 [ms] (mean)
Time per request:       1.129 [ms] (mean, across all concurrent requests)
Transfer rate:          1087.12 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.6      1       4
Processing:     5   10   3.1     10     184
Waiting:        4    9   3.2      9     181
Total:          6   11   3.1     11     184

Concurrency Level:      100
Time taken for tests:   11.387 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      12570000 bytes
HTML transferred:       10240000 bytes
Requests per second:    878.22 [#/sec] (mean)
Time per request:       113.867 [ms] (mean)
Time per request:       1.139 [ms] (mean, across all concurrent requests)
Transfer rate:          1078.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.9      1      15
Processing:    14  112  15.2    111     363
Waiting:       12   94  25.4    100     363
Total:         14  113  15.2    112     364

Concurrency Level:      1000
Time taken for tests:   25.615 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      12570000 bytes
HTML transferred:       10240000 bytes
Requests per second:    390.39 [#/sec] (mean)
Time per request:       2561.546 [ms] (mean)
Time per request:       2.562 [ms] (mean, across all concurrent requests)
Transfer rate:          479.22 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2  23.1      1     514
Processing:   520 2471 749.2   2747    3931
Waiting:      478 2408 735.6   2668    3659
Total:        521 2473 749.6   2748    3932
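The three runs above are internally consistent with Little's Law: if N connections are kept busy against a server delivering X requests per second, the mean latency must be roughly N / X. A quick sanity check, with all figures copied from the logs above:

```python
# Little's Law: mean latency ≈ concurrency / throughput.
# All numbers are taken from the ab logs above.
runs = [
    # (concurrency, requests_per_second, reported_time_per_request_ms)
    (10,   885.61, 11.292),
    (100,  878.22, 113.867),
    (1000, 390.39, 2561.546),
]

for c, rps, reported_ms in runs:
    predicted_ms = c / rps * 1000  # latency implied by Little's Law
    print(f"c={c:<5} predicted={predicted_ms:8.1f} ms  reported={reported_ms:8.1f} ms")
```

The predicted and reported values match almost exactly, which suggests the server is simply saturated at roughly 880 requests per second (and degrades at c=1000): once throughput is capped, adding concurrency only lengthens the queue each request waits in, so latency grows in proportion to the concurrency level.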

1 answer
Vitaly Pukhov, 2015-06-15
@Neuroware

The execution time will certainly increase as the number of simultaneous requests grows; that is expected and cannot be otherwise. The real question is why it increases so much: 100 concurrent connections is not a lot, yet the response time grew disproportionately.

The issue is probably not so much nginx itself as what is running behind it, what requests were sent, and who sent them. Poorly written application code can bottleneck on disk operations, use the database inefficiently, or overload the CPU (the latter is unlikely, since code that bad is still rare).

You can check this assumption: put a static HTML page on a ramdisk and serve it directly from nginx. If the numbers there are an order of magnitude better, investigate what is running behind nginx; if not, investigate nginx itself.
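A minimal sketch of the ramdisk test described above, assuming a Linux host with root access; the mount point and file name are illustrative, not from the original answer:

```shell
# Hypothetical setup for the ramdisk test (Linux, run as root):
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=16m tmpfs /mnt/ramdisk          # RAM-backed filesystem
head -c 1024 /dev/zero | tr '\0' 'x' > /mnt/ramdisk/test.html  # 1 KB static page

# Point nginx at the ramdisk (in the server block of nginx.conf):
#     root /mnt/ramdisk;
# Then benchmark the static page directly:
ab -n 10000 -c 100 http://127.0.0.1/test.html
```

Serving a static file from tmpfs removes disk I/O and application code from the picture, isolating nginx and the OS as the only remaining variables.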
