Nginx
catquistador, 2021-11-24 19:16:38

Why does limit_req count incorrectly?

The http {} block contains limit_req_zone $binary_remote_addr zone=test:30m rate=1000r/s;
The location inside server {} has limit_req zone=test burst=10 nodelay;
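Put together, the relevant parts of the config look roughly like this (a sketch: the listen port, server_name and the location body are assumptions, only the two limit_req directives come from the question):

```nginx
http {
    # one shared 30 MB zone keyed by client address, draining at 1000 r/s
    limit_req_zone $binary_remote_addr zone=test:30m rate=1000r/s;

    server {
        listen 80;
        server_name test;                  # matches server: test in the error log

        location /test {
            limit_req zone=test burst=10 nodelay;
            return 200 "test\n";           # assumed stub response for the benchmark
        }
    }
}
```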
I run the test with ApacheBench: ab -n 10000 -c 80 http://localhost:80/test

Concurrency Level: 80
Time taken for tests: 9.743 seconds
Complete requests: 10000
Failed requests: 6289
(Connect: 0, Receive: 0, Length: 6289, Exceptions: 0)
Non-2xx responses: 6289
Total transferred: 3104023 bytes
HTML transferred: 1490115 bytes
Requests per second: 1026.34 [#/sec] (mean)
Time per request: 77.947 [ms] (mean)
Time per request: 0.974 [ms] (mean, across all concurrent requests)
Transfer rate: 311.11 [Kbytes/sec] received


62% of requests fail, even though the measured RPS barely exceeds the limit. Moreover, in some runs where RPS stays below the limit, requests are still dropped.
The error log is flooded with lines like:
56360 limiting requests, excess: 11.000 by zone "test", client: 127.0.0.1, server: test, request: "GET /test HTTP/1.0", host: "localhost"

I don't understand where these losses come from, or why the log says "excess: 11.000" — in my understanding the threshold should be ~1010 (rate limit + burst).


1 answer
ky0, 2021-11-24
@catquistador

Let's look at the very beginning of the run, when ab starts up and 80 requests hit nginx at roughly the same moment. The counters are at zero at that point, but the 1000 r/s limit means requests may be admitted no more often than once per 1 ms.
So exactly 1 + 10 = 11 requests will make it through, and the remaining 80 − 11 = 69 will be rejected with a 503 — that's even more than 62%.
Later, as the requests spread out in time and stop arriving synchronously, the percentage of requests hitting the limit will decrease, but by no means to the (1026 − 1000) / 1000 × 100% you expected, because from time to time (more often than not) requests will still arrive more often than once per 1 ms (the burst of 10 can be ignored — it is microscopic compared to the limit). The mean time per request of 0.974 ms across 80 concurrent connections hints at exactly that.
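That first millisecond can be sketched with a toy leaky-bucket model in Python. This is a simplification of nginx's actual accounting (which works internally in milliseconds and 1/1000-request units), but it reproduces the arithmetic above: a bucket with one "on-rate" slot plus burst extra slots, draining at the configured rate.

```python
# Toy model of limit_req's leaky bucket: capacity = 1 on-rate slot + burst,
# draining at `rate` requests per second. A simplification of nginx's
# internal bookkeeping, not a literal reimplementation.
def simulate(arrivals_ms, rate=1000, burst=10):
    level = 0.0               # current bucket level, in requests
    last = None
    accepted = rejected = 0
    for t in arrivals_ms:
        if last is not None:
            # the bucket leaks while time passes between requests
            level = max(0.0, level - rate * (t - last) / 1000.0)
        last = t
        if level + 1 > burst + 1:
            rejected += 1     # 503; rejected requests don't raise the level
        else:
            level += 1
            accepted += 1
    return accepted, rejected

# 80 requests landing in the same millisecond, as at the start of the ab run:
print(simulate([0] * 80))         # (11, 69): 1 + burst pass, 69 get a 503
# the same 80 requests spaced exactly 1 ms apart all pass:
print(simulate(list(range(80))))  # (80, 0)
```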
Try increasing the burst to, say, 500-1000 and see how it affects the results.
