M
Mark, 2021-02-19 18:34:41
PHP

What algorithm can be used to calculate the optimal polling delay for an API and report it to the client software, so as not to generate unnecessary load?

Description of the situation: there is an API that the "executor" software polls to receive tasks. The problem is that there are not always enough orders to keep all executors busy. I want to avoid calls every second and instead tell each executor how many seconds to wait before trying again.

Question: how should this delay be calculated?

The algorithm does not have to be ideal (on "similar" sites it is tuned by hand), but it should be as close to reality as possible.
Data available:
1) The number of executors in the system.
2) The number of tasks.

I think we will have to add a "last_execution" field to count the executors that have been "active" within the last N seconds.
There will be approximately 15,000 executors. Each call generates:
- 1 SELECT EXISTS query
- 1 SELECT query with a LEFT JOIN

P.S. Unfortunately, there is no statistical data at the moment, only colleagues' empirical observations. But it can be adjusted as the system runs.

P.P.S. Sorry for the 3 tags, I'm not sure exactly where this belongs: essentially I'm looking for an algorithm, PHP is just the constraint, and I expect the task-selection logic to be handed off to MySQL.

Note: Alas, there is a rule that nothing can be changed on the "client software" side. This software can only react to a "retry_after" parameter with the number of seconds after which to retry.
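A minimal sketch of what I have in mind, assuming a hypothetical executors table with a last_execution column and a tasks table with open tasks (the table and column names, the 5-minute activity window, and the ratio formula are all placeholders to be tuned empirically):

    <?php
    // Sketch only: schema names and the formula are guesses, not a final design.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    // Executors that called within the last 5 minutes are considered "active".
    $active = (int)$pdo->query(
        "SELECT COUNT(*) FROM executors
         WHERE last_execution > NOW() - INTERVAL 5 MINUTE"
    )->fetchColumn();

    $openTasks = (int)$pdo->query(
        "SELECT COUNT(*) FROM tasks WHERE status = 'open'"
    )->fetchColumn();

    // If there is at least one task per active executor, poll quickly;
    // otherwise stretch the interval in proportion to the shortage.
    if ($openTasks >= $active || $active === 0) {
        $retryAfter = 1;
    } else {
        $retryAfter = min(300, (int)ceil($active / max(1, $openTasks)));
    }

    echo json_encode(['retry_after' => $retryAfter]);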


5 answer(s)
R
Rsa97, 2021-02-19
@Rsa97

websocket

W
Wataru, 2021-02-19
@wataru

This is a well-known problem that has been solved a hundred times. Google "exponential backoff".
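A minimal sketch of exponential backoff applied to this situation, assuming the server tracks how many empty polls each executor has made in a row (the base delay, the cap and the function name are arbitrary):

    <?php
    // Double the suggested wait after every empty poll, reset it to the base
    // as soon as a task is handed out. Limits are arbitrary placeholders.
    function nextRetryAfter(int $missesInARow): int
    {
        $base = 1;    // seconds after the first empty poll
        $cap  = 300;  // never ask a client to wait longer than 5 minutes
        return (int)min($cap, $base * (2 ** $missesInARow));
    }

    // Usage: the server stores misses_in_a_row per executor (e.g. in the DB)
    // and returns nextRetryAfter($misses) as retry_after.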

N
nokimaro, 2021-02-19
@nokimaro

Why not set the delay based on the available server capacity divided by the number of workers currently online?
First, measure the upper bound, for example the maximum RPS (requests per second) the server can sustain without failures. Let's assume it's 1000 rps. Next, set a limit, for example 80% of the maximum, that you are always willing to dedicate to this type of request. That gives 800 rps. From this, calculate the delay for each user, and automatically recalculate it every minute or at any other suitable interval.
The idea is not as crazy as it might seem. For example, at high load VK can disable auto-loading in its news feed and show a "show more" button instead, precisely to reduce rps.
The only case where this may not work is if you have billing based on the resources used (iops, CPU time, etc.). Otherwise, if there is a server, let it work at the maximum of its capabilities.
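A rough sketch of this calculation; the 800 rps budget and the per-minute recalculation come from the answer above, while the function and variable names are made up:

    <?php
    // Recalculate once a minute (cron, etc.): spread the allowed request
    // budget evenly across the executors seen recently.
    function delayForWorkers(int $activeWorkers, float $targetRps = 800.0): int
    {
        if ($activeWorkers <= 0) {
            return 1;
        }
        // Each worker may poll targetRps / activeWorkers times per second,
        // i.e. it should wait activeWorkers / targetRps seconds between polls.
        return max(1, (int)ceil($activeWorkers / $targetRps));
    }

    // 15,000 workers against an 800 rps budget => ~19 seconds between polls.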

S
Sergey Sokolov, 2021-02-19
@sergiks

The server receives requests of 2 types: Hit (there is a task for the requester) and Miss (a request arrives, but there are no tasks).

  • Penalty for a Hit request - the time the newly appeared task spent waiting for the request.
  • Penalty for a Miss request - the sequence number of this idle request from this client, minus 1 (the 1st request is free : )

Tasks arrive randomly and unpredictably. Executors also connect randomly and unpredictably.
The question is how to minimize the total penalty.
There is not enough data.
I would make the delay a random variable with a non-linear distribution: a small delay is most likely, and the probability of longer delays falls off exponentially.
[plot: exponentially decaying distribution of delays]
Or, half the "bell" of a normal distribution:
[plot: right half of a bell curve]
The parameter to steer (and steer very smoothly) is the steepness of this distribution's exponent.
To do this, evaluate the efficiency over the last X seconds-minutes-hours.
  • If penalties for executors arriving late dominate, make the exponent steeper, so that a small delay becomes even more likely and a long one less likely.
  • And vice versa: if too many poll too early, flatten the exponent, increasing the probability of a long pause before the next request.
You can make it a little more sophisticated by weighting penalties by their age: fresh penalties weigh more than those at the far end of the window.
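A sketch of how such a random delay could be drawn and steered; the inverse-transform sampling and the 5% adjustment step are my own guesses at what this answer describes, not part of it:

    <?php
    // Draw a delay from an exponential distribution: small delays are most
    // likely, long ones exponentially less so. $steepness is the knob to steer.
    function randomDelay(float $steepness, int $maxDelay = 120): int
    {
        $u = mt_rand(1, mt_getrandmax()) / mt_getrandmax(); // uniform (0, 1]
        $delay = -log($u) / $steepness;                      // inverse transform
        return (int)min($maxDelay, max(1, round($delay)));
    }

    // Steering: compare the two kinds of penalties over the last window and
    // nudge the steepness very gently in the corresponding direction.
    function adjustSteepness(float $steepness, float $latePenalty, float $missPenalty): float
    {
        if ($latePenalty > $missPenalty) {
            $steepness *= 1.05; // tasks wait too long: favour shorter delays
        } elseif ($missPenalty > $latePenalty) {
            $steepness *= 0.95; // too many idle polls: allow longer pauses
        }
        return $steepness;
    }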

A
Anton Shamanov, 2021-02-20
@SilenceOfWinter

Can you calculate when new tasks will appear? If not, then base it on the host's resources: if there are few users (i.e. plenty of free resources), let them poll more often; if there are many, less often.
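For completeness, a tiny sketch of this "look at host resources" approach; sys_getloadavg() is a real PHP function (not available on Windows), while the core count, multiplier and thresholds are arbitrary assumptions:

    <?php
    // Scale the polling delay with the 1-minute load average relative to
    // the assumed number of CPU cores of the host.
    $cores = 8;                    // assumed core count
    $load  = sys_getloadavg()[0];  // 1-minute load average
    $retryAfter = (int)max(1, ceil(10 * $load / $cores));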
