Andrey_Dolg, 2020-08-19 11:49:09
Python

Where can failures come from when a script runs continuously without the interpreter ever being restarted?

There is a parser that normally runs 24/7 in 8 processes (launched via multiprocessing); each process uses aiohttp against a different but identically structured data stream. The workflow: a batch of ids arrives for processing, a client is opened via an async with statement and starts collecting data. The collected data is then sent out via an API (it has to be sent as quickly as possible).
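A rough reconstruction of the setup described above, to make the question concrete (the function names, the number of streams, and the body of the fetch loop are my assumptions, not the author's code):

```python
# Sketch: 8 worker processes, each running its own asyncio event loop,
# as the question describes. The real script would open an aiohttp
# ClientSession inside consume_stream ("async with") and poll its stream.
import asyncio
import multiprocessing

async def consume_stream(stream_id):
    # Placeholder for the real fetch/parse/send loop.
    await asyncio.sleep(0)

def worker(stream_id):
    asyncio.run(consume_stream(stream_id))

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=worker, args=(i,))
             for i in range(8)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

With this layout each process owns its own event loop and its own aiohttp session, so in principle a failure in one stream should not affect the other seven, which is what makes the simultaneous hang of all 8 notable.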
Sometimes all 8 processes hang at the same time. In those cases no error information reaches the log at all; only killing the processes helps, but that loses the data from the 2 minutes before the watchdog triggers. This happens roughly once every 3 days.
There is also a stranger failure: all 8 aiohttp instances (launched through multiprocessing from one interpreter) start getting connection timeouts to the server, while an aiohttp client running in a neighboring interpreter, working with the same resource, returns no errors and keeps working normally. In this case the watchdog also triggers: after 15 minutes it stops seeing signals from the parser, because by then the parser has accumulated enough data to saturate the CPU and cannot process it in under 2 minutes.
So the question is where to dig. All timeouts in aiohttp are set, including the global one, and the proxy provider has no record of proxy server errors at those times.
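Since the hang leaves nothing in the log, one place to dig is the interpreter itself: the stdlib faulthandler module can show exactly which line each process is blocked on at the moment of a hang. A minimal sketch (Unix-only for the signal part; the 120-second timeout is an arbitrary choice, not from the question):

```python
# Dump Python tracebacks of a hung process without killing it.
import faulthandler
import signal
import sys

# After this, `kill -USR1 <pid>` makes the process print the traceback
# of every thread to stderr and keep running.
faulthandler.register(signal.SIGUSR1, file=sys.stderr, all_threads=True)

# Alternative: arm a watchdog timer that dumps tracebacks automatically
# if no progress is made; cancel and re-arm it on every completed batch.
faulthandler.dump_traceback_later(timeout=120, exit=False)
faulthandler.cancel_dump_traceback_later()  # call this on each heartbeat
```

A traceback taken during the hang would show whether all 8 processes are stuck inside the same await (e.g. waiting on a connection from the pool), which narrows the search considerably.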



1 answer
Sergey Tikhonov, 2020-08-20
@tumbler

Write detailed logs; maybe you will find patterns in the failures.
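One way to get the "detailed logs" this suggests (a sketch, not the author's setup): give each worker process its own log file with timestamps and pids, so a simultaneous stall across all 8 processes becomes visible by lining up the last entries in each file.

```python
# Per-process file logging for multiprocessing workers.
import logging
import os

def setup_worker_logging(level=logging.DEBUG):
    logger = logging.getLogger("parser")
    # One file per pid avoids interleaved writes from 8 processes.
    handler = logging.FileHandler(f"parser-{os.getpid()}.log")
    handler.setFormatter(logging.Formatter(
        "%(asctime)s pid=%(process)d %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(level)
    return logger
```

Logging every batch start/finish and every request/response at DEBUG level makes it possible to see whether the hang always follows the same event (e.g. a particular endpoint, a proxy rotation, or a burst of ids).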
