q2zoff, 2019-01-19 16:10:42
linux

How to prevent a process from terminating when there are a lot of network errors?

Hello!
There is a client application that talks to a server application over the network. Occasionally the server's resources are exhausted and all new connections are rejected. On the client side this shows up as polling returning a large number of descriptors that are ready with error conditions. Each descriptor has to be processed, so the load on a CPU core climbs to 100% (about 50% under normal conditions).
To summarize: anomalous situations occasionally occur on the client side in which a large number of socket errors are returned to the process, which in turn drives a CPU core to 100%.
Then something (not the OOM killer) terminates the process. Presumably some protective mechanism of the OS is being triggered.
Can this behavior be disabled? If so, how?
I hoped that ignoring signals would solve the problem, but that hope was not borne out.
Writing something that restarts the process whenever it dies feels like an ugly crutch to me.
UPD: It is still not even clear what exactly terminates the process. Could someone suggest which direction to dig in?
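Since the thread never establishes what actually kills the process, here is a minimal diagnostic sketch (not from the original question; everything in it is an assumption): run the client under a tiny wrapper that forks it and prints the raw wait status, so WTERMSIG shows which signal, if any, killed it. The wrapper also checks RLIMIT_CPU, because a hard CPU-time limit would match the symptoms: when it is exceeded the kernel sends SIGXCPU and then SIGKILL, and SIGKILL cannot be ignored, which would explain why ignoring signals did not help.

/* Hypothetical diagnostic wrapper: run as  ./wrapper <client> [args...] */
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <client> [args...]\n", argv[0]);
        return 1;
    }

    /* A finite hard RLIMIT_CPU means the kernel will send SIGXCPU and
     * then SIGKILL once the limit is exhausted. */
    struct rlimit rl;
    if (getrlimit(RLIMIT_CPU, &rl) == 0 && rl.rlim_max != RLIM_INFINITY)
        fprintf(stderr, "RLIMIT_CPU hard limit: %llu s\n",
                (unsigned long long)rl.rlim_max);

    pid_t pid = fork();
    if (pid == 0) {
        execvp(argv[1], &argv[1]);
        perror("execvp");
        _exit(127);
    }

    int status;
    if (waitpid(pid, &status, 0) < 0) {
        perror("waitpid");
        return 1;
    }
    if (WIFSIGNALED(status))
        fprintf(stderr, "killed by signal %d (%s)\n",
                WTERMSIG(status), strsignal(WTERMSIG(status)));
    else if (WIFEXITED(status))
        fprintf(stderr, "exited with status %d\n", WEXITSTATUS(status));
    return 0;
}

If the wrapper reports SIGKILL right as the CPU spikes, it is worth checking ulimit -t in the launching shell or, for a systemd service, the LimitCPU= setting.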


1 answer
Andrej Gessel, 2019-01-22
@andiges

In your situation it would probably be more logical to correct the behavior of the client than the behavior of the system. But without example code it is hard for me to suggest anything specific.
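To make "correct the behavior of the client" concrete, here is a minimal sketch of one way to do it, assuming a poll()-based loop over TCP sockets (the reconnect_fd() helper is hypothetical). The idea is to treat POLLERR/POLLHUP as a cue to close the socket and retry with exponential backoff, instead of immediately re-polling a dead descriptor, which is what pins the core at 100%:

#include <poll.h>
#include <time.h>
#include <unistd.h>

#define BASE_MS 100      /* first retry delay */
#define MAX_MS  30000    /* cap the backoff at 30 s */

/* Hypothetical helper: a real client would re-create the socket and
 * connect() to the server here; return the new fd or -1 on failure. */
static int reconnect_fd(int slot)
{
    (void)slot;
    return -1;
}

void event_loop(struct pollfd fds[], int nfds)
{
    int backoff_ms = BASE_MS;

    for (;;) {
        int n = poll(fds, nfds, -1);
        if (n < 0)
            continue;  /* EINTR etc.; a real client would check errno */

        int errors = 0;
        for (int i = 0; i < nfds; i++) {
            if (fds[i].revents & (POLLERR | POLLHUP | POLLNVAL)) {
                /* Do NOT keep polling a dead descriptor: that is the
                 * 100%-CPU loop. Close it and mark the slot idle. */
                close(fds[i].fd);
                fds[i].fd = -1;      /* poll() ignores negative fds */
                errors++;
            } else if (fds[i].revents & POLLIN) {
                /* ... normal read path ... */
            }
        }

        if (errors > 0) {
            /* Sleep before reconnecting so an error storm costs
             * almost no CPU; double the delay on repeated failures. */
            struct timespec ts = { backoff_ms / 1000,
                                   (backoff_ms % 1000) * 1000000L };
            nanosleep(&ts, NULL);
            if (backoff_ms < MAX_MS)
                backoff_ms *= 2;
            for (int i = 0; i < nfds; i++)
                if (fds[i].fd == -1)
                    fds[i].fd = reconnect_fd(i);
        } else {
            backoff_ms = BASE_MS;    /* healthy again: reset */
        }
    }
}

The design point is that an error storm is converted into a timed wait: the server keeps refusing connections either way, but the client no longer spins on the refused descriptors.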
