Linux process memory control?
Good afternoon. Straight to the point:
- There is a server that converts video and runs a bunch of PHP scripts; periodically, or all at once, they consume a lot of memory. As a result the system runs out of memory and essential services die (sshd, httpd, nginx, postgresql, monit, syslog). A couple of times there was even a kernel panic.
And now, the question:
- What should be controlled, and how? How do I limit the amount of memory allowed per process? How do I make it so that when the memory limit is exceeded, the offending process is killed, and not the system or the essential services?
Try man ulimit. It seems to be intended for exactly this, although I personally have never used it.
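A minimal sketch of that suggestion: in a shell, ulimit -v caps the virtual address space of the shell and everything it spawns. The 200 MB figure and the PHP invocation below are just illustrative assumptions.

```shell
#!/bin/sh
# Limit the virtual memory of this shell and its children to ~200 MB.
# A process allocating past the limit gets a failed malloc() (PHP would
# die with its own out-of-memory error) instead of triggering the
# system-wide OOM killer.
ulimit -v 204800        # value is in kilobytes

# Anything started from here on inherits the limit, e.g.:
# php convert_video.php
```

Note that the limit is per process and inherited on fork, so it must be set in whatever wrapper starts the workers, not system-wide.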
cgroups
cgroups (control groups) is a Linux kernel feature to limit, account for, and isolate the resource usage (CPU, memory, disk I/O, etc.) of groups of processes.
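As a sketch of how this applies here (cgroup v1 memory controller; the group name "phpjobs" and the 200 MB limit are assumptions, and the commands need root):

```shell
#!/bin/sh
# Create a memory cgroup for the PHP/converter jobs (cgroup v1 layout).
mkdir -p /sys/fs/cgroup/memory/phpjobs

# Cap the whole group at 200 MB. When the group exceeds this, the
# kernel OOM-kills inside the group only, leaving sshd/nginx/... alone.
echo 200M > /sys/fs/cgroup/memory/phpjobs/memory.limit_in_bytes

# Put a process into the group by writing its PID to "tasks", e.g.:
# echo $$ > /sys/fs/cgroup/memory/phpjobs/tasks
# php convert_video.php
```

Unlike ulimit, the cap applies to the group as a whole, which matches the "a bunch of scripts together eat all the memory" problem in the question.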
Tell me, is there a tool to manage all of this? ulimit is very narrow and is designed for processes created by the user, not by the system.
For example, *.php - 200 MB of memory, even though 200 MB is already set in php.ini.
And so on.
You can run top with the -b (batch) switch, parse its output with a shell/python/… script, and then kill the offending process.
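A minimal sketch of that idea, using ps instead of top since its output is easier to parse; the process name "php" and the 200 MB threshold are assumptions:

```shell
#!/bin/sh
# Read "PID RSS" pairs (RSS in KB) on stdin and print the PIDs whose
# resident memory exceeds the given limit.
over_limit() {
  limit_kb=$1
  awk -v lim="$limit_kb" '$2 > lim { print $1 }'
}

# Typical cron usage: kill any php process above ~200 MB resident memory.
# ps -C php -o pid=,rss= | over_limit 204800 | xargs -r kill
```

Run from cron every minute, this acts as a crude userspace OOM guard; it is racy (a process can balloon between runs), which is why the cgroup approach is more robust.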
There is such a thing as the OOM score adjustment (oom adjust), which controls the order in which processes are killed when memory runs out. Unfortunately, by default this mechanism often works counterintuitively, killing system services that occupy a couple of megabytes instead of the process that ate most of the memory.
There are mechanisms in upstart, and possibly in systemd, for setting this value, but it seems to me more convenient to use one of the scripts available on the Internet that prioritize processes manually based on config files.
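For reference, a sketch of adjusting the score by hand: modern kernels expose it as /proc/<pid>/oom_score_adj, ranging from -1000 (never kill) to 1000 (kill first). The sshd example is an assumption and needs root, so it is commented out.

```shell
#!/bin/sh
# Make this shell and its children the OOM killer's preferred victims.
# A higher oom_score_adj means "killed sooner"; raising your own value
# requires no privileges.
echo 500 > /proc/self/oom_score_adj

# Protecting a critical daemon instead requires root, e.g.:
# echo -1000 > /proc/$(pidof sshd)/oom_score_adj   # never OOM-kill sshd
```

Setting a positive value in the wrapper that launches the converters, and negative values for sshd and friends, makes the default OOM killer behave much less counterintuitively.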