linux
gregorypetrov, 2019-05-07 11:45:24

Apache2 and MySQL (under Debian) crash every day due to Out of memory, where should I dig?

Several sites are running on a web server with Debian 7 Wheezy + LAMP.
For the past week I have been seeing the same picture every day: at an arbitrary time (sometimes during the day, sometimes at night) the sites lose their connection to the database and become inoperable.
I have to restart the server; after a reboot everything works fine for a while, until the next crash. I installed logwatch, and in its reports I saw that the apache2 and mysqld processes are being killed due to lack of memory.

1 Time(s): Out of memory: Kill process 20879 (apache2) score 39 or sacrifice child
1 Time(s): Out of memory: Kill process 20882 (apache2) score 35 or sacrifice child
1 Time(s): Out of memory: Kill process 20897 (apache2) score 35 or sacrifice child
1 Time(s): Out of memory: Kill process 20898 (apache2) score 36 or sacrifice child
1 Time(s): Out of memory: Kill process 20899 (apache2) score 36 or sacrifice child
1 Time(s): Out of memory: Kill process 20901 (apache2) score 35 or sacrifice child
1 Time(s): Out of memory: Kill process 2605 (mysqld) score 36 or sacrifice child 


1 Time(s): Killed process 20879 (apache2) total-vm:125048kB, anon-rss:81644kB, file-rss:0kB
1 Time(s): Killed process 20882 (apache2) total-vm:118400kB, anon-rss:72496kB, file-rss:56kB
1 Time(s): Killed process 20897 (apache2) total-vm:118256kB, anon-rss:73512kB, file-rss:276kB
1 Time(s): Killed process 20898 (apache2) total-vm:120304kB, anon-rss:74688kB, file-rss:20kB
1 Time(s): Killed process 20899 (apache2) total-vm:121328kB, anon-rss:75776kB, file-rss:4kB
1 Time(s): Killed process 20901 (apache2) total-vm:119536kB, anon-rss:73292kB, file-rss:0kB
1 Time(s): Killed process 2605 (mysqld) total-vm:348892kB, anon-rss:76160kB, file-rss:0kB

The MySQL logs are empty for some reason.
In which direction should I dig to find out what exactly eats up memory so badly that it knocks out both the web server and the DBMS? Are there any tools to trace the source of such a load after the fact?
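For reference, the full OOM-killer reports that logwatch condenses into the lines above live in the kernel log, and a memory-usage history can be collected going forward with sysstat; a minimal sketch (paths and package name are the Debian defaults, adjust as needed):

# the raw OOM-killer reports behind the logwatch summary
grep -i -B5 -A20 'Out of memory' /var/log/kern.log

# record memory/swap usage over time so the next crash can be analysed after the fact
apt-get install sysstat             # then set ENABLED="true" in /etc/default/sysstat
sar -r                              # memory samples for the current day
sar -r -f /var/log/sysstat/saDD     # a past day, DD = day of the month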


5 answers
Lazy @BojackHorseman MySQL, 2019-05-07

Those OOM scores are Apache's. Look at what is said about MaxRequestWorkers.
And don't forget that the OS, MySQL and everything else also like RAM; maybe there simply isn't enough of it to handle all the traffic. RAM ran out, everything crawled into swap, swap ran out, and that's it.
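For illustration, a rough sketch of capping the prefork workers (the numbers are purely illustrative and assume roughly 80 MB per apache2 process, as in the OOM report above; on Apache 2.2, which Debian 7 ships, the directive is called MaxClients, on 2.4 it is MaxRequestWorkers):

# /etc/apache2/apache2.conf (2.2) or mods-available/mpm_prefork.conf (2.4)
<IfModule mpm_prefork_module>
    StartServers           2
    MinSpareServers        2
    MaxSpareServers        5
    # keep workers low enough that (workers x ~80 MB) fits in the free RAM
    MaxClients             15       # MaxRequestWorkers on Apache 2.4
    MaxRequestsPerChild    1000     # recycle children that grow too large
</IfModule>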

TyzhSysAdmin, 2019-05-07
@POS_troi

Your processes are being killed by the OOM killer due to a lack of RAM and swap.
You need to configure MySQL in accordance with the available resources and put Apache on a diet as well. In fact, think about getting rid of Apache altogether and replacing it with Nginx.
Any monitoring will do: Zabbix, Nagios, etc.
Of the utilities, atop, it seems, can keep statistics over a period, though I don't remember whether it can write to a log.
P.S. You can ask the OOM killer not to touch your processes, but then instead of an out-of-memory kill you may catch a kernel panic :)
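A minimal sketch of both points, assuming a small server with a single mysqld instance (the values are illustrative assumptions, not recommendations):

# /etc/mysql/my.cnf, the main memory knobs on MySQL 5.5
[mysqld]
max_connections         = 50       # fewer threads, fewer per-thread buffers
innodb_buffer_pool_size = 128M     # the main InnoDB memory consumer
key_buffer_size         = 16M      # only matters for MyISAM
tmp_table_size          = 16M
max_heap_table_size     = 16M

# exempt mysqld from the OOM killer (risky: the kernel may panic instead of killing)
echo -1000 > /proc/$(pidof mysqld)/oom_score_adj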

Roman Mirilaczvili, 2019-05-07
@2ord

First, the Apache logs should show what activity there was in the past. From them you can extract request statistics (counts), both for each site individually and in aggregate; a typical cause of a sudden load is search bots, as well as vulnerability scanners (a couple of one-liners for this are sketched after this list).
Second, it is worth installing the munin-node agent on the server and monitoring it from another server.
Third, perhaps it is time to move MySQL onto a separate server.
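For instance, a couple of one-liners for pulling request statistics out of a combined-format access log (the log path is an assumption, adjust it to your vhosts):

# top 20 client IPs by request count
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20

# top 20 user agents, handy for spotting bots and vulnerability scanners
awk -F'"' '{print $6}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20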

Victor Taran, 2019-06-05
@shambler81

Where is the input data, where are the system resources?
Where are the logs?
Where are the configs?
You know, there are no telepaths here.

Semal, 2019-06-25
@Semal

It's Apache that is taking all the memory. Look towards memcache, it helps a lot; put nginx in front as a proxy, or even switch to it completely. And it wouldn't hurt to move MySQL out onto a separate server.
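A bare-bones sketch of nginx proxying to Apache (the server name and backend port 8081 are assumptions; Apache's Listen directive would have to be moved to that port):

# /etc/nginx/sites-available/example (illustrative)
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8081;        # Apache now listens here instead of :80
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}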
