Django
Michael, 2015-01-13 11:19:32

How to determine why uwsgi is crashing?

Hello dear experts.
About once a month my VPS reliably goes down, and the logs show it running out of memory, with python named as the trigger.
Until about 3 months ago everything ran fine in debug mode (manage.py runserver .......); the problems started after I moved everything to uwsgi in emperor mode.
The error that consistently shows up in the logs looks like this:

kern.log

Jan  9 15:45:05 vps04104 kernel: [1578651.353004] Out of memory: kill process 25286 (zabbix_agentd) score 125810 or a child
Jan  9 15:45:05 vps04104 kernel: [1578651.353004] Killed process 25288 (zabbix_agentd)
Jan  9 15:45:05 vps04104 kernel: [1578651.434161] python invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
Jan  9 15:45:05 vps04104 kernel: [1578651.435489] python cpuset=/ mems_allowed=0
Jan  9 15:45:05 vps04104 kernel: [1578651.436829] Pid: 24440, comm: python Not tainted 2.6.32-5-amd64 #1
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Call Trace:
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff810b6174>] ? oom_kill_process+0x7f/0x23f
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff810b6698>] ? __out_of_memory+0x12a/0x141
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff810b67ef>] ? out_of_memory+0x140/0x172
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff810ba54d>] ? __alloc_pages_nodemask+0x4e5/0x5f4
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff810bbab1>] ? __do_page_cache_readahead+0x9b/0x1b4
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff810bbbe6>] ? ra_submit+0x1c/0x20
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff810b48c2>] ? filemap_fault+0x17d/0x2f6
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff810ca82a>] ? __do_fault+0x54/0x3c3
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff8100c241>] ? __raw_callee_save_xen_pud_val+0x11/0x1e
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff810ccb53>] ? handle_mm_fault+0x3b8/0x80f
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff812fe876>] ? do_page_fault+0x2e0/0x2fc
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  [<ffffffff812fc715>] ? page_fault+0x25/0x30
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Mem-Info:
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Node 0 DMA per-cpu:
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] CPU    0: hi:    0, btch:   1 usd:   0
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] CPU    1: hi:    0, btch:   1 usd:   0
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Node 0 DMA32 per-cpu:
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] CPU    0: hi:  186, btch:  31 usd:  78
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] CPU    1: hi:  186, btch:  31 usd:  48
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] active_anon:363776 inactive_anon:121622 isolated_anon:0
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  active_file:35 inactive_file:8 isolated_file:0
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  unevictable:1107 dirty:0 writeback:0 unstable:0
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  free:3434 slab_reclaimable:1855 slab_unreclaimable:3582
Jan  9 15:45:05 vps04104 kernel: [1578651.437064]  mapped:748 shmem:127 pagetables:10113 bounce:0
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Node 0 DMA free:8032kB min:32kB low:40kB high:48kB active_anon:1820kB inactive_anon:1948kB active_file:16kB inactive_file:0kB unevictable:0
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] lowmem_reserve[]: 0 2004 2004 2004
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Node 0 DMA32 free:5704kB min:5708kB low:7132kB high:8560kB active_anon:1453284kB inactive_anon:484540kB active_file:0kB inactive_file:32kB
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] lowmem_reserve[]: 0 0 0 0
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Node 0 DMA: 2*4kB 9*8kB 3*16kB 5*32kB 3*64kB 1*128kB 3*256kB 3*512kB 3*1024kB 1*2048kB 0*4096kB = 8032kB
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Node 0 DMA32: 673*4kB 30*8kB 2*16kB 1*32kB 1*64kB 1*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 5748kB
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] 6881 total pagecache pages
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] 6066 pages in swap cache
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Swap cache stats: add 293524, delete 287458, find 7490/10319
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Free swap  = 0kB
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Total swap = 1048568kB
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] 524288 pages RAM
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] 11000 pages reserved
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] 67652 pages shared
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] 507702 pages non-shared
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Out of memory: kill process 25286 (zabbix_agentd) score 112178 or a child
Jan  9 15:45:05 vps04104 kernel: [1578651.437064] Killed process 25289 (zabbix_agentd)
Jan  9 15:45:05 vps04104 kernel: [1578651.508537] uwsgi invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
Jan  9 15:45:05 vps04104 kernel: [1578651.509423] uwsgi cpuset=/ mems_allowed=0
Jan  9 15:45:05 vps04104 kernel: [1578651.510357] Pid: 4468, comm: uwsgi Not tainted 2.6.32-5-amd64 #1


I start uwsgi with this line:
/usr/local/bin/uwsgi --emperor /opt/config --emperor-nofollow --daemonize=/var/log/uwsgi.log
The vassal config looks like this:
[uwsgi]
uid = ................
gid = www
umask = 012
chdir = /opt/apps/%n/app
virtualenv = /opt/apps/%n/venv
master = true
threads = 2
socket = /var/run/uwsgi/%n.sock
logto = /opt/apps/%n/logs/uwsgi.log
processes = %k
harakiri = 120
max-requests = 200
env = DJANGO_SETTINGS_MODULE=app.settings
module = django.core.handlers.wsgi:WSGIHandler()
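
One way to narrow this down is to let uWSGI itself report and bound worker memory. A minimal sketch of the extra vassal options (the option names are standard uWSGI settings; the thresholds here are illustrative, not values from the original config):

[uwsgi]
; log the memory usage of each request into the vassal's uwsgi.log
memory-report = true
; gracefully restart a worker whose RSS grows past 256 MB (illustrative limit)
reload-on-rss = 256
; let the master forcibly kill a worker that passes 384 MB (illustrative limit)
evil-reload-on-rss = 384

With memory-report enabled, the per-request log lines should show whether a particular view is leaking, and the RSS limits keep a leaking worker from dragging the whole VPS into the OOM killer.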

Please tell me how best to debug this and track down the cause. Thanks.

1 answer
Michael, 2015-04-15
@1099511627776

Never mind, the problem seems to have been found. The bottom line is that the VPS had 2GB of RAM, while the MySQL configs were tuned for a 4GB machine, i.e. MySQL could eat up to roughly 3.5GB of RAM, and accordingly everything fell over.
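
For reference, a minimal sketch of the kind of my.cnf sizing that fits a 2GB VPS; the variable names are standard MySQL settings, but the values below are purely illustrative and are not taken from the original configuration:

[mysqld]
# the InnoDB buffer pool is usually the single largest consumer
innodb_buffer_pool_size = 256M
# MyISAM index cache
key_buffer_size         = 32M
# each connection adds its own per-thread buffers
max_connections         = 50
# caps on in-memory temporary tables
tmp_table_size          = 32M
max_heap_table_size     = 32M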
