Django
Vic Shostak, 2018-03-14 11:31:40

VDS Debian 9 + Redis. Why are Celery tasks not running in Django 2.x?

Good day, everyone.

My setup: a VDS on Debian 9.3 (1 core, 512 MB RAM), stock Python 3.5.3 and Redis 3.2.6 (straight from the Debian repos). A Django 2.0.3 project using the Celery 4.1.0 task queue.

Redis started:
$ redis-server --version
Redis server v=3.2.6 sha=00000000:0 malloc=jemalloc-3.6.0 bits=64 build=826601c992442478

$ redis-cli ping
PONG

Celery settings in Django are the most standard (from the docs):
######
# ./myproject/celery.py
######

# Import __future__
from __future__ import absolute_import, unicode_literals
# Import Python packages
import os
# Import Celery
from celery import Celery

# Set the default Django settings module for the 'celery' program
os.environ.setdefault(
    'DJANGO_SETTINGS_MODULE', 'myproject.settings.base' 
    # where settings is just a folder with three config files:
    # dev.py and prod.py for the different environments,
    # base.py for shared settings (where INSTALLED_APPS lives)
)

# Init Celery
app = Celery('myproject')

# Using a `CELERY_` prefix
app.config_from_object('django.conf:settings', namespace='CELERY')

# Load task modules from all registered Django app configs
app.autodiscover_tasks()


######
# ./myproject/__init__.py
######

# Import __future__
from __future__ import absolute_import, unicode_literals
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app

__all__ = ['celery_app']
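One thing I keep in mind about the setdefault call in celery.py: it only applies when DJANGO_SETTINGS_MODULE isn't already set, so anything exported in the daemon's environment wins over it. A quick self-contained check (module names are just the ones from my project):

```python
import os

# setdefault is a no-op when the variable already exists:
os.environ.pop("DJANGO_SETTINGS_MODULE", None)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings.base")
print(os.environ["DJANGO_SETTINGS_MODULE"])  # myproject.settings.base

# If the init script had already exported another value, it would win:
os.environ["DJANGO_SETTINGS_MODULE"] = "myproject.settings.prod"
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings.base")
print(os.environ["DJANGO_SETTINGS_MODULE"])  # myproject.settings.prod
```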

Celery config in base.py is like this:
######
# ./myproject/settings/base.py
######

...
BROKER_URL = 'redis://127.0.0.1:6379/0'  # Redis
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600}

CELERY_BROKER_URL = BROKER_URL
CELERY_RESULT_BACKEND = BROKER_URL
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = TIME_ZONE
CELERY_IGNORE_RESULT = True
...
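As I understand it, with namespace='CELERY' the config_from_object call only reads the CELERY_-prefixed keys, so the unprefixed BROKER_URL and BROKER_TRANSPORT_OPTIONS above are effectively invisible to Celery (CELERY_BROKER_URL covers the URL, but the transport options may be getting dropped). An illustrative sketch of that filtering, not Celery's real implementation:

```python
# Illustrative only: mimic how a `CELERY_` namespace filter would behave.
def pick_celery_settings(settings, namespace="CELERY"):
    prefix = namespace + "_"
    return {key[len(prefix):].lower(): value
            for key, value in settings.items()
            if key.startswith(prefix)}

example = {
    "BROKER_URL": "redis://127.0.0.1:6379/0",                 # unprefixed
    "BROKER_TRANSPORT_OPTIONS": {"visibility_timeout": 3600},  # unprefixed
    "CELERY_BROKER_URL": "redis://127.0.0.1:6379/0",
    "CELERY_TASK_SERIALIZER": "json",
}
picked = pick_celery_settings(example)
print(sorted(picked))  # ['broker_url', 'task_serializer'] -- unprefixed keys dropped
```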

The test task is like this:
######
# ./app/tasks.py
######

# Import __future__
from __future__ import absolute_import, unicode_literals
# Import Celery
from celery import shared_task
# Import Django helpers used below
from django.core.mail import EmailMessage
from django.template.loader import render_to_string
from django.utils.translation import ugettext_lazy as _

@shared_task
def send_welcome_mail_new_user(user_email):

    # Set defaults
    from_email = '[email protected]'
    subject = _('Welcome!')
    html_content = render_to_string('mails/mails_welcome_mail_new_user.html', {
        'subject': subject,
    })

    # Make `msg` and send
    msg = EmailMessage(subject, html_content, from_email, [user_email])
    msg.content_subtype = 'html'
    msg.send()

I call it the usual way, via send_welcome_mail_new_user.delay('[email protected]'), for example in the view for the main page, on every visit, purely for testing.
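Since CELERY_TASK_SERIALIZER = 'json', whatever I pass to .delay() has to survive a JSON round-trip, so a plain email string should be a safe argument; a quick self-check (illustrative only):

```python
import json

# Arguments as passed to send_welcome_mail_new_user.delay(...):
args = ["[email protected]"]
# Simulate what the json serializer does on enqueue/dequeue:
restored = json.loads(json.dumps(args))
print(restored == args)  # True -- a plain string is safe to enqueue
```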

On top of that, the mail config for this task:
######
# ./myproject/settings/base.py
######

EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.mail.ru'
EMAIL_HOST_PASSWORD = 'password'
EMAIL_HOST_USER = '[email protected]'
EMAIL_USE_SSL = True
EMAIL_PORT = 465

The Celery daemon on the VDS was set up exactly per this manual: https://pythad.github.io/articles/2016-12/how-to-r... (it's basically a compilation of StackOverflow answers).
The configs:
######
# /etc/default/celeryd
######

# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"

# App instance to use
CELERY_APP="myproject"

# Where to chdir at start.
CELERYD_CHDIR="/var/www/html/myproject"

# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"

# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"

# Workers should run as an unprivileged user.
#   You need to create this user manually (or you can choose
#   a user/group combination that already exists (e.g., nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"

# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1

export SECRET_KEY="secret"


######
# /etc/default/celerybeat
######

# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"

# App instance to use
# comment out this line if you don't use an app
CELERY_APP="myproject"

# Where to chdir at start.
CELERYBEAT_CHDIR="/var/www/html/myproject"

# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"

Everything is running and (sort of) working:
$ sudo /etc/init.d/celeryd start

celery init v10.1.
Using config script: /etc/default/celeryd
celery multi v4.1.0 (latentcall)
> Starting nodes...
  > [email protected]: OK

$ sudo /etc/init.d/celeryd status

celery init v10.1.
Using config script: /etc/default/celeryd
celeryd (node celery) (pid 22335) is up...

$ sudo /etc/init.d/celerybeat status

celery init v10.1.
Using configuration: /etc/default/celeryd, /etc/default/celerybeat
celerybeat (pid 20183) is up...

Next I trigger the task (just by visiting the main page)... but nothing happens: the email never goes out. The Celery log at /var/log/celery/celery.log is full of tracebacks like this:
...
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/celery/worker/worker.py", line 203, in start
    self.blueprint.start(self)
  File "/usr/local/lib/python3.5/dist-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/usr/local/lib/python3.5/dist-packages/celery/bootsteps.py", line 370, in start
    return self.obj.start()
  File "/usr/local/lib/python3.5/dist-packages/celery/worker/consumer/consumer.py", line 320, in start
    blueprint.start(self)
  File "/usr/local/lib/python3.5/dist-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/usr/local/lib/python3.5/dist-packages/celery/worker/consumer/consumer.py", line 596, in start
    c.loop(*c.loop_args())
  File "/usr/local/lib/python3.5/dist-packages/celery/worker/loops.py", line 88, in asynloop
    next(loop)
  File "/usr/local/lib/python3.5/dist-packages/kombu/async/hub.py", line 293, in create_loop
    poll_timeout = fire_timers(propagate=propagate) if scheduled else 1
  File "/usr/local/lib/python3.5/dist-packages/kombu/async/hub.py", line 136, in fire_timers
    entry()
  File "/usr/local/lib/python3.5/dist-packages/kombu/async/timer.py", line 68, in __call__
    return self.fun(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.5/dist-packages/kombu/async/timer.py", line 127, in _reschedules
    return fun(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/billiard/pool.py", line 1316, in maintain_pool
    sys.exc_info()[2])
  File "/usr/local/lib/python3.5/dist-packages/billiard/five.py", line 123, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.5/dist-packages/billiard/pool.py", line 1307, in maintain_pool
    self._maintain_pool()
  File "/usr/local/lib/python3.5/dist-packages/billiard/pool.py", line 1299, in _maintain_pool
    self._repopulate_pool(joined)
  File "/usr/local/lib/python3.5/dist-packages/billiard/pool.py", line 1284, in _repopulate_pool
    self._create_worker_process(self._avail_index())
  File "/usr/local/lib/python3.5/dist-packages/celery/concurrency/asynpool.py", line 439, in _create_worker_process
    return super(AsynPool, self)._create_worker_process(i)
  File "/usr/local/lib/python3.5/dist-packages/billiard/pool.py", line 1116, in _create_worker_process
    w.start()
  File "/usr/local/lib/python3.5/dist-packages/billiard/process.py", line 124, in start
    self._popen = self._Popen(self)
  File "/usr/local/lib/python3.5/dist-packages/billiard/context.py", line 333, in _Popen
    return Popen(process_obj)
  File "/usr/local/lib/python3.5/dist-packages/billiard/popen_fork.py", line 24, in __init__
    self._launch(process_obj)
  File "/usr/local/lib/python3.5/dist-packages/billiard/popen_fork.py", line 72, in _launch
    self.pid = os.fork()
MemoryError: [Errno 12] Cannot allocate memory
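If I'm reading the traceback right, os.fork() dies with ENOMEM while the pool tries to respawn worker processes, which makes me suspect --concurrency=8 is simply too much for 512 MB. A back-of-envelope estimate (the per-process figure is a pure assumption, not a measurement):

```python
# The prefork pool with --concurrency=8 keeps 8 forked children alive.
assumed_rss_mb = 60   # guessed RSS of one Django-loaded worker (assumption!)
concurrency = 8
total_mb = assumed_rss_mb * concurrency
print(total_mb)  # 480 MB for workers alone, before the OS and Redis take theirs
```

If that guess is anywhere near reality, the VDS has nothing left to fork into, which would explain the MemoryError above.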

Locally (macOS 10.13.3) I run the project with: celery -A myproject worker -l info. I trigger the task the same way and the email lands in the console (in dev I use the console email backend).
I'd be very grateful for any good advice and/or real-world examples of setting these things up in production; I've been fighting this for a whole day :( If you need to see any other configs/logs, I'll gladly share them, just write in the comments.
Thanks in advance!
