Django
Grigory Dikiy, 2018-01-19 14:02:09

Celery: two daemons on one server via systemd?

Good afternoon! I'm trying to set up Celery to work with multiple Django sites.
First site:
celery.py

import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')

app = Celery('posudahome')

# Configurations
app.config_from_object('django.conf:settings')
app.autodiscover_tasks()

systemd config
[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=dikiigr
Group=dikiigr
EnvironmentFile=-/etc/conf.d/celery_posudahome
WorkingDirectory=/home/dikiigr/posudahome/engine
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
  -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
  --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
  -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'

[Install]
WantedBy=multi-user.target

/etc/conf.d/celery_posudahome
CELERYD_NODES="w1"
CELERY_BIN="/home/dikiigr/venv/posudahome/bin/celery"
CELERY_APP="config.celery:app"
CELERYD_MULTI="multi"
CELERYD_OPTS="--time-limit=300 --concurrency=2"
CELERYD_PID_FILE="/home/dikiigr/.celery/posudahome/%n.pid"
CELERYD_LOG_FILE="/home/dikiigr/.celery/posudahome/%n%I.log"
CELERYD_LOG_LEVEL="INFO"

Logs
[2018-01-19 13:37:27,534: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2018-01-19 13:37:27,549: INFO/MainProcess] mingle: searching for neighbors
[2018-01-19 13:37:27,857: INFO/MainProcess] mingle: all alone
[2018-01-19 13:37:27,871: INFO/MainProcess] [email protected] ready.
[2018-01-19 13:45:11,927: INFO/MainProcess] Received task: apps.orders.tasks.send_order_email[8e94a18e-a052-4cb3-98f0-131c6de85a5f]

Second site
celery.py
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')

app = Celery('grandlux')

# Configurations
app.config_from_object('django.conf:settings')
app.autodiscover_tasks()

systemd service
[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=dikiigr
Group=dikiigr
EnvironmentFile=-/etc/conf.d/celery_grandlux
WorkingDirectory=/home/dikiigr/grandlux/engine
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
  -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
  --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
  -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
  --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'

[Install]
WantedBy=multi-user.target

/etc/conf.d/celery_grandlux
CELERYD_NODES="w2"
CELERY_BIN="/home/dikiigr/venv/grandlux/bin/celery"
CELERY_APP="config.celery:app"
CELERYD_MULTI="multi"
CELERYD_OPTS="--time-limit=300 --concurrency=2"
CELERYD_PID_FILE="/home/dikiigr/.celery/grandlux/%n.pid"
CELERYD_LOG_FILE="/home/dikiigr/.celery/grandlux/%n%I.log"
CELERYD_LOG_LEVEL="INFO"

Logs
[2018-01-19 13:37:27,362: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2018-01-19 13:37:27,376: INFO/MainProcess] mingle: searching for neighbors
[2018-01-19 13:37:27,696: INFO/MainProcess] mingle: all alone
[2018-01-19 13:37:27,713: INFO/MainProcess] [email protected] ready.
[2018-01-19 13:40:36,778: INFO/MainProcess] Received task: apps.orders.tasks.send_order_email[b5e90626-ca2f-47cc-bf80-8792928426eb]  
[2018-01-19 13:48:36,090: INFO/MainProcess] Received task: apps.orders.tasks.send_order_email[2cbf51be-87fa-4d63-a8e2-f94338391d87]

The most interesting thing is that the last task in the logs above should have gone to the first site. Also, for some reason the worker is named [email protected] on both sites, although in theory the names should differ. How can I solve this problem?


2 answers
G
Grigory Dikiy, 2018-01-20
@frilix

I created two separate virtual hosts (vhosts) in RabbitMQ, one per site, and the problem was solved.
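For reference, a vhost-per-site setup might look like the sketch below. The vhost names are illustrative, not taken from the question; only the default guest user from the logs is assumed.

```shell
# Create an isolated vhost per site so their queues cannot overlap
# (vhost names are illustrative).
rabbitmqctl add_vhost posudahome
rabbitmqctl add_vhost grandlux

# Grant the broker user access to each vhost
rabbitmqctl set_permissions -p posudahome guest ".*" ".*" ".*"
rabbitmqctl set_permissions -p grandlux guest ".*" ".*" ".*"

# Each site's settings then point at its own vhost, e.g.:
#   BROKER_URL = "amqp://guest:guest@127.0.0.1:5672/posudahome"
```

With separate vhosts, each site's worker only ever sees its own queues, even though both use the same default queue name.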

V
Vladimir, 2018-01-19
@vintello

To be honest, it's not entirely clear what you mean by working with two sites. What prevented you from creating separate queues and, depending on the site, routing each task to its own queue? Then you would simply configure a dedicated worker for each queue. It is also more convenient to manage worker startup through supervisor rather than systemctl, but that is a matter of taste.
