RabbitMQ
AlikDex, 2016-09-15 06:14:32

Can you clarify how RabbitMQ works?

Let me start by saying that I haven't worked with it yet, but I wanted to try. After reading the manual and a few tutorials, I was a little puzzled. Here's why:
1) It turns out that for each type of task you need to create a separate worker, launched separately from everything else (and started manually or through cron). I.e. if we have 5 task types, then 5 workers must be running, right? And if, for example, there are 20 sites on the server and each one works with tasks, that makes 100 workers in total, and each one can hang or crash...
2) It's also not entirely clear how to handle several sites on one server in terms of queues and exchanges. I.e. should I launch a separate RabbitMQ instance for each site, or use prefixes in the names of queues/exchanges?
3) Or maybe I'm off base, and these queues are only needed under real high load, for large multi-server applications? I just have a few tasks that would be nice to push into the background, so that pages are served faster, plus post-processing of materials uploaded by users. But I'm still nagged by vague doubts about this rabbit.


1 answer
Vladislav Ivasik, 2017-07-14
@AlikDex

1. Your interpretation is a bit off: RabbitMQ has no notion of task workers, that's from Gearman. It has messages, exchanges, queues, consumers and producers. A message is not always a task; it can also be a stream of data from another application (one that has no access to the DBMS) to be stored. Depending on the technology you use, consumers look a little different. In PHP, for example, you really do need to run each consumer as its own process (console command); in Java you can start one process that consumes the queue and dispatches messages to threads for parallel processing, instead of running multiple processes. Keeping several consumer commands (several processes each) running is very easy to solve with Supervisord. It is Linux software that is configured to run a specific console command with a given number of instances, and it monitors them all. You end up with just one Supervisord config per consumer type, and it does the rest of the work of keeping them alive. (Naturally, if we are talking about PHP, you need to debug the consumer code properly to avoid memory leaks.)
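A Supervisord setup for a consumer like the one described above might look like this. This is only a sketch: the program name, command path, and process count are illustrative examples, not something from the original answer.

```ini
; Example: /etc/supervisor/conf.d/image_consumer.conf
; (names, paths and counts below are illustrative)
[program:image_consumer]
; the console command Supervisord keeps running
command=php /var/www/site/bin/consume.php images
; how many instances of this consumer to run
numprocs=2
process_name=%(program_name)s_%(process_num)02d
autostart=true
; restart the consumer automatically if it crashes
autorestart=true
```

With `supervisorctl reread && supervisorctl update` the new program is picked up, and Supervisord keeps both instances alive from then on.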
2. It depends on the server's capacity. Normal practice is one RabbitMQ server for several projects, which can later grow into a cluster under certain loads; but you can also try installing it on the same machine as the sites and use it the way you suggested, with prefixes for queue/exchange names.
3. Queues are needed wherever you feel they will help you speed up or stabilize the application. Try, experiment, and you will see how well the rabbit fits your kind of tasks.
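The single-process, multi-threaded consumer model mentioned in point 1 can be sketched with nothing but the standard library. This is a minimal illustration, not a RabbitMQ client: the in-memory `queue.Queue` stands in for a broker queue, and `handle_message` is a hypothetical placeholder for real work; with a real broker you would consume via a client library (e.g. pika for Python) instead.

```python
import queue
import threading

# In-memory stand-in for a broker queue (assumption for this sketch).
task_queue = queue.Queue()
results = []
results_lock = threading.Lock()

def handle_message(body):
    # Placeholder for real work, e.g. post-processing an upload.
    return body.upper()

def worker():
    # One consumer thread: take messages until a None sentinel arrives.
    while True:
        body = task_queue.get()
        if body is None:
            task_queue.task_done()
            break
        with results_lock:
            results.append(handle_message(body))
        task_queue.task_done()

# One process, a small pool of consumer threads.
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

# A "producer" publishes messages (here: plain strings).
for msg in ["resize-image", "send-email", "rebuild-index"]:
    task_queue.put(msg)

# Shut the pool down once the queue drains.
for _ in threads:
    task_queue.put(None)
for t in threads:
    t.join()
```

The point of the sketch is the shape: one OS process, several threads sharing one consuming loop, which is the Java-style model from point 1, as opposed to the one-process-per-consumer model typical for PHP.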
P.S. Only after finishing this answer did I notice that the question is from 2016, so sorry if the topic is no longer relevant)
