Java
arturgspb, 2013-10-26 15:05:30

Application architecture with deferred tasks in Java?

Hello! I have a question and I'm looking for best practices for solving the following problem in a Java application.
There is a site where the user can initiate slow operations: for example, one such task could be downloading and parsing a 3 GB file.
The idea is that the project will have a set of commands which, when initiated, I put into a queue server such as RabbitMQ, and some process of the application (hereinafter the Daemon) will consume this queue asynchronously in the background. I want the application to scale so that I can run N Daemons on M servers and thus scale out the processing. The project will also have an API. The UI will be a separate project and will submit tasks through the API, so there is no problem there: its code base is essentially separate and overlaps mostly in the domain models.
So the question is: can the API and the Daemons share the same codebase and still run more or less in isolation, so that if you want to bring up another daemon you don't have to bring up another API instance along with it, and so on? If it is possible, I don't see how ;) I have thought about running a simple MVC app for the API with the Daemon(s) in separate threads, but I don't know how clean that is.
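Roughly what I have in mind, as a minimal sketch with the RabbitMQ Java client (the queue name, broker host, and task format here are just placeholders):

import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class TaskQueueSketch {
    private static final String QUEUE = "slow-tasks"; // placeholder name

    // API side: enqueue the command and return to the user immediately.
    public static void enqueue(String taskJson) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker address
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            ch.queueDeclare(QUEUE, true, false, false, null);
            ch.basicPublish("", QUEUE, null, taskJson.getBytes(StandardCharsets.UTF_8));
        }
    }

    // Daemon side: consume the queue asynchronously; run N of these on M servers.
    public static void runDaemon() throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();
        ch.queueDeclare(QUEUE, true, false, false, null);
        ch.basicQos(1); // at most one unacknowledged task per daemon
        DeliverCallback handler = (tag, delivery) -> {
            String taskJson = new String(delivery.getBody(), StandardCharsets.UTF_8);
            // ... download and parse the 3 GB file here ...
            ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        ch.basicConsume(QUEUE, false, handler, consumerTag -> { });
    }
}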


5 answers
relgames, 2013-10-29
@arturgspb

Solved a similar issue. We use ActiveMQ as the queue. There are N daemons and M APIs; the code is shared, but the Spring configs are different. Accordingly, the daemon starts with its own config and its own services, and the API with its own. I don't really like this solution, though; I plan to switch to Maven modules and move the shared code into a separate module, registered as a dependency in both the daemon and the API parts. The main disadvantage of the shared code is that if you change only the API, you still have to restart the services, and vice versa.
Deployment should be made as simple as possible: the same configs everywhere or a centralized configuration. We also actively use JMX to fine-tune individual nodes.
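Roughly like this (a sketch with Java-based Spring configuration; the package names are made up for illustration):

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

// Shared beans (domain services, queue wrapper) live in one config...
@Configuration
@ComponentScan("com.example.common")
class CommonConfig { }

// ...and each entry point imports it plus its own beans.
@Configuration
@Import(CommonConfig.class)
@ComponentScan("com.example.daemon")
class DaemonConfig { }

@Configuration
@Import(CommonConfig.class)
@ComponentScan("com.example.api")
class ApiConfig { }

public class DaemonMain {
    public static void main(String[] args) {
        // The daemon boots only DaemonConfig; an ApiMain would boot ApiConfig.
        new AnnotationConfigApplicationContext(DaemonConfig.class);
    }
}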

bak, 2013-10-26
@bak

Yes, and what is stopping you? Write a wrapper over the queue of your choice that can put a new task into the queue, take a task from the queue (by polling or via pub/sub), and save the result of a processed task, and use it everywhere. The interface that puts a task into the queue can be taught to work both with the queue directly and through the API (so the same code can be used in the UI). At a reasonable scale, one queue server will most likely be enough for you. It's more a question of how large the results of the handlers are and whether they need to be stored somewhere (if so, the storage will also have to be scaled somehow).
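Something along these lines (a rough sketch; the Task shape and the in-memory implementation are only for illustration, a broker-backed implementation would sit behind the same interface):

// The API and the daemons both depend only on this abstraction,
// so the backend (RabbitMQ, ActiveMQ, ...) can be swapped out.
public interface TaskQueue {
    void submit(Task task);                          // API side: enqueue a new task
    Task take() throws InterruptedException;         // daemon side: block until a task arrives
    void storeResult(String taskId, byte[] result);  // persist the outcome of a processed task
}

// Minimal task shape used in this sketch.
class Task {
    final String id;
    final String payload;
    Task(String id, String payload) { this.id = id; this.payload = payload; }
}

// In-JVM implementation for tests; a broker-backed one implements the same interface.
class InMemoryTaskQueue implements TaskQueue {
    private final java.util.concurrent.BlockingQueue<Task> queue =
            new java.util.concurrent.LinkedBlockingQueue<>();
    private final java.util.Map<String, byte[]> results =
            new java.util.concurrent.ConcurrentHashMap<>();

    public void submit(Task task) { queue.add(task); }
    public Task take() throws InterruptedException { return queue.take(); }
    public void storeResult(String taskId, byte[] result) { results.put(taskId, result); }
}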

Dmitry Zaitsev, 2013-10-26
@dim_s

Write everything in one codebase, but add a flag to the configuration that disables the API, and run as many additional daemons as you want. I do this all the time: a single codebase can contain several functional modules of the project that are optionally enabled or disabled.
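Something like this (a sketch; the property names are made up):

public class Application {
    public static void main(String[] args) {
        // One entry point; system properties decide which modules come up.
        boolean apiEnabled = Boolean.parseBoolean(System.getProperty("app.api.enabled", "true"));
        int daemonCount = Integer.getInteger("app.daemon.count", 0);

        if (apiEnabled) {
            startApi();           // bring up the HTTP/MVC layer
        }
        for (int i = 0; i < daemonCount; i++) {
            startDaemonThread(i); // each daemon consumes the task queue
        }
    }

    private static void startApi() { /* start the embedded server here */ }

    private static void startDaemonThread(int index) {
        Thread t = new Thread(() -> {
            // poll the queue and process tasks in a loop
        }, "daemon-" + index);
        t.start();
    }
}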

asm0dey, 2013-10-26
@asm0dey

Since we are talking about queue servers anyway, I would make it so that the clients connect to the server via JMS or any other protocol supported by the MQ you use (by the way, why not ActiveMQ, which embeds perfectly into Java applications?) and say, "I'm a client, give me tasks too." Bringing up clients then becomes trivial: you launch a Java application, it brings up its MQ client itself and starts processing tasks. If it is stopped or killed, the connection drops and no more jobs are sent to it. If the result of a task is not returned, put the task back at the head of the queue and give it to the first free worker.
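Roughly (a sketch with an embedded ActiveMQ broker and a plain JMS consumer; the connector URL and queue name are placeholders):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerSketch {

    // Server side: embed the broker in the main application.
    public static void startBroker() throws Exception {
        BrokerService broker = new BrokerService();
        broker.addConnector("tcp://0.0.0.0:61616");
        broker.setPersistent(false);
        broker.start();
    }

    // Worker side: "I'm a client, give me tasks too" — just connect and consume.
    public static void runWorker(String brokerUrl) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("tasks"));
        while (true) {
            Message msg = consumer.receive(); // blocks; once we disconnect, the broker stops sending
            if (msg instanceof TextMessage) {
                String payload = ((TextMessage) msg).getText();
                // ... process the task ...
            }
        }
    }
}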

baadf00d, 2013-10-27
@baadf00d

If you want to get by with basic Java technologies, you can make the task dispatcher a simple Java class and connect to it via RMI.
The dispatcher can hand out jobs through the Runnable interface, and all the client does is get the next job and call run(). The main thing is that the worker has the necessary task classes on its classpath.
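For example (a sketch; the registry host, port and binding name are made up, and the jobs have to be Serializable to travel over RMI):

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Remote dispatcher interface exposed by the server.
interface JobDispatcher extends Remote {
    SerializableJob nextJob() throws RemoteException;
}

// A job must be Serializable to be passed over RMI; the worker also
// needs the concrete job classes on its classpath, as noted above.
interface SerializableJob extends Runnable, java.io.Serializable { }

public class Worker {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.getRegistry("dispatcher-host", 1099);
        JobDispatcher dispatcher = (JobDispatcher) registry.lookup("dispatcher");
        while (true) {
            Runnable job = dispatcher.nextJob(); // fetch the next task
            job.run();                           // execute it locally
        }
    }
}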
