Programming
Tony, 2017-02-01 12:57:54

How to correctly implement a multi-threaded daemon with REST API?

I need to write a multi-threaded (or multi-process, or asynchronous?) daemon that exposes a REST API and performs some parallel tasks in the background. I have never written such an application, and I don't understand multi-threaded and asynchronous programs well. I imagined it working roughly like this: a main process launches child processes for the API and for the background tasks, and those children spawn more processes or threads as needed. When I started looking into how to write the API (I picked Flask), I read that a WSGI server is required. I searched for how to write a multi-threaded daemon with an API, but found nothing. The idea of making two separate applications came up, but I don't really like it. Now I'm looking at how to build such a scheme in Java.
1) Do I understand correctly how a multi-threaded daemon works (or what can I read about multithreading and asynchrony)?
2) Is it possible to implement such an application in Python?
3) Would it be easier to write such an application in Java?


3 answer(s)
malbaron, 2017-02-01
@malbaron

1. A REST API is usually stateless: each API call is completely independent of the others.
That way, all the multithreading stays inside the daemon rather than leaking into your REST API.
2. The biggest inconvenience in this kind of programming is locks/synchronization. When you use shared resources that cannot be accessed concurrently, they need to be locked for the duration of use. But those locks, in turn, can cause deadlocks.
Don't forget to lock, but do it minimally, only where necessary. For example, an exclusive lock may not be needed at all; an rw-lock taken for reading may be enough.
3. It is easiest to implement in whatever you know best. I would choose Go, but that's just my preference. You should use what you personally know best.
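The locking point in (2) can be sketched in Python with a plain `threading.Lock` (the standard library has no built-in rw-lock, so this shows an exclusive lock; all names here are illustrative):

```python
import threading

counter = 0
counter_lock = threading.Lock()

def increment(n: int) -> None:
    """Increment the shared counter n times, locking only the critical section."""
    global counter
    for _ in range(n):
        # Hold the lock as briefly as possible, exactly as the answer advises
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no updates are lost under the lock
```

Without the `with counter_lock:` block, `counter += 1` is not atomic and concurrent threads can lose updates; with it, the result is deterministic.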

Dmtm, 2017-02-01
@Dmtm

I would make a shared FIFO queue (Deque) of requests.
A request executor waits on the queue, picks up tasks from its head, and wakes up when the next request enters the queue; a custom thread pool handles the parallelization.
If desired, you can add request validation and, for example, put requests that fail it at the tail of the queue (read about poison messages).
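The scheme above can be sketched with the standard library's thread-safe `queue.Queue` and a small pool of worker threads (the `valid` check, payloads, and pool size are illustrative assumptions, not part of the answer):

```python
import queue
import threading

tasks: "queue.Queue" = queue.Queue()   # shared FIFO queue of requests
results = []
results_lock = threading.Lock()

def valid(request: dict) -> bool:
    # Illustrative validation; a real check would inspect the payload properly
    return "payload" in request

def worker() -> None:
    """Wait on the queue, pick tasks from its head, stop on a None sentinel."""
    while True:
        request = tasks.get()          # blocks until the next request arrives
        if request is None:            # sentinel: shut this worker down
            break
        if not valid(request):
            # A real system would cap retries to avoid a poison-message loop
            tasks.put(request)         # push failed requests back to the tail
            continue
        with results_lock:
            results.append(request["payload"].upper())

pool = [threading.Thread(target=worker) for _ in range(3)]
for t in pool:
    t.start()

for i in range(5):
    tasks.put({"payload": f"task-{i}"})
for _ in pool:
    tasks.put(None)                    # one sentinel per worker
for t in pool:
    t.join()

print(sorted(results))
```

`queue.Queue` already does the "activate when a request arrives" part: `get()` blocks until an item is available, so idle workers simply sleep on the queue.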

Tony, 2017-02-01
@arghhh

I found the Bottle framework and an approximate implementation of what I wanted to do: one process serves the API and the other does something in the background. I hope this approach won't cause any trouble.

from multiprocessing import Process
from bottle import Bottle, run
import time
import os

app = Bottle()


@app.route('/')
def index():
    return 'Hello !'


def child():
    # Background worker: does its periodic work independently of the API
    while True:
        time.sleep(10)
        pid = str(os.getpid())
        print(pid + ' PID\n')


def api():
    # Serve the REST API in its own process
    run(app, host='localhost', port=8000)


def main():
    process1 = Process(target=child)
    process2 = Process(target=api)

    process1.start()
    process2.start()

    # Keep the parent alive until the children exit
    process1.join()
    process2.join()


if __name__ == '__main__':
    main()
