Nginx
Runcorn, 2013-12-24 23:02:04

Worker processes in asynchronous servers

Hello.
I can't fully understand the topic of worker processes (workers) in asynchronous servers. Why are they needed, and do they make sense?
For example, in nginx, worker processes are standard: nginx.org/en/docs/ngx_core_module.html#worker_processes
In lighttpd, using them is discouraged: redmine.lighttpd.net/projects/1/wiki/Docs_MultiPro...
1. Do they really serve a purpose?
2. And a question for those who have developed asynchronous servers with workers, or worked with them: how is this implemented at a low level? Is the connection accepted by the main process and then somehow passed to one of the child workers, or do all the workers listen on the socket at the same time?


3 answers
Sergey, 2013-12-24
@Runcorn

At a low level, it's simple. The main process listens on the socket and calls accept. Then there are two options: multiplex the requests (use select/epoll to pick out the sockets in the set that are ready for reading, and call recv or accept depending on which socket has data) or hand each socket off to a separate thread/process. Processes are more reliable, because if one crashes, the server itself is unaffected.
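The single-process multiplexing variant described above can be sketched like this (a minimal toy echo server, not any real server's code; Python's `selectors` module wraps epoll/kqueue, and the port number in the usage below is arbitrary):

```python
# One process, one event loop: the listening socket and all client sockets
# are registered with select/epoll (via selectors.DefaultSelector), and we
# call accept() or recv() depending on which socket became ready.
import selectors
import socket

def serve_once(port: int) -> None:
    """Multiplex connections until one client sends data; echo it and stop."""
    sel = selectors.DefaultSelector()       # epoll on Linux, kqueue on BSD
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", port))
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)

    done = False
    while not done:
        for key, _ in sel.select():         # blocks until some socket is ready
            sock = key.fileobj
            if sock is listener:            # the listening socket: accept
                conn, _ = listener.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:                           # a client socket: recv and reply
                data = sock.recv(4096)
                if data:
                    sock.sendall(b"echo: " + data)
                sel.unregister(sock)
                sock.close()
                done = True                 # toy server: stop after one client
    sel.close()
    listener.close()
```

The key property is that a single thread serves many sockets: nothing blocks except the `select()` call itself, which wakes up exactly when some socket has work to do.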
And then come the optimizations. Nginx, for example, uses multiplexing inside its worker processes. Multiplexing lets a single process use its resources more intelligently, while multiple processes let you handle more requests in parallel. At the same time, processing those requests does not interfere with the main process listening on the socket for new connections.
As for the workers themselves: since forking a new process is not a cheap operation, there is a practice called prefork. A number of ready-made processes are already running in memory, and when requests come in, they are simply handed over to them. As the number of requests grows, the number of workers grows too, which minimizes the losses from blocked processes.
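A minimal prefork sketch under stated assumptions (this is an illustration, not nginx's actual code; the fixed worker count and one-request-per-worker behavior are simplifications): the parent creates the listening socket, forks the workers up front, and every worker blocks in accept() on the same inherited socket, so the kernel hands each incoming connection to exactly one of them.

```python
# Prefork: fork ready-made worker processes before any request arrives.
# All workers inherit the same listening socket and block in accept() on it.
import os
import socket

def prefork_server(port: int, workers: int = 2):
    """Fork `workers` children; each serves a single connection and exits."""
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", port))
    listener.listen()

    pids = []
    for _ in range(workers):
        pid = os.fork()
        if pid == 0:                      # child: a prefork worker
            conn, _ = listener.accept()   # all workers block here together
            data = conn.recv(4096)
            conn.sendall(b"worker: " + data)
            conn.close()
            os._exit(0)                   # toy worker: one request, then exit
        pids.append(pid)
    return listener, pids                 # parent keeps the socket and pids
```

This also answers question 2 from the original post: with prefork, the child workers really do all wait on the same listening socket at once, rather than having connections passed to them by the parent.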
In general, google the C10k problem; you will find a bunch of request-handling strategies.
As for lighttpd, the documentation describes cases where you do need multiprocessing and shows possible problems with certain modules. There is no "don't use it" advice there.

Ilya Evseev, 2013-12-25
@IlyaEvseev

Yes, there is a point to them: by default, an asynchronous application runs in a single thread, i.e. it can load at most one processor core.
If one core is not enough and/or there is a risk that a long operation will block the thread, increase the number of workers.
In lighttpd the multi-worker mode is less carefully debugged, hence the warning.
In nginx there are no such problems; just set it to "auto".
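For reference, the corresponding nginx configuration might look like this (the `auto` value, which picks one worker per CPU core, has been supported since roughly nginx 1.2.5/1.3.8; the connection limit below is an arbitrary example):

```nginx
# nginx.conf fragment: let nginx start one worker process per CPU core
worker_processes auto;

events {
    # each worker multiplexes up to this many connections via epoll/kqueue
    worker_connections 1024;
}
```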

AxisPod, 2013-12-25
@AxisPod

The main point is that processing is built on an event model, using epoll/kqueue and similar mechanisms depending on the OS (which is why libraries like libuv, developed for node.js, and libevent fit this model so well). With a single event loop and no extra processes, only one core would be used. To fully utilize all the processor cores, one process is run per core. In theory, multithreading could be used instead, but that would require synchronization, which in turn would hurt performance.
