Node.js
Yuri Puzynya, 2015-08-25 01:47:34

How do you feel about short-lived processes in Node.js?

Does anyone actually have positive experience with such an architecture?
Negative experience is also interesting to hear about: in what situations did you try to implement it, and why did you give it up?


2 answers
Alexander Prozorov, 2015-08-25
@3y3

Creating a new process for each user request is essentially a step back to the CGI protocol. It is slow and expensive in terms of resources; that is exactly why the FastCGI protocol appeared, cutting response time by keeping a long-lived process.
Without knowing the application's architecture, your question is hard to answer. On the one hand, Node is designed around a long-lived process, at least the main one. On the other hand, web-worker-style code, side processes started via spawn, and in some cases even cluster workers may well be short-lived, existing only to perform one-off tasks.
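A minimal sketch of the spawn-for-a-one-off-task variant mentioned above, using child_process.fork (a spawn variant with an IPC channel); worker.js and the message payload are hypothetical, not part of the original answer:

const { fork } = require('child_process');

// worker.js is a hypothetical script that does the one-time job,
// sends the result back over IPC and then calls process.exit()
const worker = fork('./worker.js');

worker.send({ task: 'resize-image', file: '/tmp/in.png' }); // hypothetical payload

worker.on('message', (result) => {
  console.log('task finished:', result);
});

worker.on('exit', (code) => {
  console.log('worker exited with code', code);
});

The process lives exactly as long as its task, which is the CGI-style trade-off described above.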
In general, you need to start from your task. I'm very curious what exactly got you thinking in this direction.

Timur Shemsedinov, 2015-09-01
@MarcusAurelius

I'll describe how things stand with this in Impress, and what does and doesn't suit me about it.
Right now the request-handling processes are spawned at startup and re-spawned when one crashes. I consider memory leaks and process crashes abnormal behavior, but in practice I don't write all the libraries I use, or even all the application code, so I have to deal with leaks and crashes somehow. Crashes are minimized, of course, because all application code runs in sandboxes, and in case of a leak you can simply create a new sandbox in the same process, restore all the necessary data structures in it, switch the reference from the old (leaked or corrupted) sandbox to the new one, and discard the old one. This is faster than spawning a new process.
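A rough illustration of that sandbox-replacement idea, assuming Node's built-in vm module (this is not the actual Impress code, only a sketch of switching the reference from a leaked context to a fresh one):

const vm = require('vm');

function createSandbox(state) {
  // expose only what the application code needs;
  // `state` holds the data structures restored from the old sandbox
  return vm.createContext({ console, state });
}

let sandbox = createSandbox({ counter: 0 });

vm.runInContext('state.counter++', sandbox); // application code runs inside the sandbox

function replaceSandbox() {
  const preserved = { counter: sandbox.state.counter }; // carry over the needed data
  sandbox = createSandbox(preserved); // the old, leaked context becomes garbage
}

replaceSandbox();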
But sometimes you want to branch the processing of a single request into a separate process so that it does not interfere with the others. For these cases I have now implemented process spawning that is transparent to applications, i.e. something similar to workers. The trouble is that processes start relatively slowly. In addition, there is also a task scheduler that must execute part of the business logic on a schedule.
Here is how I want to do it. In addition to the request-handling processes, keep a pool of prepared, already-running processes that have connected to the database and loaded everything they need into memory. They are connected to the parent process by a TCP socket, over which RPC runs (a full-fledged remote procedure call with support for callbacks, events, asynchronous invocation of several requests, etc.). When processing needs to be branched off or a scheduled task executed, instead of spawning a process, the first free process is taken from the pool and the order is sent to it over RPC. When it has finished, it returns a callback or an event via RPC.
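A simplified sketch of that pool. The answer describes a TCP socket and a full RPC protocol; here Node's built-in IPC channel (child_process.fork) stands in for both, and pool-worker.js is a hypothetical worker that connects to the database on startup and replies to { id, task } messages with { id, result }:

const { fork } = require('child_process');

const POOL_SIZE = 4;
const idle = [];           // pre-warmed processes waiting for work
const pending = new Map(); // call id -> callback
let nextCallId = 0;

for (let i = 0; i < POOL_SIZE; i++) {
  const child = fork('./pool-worker.js');
  child.on('message', (msg) => {
    const done = pending.get(msg.id);
    pending.delete(msg.id);
    idle.push(child);      // return the process to the pool
    if (done) done(null, msg.result);
  });
  idle.push(child);
}

// an "RPC" call: take the first free process and send it an order
function callWorker(task, callback) {
  const child = idle.shift();
  if (!child) return callback(new Error('pool exhausted'));
  const id = ++nextCallId;
  pending.set(id, callback);
  child.send({ id, task });
}

callWorker({ name: 'scheduled-report' }, (err, result) => {
  if (err) throw err;
  console.log('worker replied:', result);
});

A real implementation would also need to re-spawn crashed workers and to carry the callback and event semantics of the RPC described above.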
I seem to be ready for all of this: the RPC itself has already been written and debugged, and I will implement the rest soon. After that, using the same RPC (but with WebSocket transport), I'm going to connect client applications so that they become one whole with the server processes and workers.
