RAM vs file system?
Suppose we have a CLI application that parses a relatively large amount of data: it crawls a site, counts the number of div tags, and then writes the result to a report file.

Crawling an entire site takes some time; say we can crawl a small site in 30 seconds, collecting all the data into an array, and only after that build the report file from the array. Many applications work this way. Front-end build tools, for example, first load everything into RAM and only then write to the file system. I have run into situations where, with insufficient RAM, Composer or gulp could not install large packages (for example, on a server with less than 400 MB of RAM it failed to install gulp-image).

Back to the site analyzer: what if we run out of RAM, or some other error occurs? We are in an all-or-nothing situation: either we get the complete result, or we get nothing. The question is, why do applications do this? Why do they work in RAM rather than, say, write data incrementally to a file or database, so that in case of a failure there is at least a partial result?
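To make the trade-off concrete, here is a minimal sketch (in Python, with made-up page data and file names) of the buffer-everything pattern described above: nothing reaches disk until the very end, so a crash or OOM mid-crawl loses all of the work.

```python
import re

def count_divs(html: str) -> int:
    # Naive tag count; a real crawler would use a proper HTML parser.
    return len(re.findall(r"<div\b", html))

def crawl_buffered(pages):
    # All results are held in this list until the crawl finishes;
    # an out-of-memory condition or crash here loses everything at once.
    results = []
    for url, html in pages:
        results.append((url, count_divs(html)))
    return results

def write_report(results, path):
    # The report only comes into existence if the entire crawl succeeded.
    with open(path, "w", encoding="utf-8") as f:
        for url, n in results:
            f.write(f"{url}\t{n}\n")

# Hypothetical input: (url, html) pairs instead of real HTTP requests.
pages = [
    ("https://example.com/", "<div><div>hi</div></div>"),
    ("https://example.com/about", "<p>no divs here</p>"),
]
write_report(crawl_buffered(pages), "report_buffered.tsv")
```

The upside of this pattern is simplicity and speed (one sequential write at the end); the downside is exactly the all-or-nothing behavior the question complains about.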
If the process hits the memory limit set in the configuration (for example, PHP's memory_limit), it fails with an error; if it hits the physical limit of the server, the machine itself goes down.
Write the results to a queue; the queue handler then writes the report to a file, or adds further tasks to the queue, and so on. If you use a queue manager, it usually offers the option of persisting queues to the file system for reliability.
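The incremental approach suggested in this answer can be sketched as follows (again in Python, with hypothetical page data and file names): each page's result is appended to the report as soon as it is computed, so a crash leaves all previously written lines intact on disk.

```python
import re

def count_divs(html: str) -> int:
    # Naive tag count; a real crawler would use a proper HTML parser.
    return len(re.findall(r"<div\b", html))

def crawl_streaming(pages, report_path):
    # Append mode: if the process dies mid-crawl, every line written
    # so far is already in the file, giving a partial result.
    with open(report_path, "a", encoding="utf-8") as report:
        for url, html in pages:
            report.write(f"{url}\t{count_divs(html)}\n")
            # Flush after each page so at most one in-flight result is lost.
            report.flush()

# Hypothetical input: (url, html) pairs instead of real HTTP requests.
pages = [
    ("https://example.com/", "<div><div>hi</div></div>"),
    ("https://example.com/about", "<p>no divs here</p>"),
]
crawl_streaming(pages, "report_streamed.tsv")
```

Memory use stays constant regardless of site size, at the cost of many small writes; a persistent queue between the crawler and the report writer is essentially this same idea with the queue manager handling the durability.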