System administration
Sergey Savostin, 2015-04-11 17:30:30

How can I optimize (or delay?) writing to a large number of files at the same time?

Please tell me if there is a software solution for this problem.
There is a dedicated server running 1000+ "daemons" that receive data over HTTP and write it to files.
The speed of receiving the data matters (the request rate must stay stable); the freshness of the data in the files does not (they are processed later), i.e. the data may appear in the files in batches, with a delay, in any way at all, as long as nothing is lost.
During peak hours, once the number of daemons reaches a certain level, the server's load average climbs above 100-200%, while CPU and memory usage stays at 10-20%. In other words, it is an I/O problem, and at that point the stability of the requests suffers.
Can I configure the system somehow, or apply some kind of caching, so that writing to the files does not block the server?
I have tried implementing this in C++, in PHP, and in Node.js; in every case it comes down to I/O.
Or can the problem only be solved with additional drives, preferably SSDs?
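For illustration, here is a minimal sketch of the kind of delayed, batched writing I have in mind (Node.js with TypeScript; the port, file naming, and flush interval are placeholders I made up). Each request only appends to an in-memory buffer, and a timer flushes the buffers to disk in batches, so the disk is touched far less often than once per request.

import { appendFile } from "node:fs/promises";
import * as http from "node:http";

// One in-memory buffer of pending lines per target file.
const buffers = new Map<string, string[]>();
const FLUSH_INTERVAL_MS = 5000; // placeholder; tune for the latency/IO trade-off

function enqueue(file: string, line: string): void {
  const buf = buffers.get(file);
  if (buf) buf.push(line);
  else buffers.set(file, [line]);
}

// Flush every non-empty buffer with a single append per file.
async function flushAll(): Promise<void> {
  for (const [file, lines] of buffers) {
    if (lines.length === 0) continue;
    buffers.set(file, []); // swap the buffer before the slow disk write
    await appendFile(file, lines.join("\n") + "\n");
  }
}

setInterval(() => void flushAll().catch(console.error), FLUSH_INTERVAL_MS);

// Each "daemon" only pushes to memory and answers immediately;
// the disk is touched only by the periodic flush.
http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const name = "data-" + (req.url ?? "default").replace(/\W/g, "_") + ".log";
    enqueue(name, body);
    res.end("ok");
  });
}).listen(8080); // placeholder port

The trade-off is that anything still sitting in the buffers is lost if the process crashes between flushes, so the flush interval has to be chosen with that in mind.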

1 answer
Sergey, 2015-04-11
@butteff

Could you write to a database first, and then from the database to the files?
Some databases, MongoDB for example, have this kind of caching out of the box.
Here is the relevant part of the article I am quoting:

Whenever possible, MongoDB tries to keep all data in RAM and flushes changed data to disk every 60 seconds. MongoDB lets you control both synchronization with disk and locking for the duration of a query.
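A minimal sketch of this idea, assuming Node.js with TypeScript and the official mongodb driver; the connection string, database, collection, and file names are placeholders. The daemons insert each record into MongoDB, and a separate job periodically appends everything new to a file in one batch:

import { appendFile } from "node:fs/promises";
import { MongoClient, ObjectId } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017"); // placeholder URI
await client.connect(); // top-level await (ESM)
const records = client.db("ingest").collection("records");   // placeholder names

// Called by the HTTP daemons instead of writing to a file directly.
export async function store(source: string, payload: string): Promise<void> {
  await records.insertOne({ source, payload, ts: new Date() });
}

// Separate exporter: every 60 s, append all records newer than the last
// exported _id to the target file in one batch.
let lastId: ObjectId | null = null;

async function exportBatch(): Promise<void> {
  const filter = lastId ? { _id: { $gt: lastId } } : {};
  const docs = await records.find(filter).sort({ _id: 1 }).toArray();
  if (docs.length === 0) return;
  await appendFile("export.log", docs.map((d) => JSON.stringify(d)).join("\n") + "\n");
  lastId = docs[docs.length - 1]._id;
}

setInterval(() => void exportBatch().catch(console.error), 60_000);

If losing a few records on a crash were acceptable, insertOne could also be given a relaxed write concern (for example { writeConcern: { w: 0 } }) to reduce per-request latency even further; that is the "managing synchronization with disk" mentioned in the quote.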
