MongoDB
Ptolemy_master, 2018-01-30 04:27:07

What strategy should be used to handle a huge amount of data?

There is a MongoDB database behind a Node.js backend, connected through the mongodb driver. The front end lets the user run arbitrary queries. If the amount of data returned is too large, Node.js crashes with the following error:

>> FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
>> Exited with code: 3.
>> Error executing child process: Error: Process exited with code 3.

What can be done?
I see the following approaches (in theory):
1) Somehow determine (how?) how much data a given request would return and, if it exceeds a certain limit, refuse to execute the request and return an error message to the user.
2) If that is not possible, stop the query while it is in progress and return an error message (sketched below).
3) Failing that, the only thing left is to run some kind of daemon that immediately restarts the Node process whenever it crashes.
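
Roughly, I imagine option 2 like the sketch below (just a sketch: the byte budget and the size estimate are made up, and the collection and filter would come from the client request):

const MAX_BYTES = 50 * 1024 * 1024; // ~50 MB budget, an arbitrary number

async function findWithBudget(collection, filter) {
  const cursor = collection.find(filter);
  const results = [];
  let bytes = 0;

  while (await cursor.hasNext()) {
    const doc = await cursor.next();
    bytes += JSON.stringify(doc).length; // crude per-document size estimate
    if (bytes > MAX_BYTES) {
      await cursor.close(); // abort the query instead of exhausting the heap
      throw new Error('Result set too large, query aborted');
    }
    results.push(doc);
  }
  return results;
}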
What do you recommend? Thanks!


3 answers
xmoonlight, 2018-01-30
@Ptolemy_master

1. First, count ONLY the NUMBER of matching records: COUNT
2. Then select the required data with a constraint on the result size: LIMIT (see the sketch below)
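
For example, with the Node.js mongodb driver this could look roughly like the sketch below (the connection string, database, collection and threshold are placeholders; on older driver versions count() plays the role of countDocuments()):

const { MongoClient } = require('mongodb');

const MAX_DOCS = 10000; // arbitrary threshold, tune to your heap size

async function runQuery(filter) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    const collection = client.db('mydb').collection('mycollection');

    // 1. Count ONLY the number of matching records first.
    const count = await collection.countDocuments(filter);
    if (count > MAX_DOCS) {
      throw new Error(`Query would return ${count} documents, limit is ${MAX_DOCS}`);
    }

    // 2. Select the data itself with an explicit limit as a safety net.
    return await collection.find(filter).limit(MAX_DOCS).toArray();
  } finally {
    await client.close();
  }
}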

SEOD, 2018-01-30
@SEOVirus

Ptolemy_master, could you keep a separate field that records the amount of data in each record? That way you would know in advance how much data has accumulated for output, and you could keep returning output only until it exceeds the threshold.
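
If, say, each document carried a hypothetical sizeBytes field maintained on write, the total for a query could be checked with a cheap aggregation before fetching anything; a rough sketch:

// Sum a precomputed per-document size field over everything the filter matches.
async function totalSize(collection, filter) {
  const rows = await collection.aggregate([
    { $match: filter },
    { $group: { _id: null, total: { $sum: '$sizeBytes' } } }
  ]).toArray();
  return rows.length ? rows[0].total : 0;
}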

Ptolemy_master, 2018-02-02
@Ptolemy_master

Unfortunately, this is not possible in the database itself, since the database is not ours but the client's. We could, of course, keep a separate collection with this meta-information in our own database, but that would be very expensive. For now, we have decided to simply limit the result size.
