MongoDB
berkutxxx, 2014-10-27 19:19:13

How do I merge collections (~1 GB per day) from data-collection servers into the main (backup) server?

Weak VPS servers process the collected quotes and accumulate up to ~1 GB of data.
What is the best way to transfer this temporary data to a home machine, where disk space is effectively unlimited?
The home computer cannot run around the clock.
I planned to do this:
VPS:
1) collects quotes in MongoDB;
2) transfers them to the main server once a day;
3) clears (drops?) the MongoDB database (a cleanup sketch follows this list).
Home computer:
1) Between 18:00 and 24:00 it pulls the data accumulated on the VPS and merges it into a collection (so the database grows by about 1 GB per day on average).
2) Later the database will be split into shards.
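
For step 3 on the VPS, a minimal sketch of the cleanup, assuming the daily quotes live in a database named "quotes" (a placeholder) and that it is only dropped after the transfer to the main server has been verified:

# drop the per-day database only after the dump/transfer succeeded ("quotes" is a placeholder name)
mongo quotes --eval "db.dropDatabase()"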


2 answers
lega, 2014-10-27
@berkutxxx

You can transfer data via mongodump -> mongorestore; on the destination server the data will simply be added (provided there are no _id collisions). You can also try the db.copyDatabase() command.
Over ssh you can set up a tunnel and dump directly to the local machine.
For automatic collection, though, I would set up key-based authentication and put a script in cron, so that it logs in by itself and does the dump/restore. Along the way you can compress with gzip so the data downloads faster.
For example, something like this makes a remote dump, compresses it, pulls it to the local machine, unpacks it and restores it:

# on the server: remove the previous dump, dump the "database" db and pack it into dump.tbz2
ssh -p 1022 server "cd /tmp/; rm -rf /tmp/dump.tbz2 /tmp/dump/; mongodump -d database; tar -cjf dump.tbz2 dump"
# on the local machine: clear the old dump, fetch the archive, unpack it and restore from /tmp/dump
rm -rf /tmp/dump/
scp -P 1022 server:/tmp/dump.tbz2 /tmp/
cd /tmp/; tar -xjf dump.tbz2; mongorestore

The compression itself can be done "on the fly" without creating a file.
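
A minimal sketch of that variant, reusing the placeholder host, port and database name from the script above; the compressed archive goes straight through the ssh pipe and is never written to a .tbz2 file on either side:

# remote side dumps and tars to stdout, local side untars from stdin and restores
ssh -p 1022 server "cd /tmp && rm -rf dump && mongodump -d database >/dev/null && tar -cjf - dump" \
  | ( cd /tmp && rm -rf dump && tar -xjf - && mongorestore dump )

Saved as a script, this could be scheduled on the home machine with a crontab entry such as 0 18 * * * /path/to/pull_dump.sh (the path is hypothetical).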

Dmitry, 2014-10-27
@zmeyjr

rsync
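
A minimal sketch of how rsync could be applied here, assuming the dump directory from the other answer (/tmp/dump on the VPS) and the same placeholder host and port; only changed files are transferred and the stream is compressed in transit:

# pull only the changed parts of the remote dump directory, then restore locally
rsync -az -e "ssh -p 1022" server:/tmp/dump/ /tmp/dump/
mongorestore /tmp/dump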
