linux
Sergey, 2012-08-06 11:24:53

What is the best way to sync physical files used on two different servers?

Perhaps someone has faced a similar problem.
So, we have a “main” server where regular users work, and a “test” server that also has users, but (as the name implies) they are testers (regular users have no access there).
The environment on both servers is the same: Debian, nginx, php-fpm, Sphinx, Percona MySQL. The DBMS is in fact shared by both machines (the same data), so both see the same users, the same data about them, their records, and so on.
The question is this: how do I synchronize the physical files on both servers (images, Flash files, anything users upload, whether test or regular)? That is, files added on the main server must appear on the test one, and files added on the test server must be duplicated on the main one.
The files already occupy 2+ GB, and 50-70 MB are added per day.
P.S. I tried rsync (with the update and recursion flags): an update run takes more than 10 minutes, which is already abnormal.
P.P.S. I also tried sshfs + symlinks: some of the included (text) cache files fail with Permission denied (php-fpm could not be switched to root in this case), as do newly uploaded files (on the server where the symlink lives).

5 answers

script88, 2012-08-06
@Ualde

Use csync2.
A plus is that csync2 copes with fairly heavy projects: the config can be split into several parts, those parts can be run at different intervals, and it does not load the system much at volumes like this.
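A minimal /etc/csync2.cfg sketch for a two-node setup like this one; the hostnames, key path and data path are placeholders, not something taken from the question:

    group prj
    {
        host main.example.com test.example.com;   # both servers in one sync group
        key  /etc/csync2.key_prj;                  # shared key, generated with: csync2 -k
        include /home/prj/data;                    # the uploads directory
        exclude *.tmp;
        auto younger;                              # on conflict, keep the newer file
    }

csync2 is then run periodically on both hosts (e.g. from cron) as csync2 -x, which pushes local changes to the other member of the group.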

dgeliko, 2012-08-06
@dgeliko

As an option, set up GlusterFS and put the user directories on that volume. Its downside is that replication is triggered when files are accessed or listed, and it eats CPU under heavy load. As a crutch, put an ls -R into cron every 5 minutes (a sketch follows below). Overall it is good, but there are rough edges you have to finish off yourself.
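A rough sketch of that setup, assuming GlusterFS is already installed on both machines; the volume name, hostnames and paths below are placeholders:

    # on either node: create a 2-way replicated volume from one brick per server
    gluster volume create prjdata replica 2 main:/export/prjdata test:/export/prjdata
    gluster volume start prjdata

    # on both nodes: mount the volume where the application expects its files
    mount -t glusterfs localhost:/prjdata /home/prj/data

    # the cron "crutch" from the answer (crontab entry, not a shell command):
    */5 * * * * ls -R /home/prj/data > /dev/null 2>&1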

egorinsk, 2012-08-06
@egorinsk

Maybe you are using rsync incorrectly somehow? It has various checks to avoid copying more than necessary, and syncing 70 MB is a trivial job for it.
I would also advise changing the replication scheme: there is no need to copy (possibly erroneous) data from the test server onto the production one.
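For reference, a typical invocation for that kind of one-way push (user, host and paths are placeholders): -a keeps recursion, permissions and timestamps, and on repeat runs only changed files are transferred, so it should finish in seconds rather than minutes:

    rsync -az --partial --stats /home/prj/data/ prj@test:/home/prj/data/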

amgorb, 2012-08-06
@amgorb

We use lsyncd on projects with a large number of files and large volumes (hundreds of GB), and everything is fine. It uses rsync internally for the transfer; on an already-synchronized directory it runs fast, so 10 minutes can only be the first run.
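A minimal lsyncd configuration sketch (lsyncd 2.x, Lua syntax) that watches the upload directory with inotify and pushes changes to the test server over ssh; the hostname and paths are placeholders:

    -- /etc/lsyncd/lsyncd.conf.lua
    settings {
        logfile    = "/var/log/lsyncd.log",
        statusFile = "/var/log/lsyncd.status",
    }

    sync {
        default.rsyncssh,                 -- rsync over ssh to a plain directory
        source    = "/home/prj/data",
        host      = "test",
        targetdir = "/home/prj/data",
        delay     = 5,                    -- batch events for 5 seconds before syncing
    }

Note that lsyncd is one-directional: to push changes from the test server back to the main one you would run a second instance in the opposite direction, with the usual risk of conflicts if the same file changes on both sides.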

Dmitry Agafonov, 2012-08-15
@AgaFonOff

We mount one directory on top of the other via aufs. /etc/fstab:
none /home/prj/data-dev aufs br=/home/prj/data-dev-rw:/home/prj/data,auto 0 0
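With aufs the first branch listed (data-dev-rw here) is writable by default and the second is the read-only lower layer, so writes made through /home/prj/data-dev land in data-dev-rw while the shared /home/prj/data stays untouched. To apply and check the entry (standard commands, nothing project-specific):

    mount -a            # mounts everything from /etc/fstab, including the aufs entry
    mount | grep aufs   # verify that the union mount is active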
