linux
by_EL, 2021-12-20 00:19:40

Setting up replication between two web servers?

There is a web server; for fault tolerance, I want to set up a second, replicating server as a standby in case something happens. How can I organize replication of the web files and folders between the two servers so that the files are synchronized periodically, preferably with permissions preserved?
Thank you very much in advance!


4 answers
shurshur, 2021-12-20
@shurshur

If we are talking about continuous synchronization of files, then lsyncd works very well. But under no circumstances should you try to use it to synchronize in both directions (by running two instances), only in one! (I tested this: even when writes go only in one direction, an incompletely written file sometimes gets synchronized in the wrong direction spontaneously.)
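A minimal sketch of such a one-way lsyncd setup (the host name and paths are placeholders, and default.rsyncssh assumes key-based ssh access from the primary to the standby):

    -- /etc/lsyncd/lsyncd.conf.lua: one-way sync of the web root to the standby
    settings {
        logfile    = "/var/log/lsyncd/lsyncd.log",
        statusFile = "/var/log/lsyncd/lsyncd.status",
    }
    sync {
        default.rsyncssh,
        source    = "/var/www",
        host      = "backup-host",   -- placeholder standby host
        targetdir = "/var/www",
        rsync     = { archive = true, perms = true, owner = true },
    }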
It is better not to use file synchronization for databases; they have their own replication methods.
Of course, none of this replaces backups and other organizational measures for deployment and support, but it lets you minimize the risk of losing the most recent data.

pfg21, 2021-12-20
@pfg21

For constant file synchronization I advise syncthing (or its commercial sibling Resilio Sync). They grew out of torrent technology, so full-fledged p2p synchronization across a bunch of systems is possible.
The daemon stays resident in memory and listens to inotify, so it can catch file changes instantly. It can diff files simply, carries a digital signature for each file (i.e. 100% delivery confirmation), handles disconnects and packet loss adequately, encrypts the channel, and so on.
There are some options for file permissions, but I have a heterogeneous cloud (lin, wine, android), so I turned them off.
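A minimal sketch of getting syncthing running on both servers (assuming a Debian/Ubuntu-style system; the service user www-data is a placeholder):

    # install from the distro repositories and start the packaged service unit
    apt install syncthing
    systemctl enable --now syncthing@www-data.service
    # the web GUI listens on 127.0.0.1:8384 by default; open it on each server,
    # exchange the two device IDs, and share the web-root folder between them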

Drno, 2021-12-20
@Drno

rsync
rclone
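For the periodic, one-way case in the question, a minimal rsync sketch (the host and paths are placeholders; -a preserves permissions and ownership, which requires root on the receiving side, and rclone would need a configured remote in place of the ssh host):

    # one-way sync of the web root over ssh, deleting files removed on the source
    rsync -a --delete /var/www/ backup-host:/var/www/
    # run it every 5 minutes, e.g. from /etc/cron.d/webroot-sync:
    */5 * * * * root rsync -a --delete /var/www/ backup-host:/var/www/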

rPman, 2021-12-20
@rPman

Synchronization at the file level has already been suggested (I hope nobody forgot that you need to copy from file system snapshots). It is especially difficult to set up when databases are involved, since each database has its own methods and limitations.
There is also a way to organize things so that synchronization happens at the block level. For example DRBD, where several block devices on the network are configured as a RAID1 mirror (a minimal sketch follows below), or file systems like GlusterFS, where network storage is organized on top of a file system, as well as options like Lustre or Ceph.
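A minimal DRBD sketch in the style of the 8.x configuration (host names, addresses, and the backing device /dev/sdb1 are placeholders):

    # /etc/drbd.d/r0.res: two nodes mirroring one block device as raid1
    resource r0 {
        protocol C;
        device    /dev/drbd0;
        disk      /dev/sdb1;
        meta-disk internal;
        on web1 { address 10.0.0.1:7789; }
        on web2 { address 10.0.0.2:7789; }
    }

    # on both nodes: create metadata and bring the resource up
    drbdadm create-md r0
    drbdadm up r0
    # on the node that should be primary (then mkfs and mount /dev/drbd0 there):
    drbdadm primary --force r0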
If the network connection is not stable, or is even intermittent, I would advise using a simple set of scripts to build a replication mechanism on btrfs snapshots and btrfs send. This stock mechanism lets you get the difference between two snapshots of the file system as a file (stream), send it to a remote machine, and either store it there or apply it as a patch of changes to a copy of the file system; the remote machine then holds a copy of the file system with a controlled lag (see the sketch at the end of this answer).
Since creating a snapshot is an atomic operation, this is also safer for databases than copying them live with conventional file-copying tools: the restored copy behaves as if the server had been stopped abnormally, which databases are prepared for, so with a high probability there will be no data loss, as opposed to the mess of mixed data you can get when files are copied while new data is being written to them.
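A minimal sketch of that btrfs send/receive replication (assumes /var/www is a btrfs subvolume; the host and snapshot paths are placeholders):

    # initial full copy: take a read-only snapshot and stream it to the standby
    btrfs subvolume snapshot -r /var/www /snap/www-1
    btrfs send /snap/www-1 | ssh backup-host btrfs receive /snap
    # later runs: take a new snapshot and send only the difference
    btrfs subvolume snapshot -r /var/www /snap/www-2
    btrfs send -p /snap/www-1 /snap/www-2 | ssh backup-host btrfs receive /snap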
