Nginx
Salavat Sitdikov, 2013-04-11 11:38:22

Servers in different DCs of different providers - how to do it?

Good day to all.
The task is to split the current server into three identical copies, so as not to depend on a single hosting provider.
What can you advise as a solution to this problem?
The nginx load-balancing part is clear, but a copy of all the files also has to be kept on the remote servers, and some of those files are videos weighing ~35-50 MB.
The solution that comes to mind is a cluster file system, but I'm not sure I'm thinking in the right direction.
If anyone has experience with this, I would be grateful for advice.


7 answer(s)
rukhem, 2013-04-18
@zona7o

Cluster FS solutions are all very slow.
We did it like this: one server is designated master-stat, and updates are uploaded only to it.
The other stat servers (in the other data centers), if they cannot find a file locally, fall back to this master-stat and fetch the missing file:
set $root /opt/www/img.domain.com;

location / {
    root $root;
    try_files $uri @master-stat;
}

location @master-stat {
    internal;
    # proxy_pass requires an explicit scheme
    proxy_pass http://img.master-stat.domain.com;
    proxy_set_header Host img.domain.com;
    # save the fetched response to disk so the next request is served locally
    proxy_store on;
    proxy_store_access user:rw group:rw all:r;
    proxy_temp_path /opt/tmp;
    root $root;
    break;
}
Plus, once a day, rsync from master-stat to all the others, to delete what is no longer needed and pick up any files that were updated in place (rare, but still necessary).

Alexander Kouznetsov, 2013-04-11
@unconnected

Actually, I would rather not tie the answer to specific implementations (nginx, apache, etc.), because I myself now work mostly with Windows Azure (I can describe a concrete implementation on it).
By tasks:
1. One of the servers must respond to a request even if another fails - this is handled with DNS tricks.
2. Files can be large. Here I would do something like this: the server that received a file informs the other servers that it has a new file, and they record this in their database. When a user requests the file from another server, that server downloads it, marks that it now holds a copy, and serves it to the user.
If we are talking about, say, blog entries, it is easier to replicate an entry to the other servers immediately on receipt.
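The notify-then-lazy-fetch scheme in point 2 can be sketched in a few lines (a toy model, all names hypothetical; in-memory dicts stand in for the metadata database and file storage):

```python
class StatServer:
    """Toy model of one file server in the scheme described above."""

    def __init__(self, name):
        self.name = name
        self.files = {}   # filename -> bytes held locally
        self.known = {}   # filename -> server that holds the original
        self.peers = []

    def upload(self, filename, data):
        """A user uploads a file here; peers only learn the metadata."""
        self.files[filename] = data
        for peer in self.peers:
            peer.known[filename] = self

    def serve(self, filename):
        """Serve locally if present; otherwise pull from the origin and cache."""
        if filename not in self.files:
            origin = self.known[filename]
            self.files[filename] = origin.files[filename]  # lazy replication
        return self.files[filename]


a, b = StatServer("a"), StatServer("b")
a.peers, b.peers = [b], [a]
a.upload("video.mp4", b"...video bytes...")
data = b.serve("video.mp4")  # fetched from a on first request, then cached on b
```

The second request to `b` for the same file is served from its local copy without touching `a`.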

Alexander Kouznetsov, 2013-04-11
@unconnected

In general, this is called a CDN; CDNs are used not only to speed up loading but also for fault tolerance.
A cluster FS, IMHO, is not an option: it is hard to get enough bandwidth over external channels for the setup to work stably.
I would build a replication mechanism.

Salavat Sitdikov, 2013-04-11
@zona7o

gag_fenix, we are looking for a reliable solution :) Thank you!
P.S. Sorry, this was meant as a reply to a comment.

polyakstar, 2013-04-11
@polyakstar

Virtualization plus replication between the DCs at the storage-system level.
Expensive.

rukhem, 2013-04-18
@rukhem

There is another version of this logic:
if a file is not found locally, go through the other stat servers and try to fetch it from one of them. But we abandoned that scheme. It is needed when updates are uploaded to different stat servers, and in our setup everything ended up being uploaded to just one anyway.
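For the record, that peer-to-peer variant can be expressed in nginx with an upstream group instead of a single named location (a sketch; the hostnames and paths are placeholders):

```nginx
# If the file is missing locally, try each peer in turn.
upstream stat_peers {
    server stat1.domain.com;
    server stat2.domain.com;
}

location / {
    root /opt/www/img.domain.com;
    try_files $uri @peers;
}

location @peers {
    internal;
    proxy_pass http://stat_peers;
    # a 404 from one peer moves the request on to the next one
    proxy_next_upstream error timeout http_404;
    proxy_set_header Host img.domain.com;
}
```

As the answer notes, this only pays off when uploads can land on any of the stat servers; with a single master-stat the simpler fallback above is enough.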

CLaiN, 2013-04-18
@CLaiN

Well, on Windows, for example, there is DFSR, which takes care of all the file-replication issues, plus round-robin DNS for fault tolerance between data centers. Database replication depends on the engine, but usually master-slave replication with a witness in a third DC is enough.
I suspect *nix systems have similar file-replication mechanisms of their own.
