MySQL
pashaxp, 2015-05-13 13:45:15

What is the best way to organize fault tolerance between data centers?

Good afternoon.
There is a project whose most valuable and most frequently changing part is its MySQL database.
The code, by contrast, rarely changes and is easy to restore.
The task is this: organize disaster recovery so that the database stays in working order and keeps, ideally, the very latest transactions, or at worst N−m of them (where N is the full, fairly large set of records in the database and m is the handful of transactions lost when the virtual machine running the main system crashed and the last few requests could not be processed).
The idea is to host the database on some highly fault-tolerant platform like Amazon, where MySQL will always (or almost always) be up. The virtual machine with the application code, whichever data center I place it in, will be configured to connect to that Amazon-hosted database.
With fast nameservers I will switch the project's entry point by changing the domain's A record, and later I will automate that switchover.
As I see it, this approach is not particularly expensive and should work.
But maybe there is something better?

5 answers
pashaxp, 2015-05-13
@pashaxp

Every operating system does DNS caching. If there is no way around it, the DNS idea is most likely a dead end. And it also depends on how long the cache keeps entries.

The focus here is on users. If the main server crashes, the A record has to be changed, otherwise the project will be unreachable. Caching can largely be neglected: as far as I know, a record is cached for no longer than its TTL, and if the TTL in the nameserver settings is small enough, the switchover will be quick. This is how all the large public services work; their A records change all the time.
I would also like to understand how the project will behave with several A records and, accordingly, several front-end entry points. In that case you really cannot do without MySQL master-master...
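
A rough sketch of how this could be watched from the client side, assuming the dnspython package (the resolve() call is the 2.x API) and using example.com as a stand-in for the real domain; the TTL it prints is the worst-case time resolvers may keep serving the old A record after a switchover.

    # Print the A records a resolver currently returns and their TTL.
    # Assumes dnspython 2.x is installed; "example.com" is a placeholder.
    import dns.resolver

    def check_entry_point(domain):
        answer = dns.resolver.resolve(domain, "A")
        # The TTL bounds how long resolvers may cache the record, i.e. the
        # worst-case delay before a changed A record reaches all clients.
        print("TTL: %d seconds" % answer.rrset.ttl)
        for record in answer:
            print("A record: %s" % record.address)

    if __name__ == "__main__":
        check_entry_point("example.com")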

Puma Thailand, 2015-05-13
@opium

For geographically separated data centers, replication is the only option; no cluster will run properly across widely spaced DCs.
Set up a master-slave replica and switch over to the second server when problems occur.
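
As an illustration only, here is what the application-side part of that switchover might look like in Python, assuming mysql-connector-python and the placeholder hostnames db-master.example / db-replica.example; promoting the replica so it accepts writes is a separate step not shown here.

    # Try the master first, fall back to the replica if the connection fails.
    # Hostnames and credentials are placeholders; assumes mysql-connector-python.
    import mysql.connector
    from mysql.connector import Error

    DB_HOSTS = ["db-master.example", "db-replica.example"]  # in order of preference

    def get_connection():
        last_error = None
        for host in DB_HOSTS:
            try:
                return mysql.connector.connect(
                    host=host,
                    user="app",
                    password="secret",
                    database="project",
                    connection_timeout=3,  # fail fast so the failover is quick
                )
            except Error as exc:
                last_error = exc
        raise RuntimeError("no database host reachable") from last_error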

He11ion, 2015-05-13
@He11ion

What about a cluster with replication? Also, instead of MySQL I would advise looking towards Postgres.

Andrew, 2015-05-13
@dredd_krd

With fast nameservers I will switch the project's entry point by changing the domain's A record, and later I will automate that switchover.

Every operating system does DNS caching. If there is no way around it, the DNS idea is most likely a dead end. And it also depends on how long the cache keeps entries.
It seems to me you could set up master-master replication between the two storages and, in the front-end code (if it lives in a third location), switch the database connection from one storage to the other on failure (perhaps even automatically). Once connectivity is restored, the lagging database will catch up, and you can switch back if that location is the best one in terms of performance.
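
To illustrate the "switch back once the lagging database catches up" part, a sketch that checks replication lag before moving writes back, using SHOW SLAVE STATUS (the MySQL 5.x wording current at the time); the host, credentials and lag threshold are made-up values, and mysql-connector-python is assumed.

    # Check whether a recovered server has caught up before switching back.
    # Host/credentials are placeholders; the 5-second threshold is arbitrary.
    import mysql.connector

    def replication_lag_seconds(host):
        conn = mysql.connector.connect(host=host, user="monitor", password="secret")
        try:
            cursor = conn.cursor(dictionary=True)
            cursor.execute("SHOW SLAVE STATUS")
            status = cursor.fetchone()
            if not status:
                return None  # replication is not configured on this host
            # None while the SQL thread is stopped, otherwise lag in seconds
            return status["Seconds_Behind_Master"]
        finally:
            conn.close()

    def safe_to_switch_back(host, max_lag=5):
        lag = replication_lag_seconds(host)
        return lag is not None and lag <= max_lag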

Cool Admin, 2015-05-13
@ifaustrue

As for fault tolerance of the traffic entry point: you can, for example, rent a small virtual machine on the same Amazon and run nginx on it, proxying traffic to your own virtual machines. Renting such a box costs pennies, and its availability will be many times higher. You will not have to build a contraption around DNS, and you can always serve a stub page if both backends go down.
