How to organize hot failover of a database for queries that need exclusive access?
Good afternoon.
There is now a need to design a service with fairly high fault tolerance, and the question arose of how to work with the database. The service has data that must be protected from concurrent editing (for example, a virtual money account in an online game: funds must not be debited from it for two purchases at the same time), and there is data such as a task queue for issuing rewards for an event, where a reward must never be issued twice (in case two nodes pull the same task from the database at the same time and both start executing it).
The first question is about architecture. We would like to organize automatic, fast failover of the DBMS: if the primary master fails for some reason, a second full-fledged node (read + write) automatically takes its place. In other words, it seems we need master-master replication. We looked at Percona XtraDB Cluster, but even a superficial search showed that it imposes a number of restrictions on the queries that can be executed, and the scheme is not simple overall. Question: is this the optimal solution for high availability, or are there other, simpler options based on master-slave replication?
The second question is about which queries are permissible in such a (master-master) architecture. The Internet describes two problems (which, it seems, even Percona does not fully solve): data desynchronization between the servers and the inability to use queries like SELECT ... FOR UPDATE or, more generally, to rely on locks. Given that master-master is nevertheless widely used, the kind of atomic operations I described above must somehow be performed under it. Question: how do you protect against the two cases described: a simultaneous purchase (balance check + debit) executed by two nodes, and two nodes picking up the same "task record" from the DBMS?
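A minimal sketch of how both races can be handled without SELECT ... FOR UPDATE, by folding the check and the write into one conditional statement; the table and column names (accounts, balance, tasks, claimed_by) are hypothetical, not from the question:

```sql
-- Hypothetical schema: accounts(id, balance), tasks(id, claimed_by, claimed_at).

-- 1) Purchase: check the balance and debit it in a single atomic statement.
--    If another node debited the account first, this UPDATE affects 0 rows
--    and the application rejects (or retries) the purchase.
UPDATE accounts
SET    balance = balance - 100        -- price of the purchase
WHERE  id = 42                        -- account being charged
  AND  balance >= 100;                -- check and write happen together

-- 2) Task queue: a node claims a task only if nobody has claimed it yet.
--    Whichever node's UPDATE reports 1 affected row owns the task; the
--    other node sees 0 affected rows and skips it.
UPDATE tasks
SET    claimed_by = 'node-a',         -- identifier of the worker node
       claimed_at = NOW()
WHERE  id = 7
  AND  claimed_by IS NULL;
```

These statements should still run inside transactions; on a Galera-based cluster, if two nodes commit conflicting changes to the same row, one transaction typically fails certification with a deadlock error and has to be retried by the application.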
Thanks
Master-master replication can cause problems that are difficult to untangle. At a minimum, it is worth guaranteeing that writes always go to only one of the cluster nodes at a time. And in your case, do not forget about transactions.
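If all writes really are routed to a single node at any given moment, the classic pessimistic-locking pattern mentioned in the question also works there; this is a sketch with the same hypothetical table names as above, not a complete implementation:

```sql
-- Assuming all writes currently go to one node, the account row can be
-- locked for the duration of the purchase.
START TRANSACTION;

SELECT balance
FROM   accounts
WHERE  id = 42
FOR UPDATE;                -- other transactions on this node wait here

-- The application verifies that balance >= price, then debits it:
UPDATE accounts
SET    balance = balance - 100
WHERE  id = 42;

COMMIT;
```

On a Galera cluster a FOR UPDATE lock is local to the node that takes it, which is exactly why routing all writes to one node (or using the conditional-UPDATE pattern above) matters.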
A cluster of 3 nodes survived the killing of any two of them just fine, with subsequent recovery and no manual intervention. However, I did not dig into the details of its configuration; as far as I remember, writes were still guaranteed to go to only one node, with switchover to another node handled by keepalived. I should note that our goal was survivability, not high load or balancing.
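A minimal sketch of the health checks that such a failover mechanism (keepalived or a monitoring script) could run against a node before sending writes to it; the status variables are standard Galera/wsrep ones, while the idea of gating the virtual IP on them is an assumption about the setup described:

```sql
-- Node belongs to the primary component of the cluster.
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';      -- expected: 'Primary'

-- Node has finished state transfer and is fully synced.
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'; -- expected: 'Synced'

-- Number of nodes the cluster currently sees (3 when everything is healthy).
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
```

Only a node that passes such checks should hold the address that the application writes to.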
A more authoritative answer is better left to a DBA, though.