MySQL
Roman Gor, 2013-12-04 16:11:25

MySQL Query Synchronization

Hello!
I want to share a problem we have at our company and hear how it might be solved.
The problem is keeping MySQL queries in sync across developer machines.
Let me explain in more detail.
N (say, 5) programmers work in our office, roughly the same number work remotely, and the full-time developers can also work from home. So a rough count of local virtual servers is: 5 (at the office) + 5 (remote) + 5 (office workers working from home) = 15 machines.
All our work ends up (via Git) on a pre-production server, let's call it site.alpha. Versioning of the project sources is a solved problem for us; we have been working with Git successfully for about a year. But how to keep MySQL queries in sync is not clear.
Right now it all works according to the following scheme. If I need to make some change to the database structure, I run the query on my local machine and copy it into a Patch.sql file, which I then commit to Git. Then, when my colleagues notice changes in the database patch file, they run the query on their own machines and append something like this to the patch file:

-- 06.02.13 11:05 Roman Ivanov [email protected] Work
-- 06.02.13 11:05 Boris Petrov [email protected] Home
....

That is, in the comments next to each query we keep a log of who has executed it. Awful, right?! The problem is further complicated by the fact that the senior developer has to constantly monitor these logs, and only after a query has been executed by all developers run that same query on site.alpha.
By now this Patch.sql has grown to several hundred MB.
I wanted to ask: how do you solve this problem in your company?
P.S. Of course I have heard about replication, but it doesn't seem to be what we need. What we need is something like this: a master-slave script of some kind that watches the state of the local database (slave) and the central one (master). If something changes on a slave, it produces the query needed to update the master and the remaining slaves and sends it to all the other servers; or it pushes everything to the master, and the slaves on the other local servers watch the master directly and pull the queries from there themselves, thereby updating themselves.
Phew... I got that off my chest.


5 answers
papahoolio, 2013-12-04
@papahoolio

Migrations.
The idea of migrations is to bring the creation and modification of the database structure into the program code. One change, one migration. Migrations have a sequence number and are executed one after another in that order, and the number of the last applied migration is always stored somewhere in the system.
The main rule of migrations: once a migration has made it into the project repository, it must not change! If you need to undo the changes of an already published migration, you write a new one that cancels or corrects the old one.
Developers should change the structure/contents of the database only through migrations.
Also, migrations are a great fit for code review.
For PHP:
For example in Yii:
www.yiiframework.com/doc/guide/1.1/ru/database.mig...
As a separate solution
https://github.com/ruckus/ruckusing-migrations
For Python:
https://pypi.python.org/pypi/alembic
For bash:
https://github.com/dwb/dogfish/blob/master/dogfish
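To make the idea concrete, here is a minimal sketch of such a runner in Python (the PyMySQL driver, the migrations/ folder, the schema_migrations table name, and the connection settings are all assumptions for illustration, not part of any of the tools linked above):

# migrate.py -- minimal migration runner sketch (illustrative, not production-ready)
import os
import pymysql  # assumed MySQL driver; any client library would do

conn = pymysql.connect(host="localhost", user="dev", password="dev", database="project")

with conn.cursor() as cur:
    # Table that remembers which migrations have already been applied.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS schema_migrations (
            version VARCHAR(64) PRIMARY KEY,
            applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )
    """)
    cur.execute("SELECT version FROM schema_migrations")
    applied = {row[0] for row in cur.fetchall()}

# Migration files are named 0001_add_users.sql, 0002_add_index.sql, ...
for name in sorted(os.listdir("migrations")):
    if not name.endswith(".sql") or name in applied:
        continue
    with open(os.path.join("migrations", name)) as f:
        sql = f.read()
    with conn.cursor() as cur:
        # Naive split on ';' -- good enough for simple DDL patches.
        for statement in filter(None, (s.strip() for s in sql.split(";"))):
            cur.execute(statement)
        cur.execute("INSERT INTO schema_migrations (version) VALUES (%s)", (name,))
    conn.commit()
    print("applied", name)

conn.close()

Running it again is a no-op: anything already recorded in schema_migrations is skipped, which is exactly the property that lets every developer (and site.alpha) run the same command after every pull.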

Sergey, 2013-12-04
Protko @Fesor

So you need to synchronize the database structure between developers? Migrations... and that's all. Keep them in git; whenever someone changes the database structure, a migration is written (or even better, change the database only through migrations) and pushed, the rest of the developers pull the changes and run the migrations up after each pull/merge. You can even hang this on post-hooks, as in the sketch below.
Synchronizing data between developers is unnecessary. If there is some mandatory data, it can go into a migration as well.
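A sketch of that post-hook idea (the hook is written here as an executable Python script, and migrate.py is the hypothetical runner from the sketch above):

#!/usr/bin/env python3
# .git/hooks/post-merge -- runs after every successful `git pull` / `git merge`.
# Don't forget: chmod +x .git/hooks/post-merge
# It simply re-runs the migration runner, which skips already-applied versions.
import subprocess
import sys

result = subprocess.run([sys.executable, "migrate.py"])
if result.returncode != 0:
    print("migrations failed -- fix the database before continuing", file=sys.stderr)
    sys.exit(result.returncode)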

Alexander, 2013-12-04
@kryoz

We use a git repo with migrations in the form of SQL files and a phing script to deploy/roll back the database structure; you can roll forward step by step to the current state and roll back as well. That is, each migration has two blocks: one for rolling forward and one for rolling back (a possible layout is sketched below).
A table in the database stores information about the current state.
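As an illustration of that two-block format (the -- // up and -- // down markers below are a made-up convention, not phing's actual syntax), a migration file could be split into its forward and rollback parts like this:

# split_migration.py -- illustrative parser for a two-block migration file.
# Example file 0007_add_orders.sql:
#   -- // up
#   CREATE TABLE orders (id INT PRIMARY KEY AUTO_INCREMENT, total DECIMAL(10,2));
#   -- // down
#   DROP TABLE orders;

def split_migration(text):
    """Return (up_sql, down_sql) using the hypothetical markers above."""
    up, down = text.split("-- // down", 1)
    up = up.replace("-- // up", "", 1).strip()
    return up, down.strip()

# Rolling back then means executing the "down" block and deleting that
# migration's row from the state table (e.g. schema_migrations).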

Max, 2013-12-04
@7workers

I once built my own solution for this: habrahabr.ru/post/80486/ The idea is almost the same as yours, except that instead of one file a new file is created each time, and each MySQL server "remembers" which patch it executed last.
But there are also ready-made solutions (read the comments).

evnuh, 2013-12-04
@evnuh

If you don't want to bother with migrations in code, as has already been suggested, then the simplest (quick-and-dirty) option is to put each batch of database-changing commands into a separate file in a folder that holds all such files, with the folder under git. Then hang a post-hook in git that runs a command (a small script) to apply each newly arrived file to the database of everyone who received it; see the sketch below.
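One possible sketch of that hook (the db_patches/ folder, the database name, and reliance on the mysql command-line client with credentials in ~/.my.cnf are all assumptions): it asks git which .sql files the merge just brought in and pipes each one into the local database:

#!/usr/bin/env python3
# .git/hooks/post-merge -- apply only the .sql files this merge added.
import subprocess

# Files added between the pre-merge state (ORIG_HEAD) and the new HEAD.
out = subprocess.run(
    ["git", "diff", "--name-only", "--diff-filter=A", "ORIG_HEAD", "HEAD", "--", "db_patches/"],
    capture_output=True, text=True, check=True,
).stdout

for path in sorted(filter(None, out.splitlines())):
    if not path.endswith(".sql"):
        continue
    with open(path) as f:
        subprocess.run(["mysql", "project_db"], stdin=f, check=True)
    print("applied", path)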
