How to synchronize only the DB structure?
I develop sites in PHP + MySQL. I work remotely and from different machines, each of which has a local LAMP server, plus there is a remote production server.
Script synchronization is not a problem: Git and deployment scripts cover all my needs. I would like to have the same kind of tooling for the structure of the MySQL tables.
Here is an example of the problem. While working on a site, I repeatedly change the structure of the local database, which holds test data. At the end of the working day I update the scripts and then frantically try to remember what changes I made to the local database during the day, so that I can reproduce them exactly on the live site's database with the real data. In a couple of weeks I am leaving on a business trip and will continue working from a netbook whose local database is hopelessly outdated, and I no longer remember what needs to be changed in it. Taking a full dump from the production server is not always convenient.
I probably need a program that can build a diff of two database structure dumps and turn that diff into a set of SQL commands (ALTER TABLE, etc.) that brings a database from one structure to the other. Does such software exist?
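To make the problem concrete, here is roughly the pipeline I have in mind; the dump commands are real, the last step is the hypothetical tool I am looking for, and the host/database names are made up:

mysqldump --no-data -u user -p site_db > schema_local.sql
mysqldump --no-data -u user -p -h prod.example.com site_db > schema_prod.sql
# ...and then some tool that reads both files and prints the ALTER TABLE
# statements needed to turn schema_prod.sql into schema_local.sql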
Devart dbForge Studio for MySQL is a free program. It can synchronize both the data in the tables and the structure alone.
A while back I wrote something like this to auto-update an engine; a couple of the functions may come in handy:
pastebin.com/qRRQyEyJ
I know for sure that Doctrine ORM has a migrations extension, an analogue of the Rails one that drakmail wrote about, but that is unlikely to help the author.
In general, look for projects with the keyword "migrations".
For example, a quick GitHub search led to this project.
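If it helps, a migration in the Doctrine extension is essentially a class like the following. This is only a minimal sketch: the exact namespace and base class depend on the Doctrine Migrations version, and the table and column names here are made up.

<?php
use Doctrine\DBAL\Migrations\AbstractMigration;
use Doctrine\DBAL\Schema\Schema;

class Version20111014120000 extends AbstractMigration
{
    // applied when upgrading the schema to this version
    public function up(Schema $schema)
    {
        $this->addSql('ALTER TABLE users ADD last_login DATETIME DEFAULT NULL');
    }

    // applied when rolling the schema back past this version
    public function down(Schema $schema)
    {
        $this->addSql('ALTER TABLE users DROP COLUMN last_login');
    }
}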
The eternal deployment problem. If the site is small and not updated often, the easiest way is to download the live database at night, make the changes, and upload it back.
If the site is large, then migrations are written: code that upgrades/downgrades the database from version N to version N+1, plus a script that checks the current database version and applies the necessary migrations (a rough sketch of such a script is below).
But this option, in my opinion, is a dubious pleasure, since the migration code has to be written by hand. I am already poking at the database with queries anyway, so why can't the migrations be generated automatically somehow? Why do everything twice? Then again, ask Ruby fans why they like doing the same job twice.
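For reference, the version-checking script can be quite small. A rough PHP sketch, assuming numbered files like migrations/001.sql with one statement each and a one-row schema_version table; all the names here are made up:

<?php
// Apply all pending numbered migrations (a sketch, not production code).
$pdo = new PDO('mysql:host=localhost;dbname=site_db', 'user', 'password');
$pdo->exec('CREATE TABLE IF NOT EXISTS schema_version (version INT NOT NULL)');

$current = (int)$pdo->query('SELECT COALESCE(MAX(version), 0) FROM schema_version')->fetchColumn();

foreach (glob('migrations/*.sql') as $file) {     // glob() returns the files sorted by name
    $version = (int)basename($file, '.sql');      // e.g. migrations/002.sql -> 2
    if ($version <= $current) {
        continue;                                 // already applied
    }
    $pdo->exec(file_get_contents($file));         // assumes one statement per file
    $pdo->exec('DELETE FROM schema_version');
    $pdo->exec('INSERT INTO schema_version (version) VALUES (' . $version . ')');
    echo "applied $file\n";
}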
Migrations. Especially since most engines/frameworks are moving in this direction now.
If the data is just test data, you can keep a single script that records the database structure and keep it under version control too (in Git, I mean).
As for software, diffs and the like: as you have already been advised, just write all the changes straight into an SQL file. If I know I am going to change a lot in the database, I start an SQL file named after the day and write all the queries into it as I go; for example:
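A hypothetical 2011-10-14.sql (the file name and the statements are just an illustration):

-- 2011-10-14.sql: everything changed in the schema today
ALTER TABLE users ADD COLUMN phone VARCHAR(20) DEFAULT NULL;
CREATE INDEX idx_orders_created ON orders (created_at);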
For such purposes, you can use a remote test server. Deploy to a production server using Mercurial/Git and migrations.
Advantages of this approach:
- everything is in one place (files, databases), no problems with synchronization
- there is no need to install LAMP on all computers / laptops / netbooks
- the two previous points provide significant time savings
Cons:
- you usually have to pay for a remote test server, although it is inexpensive (for example, at Selectel such a machine costs me 275 rubles a month) and it ultimately pays for itself (see the pros)
- theoretically, outages at the hosting provider are possible, but that has not happened with my test server