MySQL
web_dev, 2019-10-09 09:00:51

Speeding up migrations/changes of tables in MySQL?

I would like to improve/speed up our MySQL database migrations.
We have a standard approach: the scripts live in Liquibase, a new version is rolled out, and the scripts are run.
But the production database is quite large, and altering a table with 10-15 million records (adding a column, changing values) can take hours. That leads to table locks and so on, so we try to deploy changes when there isn't much traffic.
This is despite running a fairly powerful Galera Cluster.
Would moving from MySQL to PostgreSQL partially solve the problem?


4 answers
Dmitry Shitskov, 2019-10-09
@Zarom

For MySQL I found these somewhat involved options:
https://m.habr.com/ru/post/121129/
PostgreSQL will partially solve the problem, but only when a new column is added without a default value. Such an ALTER takes only a short lock to modify the table's metadata. That is, an empty column is created first, then you migrate data into it in batches, and at the end you set the DEFAULT value if needed.
A rule that helps with such migrations is to support database schema versions n and n-1 at both the database and the code level. This also simplifies the cases where you have to roll back to the previous version of the software.
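A minimal sketch of that PostgreSQL approach, using hypothetical table and column names (orders, status, id) that are not from the question:

```sql
-- Step 1: metadata-only change; takes only a brief lock
-- because there is no DEFAULT and no NOT NULL.
ALTER TABLE orders ADD COLUMN status text;

-- Step 2: backfill in small batches to avoid long row locks.
-- Repeat (e.g. from a script) until it updates 0 rows.
UPDATE orders
SET    status = 'new'
WHERE  id IN (
    SELECT id FROM orders
    WHERE  status IS NULL
    LIMIT  10000
);

-- Step 3: once the backfill is done, set the default for future inserts.
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'new';
```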

Mikhail Bobkov, 2019-10-09
@mike_bma

Try this:
https://github.com/github/gh-ost
We used it to migrate heavily loaded tables.

ayazer, 2019-10-09
@ayazer

1) GitHub's gh-ost.
+: works through the binary log, so it can migrate a table even if it has triggers.
-: works through the binary log, so it is (relatively) slow; at some point it starts copying data at roughly the same rate at which new changes arrive.
2) Percona's pt-online-schema-change.
+: works through triggers (which can load all available cores).
-: works through triggers, so if the table already has triggers, you won't be able to use it.
In both cases a copy of the table is created, which can be a problem if the table is a couple of TB.
Also, newer versions of MySQL let you use online DDL for the same tasks (see the sketch below).
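A hedged sketch of the online DDL route mentioned above, again with hypothetical table/column names (orders, status, note):

```sql
-- MySQL 5.6+/5.7: INPLACE still rebuilds the table in the background,
-- but allows concurrent DML; the explicit clauses make MySQL fail fast
-- instead of silently falling back to a blocking table copy.
ALTER TABLE orders
    ADD COLUMN status VARCHAR(20) NULL,
    ALGORITHM=INPLACE, LOCK=NONE;

-- MySQL 8.0.12+: adding a column can be a pure metadata change.
ALTER TABLE orders
    ADD COLUMN note VARCHAR(100) NULL,
    ALGORITHM=INSTANT;
```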

Oleg Kleshchuk, 2019-10-10
@xenozauros

We did it through ordinary asynchronous master-master replication.
The application works with master 1. We stop the slave on master 1, roll the migration out to master 2, switch the application to master 2, turn the slave on master 1 back on, wait for it to catch up, and check that nothing is broken. Then we switch the application back to master 1.
Something like HAProxy works well for the switchover.
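The same sequence as MySQL statements, as a rough sketch; the ALTER and table name (orders) are hypothetical, and the application switch itself happens outside the database (e.g. in HAProxy):

```sql
-- On master 1 (still serving the application): pause replication from
-- master 2 so the schema change does not arrive while traffic is here.
STOP SLAVE;

-- On master 2 (no application traffic): apply the migration.
ALTER TABLE orders ADD COLUMN status VARCHAR(20) NULL;

-- Switch the application to master 2 (flip the backend in HAProxy).

-- Back on master 1: resume replication and let it replay the ALTER
-- plus everything written to master 2 since the switch.
START SLAVE;
SHOW SLAVE STATUS\G  -- wait until Seconds_Behind_Master reaches 0

-- Once master 1 has caught up and looks healthy, switch the
-- application back to master 1.
```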
