How are the database structure and data updated in production?
Please explain how data is usually updated in production when the change also involves altering the structure of the database itself.
Example: suppose there is a users table with only two fields, id and fio. The fio field holds the full name as a single string, in the form "First Name Last Name Patronymic". Then it becomes necessary to change the structure of this table and split the fio field into three fields: surname, firstname, middlename. But in production this table already holds 1000+ records.
If I make the change locally and run the migration in production, the data will be lost. As I understand it, the user data is first saved off somewhere and then loaded back. How is that done? And where should such data changes be made, locally or in production?
The transformation of the existing data is specific to each task.
You should start here: https://habrahabr.ru/post/146901/
Most likely you will conclude that it is better to leave the name as it is, in a single field. If other forms of address are needed in certain places, store them explicitly instead of constructing them from the usual misconceptions about full names.
That said, the general procedure is:
0) an extra backup is taken
1) an ALTER TABLE adding the new fields is rolled out
2) a version of the application is deployed that writes to both the old and the new structure, but still reads only from the old one
3) the existing data is converted by a separate process, in a loop, in small batches (this applies if it can be handled automatically; see the sketch below)
4) a version of the application that uses only the new structure is deployed
5) the original structure is archived and dropped
For a mere thousand records it may be simpler to just lock the table for a few seconds and do everything in one pass.
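Here is a minimal sketch of the batched backfill from step 3. It assumes a PostgreSQL database accessed through psycopg2, that the new columns already exist (step 1), and that fio splits cleanly on whitespace into exactly three parts; the connection string, batch size and the split order are placeholders, not a finished solution.

```python
# Sketch of step 3: backfill the new columns in small batches.
# Assumptions: PostgreSQL via psycopg2, the new columns were already added
# by the ALTER TABLE from step 1, and fio splits on whitespace into three
# parts. Adjust the DSN and the split order to the actual data format.
import psycopg2

BATCH_SIZE = 500

conn = psycopg2.connect("dbname=app user=app")  # hypothetical DSN

last_id = 0
while True:
    with conn.cursor() as cur:
        # Take the next batch of rows that have not been converted yet.
        cur.execute(
            """
            SELECT id, fio FROM users
            WHERE surname IS NULL AND id > %s
            ORDER BY id
            LIMIT %s
            """,
            (last_id, BATCH_SIZE),
        )
        rows = cur.fetchall()
        if not rows:
            break

        for user_id, fio in rows:
            last_id = user_id
            parts = (fio or "").split()
            if len(parts) != 3:
                continue  # leave unusual names for manual processing
            surname, firstname, middlename = parts
            cur.execute(
                "UPDATE users SET surname=%s, firstname=%s, middlename=%s"
                " WHERE id=%s",
                (surname, firstname, middlename, user_id),
            )
    conn.commit()  # commit each small batch so locks are held only briefly
```

Committing after every batch keeps locks short, and rows that do not split into exactly three parts are simply left untouched so they can be handled by hand later.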
If the conversion cannot be trivially automated, then instead of step 3 the following is often done: the primary key and the source data are exported, transformed by whatever means are convenient, and a CSV with (primary key, source_data, new_data) is prepared. It is loaded into a temporary table, and then a multi-table UPDATE joins the two tables, re-checking against source_data that the value has not changed in the meantime. After that, the rows for which no update happened are exported again, parsed, updated again, and so on until everything is filled in. A sketch of this variant follows below.
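A sketch of that temporary-table variant, under the same assumptions (PostgreSQL, psycopg2). The file name fixed.csv and its column layout are hypothetical; the point is the COPY into a temporary table followed by a multi-table UPDATE that re-checks the original value.

```python
# Sketch of the manual-transformation path: load a prepared CSV into a
# temporary table, then apply it with a multi-table UPDATE.
# Assumptions: PostgreSQL via psycopg2; fixed.csv has the columns
# id,source_fio,surname,firstname,middlename (a hypothetical layout).
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # hypothetical DSN

with conn, conn.cursor() as cur:
    # 1. Temporary table matching the prepared CSV.
    cur.execute(
        """
        CREATE TEMP TABLE users_fix (
            id          bigint PRIMARY KEY,
            source_fio  text,
            surname     text,
            firstname   text,
            middlename  text
        )
        """
    )

    # 2. Load the manually prepared data.
    with open("fixed.csv") as f:
        cur.copy_expert(
            "COPY users_fix FROM STDIN WITH (FORMAT csv, HEADER true)", f
        )

    # 3. Multi-table UPDATE, re-checking that the source value has not
    #    changed since the data was exported.
    cur.execute(
        """
        UPDATE users u
        SET surname = f.surname,
            firstname = f.firstname,
            middlename = f.middlename
        FROM users_fix f
        WHERE u.id = f.id
          AND u.fio = f.source_fio
        """
    )
    print("rows updated:", cur.rowcount)
    # Rows whose fio changed in the meantime are left untouched; export
    # them again, fix, and repeat until everything is filled in.
```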