What is the correct workflow for team programming and bigdata?
The department has several developers. We use git and create a branch for each task; after the task is tested, a pull request is opened into the master branch.
We work with big data (statistics), and all the data lives in one database. Other modules, after their tasks are developed, need to test the code against this data. When a programmer deploys his application (his branch) to a test stand (to show the work to the manager), he uses his own database (for testing), but he also needs data from the production database. What is the best way to set up this connection?
Option 1 (how it works now): the models contain a condition, so that if something is missing in the test database, the code reads it from the production database (with read-only rights). The interface stays the same. But now some tasks need write access.
Option 2: copy the required tables into the test database. After testing and merging into master, the code works against the production tables.
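A minimal sketch of option 1, with environment-based fallback. All names here (the environment variable, hosts, and settings dicts) are illustrative assumptions, not anything from the question:

```python
import os

# Hypothetical connection settings; hosts and database names are made up.
PROD_DB = {"host": "prod-db.internal", "name": "stats", "readonly": True}
TEST_DB = {"host": "test-db.internal", "name": "stats_test", "readonly": False}

def pick_database(env=None):
    """Return connection settings for the current environment.

    Mirrors option 1: on a test stand the code uses the test database,
    everywhere else it falls back to production in read-only mode.
    """
    env = env or os.environ.get("APP_ENV", "production")
    return TEST_DB if env == "test" else PROD_DB
```

The drawback the question points out is visible here: `PROD_DB` is marked read-only, so any task that needs writes cannot use this fallback.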
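Option 2 amounts to copying the needed tables from production into the test database before a demo. A minimal, self-contained sketch using SQLite (in production this would be `pg_dump`/`psql`, `mysqldump`, or a scheduled ETL job; the table and file names are illustrative):

```python
import sqlite3

def copy_table(src_path, dst_path, table):
    """Copy one table (schema and rows) from a source database file
    into a destination database file, replacing any existing copy."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    # Recreate the table schema in the destination.
    ddl = src.execute(
        "SELECT sql FROM sqlite_master WHERE type='table' AND name=?",
        (table,),
    ).fetchone()[0]
    dst.execute(f"DROP TABLE IF EXISTS {table}")
    dst.execute(ddl)
    # Bulk-copy the rows.
    rows = src.execute(f"SELECT * FROM {table}").fetchall()
    if rows:
        placeholders = ",".join("?" * len(rows[0]))
        dst.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    dst.commit()
    src.close()
    dst.close()
```

With this approach the application code always talks to a single database, and only the data is refreshed, so nothing changes when the branch is merged into master.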
Which option is better, and how do teams usually handle this?