How to set up a DevOps process with Django containers?
I would like to understand how to organize the build and test process before deploying Django to production.
At the moment there is a VDS on which PostgreSQL and Django run in Docker in separate containers. There are actually more containers (Celery and others), but that is not important here.
The process is currently organized as follows: on the developer's machine, the Docker image is built with the DEBUG flag enabled and the tests are run, then the master branch is pushed to the remote repository:
docker-compose -f local.yml build django
docker-compose -f local.yml up -d django
Then I go to the VDS via SSH, pull the master branch from the repository, build the image on the VDS (but with DEBUG disabled and a number of production settings), and the deployment takes place:
docker-compose -f production.yml build django
docker-compose -f production.yml up -d django
Everything builds and works, but there is a problem: the developer builds with slightly different settings and libraries than the production machine (note the different compose files in the build commands above). And it happens that after deployment the site crashes with an error that never showed up on the developer's machine.
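One common way to reduce that drift (a sketch of the standard docker-compose override mechanism, not something from the question; base.yml is a hypothetical file name) is to keep the shared service and build definitions in one base file and leave only the environment-specific overrides in local.yml and production.yml, so both environments build from the same Dockerfile:
# shared service and build definitions live in base.yml;
# local.yml and production.yml then only override DEBUG and other env settings
docker-compose -f base.yml -f local.yml build django        # developer machine
docker-compose -f base.yml -f production.yml build django   # production VDS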
How I see the process: after the master branch is pushed to git, the image is built on some build server, which then runs a series of tests, in particular checking the availability of a number of key pages. The production server does no building of its own; it gets a ready-made image from that build server. The problem is that the build server also needs a running PostgreSQL with some data in it, otherwise Django simply won't start.
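As an illustration of what such a build server could run (a hedged sketch only; the image name, registry address, published port, service names and smoke-test pages are all assumptions, not from the question):
#!/bin/sh
set -e
# build the production image on the build server, not on the production VDS
docker-compose -f production.yml build django
# bring up PostgreSQL and Django so the smoke tests have a working site
docker-compose -f production.yml up -d postgres django
docker-compose -f production.yml exec -T django python manage.py migrate
# check the availability of a few key pages (paths are hypothetical)
curl --fail --silent http://localhost:8000/ > /dev/null
curl --fail --silent http://localhost:8000/login/ > /dev/null
# on success, push the finished image to a registry; production only pulls it
docker tag myproject_django registry.example.com/myproject/django:latest
docker push registry.example.com/myproject/django:latest
The production server then replaces its build step with a docker pull of that image, so exactly the bits that were tested are what gets deployed.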
Is this process described anywhere? I googled, but everywhere the Django build is described without the SQL database. Are there open source or shareware solutions suitable for one small project?
Just run the database with the data on the build server, or, if the production database is large, trim it down.
Or maybe I did not understand the essence of the problem.
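A minimal sketch of how that could look (the dump file, database name and service names are assumptions):
# on production: make a trimmed dump, skipping data from the heaviest tables
pg_dump -U postgres --exclude-table-data='big_log_table' app > trimmed_dump.sql
# on the build server: start a throwaway PostgreSQL and load the dump,
# so Django has something to connect to during the tests
docker-compose -f production.yml up -d postgres
docker-compose -f production.yml exec -T postgres psql -U postgres -d app < trimmed_dump.sql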
How I see the process: after the master branch is pushed to git, the image is built on some build server, which then runs a series of tests, in particular checking the availability of a number of key pages.
In general, I wrote a detailed article on how to do exactly this - https://garantum.ru/article/avtomatizaciya-predpri...
The article on Garantum is interesting, thanks (my project uses almost the same stack and modules/services/containers), but it would be great if it were supplemented with information about the folder (volume) structure on the physical disk: where the Dockerfile and docker-compose.yml files actually live, where the application folder is, how it is linked to the Gitea repo, and how the data folder in that same Gitea is set up (also a somewhat murky point). Otherwise I'm trying to piece it together from different sources, but so far without much success :(
Thanks in advance :)
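For what it's worth, a layout that comes up often in such setups (purely illustrative; every path here is an assumption, not something from the article):
myproject/                      # root of the Gitea repo
    local.yml                   # compose file for development builds
    production.yml              # compose file for production builds
    compose/django/Dockerfile   # image definition for the django service
    app/                        # the Django application code itself
    data/postgres/              # bind-mounted database volume, listed in .gitignore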