Application dockerization?
I recently started learning Docker and I have a question: how do I write applications correctly so that they can later be transferred safely?
In tutorials, they usually build a container from a ready-made application - they write the code and then build it, or make a bash script that builds the container (by the way, I just can't understand why, since everything you need can be described in the Dockerfile).
I also found the option of starting an ubuntu container, writing the application inside it, and then just dropping the modified container wherever it's needed - how much of a hack is that?
How, and in what form, should an application ultimately be developed so as to avoid headaches later?
Too broad a question.
I'll explain using a small Django application as an example.
So - you need at least two containers: one with the application and one with the database.
For the database, take the stock postgresql container and map a directory on your local system into it. That way the data is stored on your local system, and you don't need to rebuild the postgresql image.
Now the Django container. Take a base container with python3 and map the local directory containing the application into it. In that same local directory, create a Python virtual environment for running the project. That's all - no image rebuilding is required here either; the base image is used as-is.
Usually you start all of this with docker-compose. In the Django container you simply run a startup script that installs any missing packages into your local environment, runs migrations, collects static files, and actually starts uwsgi.
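A minimal docker-compose.yml sketch of such a setup could look like the following; the image tags, paths, and the start.sh name are my assumptions for illustration, not something from the answer itself:

```yaml
version: "3.8"
services:
  db:
    image: postgres:15                      # stock image, never rebuilt
    environment:
      POSTGRES_PASSWORD: example            # hypothetical credentials
    volumes:
      - ./pgdata:/var/lib/postgresql/data   # data lives on the host, survives container rebuilds
  web:
    image: python:3.11                      # stock image, never rebuilt
    working_dir: /app
    volumes:
      - ./app:/app                          # project source mapped in from the host
    command: sh start.sh                    # installs missing packages, migrates, collects static, starts uwsgi
    ports:
      - "8000:8000"
    depends_on:
      - db
```

With something like this, `docker-compose up` on a new machine pulls the images and brings the whole project up.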
That's actually all.
The project, along with docker-compose.yml and the start script, can easily be transferred anywhere. On startup, the necessary images are downloaded automatically, and the start script sets up the required environment and launches the project.
In general, it's worth reading https://12factor.net/ru/ - though that's more of a conceptual article.
As for the development process, there are two ways (and, of course, options in between):
1) Develop and test the old-fashioned way, on your laptop. When something works, pack it into containers, then repackage new versions.
2) The right way: before writing any code, create a repo in Git and write CI scripts for building images, deployment, testing, and so on.
Then on every commit (or at least on every PR), CI will run the tests, build the images, etc.
This is not as difficult as it was ten years ago - no need to install Jenkins or similar tools: GitHub has GitHub Actions (with a free plan), where you describe in YAML what needs to happen on each commit.
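As a rough sketch, such a workflow might look like this; the image name, branch, and test command are hypothetical:

```yaml
# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run the tests inside the image
        run: docker run --rm myapp:${{ github.sha }} python -m pytest   # hypothetical test command
```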
The Docker philosophy is to run one service per container.
For example, separate containers for MySQL, php-fpm, and nginx. They behave like machines on a local network: they see each other, and only nginx's port 443 sticks out to the outside world.
For convenient management of a multi-container application there is Docker Compose. All the services are concisely described in a single docker-compose.yml file. It's shorter than a bash script, and it's "the right way".
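For instance, a docker-compose.yml along these lines describes all three "boxes"; the image tags and host paths are my assumptions:

```yaml
version: "3.8"
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example          # hypothetical credentials
    # no "ports:" section - reachable only by the other containers
  php:
    image: php:8-fpm
    volumes:
      - ./src:/var/www/html
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # hypothetical config path
      - ./src:/var/www/html:ro
    ports:
      - "443:443"                           # only nginx is exposed to the outside world
    depends_on:
      - php
```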
Everything in one ubuntu container, by contrast, goes against this philosophy.