How to properly organize development using docker?
Yes, there are many questions like this already; let this be one more :)
I need to tidy up work on a large application and I'm a bit at a loss.
Given: 2 backends (business logic on Symfony, API on Lumen), 3 frontends (Vue), Sphinx, RabbitMQ, Supervisor, a lot more, plus service scripts in Go.
Right now everything lives in one repository and gets deployed with FileZilla and a lot of swearing; there is no Docker at all.
I want to split this whole mess into separate parts and set up a sane deployment process.
1. How to organize git? For now I'm inclined to make a separate repository for each part, i.e. one repo for Symfony, one for each frontend, and so on. How much will I regret this later? Will it be convenient for building and deployment? Where should docker-compose.yml live in that case? Or should I look at submodules instead?
2. Containers - how do I split everything up? For example, should Sphinx go in a separate container? What should that decision be based on?
3. Which deployment tool to choose for the first time? Preferably something simple.
4. How should I prepare a VPS for deployment? Do the OS and the virtualization type matter?
5. What to do about the server control panel (ISP)? It doesn't seem to get in the way, but you never know.
6. What to do with temporary and user files on the server (logs, avatars, etc.)?
As far as I understand, when deploying the next release, the old containers are torn down and new ones are spun up - is that right? How do I make sure nothing is lost?
I would appreciate it if you point me in the right direction.
A draft answer, since the details will differ for everyone - adapt it as you see fit.
1. There are fundamentally two approaches.

The first is one repository per artifact. It is quite convenient, since it lets you give each team access only to the repositories for the modules they work on, and it makes it easy to run separate release cycles for different modules within git. On the other hand, you immediately get the problem of integrating all these repositories into a single system. This is usually solved with a meta-repository that knows how to assemble the project from its pieces, or by including all the other repositories as submodules. Also, if there are many small repositories and you need to make parallel changes across several of them at once, that is very inconvenient for developers.

The other extreme is a monorepository, where the ENTIRE project lives in one repository. This is very convenient when you only ever ship ONE, latest version of the product, because everything is always built from a single commit: either all the modules evolve together and compatibility between them is guaranteed, or the code has to be fixed :) In this case you often have to think carefully about the project structure (for example, putting each module in its own directory), you lose the ability to work with external contractors (they would need separate repositories plus synchronization), and you end up writing wrappers so that only the changed parts get built rather than the whole project, since a full build can take very long. But yes, this approach is viable too. Until you try both yourself, you won't really be able to tell which is better.
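A minimal sketch of the meta-repository-with-submodules option. The component repos here (`backend`, `front`) are hypothetical stand-ins created locally just so the commands run end to end; in real life they would be your Symfony/Lumen/Vue repositories on your git server.

```shell
set -e
base="$(mktemp -d)"; cd "$base"

# Stand-in component repositories (names are hypothetical).
for part in backend front; do
  git init -q "$part"
  git -C "$part" -c user.email=dev@example.com -c user.name=dev \
      commit -q --allow-empty -m "init"
done

# The meta-repository ties the parts together and pins each one to a
# known commit; docker-compose.yml would live in this repo's root.
git init -q meta
cd meta
git -c protocol.file.allow=always submodule add "$base/backend" backend
git -c protocol.file.allow=always submodule add "$base/front" front
git -c user.email=dev@example.com -c user.name=dev commit -q -m "pin submodules"

# Lists each submodule with the commit it is pinned to.
git submodule status
```

A fresh checkout would then be `git clone --recurse-submodules <meta-repo-url>`, and bumping a module means committing a new submodule pointer in the meta-repo.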
docker-compose is good for development and for simulating a bunch of services locally. It is not so good for production.
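For the development case, a compose file for a stack like the one in the question might look roughly like this. All service names, build paths, and the Sphinx image are assumptions, not a definitive setup:

```yaml
version: "3.8"
services:
  backend:            # Symfony app (build context path is hypothetical)
    build: ./backend
    depends_on: [rabbitmq]
  api:                # Lumen API
    build: ./api
  front-admin:        # one of the Vue frontends
    build: ./front-admin
    ports: ["8080:80"]
  rabbitmq:
    image: rabbitmq:3-management
  sphinx:
    image: macbre/sphinxsearch   # community image; pick a pinned tag yourself
    volumes:
      - sphinx-data:/opt/sphinx/index
volumes:
  sphinx-data:
```

With this, `docker compose up` brings the whole sandbox up on a developer machine; production delivery is a separate concern.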
2. The ideal is one container per service. For development purposes you can use containers as a delivery mechanism for anything, and that is where monstrosities with several services stuffed into one container get born. For production that is not a good idea.
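"One container per service" in practice means, for example, a PHP-FPM-only image for the Symfony backend, with nginx and the Supervisor-managed workers in their own containers. A hedged sketch (base image version and paths are assumptions):

```dockerfile
# Runs ONLY php-fpm for the Symfony app; nginx and queue workers
# live in separate containers built from their own Dockerfiles.
FROM php:8.2-fpm-alpine
RUN docker-php-ext-install pdo_mysql opcache
WORKDIR /app
COPY composer.json composer.lock ./
# (run composer install here, then copy the sources; details omitted)
COPY . .
CMD ["php-fpm"]
```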
3. Ansible and GitLab CI.
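As an illustration of how the two combine, a .gitlab-ci.yml could build and push images, then hand off to an Ansible playbook. Job names, paths, and the playbook are hypothetical; the `$CI_*` variables are standard GitLab CI ones:

```yaml
stages: [build, deploy]

build-backend:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHORT_SHA backend/
    - docker push $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_SHORT_SHA

deploy-production:
  stage: deploy
  script:
    # e.g. a playbook that pulls the new images and restarts containers on the VPS
    - ansible-playbook -i inventory/production deploy.yml
  when: manual
```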
4. Everything matters; it depends on your capabilities and tasks. You should definitely avoid any OpenVZ; it is best to deploy to real virtual machines, which as a rule run on KVM. As for the operating system, take what you already know how to work with, or bring in specialists. I.e. the popular options are CentOS, Ubuntu, Debian. Everything else is worth considering only if you have some _special_ requirements. For example, CoreOS is a very cool thing if you run ONLY containers - nothing else, atomic updates - but it only works well on virtual machines. And what if you need to run on a bare-metal server? Then there are nuances.
5. You don't. It doesn't play well with Docker.
6. Think and design. It is very important to understand how the application will be launched, how many replicas there will be, and how they will interact and share common resources (files, database records, queues, etc.). As for files in Docker containers: to keep them safe, write everything that must persist to either a bind mount or a volume - then the data will not be lost when the container is deleted.
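Concretely, in a compose file that distinction looks like this (image name and paths are hypothetical): a named volume for user uploads that survives container removal, and a bind mount so logs are visible on the host.

```yaml
services:
  backend:
    image: myapp/backend:latest          # hypothetical image name
    volumes:
      - uploads:/app/public/uploads      # named volume: outlives the container
      - ./logs:/app/var/log              # bind mount: a host directory
volumes:
  uploads:
```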
> As far as I understand, when deploying the next release, the old containers are torn down and new ones are installed - is that so?
At a very high level - yes.