Docker: architectural questions about deployment and more
Good day! A few questions for people who know their way around Docker.
1) How do I organize access to several sites on one domain from the host?
So far I have done this: forwarded the nginx port and added 127.0.0.1 api.domain.local to the hosts file.
But what happens when there are several sites behind nginx?
2) How do I deploy multiple PHP projects using Docker?
Right now my idea is this: a separate repository for the docker environment and a separate repo for each PHP project; git clone them all, then mount the folder with the projects into the php and nginx containers, with the nginx ports forwarded.
3) Where should the crons that run tasks live?
Inside the container that contains the script the task runs?
In a separate container just for cron (but then how do I run tasks that live in other containers, even if those containers are on the same network)?
Or on the host system?
As I understand it, the correct way is to keep cron and its tasks on the host system and run the jobs themselves via docker exec ...?
4) How do I update software inside a Docker container? For example, my Dockerfile currently installs libxml2-dev (2.9.1) via apt-get install, while the latest version is 2.9.3.
How is updating to the latest software versions organized?
Do I have to bump every package by hand?
And if, say, apt-get update && apt-get upgrade is written in the Dockerfile, won't the software versions end up different across the dev, stage, and prod environments?
5) Is it possible to build an image on one server and then pull it from there onto other machines?
The idea being that only the build server is responsible for builds, and everyone else uses what it produces. (Images must be identical at all stages of development.)
6) Is it correct to keep Composer in the container with PHP?
If not, how do I put it in a separate container? Inherit from the PHP container and build one for each project separately?
And if I do keep it inside the container, it turns out I can't write npm install in its scripts, since Node has to live separately. How do I organize the deployment correctly in that case? Move all the build tools into separate containers and run the builds on the host?
6.5) Where do you keep tools like Composer, npm, gulp, webpack, and bower?
7) Is Composer needed on prod, or should everything arrive there inside the Docker image?
8) Should the log collector / database dumper be placed in the container with the project? In a separate container for each? Or kept on the host?
9) How do you organize the deployment of a project consisting of several applications?
How can I learn to deploy a large project onto a bare server in a reasonable time, and also to quickly set it up on a new team member's computer (at least in a virtual machine)?
1) nginx-proxy: a reverse-proxy container that routes requests to the right site container by hostname.
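A minimal sketch of that approach, assuming the jwilder/nginx-proxy image; the container names and the image names `my-php-site` / `my-other-site` are placeholders:

```shell
# Start nginx-proxy; it watches the Docker socket and regenerates
# an nginx config that routes requests by the Host header.
docker run -d --name proxy -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# Each site container just declares its hostname via VIRTUAL_HOST;
# the proxy picks it up automatically, no port juggling on the host.
docker run -d --name api -e VIRTUAL_HOST=api.domain.local my-php-site
docker run -d --name admin -e VIRTUAL_HOST=admin.domain.local my-other-site
```

With `127.0.0.1 api.domain.local` and `127.0.0.1 admin.domain.local` in the hosts file, both sites are reachable on port 80 through the one proxy.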
2) Copy the sources into the image (in the Dockerfile), build those images either locally or on a CI server, and push them to docker/distribution (either the paid Docker Hub or a registry you deploy yourself; with Docker that takes about 10 minutes).
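A sketch of that flow; `registry.example.com` and `myapp` are placeholder names:

```shell
# Build the image with the sources baked in
# (the Dockerfile contains something like COPY . /var/www).
docker build -t registry.example.com/myapp:1.0 .

# Push it to your registry so every other machine can pull it.
docker push registry.example.com/myapp:1.0

# Deploying your own registry really is quick: the official
# registry:2 image runs as a single container.
docker run -d -p 5000:5000 --name registry registry:2
```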
3) Right in the container with PHP. Or make a separate container for php-cli plus a separate container for the sources, and share them via volumes_from. Cron on the host is also a workable option, but in most cases it is not ideal.
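If you do go with host cron plus docker exec, as suggested in the question, a crontab entry can call into a running container; the container name `app` and the script path below are hypothetical:

```shell
# /etc/cron.d/myapp -- run the task every 5 minutes inside the
# already-running "app" container, logging output on the host.
*/5 * * * * root docker exec app php /var/www/bin/task.php >> /var/log/myapp-cron.log 2>&1
```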
4) Update the base image and rebuild; beyond that, organize it however suits your team.
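One common way to sketch this: rebuild with `--pull` and `--no-cache` so apt-get actually runs again on a fresh base image, and pin package versions in the Dockerfile if you need dev/stage/prod to match exactly (the version string below is only an example):

```shell
# Fetch the newest base image and rerun every Dockerfile step,
# instead of reusing cached layers with old package versions.
docker build --pull --no-cache -t myapp:latest .

# For reproducible versions across environments, pin inside the
# Dockerfile instead of relying on whatever apt resolves today:
#   RUN apt-get update && apt-get install -y libxml2-dev=2.9.3*
```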
5) Yes, see point 2.
6) In general, you could cheat here and store dependencies directly in the repository, i.e. commit the vendors directory. But you don't. By the time you run docker build for your images, all dependencies should already be installed. And for each of the development tools you listed there is already a ready-made container; just take it and use it.
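For example, the build tools can each run as a throwaway container that mounts the project directory, so nothing but Docker is installed on the host (the `node:lts` tag is one choice among many):

```shell
# Install PHP dependencies with the official composer image;
# vendors land in ./vendor on the host via the bind mount.
docker run --rm -v "$PWD":/app -w /app composer install

# Same idea for frontend dependencies with the official node image.
docker run --rm -v "$PWD":/app -w /app node:lts npm install
```

The `--rm` flag removes each container when the command finishes, so these behave like ordinary CLI tools.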
7) As we found out in point 6, there should be no Composer on prod. In general, you should simply "promote" the staging image to production; that way the risks during a release are minimal.
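Promoting an image is just retagging, never rebuilding; the registry and tag names here are placeholders:

```shell
# Take the exact image that passed staging and release it as prod.
docker pull registry.example.com/myapp:staging
docker tag  registry.example.com/myapp:staging registry.example.com/myapp:prod
docker push registry.example.com/myapp:prod
```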
8) Again, there are different approaches. I find it most convenient to connect directly from the container to, say, Sentry or Graylog and send the logs there. Alternatively, write logs to the container's stdout/stderr and aggregate them from the outside; there are plenty of options here too.
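Both variants sketched as commands; the Graylog address and the container/image names are assumptions:

```shell
# Variant 1: the app writes to stdout/stderr; read it with docker logs.
docker logs --tail 100 -f app

# Variant 2: ship container output straight to Graylog using
# Docker's built-in GELF logging driver.
docker run -d \
  --log-driver=gelf \
  --log-opt gelf-address=udp://graylog.example.com:12201 \
  myapp
```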
9) These are all separate containers, tied together with bash and docker-compose. All of this is deployed either through docker-machine plus CI, or just through CI. Docker-machine will only really be usable from version 0.7 or 0.8.
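With a docker-compose.yml describing the whole stack and prebuilt images in a registry, a deploy onto a bare server (or a new developer's machine) can boil down to two commands:

```shell
# Fetch the prebuilt images referenced in docker-compose.yml...
docker-compose pull

# ...and start the whole multi-container stack in the background.
docker-compose up -d
```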