How to properly deploy using docker registry?
Hello,
There is a Rails project that I now need to stand up on a server with nginx + puma. In fact, I have already stood it up, and I did everything with Docker: a separate container for nginx, and a container each for the staging and production branches of my Rails application. The database runs on a separate server, also in a Docker container. This was all built and brought up for the first time, i.e. this is my first experience with Docker.

I then decided to set up deployment; by this point I had read up a bit and was sure Docker was just great for deploying. As a first step I set up my own Docker registry, pushed to it from the dev machine, and on the host did a pull in the production directory. Here the registry did not behave quite as I expected: I thought that, like git, it would unpack the entire project structure inside the directory where I ran the pull, but it just pulled the image. I then did docker run, but for some reason the dev version of the project started.
That was the short version; now I will try to back up all of the above with configs and commands. :)
I use docker-compose because I didn't want to string together huge docker commands with piles of parameters. My docker-compose.yml looks something like this:
version: '2'
services:
  development:
    container_name: app_dev
    build: .
    expose:
      - "3000"
    network_mode: host
    environment:
      PORT: 3000
      RACK_ENV: development
      RAILS_ENV: development
      DATABASE_URL: 'localhost:27017'
  production:
    container_name: app_prod
    image: app_prod
    build:
      context: .
      dockerfile: Dockerfile-prod
    restart: always
    env_file: prod.env
    ports:
      - '8080:8080'
    volumes:
      - /puma
      - /puma/log
      - /puma/pids
  staging:
    ...
docker build -t app_prod .
docker-compose -f docker-compose.prod.yml build/up
docker-compose up production
You should still read the official documentation, and read it with some perseverance. Then there will be no confusion over the basic concepts, which is what creates the mess in your head.
Let's look at the tools you are using and what each one is for:
The docker command does not operate on files; it operates on images and containers, which Docker abstracts away from us as something ephemeral. That is, when you run docker pull, you do not download the image into the folder where you execute the command, and you certainly do not download any project files. All the command does is download the image into your local Docker store, so that the Docker daemon can start a container based on that image.
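To make this concrete, here is a minimal sketch of what pulling and running actually looks like (the image name app_prod and the registry host registry.example.com are placeholders for your own):

```shell
# Download the image from the registry into the local Docker store.
# Nothing appears in the current directory - the image lives in
# Docker's own storage area, not in your working directory.
docker pull registry.example.com/app_prod:latest

# Verify that the image landed in the local store.
docker images registry.example.com/app_prod

# Only now can the daemon start a container based on that image.
docker run -d --name app_prod registry.example.com/app_prod:latest
```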
The docker-compose command is a different tool entirely. All it does is read the specified YAML manifest and execute the corresponding Docker commands. It simply lets you describe the desired scenarios for working with Docker declaratively, in a convenient format. But neither docker nor docker-compose provides anything for transporting or versioning your manifests anywhere. Docker, again, juggles images and containers, nothing more.
Your manifest contains a build: directive. Because of it, docker-compose tries to build the image first, instead of just running a container from the image you pulled.
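For example, a production manifest meant to run the image pulled from the registry would reference it with image: only, with no build: directive at all (the registry host below is a placeholder):

```yaml
# docker-compose.prod.yml (sketch): no build directive, so
# docker-compose runs the pulled image instead of rebuilding locally.
version: '2'
services:
  production:
    container_name: app_prod
    image: registry.example.com/app_prod:latest
    restart: always
    env_file: prod.env
    ports:
      - '8080:8080'
```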
Regarding the advice to keep a separate docker-compose.yml per environment: that advice is correct. That is exactly how it was intended. In the manifest you describe not different environments (development, production), but a set of containers that should run together as one logical unit (the concept of PODs). Nothing forbids doing it the way you did, but it's akin to using a colander as a soup bowl: today you strain dumplings with it just fine, and tomorrow the soup leaks out somewhere it shouldn't.
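One service per file might look like this, reusing the development service from the question's manifest:

```yaml
# docker-compose.dev.yml (sketch): only what runs on the dev machine.
version: '2'
services:
  app:
    container_name: app_dev
    build: .
    expose:
      - "3000"
    network_mode: host
    environment:
      PORT: 3000
      RAILS_ENV: development
```

An analogous docker-compose.prod.yml would contain only the production service.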
Workflow in your case can be organized as follows:
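A sketch of one such workflow, assuming a private registry at registry.example.com (a placeholder) and a production manifest that references the image rather than building it:

```shell
# On the dev machine: build, tag and push the production image.
docker build -t app_prod -f Dockerfile-prod .
docker tag app_prod registry.example.com/app_prod:latest
docker push registry.example.com/app_prod:latest

# On the server: fetch the new image and restart the containers.
docker pull registry.example.com/app_prod:latest
docker-compose -f docker-compose.prod.yml up -d
```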
Bonus tip:
It's often convenient to automate building an image and uploading it to a remote registry using Makefiles.
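A minimal sketch of such a Makefile (the registry host and image name are placeholders for your own):

```makefile
REGISTRY := registry.example.com
IMAGE    := $(REGISTRY)/app_prod:latest

build:
	docker build -t $(IMAGE) -f Dockerfile-prod .

push: build
	docker push $(IMAGE)

deploy: push
	ssh user@server 'docker pull $(IMAGE) && docker-compose -f docker-compose.prod.yml up -d'
```

Then a single make deploy rebuilds the image, pushes it, and redeploys on the server.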