Docker
Ramil, 2017-07-05 08:58:43

What should the Docker workflow look like?

After long and constant torment manually deploying applications to servers for testing and production, I wanted to automate this process at work.
I'm an ordinary developer and only indirectly involved with DevOps. I have little experience in this area, and we don't have a dedicated specialist for it, so I have to take everything into my own hands.
Right now we have several private projects, each consisting of a backend and a frontend. The projects are not big. The backend is built on the NodeJS + MongoDB + ElasticSearch + Redis stack.
To automate the deployment process, I started learning Docker, but I ran into some confusion about how everything is supposed to work. Most materials online answer the question "What is Docker?"; only a few answer "How do you work with it?", and even those describe the process superficially.
I figured out the Docker client and brought up docker-machine. I understood how to pull images from Docker Hub and run containers. And then... a dead end.
Many questions remained. What do I do next?
How do I create my own container consisting of the NodeJS, MongoDB, ElasticSearch, and Redis images?
Where is all of this stored?
How do I share project folders with Docker?
How do I integrate this with CI and CD?
I'm confused. I keep wondering: do I even need this? Maybe deploy over FTP the old-fashioned way?
In general, I'm asking for help figuring out how the process of deploying projects with Docker should be set up.
Or share materials that show in practice what to do and how to do it.


4 answers
paldraken, 2017-07-05
@rshaibakov

I'll try to describe it in simple terms without heavy terminology (DevOps folks, don't beat me up).
The next step I would recommend is to start using docker-compose.
It lets you describe your entire infrastructure in one configuration file, launch everything with a single command, and create aliases so the containers can talk to each other.
For example, say we have the following structure. I'm using PHP, but for NodeJS it would look similar.

project
   - src/   # project code, under version control in git
        - Dockerfile
        - phpfile1.php
        - phpfile2.php
        - etc.php
   - db_data/   # folder where the database files live (otherwise every container run would start the database from scratch)
   - docker-compose.yml
   - site.conf   # config for the nginx virtual host
   - nginx.conf   # nginx config

The interaction is configured in a dedicated file:
docker-compose.yml
version: '2'
services:
  nginx:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./src/:/app
      - ./site.conf:/etc/nginx/conf.d/site.conf
      - ./nginx.conf:/etc/nginx/nginx.conf
    links:
      - php
  db:
    image: mysql:5.7
    volumes:
      - ./db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: 123
      MYSQL_DATABASE: changeme
      MYSQL_USER: changeme
      MYSQL_PASSWORD: 123
    ports:
      - "33306:3306"
  php:
    build: ./src
    volumes:
      - ./src:/app
    depends_on:
      - db
    environment:
      PHP_DB_HOST: db:3306
      PHP_DB_USER: changeme
      PHP_DB_PASSWORD: 123

Here I use the nginx and mysql images from Docker Hub and my own php container, described in
src/Dockerfile
FROM php:fpm

# system packages needed by common PHP tooling (composer, etc.)
RUN apt-get update && \
  apt-get install -y \
    openssl \
    git \
    curl \
    unzip

# PDO extensions so PHP can talk to MySQL
RUN docker-php-ext-install pdo pdo_mysql

# copy the application code into the image and set the working directory
ADD . /app
WORKDIR /app

Now, with docker-compose up, we can launch all the containers with the desired configuration using a single command.
The containers talk to each other via the aliases;
for example, from php the connection to the database looks like this:
db.php
return [
    'class' => 'yii\db\Connection',
    'dsn' => "mysql:host=db:3306;dbname=donor", // "db" is the service name of the mysql container in docker-compose.yml
    'username' => getenv('PHP_DB_USER'), // environment variables for the container, also set in docker-compose.yml
    'password' => getenv('PHP_DB_PASSWORD'),
    'charset' => 'utf8',
];
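The day-to-day commands for driving all this are standard docker-compose subcommands, for example:

docker-compose up -d          # start all services in the background
docker-compose logs -f php    # follow the logs of one service
docker-compose ps             # check the state of the containers
docker-compose down           # stop and remove the containers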

We mount the code into two containers, php and nginx (the volumes sections). That is, an /app directory appears inside each container that points to the directory on the host machine. This is very convenient for development: you change the code and can refresh the page right away.
In production, I update the code via git from the repository and restart the containers (if necessary).
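As a rough sketch, that production update might boil down to something like this (assuming the layout above and a master branch; only the php image needs rebuilding here):

git pull origin master       # pull the new code on the server
docker-compose build php     # rebuild the php image if src/Dockerfile changed
docker-compose up -d         # recreate only the containers that changed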
P.S. This is one of the simplest approaches; of course, there are more "grown-up" and "correct" methods. But I hope my description helps you get off the ground with Docker.

tupen, 2017-07-06
@tupen

For production, a ready-made solution is more convenient: for example, a lightweight option (with multi-server support) is Flynn.io,
or Dokku for a single server.
And for the development environment, use not Docker but Vagrant.
The idea behind Docker is that the container is built anew every time for each new version of your software.
This fact is often overlooked by people trying to create a Docker container "once and forever".
CI/CD works like this:
you push the code to Git,
and, for example, GitLab fires test workers on git hooks.
The workers build fresh containers from the same Dockerfile (each time from scratch, for "purity of the experiment", i.e. for stable, reproducible results).
If the tests pass, the same container is pushed to the Docker Registry of the production system,
from where it is picked up by the orchestration / cluster-management system (the same Flynn).
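With GitLab CI, such a pipeline might look roughly like this (a sketch only: the job names, the npm test command, and the master branch are assumptions; the $CI_* variables are GitLab's built-ins):

.gitlab-ci.yml
stages:
  - test
  - release

variables:
  DOCKER_HOST: tcp://docker:2375   # talk to the docker:dind service

test:
  stage: test
  image: docker:latest
  services:
    - docker:dind                  # Docker-in-Docker, so the job can build images
  script:
    - docker build -t app:$CI_COMMIT_SHA .        # a fresh image on every run
    - docker run --rm app:$CI_COMMIT_SHA npm test

release:
  stage: release
  image: docker:latest
  services:
    - docker:dind
  only:
    - master
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA   # the tested image goes to the registry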
As an introduction to designing your system's architecture for containers, I recommend reading this short text:
https://12factor.net/ru/
As for a DBMS in Docker: cross yourself first. In general, think about whether you need this headache:
soar.name/ru/pro/half-a-year-with-docker-swarm-mod...
The benefit is obviously great.
But Docker is not a silver bullet.

copyhold, 2017-07-05
@copyhold

In my humble opinion...
there should not be one container that contains everything: the database, Node, and Elastic. These should be separate containers that talk to each other over their exposed addresses and ports.
Since there are several containers and managing them by hand is awkward, use docker-compose, which describes this whole setup.
There you also describe the external folders mounted into the containers (or some other Docker storage; I haven't tried that).
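For your NodeJS + MongoDB + ElasticSearch + Redis stack it might look something like this (a sketch only: the image tags, ports, environment variable names, and the ./src layout with its own Dockerfile are assumptions):

docker-compose.yml
version: '2'
services:
  app:
    build: ./src                 # the NodeJS app, with its own Dockerfile in ./src
    volumes:
      - ./src:/app
    ports:
      - "3000:3000"
    depends_on:
      - mongo
      - elasticsearch
      - redis
    environment:
      MONGO_URL: mongodb://mongo:27017/app     # "mongo" is the service alias below
      ELASTIC_URL: http://elasticsearch:9200
      REDIS_URL: redis://redis:6379
  mongo:
    image: mongo:3.4
    volumes:
      - ./mongo_data:/data/db    # keep the data on the host between runs
  elasticsearch:
    image: elasticsearch:5
  redis:
    image: redis:3.2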
That is, you still have to get the code onto the server and then start the container(s).

Nurlan, 2017-07-05
@daager

There are already plenty of answers here, including about docker-compose (a tool for defining and running multi-container applications).
