PHP
yiicoder, 2016-07-16 18:19:44

Continuous delivery, continuous integration, and Docker with a "multi-version" application: how to organize it?

We develop backends for social media applications.
1) We want to organize a proper application deployment process. The main problem is that the application is deployed to different platforms in different versions: for example, we develop new functionality and start testing it in production on VKontakte, while in parallel some other functionality is being tested on another platform. In other words, these are different versions of the same code base, and on some platforms the code has moved further ahead than on others.
Right now we use GitLab, Docker, and Ansible to deploy applications. The Ansible task is passed the id of the platform the application should be deployed to and the repository branch to take it from. Ansible updates the Docker container (the image itself is built manually), takes the source code from git, uploads it to the server, and the sources are passed into the containers through a volume.
We want to set up the full cycle through GitLab + pipelines. The question is how to properly build different versions for different platforms. The straightforward ("head-on") solution is to create a branch in git for each platform (release_vk, release_...); in CI the image would be built with the source code already inside, tagged with the platform's tag, and deployed to the server by that tag (latest_vk, latest_...), roughly along the lines of the sketch below.
I don't like that this ends up creating a bunch of tags and branches; maybe there is an alternative solution for this kind of task?
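(For illustration only: a minimal .gitlab-ci.yml sketch of such a per-platform pipeline. The job names, the deploy.yml playbook, and the platform/image_tag variables are made-up placeholders; only CI_REGISTRY_IMAGE is a standard GitLab CI variable.)

    # Hypothetical sketch: one build/deploy pair per platform branch.
    # Assumes the runner is already logged in to the registry.
    stages:
      - build
      - deploy

    build_vk:
      stage: build
      only:
        - release_vk
      script:
        # bake the sources into the image instead of mounting them as a volume
        - docker build -t "$CI_REGISTRY_IMAGE:latest_vk" .
        - docker push "$CI_REGISTRY_IMAGE:latest_vk"

    deploy_vk:
      stage: deploy
      only:
        - release_vk
      script:
        - ansible-playbook deploy.yml -e "platform=vk image_tag=latest_vk"

Each additional platform would get its own release_* branch and a similar pair of jobs.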
2) Different versions of the application run on the same servers. Requests to the API are balanced via DNS; at the front there is an nginx that spreads the requests across containers, in fact to another nginx, which in turn passes them to the container with PHP (roughly as in the config sketch below).
The nginx -> nginx -> php chain looks a bit redundant, and as a result there are a lot of different processes on the server, nginx in particular.
The general question is how, with business processes like these, the scheme of work could be "optimized". It feels like there should be some cleaner solution for all of this.
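(To make the current chain concrete, here is one possible shape of the front nginx, assuming each platform's backend nginx container is published on its own port and routing is done by host name; all host names and ports here are invented for illustration.)

    # Hypothetical front nginx: route each platform's API host to the
    # backend nginx of the container running that platform's version.
    upstream backend_vk {
        server 127.0.0.1:8081;   # nginx inside the VK-version stack
    }
    upstream backend_platform2 {
        server 127.0.0.1:8082;   # nginx inside another platform's stack
    }

    server {
        listen 80;
        server_name api-vk.example.com;
        location / {
            proxy_pass http://backend_vk;
        }
    }

    server {
        listen 80;
        server_name api-platform2.example.com;
        location / {
            proxy_pass http://backend_platform2;
        }
    }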



1 answer
index0h, 2016-07-18

"I don't like that this ends up creating a bunch of tags and branches; maybe there is an alternative solution for this kind of task?"

Actually, this is the optimal solution. You get a release branch into which, if problems show up on that particular platform, fixes can be merged without affecting the others. Then, once the release image has been tested and given the green light, it gets rolled out across production. As for there being a lot of tags, what difference does it make? :) You gain the ability to reproduce the state of any build.
As for the nginx chain: it all depends on whether you can give the client direct access to the second nginx. If you can't, your scheme is perfectly fine. If you can, then it is worth doing: look towards a balancer of your own that hands the client a server which is (a) alive and (b) minimally loaded, and towards tools like Consul.
As for passing the sources into the containers through a volume: why? A container is supposed to be immutable and all that. If you have a pile of static files there that don't live under git, look towards MogileFS and the like. For development, a volume is definitely the right tool, but for production it's questionable; there should be a compelling reason for it.
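(A minimal sketch of baking the sources into the image instead of mounting them, assuming a typical php-fpm container; the base image and paths are assumptions, not taken from the original setup.)

    # Hypothetical Dockerfile: copy the application code in at build time
    # so that every container started from a given tag is identical.
    FROM php:7.0-fpm
    WORKDIR /var/www/app
    COPY . /var/www/app

An image built this way is what would carry tags like latest_vk in the pipeline above, and rolling back means deploying an older tag.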
