Software Deployment
Unknown Hero, 2014-12-12 07:11:27

Docker: how to manage code, the database and releases in production?

Good day.
I want to automate testing and deployment for web applications.
Currently the process consists of manually running deploy.sh on the server. There are no tests at all.
What I want:
1. I change the code and do git commit && git push.
2. The Continuous Integration server picks up the latest version of the production (master) and dev branches.
If there are changes, it tests the new code, builds a deployment-ready artifact (minification, cleanup, etc.) and uploads it to the necessary servers.
I want to bring Docker into this process, but I don't understand:
1. After the final build of the code is done, do I run docker build / docker commit again with the new code, push the image to the Docker repository, and then update (docker pull) all the containers on the servers and restart them (docker run)?
Or should the code get into the containers on the production/test servers some other way (git pull, deb/rpm)?
2. What is the best way to set up a database container for production/test?
I have read about data containers: you can make a separate container in which the database data itself is stored.
But my poor knowledge of operating systems and file systems doesn't tell me which is better: storing the database data on the host system (docker run -v ...) or using a data container (docker run --volumes-from ...)?
Will performance drop when using these mounted volumes? (A rough sketch of both questions follows below.)
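If I understand correctly, the cycle from question 1 would look roughly like this, and question 2 boils down to two commands; the names (mysite, registry.example.com, the paths) are made up for illustration, not a working setup:
# CI server: build an image with the new code and push it to a private registry
docker build -t registry.example.com/mysite:latest .
docker push registry.example.com/mysite:latest
# application server: fetch the new image and restart the container
docker pull registry.example.com/mysite:latest
docker stop mysite && docker rm mysite
docker run -d --name mysite registry.example.com/mysite:latest
# database, option A: keep the data on the host
docker run -d --name db -v /srv/pgdata:/var/lib/postgresql/data postgres
# database, option B: keep the data in a separate data container
docker run -v /var/lib/postgresql/data --name db-data busybox true
docker run -d --name db --volumes-from db-data postgres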
I understand that there is no magic recipe. I would like to use current technologies and automate everything "from git push on my work laptop to an updated site on production/test". I would also like to be able to easily add new servers and monitor all the systems.
Thank you for your attention :)


3 answers
Unknown Hero, 2015-02-27
@UnknownHero

I'll add an answer to my own question.
Enough time has passed, and I have managed to look at and try a lot of Docker-related tools.
I built two applications: the first is the site itself, the one I wanted the infrastructure for; the second is the administration, testing and deployment tooling.
The administration application is deployed on one server and contains the Docker Registry, Jenkins, and a couple more web pages with various information. I wrapped it all in Nginx and it works great. That application also uses Docker, but it has to be updated manually (ssh, etc.).
The site (which actually consists of business logic, a DAL, PostgreSQL, a REST API, a web frontend, a web backend and a couple more layers of abstraction :) ) uses about 10 Dockerfiles.
Inside the application I use build tools (grunt for Node.js) and build it either during the image build (docker build) or, for long-running development, after the container starts, using Fig.
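As an illustration of the first approach, such a Dockerfile looks roughly like this; the base image, paths and grunt task are assumptions, not my actual files:
FROM node:0.10
WORKDIR /app
COPY package.json /app/
RUN npm install
COPY . /app
# build the frontend during docker build: minification, cleanup, etc.
RUN ./node_modules/.bin/grunt build
CMD ["node", "server.js"]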
After editing the code I push everything to the git repository; Jenkins builds the images (docker build) and sends them to the Docker Registry, after which it tells the servers (currently just one) to update the images (docker pull) and restart the containers. Where data needs to be kept, I use data containers; I never restart or touch them.
Over time I want to save the state of the data containers (docker commit) and upload them to the Docker Registry (docker push) as a way of backing up some of the data.
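In terms of commands that would be something like the following (names made up). One caveat: docker commit captures the container's filesystem layer but not the contents of volumes, so for the volume data itself the usual trick is a throwaway container with tar:
# snapshot a container as an image and push it to the registry (volumes are not included)
docker commit db-data registry.example.com/db-data-snapshot:2015-02-27
docker push registry.example.com/db-data-snapshot:2015-02-27
# archive the actual volume data through a temporary container
docker run --rm --volumes-from db-data -v $(pwd):/backup busybox tar czf /backup/pgdata.tar.gz /var/lib/postgresql/data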
The servers pull and restart the updated containers using hand-written bash scripts (they are not complicated), because the native Docker tools for this (Docker Swarm, Docker Machine, Docker Compose) are still under development, and third-party solutions will most likely die off once those are released.
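A minimal sketch of what such a script could look like; my real scripts are not shown here, and the names are the same made-up ones as above:
#!/bin/bash
# redeploy.sh: pull the fresh image and restart the application container,
# leaving the data containers untouched
set -e
IMAGE=registry.example.com/mysite:latest
docker pull "$IMAGE"
docker stop mysite || true
docker rm mysite || true
docker run -d --name mysite "$IMAGE"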
Through environment variables I tell a container which mode it is running in (local/test/live), but this is only needed for minification and the logging level. The fewer differences between these settings, the better.
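Passing the mode is nothing more than an environment variable on docker run; APP_ENV is just a name I use here for illustration:
docker run -d --name mysite -e APP_ENV=live registry.example.com/mysite:latest
docker run -d --name mysite-test -e APP_ENV=test registry.example.com/mysite:latest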
I put all of this into Vagrant; it works great, but it needs decent hardware for development.
In the plans:
- learn how to tag images so that all the servers can be rolled back to a working state if a bug slips through (sketched below);
- add automatic testing and quality checks to Jenkins (for Docker applications you need to bring up another Jenkins slave);
- wire up Ansible for deployment and other administrative conveniences, and connect it to Jenkins.
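The tagging idea, sketched with the same made-up names; the point is that every build gets an immutable tag, so a rollback is just running the previous one:
# CI server: tag each build uniquely in addition to latest
docker tag registry.example.com/mysite:latest registry.example.com/mysite:build-42
docker push registry.example.com/mysite:build-42
# rollback on an application server: run the previous known-good tag
docker pull registry.example.com/mysite:build-41
docker stop mysite && docker rm mysite
docker run -d --name mysite registry.example.com/mysite:build-41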
Result:
- written once, used everywhere;
- automation down to "commit = staging deploy";
- administrative tools and services are separated from the business application;
- independent, loosely coupled components that are easy to replace.
And the cons:
- it is hard for one person to keep track of such a zoo :) With a dedicated administrator / DevOps engineer everything would go much faster.

Vsevolod Kaloshin, 2016-05-17
@arzonus

Good afternoon!
There are several ready-made solutions for your problem (CI & CD), such as:
- Docker Cloud (formerly Tutum)
- Last.Backend
- Rancher (with Kubernetes)
- Cloud 66
As for the database, I mount the volume on the host machine. However, I wouldn't use Docker to build a replication setup.

Denis, 2016-10-27
@ttys

As far as I remember, docker commit operates on a container, not an image, while build, pull and push work with images (from which containers are created).
There are also save and load, which write an image to stdout and read it back, respectively.
IMHO, docker commit mostly just adds confusion.
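A quick illustration of that split, with made-up names:
# commit takes an existing container and produces a new image from its current state
docker commit mysite mysite-snapshot
# build, pull and push operate on images
docker build -t mysite .
docker pull ubuntu:14.04
docker push registry.example.com/mysite
# save/load stream an image through stdout/stdin as a tar archive
docker save mysite > mysite.tar
docker load < mysite.tar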
