Virtualization
Andrey Kobyshev, 2015-09-15 00:23:30

Why is this not done with Docker (everything in one black-box container)?

I often need small applications / microservices that I want to pack into a "black box": run it somewhere with a couple of commands, easily transfer and back it up, and restore it on new hardware with everything it needs.
To do this, I started digging into Docker and building my own images for the platforms I use.
And almost everywhere I see advice like "create 2 containers: one for your application, another for the database, and then link them", and so far never "put everything, both the database and the application, into one container". This is where I hit a wall. If the main selling point of Docker is that "containerization lets you transfer and run your applications in any environment with all their dependencies", then what is the point of Docker if we again break the application into pieces and end up with all the mess we had when there was no Docker?
How I see the best way:

  1. We take a base image
  2. We build an image on top of it that has everything needed to run the full application: the database, nginx, and the required libraries
  3. If necessary, we let the application reach out to S3 / an external database (as a rule, this is not needed)
  4. In the target environment, we create a data volume based on this image (on first run, or restore it from a backup); all subsequent changes are written to this data volume: the database grows, files created by the application accumulate there, and so on
  5. We launch the container and enjoy life (a rough sketch of these steps follows the list)
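
A rough sketch of how steps 1-5 could look in practice. Everything here is an assumption for illustration: the Ubuntu base image, the package list, the supervisord config, the /data path and the myapp-* names are hypothetical, not taken from the question.

    # Dockerfile: one image with everything inside (step 2); packages are illustrative
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y \
        nginx postgresql supervisor python3 python3-pip
    COPY . /srv/app
    RUN pip3 install -r /srv/app/requirements.txt
    COPY supervisord.conf /etc/supervisor/conf.d/app.conf
    # all mutable state (database files, uploads) should live under one path (step 4);
    # the postgres and app configs are assumed to point their data dirs at /data
    VOLUME /data
    EXPOSE 80
    CMD ["/usr/bin/supervisord", "-n"]

    # step 4: a data-only container that owns /data (the classic data-volume pattern)
    docker build -t myapp-image .
    docker create -v /data --name myapp-data myapp-image /bin/true
    # step 5: run the "black box", attaching the data volume and exposing a single port
    docker run -d --name myapp --volumes-from myapp-data -p 8080:80 myapp-image

Since a container runs a single CMD, something like supervisord is needed here to keep nginx, PostgreSQL and the application process running together inside the one container.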
The benefits: independence from the external environment; easy and convenient backup and restore / transfer by simply moving the data volume; different applications do not store their data in one shared database (so there is no havoc with transfer / backup / hacking, and no losing everything when one container's database goes down); only the service port sticks out, through which you interact with it as with a "black box"; and nothing prevents the service from reaching outside for something it may need.
Cons: increased memory and disk consumption (but in my case this is tolerable).
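
To illustrate the backup/restore point: with the data-volume pattern from the sketch above, backup and restore could look roughly like this (container names match that hypothetical example; the archive path is illustrative):

    # backup: archive the contents of the data volume to the current host directory
    docker run --rm --volumes-from myapp-data -v $(pwd):/backup ubuntu:14.04 \
        tar czf /backup/myapp-data.tar.gz /data
    # restore on new hardware: recreate the data container and unpack the archive into it
    docker create -v /data --name myapp-data myapp-image /bin/true
    docker run --rm --volumes-from myapp-data -v $(pwd):/backup ubuntu:14.04 \
        tar xzf /backup/myapp-data.tar.gz -C /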
My question: why have I found almost no mention of this approach anywhere on the Internet? What could be wrong with it, and what pitfalls am I not seeing?


2 answers
Sergey, 2015-09-15
@yokotoka

if we again break the application into pieces and end up with all the mess we had when there was no Docker?

Why would it? Your application can run on Arch using only its libraries, while using a database that runs on Debian, and you don't have to worry about how either of them is set up inside. If you need a database, you just use the container with it as a black box. And given that we have docker-compose, deploying such a system is not a problem at all: just run docker-compose up and that's it. We get the same thing a single container would give us, but the whole system is much easier to maintain.
In fact, if we divide our application into separate services (database, reverse proxy, cache, etc.) and describe what we have and how it should be wired together in a convenient format like docker-compose.yml, we get all the advantages you listed plus containers that are easy to maintain.
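A minimal sketch of such a docker-compose.yml (compose v1 format; the image tags, the DATABASE_URL variable and the ./pgdata path are assumptions for illustration):

    app:
      build: .
      links:
        - db
      environment:
        # an example of how the app could find the db container; the variable name is made up
        DATABASE_URL: postgres://app:secret@db:5432/app
    db:
      image: postgres:9.4
      volumes:
        - ./pgdata:/var/lib/postgresql/data
    nginx:
      image: nginx:1.9
      ports:
        - "80:80"
      links:
        - app

After docker-compose up all three services start and get linked together, and each of them can still be backed up, replaced or scaled on its own.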
On the other hand, if you cram everything into one container, you just move all the hell you had before into the Dockerfile. No gain, only environment setup and the possibility of versioning. And if that's all I get, I'd rather go back to ansible.
Then again, many people do exactly what you describe and just stuff everything into one container.
And one more thing: your approach is poorly suited for scaling. Let's say I want my database to run on one server cluster and the application on another. Here the all-in-one approach loses.

Puma Thailand, 2015-09-15
@opium

You are confusing application virtualization with operating system virtualization. Docker follows the first ideology, while you are thinking in terms of virtual machines and trying to force the standard virtualization approach onto it.
