linux
Vadim Remin, 2015-12-03 18:11:29

Can I use my own repositories instead of Docker containers?

Hello,
I'm new to the Linux world. Yesterday I listened to a lecture about Docker containers and how easy it is to deploy them to production and testing. Docker solves the problem of transferring a program together with its environment to another machine.
The advantages of Docker over other methods of testing and deployment are said to include:
1) Instant creation of the necessary environment on any Linux / OS X machine.
Counterargument, testing: if the servers where you want to install the software run a particular distribution (for example, Ubuntu), then you will most likely do the final testing and deployment preparation on a machine (or a virtual machine) with the same distribution anyway, before going live.
Counterargument, deployment: while a container is running, computing resources are spent servicing the kernel specified in the container. And why build a container on a distribution other than the one that is already on the server?
2) Encapsulation (if I may say so) of everything that runs in the container: processes, memory, etc.
Counterargument: why add yet another vertical layer to a server that, in most cases, should perform only one function? Why encapsulate a capsule inside a capsule? Is it really necessary, in highly loaded systems (100 or more servers), to install several independent systems with different environments on one physical server?
3) Seamless transfer of all dependencies.
Counterargument: OK, but only if we assume that dependencies downloaded from repositories can change their code and behavior without the version number changing at all. Alternatively, you can set up your own local repository and keep everything in one place, distributed over the network; why produce these huge containers and run them (essentially) as virtual machines on each server? We can simply list all dependencies in the package's configuration file (one possible reading of this is sketched just below).
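To make that last alternative concrete, here is a minimal sketch of what version pinning against a local repository could look like on Debian/Ubuntu; the repository URL and package names/versions below are purely illustrative, not real ones.

# Point apt at a hypothetical in-house repository and install exact versions.
echo "deb [trusted=yes] http://repo.internal.example/ubuntu focal main" \
    > /etc/apt/sources.list.d/internal.list
apt-get update
apt-get install -y myapp=1.4.2 libfoo1=2.0.1-1internal   # pinned versions (made up)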
Please confirm or refute these theses. If my thinking is completely wrong, I apologize: I have never worked with any of the technologies listed, but I am about to, and I want to understand what to use and how to use it correctly.
Thanks in advance!


3 answers
Kirill, 2015-12-03
@dkudrin1

Everything is relatively simple here:
1. Containers consume very few resources: they use the kernel of the host machine (see the example right after this list).
2. True, but Google with its Borg and Yandex with Cocaine are moving toward a model where machines are no longer counted individually but are treated as one big pool of computing capacity measured in abstract units. Servers keep getting more powerful while the old ones stay in service, so different applications get spread across them (depending on how many of those units a given machine can provide), and containers make that convenient.
3. Same as in 2: yes, different applications can run on one server and resources can be packed more densely; at Google the principle is to use a piece of hardware at 100%.
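A quick way to see point 1 for yourself, assuming Docker is installed and the small public alpine image can be pulled: the kernel version reported inside a container is the host's own.

uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # same version, printed from inside a container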
The advantages of Docker include a rather well-designed API, which makes it easy to write a container "manager" on top of it and to sketch out orchestration / discovery of services.
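To give a feel for that API, here is a single call against the Engine's default unix socket (assuming the default socket path and sufficient permissions); listing containers as JSON is exactly the kind of primitive a home-grown manager or discovery layer is built on.

# List running containers as JSON via the Docker Engine API.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json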
If the task is only to run one specific application on the server (for example, a store in PHP/Python/Go, etc.) for "personal" or "corporate" purposes, then Docker is not really needed.

Vladimir Chernyshev, 2015-12-14
@VolCh

1.1 and partially 1.2: often you don't know what OS your software will run under, even if it is purely internal to the company. Even (or especially?) if the company has only one server, the administrator may decide to upgrade it because of a critical vulnerability or in order to install some other software.
Partially 1.2: many companies have long used virtualization for various purposes. Containers consume fewer resources than full-fledged virtual machines and let you respond more quickly to changing requirements.
2. It is a rarity nowadays, in my opinion, for one physical server to perform exactly one function, and somehow it was not common practice before either. Containerization lets you clearly separate services and isolate them from each other with far fewer resources than virtualization, both during development and deployment and at runtime.
3. Dependencies of different processes will not conflict with each other: some legacy software can keep running on an already unsupported distro, while software that needs the latest versions of some libraries works right alongside it (a small illustration below).
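A small illustration of point 3, assuming these public images are available: two mutually incompatible runtimes run side by side on the same host, each isolated in its own container.

docker run --rm python:2.7 python --version    # legacy environment
docker run --rm python:3.12 python --version   # current environment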

Vaavaan, 2016-06-03
@Vaavaan

If your servers where you want to install the software work under a certain distribution (for example, Ubuntu), then most likely you will do final testing and preparation for deployment on a machine with a similar distribution (or on a virtual machine) before implementation.

What is this about?
You are talking about Docker; with Docker you don't need any of that.
You can sit under Windows, under macOS, or even under RedHat.
Inside Docker you will have exactly what you need.
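For example (the image tag here is only an illustration), from a Windows, macOS or RedHat host alike, a throwaway Ubuntu userland to test in is one command away:

docker run --rm -it ubuntu:22.04 bash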
Dig into the kernel sources and have a look: the "slowdown" here amounts to a few dozen assembly instructions. Can you say how many such instructions a 2.5 GHz processor executes per second, or can you guess?
You build the container on whichever distribution you actually need for the job.
The idea behind containers is that you don't care what's outside the container.
From this point of view, a server does not perform just one single function.
There are control services, each in its own container.
There are at least a couple of instances of the main service, for reliability and blue-green deployment.
There is a balancer in front of those two.
And this is only the minimum.
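A rough sketch of that minimum, with made-up image names and a hypothetical local nginx.conf: two copies of the application plus a balancer, all running as containers on one host.

docker run -d --name app-blue  -p 8081:8080 example/app:1.0
docker run -d --name app-green -p 8082:8080 example/app:1.1
# nginx (or haproxy) in front decides which copy receives live traffic.
docker run -d --name lb -p 80:80 \
    -v "$PWD/nginx.conf":/etc/nginx/nginx.conf:ro nginx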
And the trend now is that services are placed onto servers automatically, packed as densely as possible. There is no guarantee whether 1 large service or 20 small ones will end up on a given piece of hardware, or that the layout will stay the same after a server reboot.
Why install another OS on the server? Usually nobody does: the one you need is always inside the container. But if they want to, they will install another one, and thanks to the containers you won't even notice.
The next OS will only add about 50 megabytes.
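A rough way to check that claim (sizes vary by image and tag; alpine is a few megabytes, debian on the order of a hundred):

docker pull alpine && docker images alpine
docker pull debian && docker images debian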
Some dependencies are mutually incompatible and cannot be installed on a bare OS at the same time.
In different containers, they can.
