linux

Vitaly, 2020-04-22 13:09:49

How to correctly isolate services on a Linux server?

I moved to a new server and want to achieve greater fault tolerance and isolation of services from each other.
The server will use:
1. Postfix + PostgreSQL
2. Several (3-5) projects with approximately the same technology stack (python, NodeJS, PostgreSQL, MongoDB, RabbitMQ)
3. Several (3-5) projects requiring apache, php, MySQL.
4. Bind
5. ownCloud

On the old server I once ran into a vulnerability in Exim, which forced me to urgently move to a new server. If my reasoning is correct, then by isolating services from the host OS and from each other as much as possible, I will achieve greater fault tolerance.

From this the questions arise:
1. Which (free) technology is better to choose for isolation? I have looked at KVM + QEMU, LXC and Docker.

2. How can I better distribute my services across containers?

I am thinking about this distribution:
bind stays on the host OS.
Postfix together with PostgreSQL (where mail accounts are stored) goes in a separate container.
Each project (python, NodeJS, PostgreSQL, MongoDB, RabbitMQ) gets its own container.
Simple php + MySQL sites are placed together in one separate container.
ownCloud is also isolated separately.

I would appreciate your advice.


5 answers
Alexey Cheremisin, 2020-04-22
@vitalysokolov

The technologies certainly exist.
1) Virtualization - KVM/Xen.
In my opinion KVM is preferable: better support, no special host kernel needed.
It loses a little to Xen, by 2-3 percent, but it definitely wins in convenience. Xen is Linux only, and only with special kernel patches on both the host and the guest.
In effect you get a full-fledged virtual machine and can put whatever your heart desires on it - Linux, BSD, even Windows.
There is only one problem - it requires hard allocation of resources. So only a dozen or two virtual machines (and even that depends on the load).
Compared to bare metal it will eat up 3 to 7-10 percent of performance.
Still, my choice is KVM.
2) Containerization - Docker/LXC/Virtuozzo.
About Virtuozzo I'll say right away - I won't say anything about it. In principle it is very similar to Xen.
The other two are based on cgroups; moreover, Docker historically used LXC internally.
Docker is very widespread and popular, in fact the market leader. It is geared towards running one task per container. Containers can be combined into groups.
LXC/LXD is less common, but a very handy technology if you need to containerize an operating system environment with a bunch of processes.
We use both Docker and LXC/LXD, and we even run Docker inside LXC.
It all depends on the task:
you need a service with a bunch of processes and its own environment - LXC;
you need one process - Docker;
you need a full-fledged environment with its own kernel, courtesans and hussars - KVM.
In practice we have about 10 KVM virtual machines, about 10 LXC containers, and about 20 Docker containers.
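
As a rough illustration of the container side of that rule (a sketch only; the container names, image tags and the password are placeholders, not anything from the answer above):

# LXC/LXD: a whole OS environment with many processes in one container
lxc launch ubuntu:20.04 mail        # hypothetical "mail" container
lxc exec mail -- apt install -y postfix postgresql

# Docker: one process (here PostgreSQL) per container, from a ready-made image
docker run -d --name project1-db -e POSTGRES_PASSWORD=changeme postgres:12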

Sanes, 2020-04-22
@Sanes

Better to split things up via KVM or other normal virtualization.
For example, LAMP into one VM, mail into another, and so on (a rough sketch is below).

  1. LXC/LXD - possible problems due to the technology being stripped down
  2. Docker is not about this at all

Among containers, Virtuozzo Containers is the only more or less complete one. Better to divide into virtual machines.
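
A minimal sketch of what that per-role split might look like with plain KVM/libvirt (the VM names, sizes and ISO path are assumptions, not something from the answer):

# one VM for the LAMP sites, one for mail; virt-install comes with the libvirt tools
virt-install --name lamp --memory 2048 --vcpus 2 --disk size=20 \
  --cdrom /var/lib/libvirt/images/ubuntu-20.04-live-server-amd64.iso
virt-install --name mail --memory 1024 --vcpus 1 --disk size=10 \
  --cdrom /var/lib/libvirt/images/ubuntu-20.04-live-server-amd64.iso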

mayton2019, 2020-04-22
@mayton2019

The question should not be framed like that at all. The author ran into a vulnerability. What was it? Why did it affect Exim (and what Exim, exactly?), and why would it suddenly have no effect in a virtualized environment?
An attack can come in through a network port, and there is no guarantee that by putting everything into Docker containers you have "hidden in the house". Perhaps what you are really solving with virtualization is human error? But that is a different problem, and virtualization may not be needed for it at all - just make sure you use separate accounts.
And who has actually certified Docker for security? Some fortress that is.

0x131315, 2020-04-23
@0x131315

IMHO the most relevant option is Docker.
Simply because it is easy to set up, there are ready-made images for every taste, the overhead is minimal (you can get by with less memory and cheaper hardware), and monitoring and service management are easy. It can isolate services, networks and resources. It can restart fallen services. It can manage clusters. There are plenty of handy web UIs for control/monitoring from anywhere you have a browser.
As for partitioning, I recommend moving the database into a separate shared (global) container, because it will be one of the most memory-hungry parts. Multiple database instances are expensive to maintain and only make sense if the projects are incompatible with a shared database, which is rare.
Some projects can only work with MySQL - then you keep two database containers: MySQL and PostgreSQL (a sketch is below).
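
A minimal sketch of those shared database containers, assuming plain docker commands (the network name, container names, volumes and passwords are placeholders):

# one shared network that the project containers will also join
docker network create internal
# one global PostgreSQL and one global MySQL container with persistent volumes
docker run -d --name pg-global --network internal \
  -v pg-data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=changeme postgres:12
docker run -d --name mysql-global --network internal \
  -v mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=changeme mysql:8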
Obviously, each project needs its own limited user in the database.
Such an arrangement not only saves resources, it also lets you manage all the databases through a single web UI and makes it easy to bolt on backups for all projects at once.
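
A sketch of creating such limited per-project users (the project/user names and passwords are placeholders; the container names refer to the sketch above):

# PostgreSQL: a dedicated database owned by a dedicated user
docker exec -it pg-global psql -U postgres \
  -c "CREATE USER project1 WITH PASSWORD 'changeme';" \
  -c "CREATE DATABASE project1 OWNER project1;"
# MySQL: same idea for the php sites
docker exec -it mysql-global mysql -uroot -p \
  -e "CREATE DATABASE site1; CREATE USER 'site1'@'%' IDENTIFIED BY 'changeme'; GRANT ALL ON site1.* TO 'site1'@'%';"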
It is also worth pulling nginx out separately: it accepts connections and handles security and routing across the containers. At a minimum this is needed to gather all the common settings in one place and to be able to share one external port between many projects.
The per-project nginx instances can be replaced with the global one (nginx is flexible enough, you just carry the settings over), or left as they are - nginx is quite cheap in terms of memory.
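
A rough sketch of such a global nginx in front of the project containers (the file paths, hostname and upstream container name are assumptions; routing is by server_name, and the upstream is reachable by name because the containers share a Docker network):

# minimal reverse-proxy config: one external port 80, routed by hostname
mkdir -p /srv/nginx/conf.d
cat > /srv/nginx/conf.d/project1.conf <<'EOF'
server {
    listen 80;
    server_name project1.example.com;
    location / { proxy_pass http://project1-app:8000; }
}
EOF
docker run -d --name nginx-global --network internal -p 80:80 \
  -v /srv/nginx/conf.d:/etc/nginx/conf.d:ro nginx:stable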
The same goes for PHP-FPM: you can keep several shared containers with different versions, and the global nginx can easily be made to route each project to the right FPM version. But this is not particularly critical either, because FPM is also cheap.
Project-specific FPM settings can often be compensated for by including polyfills inside the project, to save memory.
Naturally, all of this should be assembled with docker compose,
following one rule: everything the project needs to run must live in one docker compose file (read: each project in its own folder).
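
A minimal sketch of such a per-project compose file (the folder, service names, images and shared network are placeholders; the app service assumes a Dockerfile exists in the project folder):

# hypothetical layout: one folder per project, one compose file per folder
mkdir -p /srv/project1 && cd /srv/project1
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  app:
    build: .                        # the project's python/NodeJS code
    networks: [default, internal]   # "internal" is the shared network with the global DBs and nginx
  rabbitmq:
    image: rabbitmq:3
  mongo:
    image: mongo:4
    volumes: ["mongo-data:/data/db"]
volumes:
  mongo-data:
networks:
  internal:
    external: true
EOF
docker-compose up -d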
And of course the host system should not have native installs of the software that runs in containers - you will run into conflicts over settings and ports. In fact the software is not needed on the host at all: Docker is much more convenient to manage, especially when there are several incompatible projects and you need two or three versions of the same software. Setting that up without Docker is hell.

Alexey Pyanov, 2020-04-30
@gohdan

"Each project / site in a separate container" is a fundamentally wrong approach. Containerization is a way of isolating processes, not projects. That is, on the project you will have several containers (one for a node, one for a rebbit, one for postgres, etc.), and in the main system you will immediately see this whole bunch of containers from all projects. Therefore, if you want to isolate projects from each other, your choice is not containers, but virtual machines on KVM (and already inside the virtual boxes it will be possible to split the project into containers process by process). So you will have a normal isolation of projects from each other, and problems in one project will affect others to a minimum.
