Oleg, 2019-02-12 10:21:32
linux

An equivalent of differencing virtual disks in Linux?

Good afternoon,
Windows Server 2016 (iSCSI Target role) has an interesting feature:

With differencing virtual hard disks (VHDs), you can use a single operating system image (the "master image") to boot up to 256 computers. For example, suppose you deployed Windows Server with an operating system image of approximately 20 GB and used two mirrored disks as the boot volume. Booting 256 computers would require approximately 10 TB of storage just for the operating system image. With iSCSI Target Server boot, however, you use 40 GB for the base operating system image and 2 GB per server instance for differencing VHDs, for a total of 552 GB for the operating system images. That is a saving of over 90% in storage for the operating system images alone.

Link to docs.microsoft.com
Is there anything similar in Linux?
I have looked at the iSCSI targets (LIO, istgt, STGT, IET, SCST) but found nothing comparable...
Maybe in Linux this is implemented not by the iSCSI service itself but at the file-system level?
Please tell me which direction to dig in...


6 answers
Wexter, 2019-02-12
@Wexter

My guess is that under the hood it attaches two disks: one for the OS image and a second for per-machine data.
In principle, you can put something similar together in Linux.

Vitsliputsli, 2019-02-12
@Vitsliputsli

If I understand correctly: there is an immutable base image that all the machines use, and the changes specific to each machine are stored separately? If so, read about AUFS and UnionFS. These union file systems appeared a long time ago, so something newer may well exist by now.
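The modern in-kernel successor to AUFS/UnionFS is OverlayFS, and it expresses exactly this split: one shared read-only lower layer plus a writable upper layer per machine. A minimal sketch of the idea (all paths here are hypothetical, and the mount requires root):

```shell
# One read-only "master" layer shared by every machine; one writable
# upper layer per machine that holds only that machine's changes.
mkdir -p /srv/master /srv/vm1/upper /srv/vm1/work /srv/vm1/root
mount -t overlay overlay \
  -o lowerdir=/srv/master,upperdir=/srv/vm1/upper,workdir=/srv/vm1/work \
  /srv/vm1/root
# Writes under /srv/vm1/root land in /srv/vm1/upper;
# /srv/master itself is never modified.
```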

mikes, 2019-02-12
@mikes

Is there something similar in Linux?

That depends on what you mean by "Linux".
Proxmox has it (linked clones).
More generally, LVM can provide storage without per-disk image files, and there is also deduplication at the file-system level (ZFS);
but be aware that, just as in the Microsoft implementation, this is not free in terms of performance.

pfg21, 2019-02-12
@pfg21

If you take the approach of keeping the system root image as a file on a ZFS partition:
make read-only copies of the original file (instead of a full copy, a new inode is created that points at the data blocks of the original file),
and then work with those copies.
On write, only the changed data blocks are written to the copy, and the pointers in its inode are updated; everything unchanged still points at the original's data blocks. Copy-on-write, in other words :)
The data of the original file never changes.
You end up with a pile of large files that together take up little more space than the original.
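On Linux this trick is exposed to userspace as "reflink" copies: btrfs and XFS support them natively, and recent OpenZFS (2.2+, via block cloning) does as well. A small sketch; with --reflink=auto, cp quietly falls back to an ordinary copy on file systems without reflink support:

```shell
# Create a 4 MiB "master image" and two copy-on-write clones of it.
dd if=/dev/zero of=master.img bs=1M count=4 status=none
cp --reflink=auto master.img clone1.img
cp --reflink=auto master.img clone2.img
# Writing to a clone allocates new blocks only for the changed ranges;
# unchanged blocks still point at master.img's data.
printf 'delta' | dd of=clone1.img bs=1 conv=notrunc status=none
cmp -s master.img clone2.img && echo "clone2 still identical to master"
cmp -s master.img clone1.img || echo "clone1 diverged"
```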

Vladimir, 2019-02-12
@MechanID

This can be done with LVM thin or ZFS: from the original image you take a number of snapshots/clones, which are then used to launch virtual machines or containers. Proxmox, for example, can do this out of the box in the free version: https://pve.proxmox.com/wiki/VM_Templates_and_Clones — you want "Linked Clone".
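With ZFS, the snapshot-plus-clone step described above might look like this (a sketch only: the pool name "tank" and the dataset names are hypothetical, and the commands need root and ZFS installed):

```shell
# Freeze the prepared master image, then give each VM a writable clone.
zfs snapshot tank/master@golden
for i in 1 2 3 4; do
  zfs clone tank/master@golden tank/vm$i   # stores only that VM's deltas
done
zfs list -o name,used,refer -r tank        # clones start with near-zero USED
```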

rPman, 2019-02-12
@rPman

The Unix way: don't look for a ready-made all-in-one tool; build your own solution from bricks. Let the same istgt handle iSCSI, say, and btrfs handle the snapshots.
Setting iSCSI aside (people do connect local virtual machines through it; I have seen such setups), qemu/kvm can attach a disk so that all changes are written to a separate file. All the major virtualization systems have the same thing, just under different names.
If you want something universal: historically, LVM can take snapshots of block devices, but at the cost of a significant drop in performance. In theory you could create a snapshot of your base disk for every machine and export each one to its own client, but I don't recommend it in your case: 256 active classic snapshots will be a failure.
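The thin-provisioned variant avoids most of that penalty. A sketch with LVM thin volumes (requires root; the volume-group name vg0 and the sizes are hypothetical):

```shell
# A thin pool, a base volume for the OS image, and one thin snapshot
# per machine; thin snapshots avoid the classic snapshot CoW slowdown.
lvcreate -L 100G -T vg0/thinpool
lvcreate -V 20G  -T vg0/thinpool -n master   # write the OS image onto this LV
lvcreate -s -n vm1 vg0/master                # per-machine thin snapshot
lvchange -ay -K vg0/vm1                      # thin snapshots skip auto-activation
```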
You can use copy-on-write file systems such as btrfs or ZFS (the latter works less well on Linux). Creating a snapshot does not reduce performance there (you don't pay for it), although the file systems themselves are slower, since they fragment the data heavily; compared with LVM snapshots, though, they are an order of magnitude more efficient.
P.S. Windows machines write very actively during updates, gigabytes at a time; there will come a point when this whole arrangement creates more problems than it solves.
btrfs and ZFS have another feature, deduplication: you simply store all the copies of your machines side by side and the file system itself finds identical blocks and optimizes them, though only at a rudimentary level. In btrfs it is offline-only (a relatively new feature; there are almost no polished utilities for it, though if you build the latest version from source a lot of good things have been added). ZFS dedup under Linux has terribly low performance (I tried it on desktop hardware; not recommended for HDD, SSD only) and wild RAM consumption (about 70 bytes per block, i.e. a 1 TB disk with 4 KB blocks would eat about 18 GB of RAM, although nobody uses 4 KB blocks; 16 or 32 KB, yes). In your case it could be justified, and it will automatically shrink the space occupied by identical machines.
P.P.S. A freshly installed Windows takes about 8 GB of disk space with ZFS compression enabled, a little more on btrfs... After a year of use, the space occupied by one container (no programs installed; the machine was used exclusively to run Google Chrome) was 26 GB (46 GB as seen from inside the container).
