linux
MountyPiton, 2017-05-03 10:23:26

How to copy several million files as fast as possible in Linux?

There is a machine running RHEL 5.6 (Linux). It has a 30 TB block device attached, formatted with XFS, which holds several tens of millions of small files organized into folders. The task is to copy several of these folders, several million objects in total, to another 10 TB block device, also formatted with XFS. What is the fastest way to do this?


2 answers
Victor Taran, 2017-05-03
@shambler81

1. zip -r -0
Archive without compression: packing everything into a single file gets rid of the giant per-file I/O.
2. Alternatively, create a large file, format it as ext2, and mount it as a loop device. Place your files inside it. That way there are no per-file I/O problems, and the image itself can be copied as one big file.
3. csync2, cluster filesystems, and the like.
4. rsync
and so on.
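Option 2 above can be sketched roughly as follows; the paths and the 100G size are illustrative assumptions, not from the original post, and the mount step requires root:

```shell
# Create a sparse 100 GB file on the destination disk (allocates no blocks yet).
dd if=/dev/zero of=/mnt/newdisk/files.img bs=1 count=0 seek=100G

# Format the file itself as ext2 (-F forces mkfs to accept a regular file).
mkfs.ext2 -F /mnt/newdisk/files.img

# Mount it via the loopback device and put the small files inside.
mkdir -p /mnt/files
mount -o loop /mnt/newdisk/files.img /mnt/files
```

From then on, moving the data means copying the single files.img, one large sequential transfer instead of millions of metadata operations.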

Gintoki, 2017-05-11
@Gintoki

The best way, I think, would be:
tar cf - /path/to/dir-with-millions-of-files | ssh user@remote-host "tar xf - -C /path/to/unpack/to"
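If both block devices are mounted on the same machine, the same tar-pipe trick works without ssh; the /mnt/src and /mnt/dst paths below are hypothetical:

```shell
# Local variant of the tar pipe: one tar reads the source tree as a single
# sequential stream, the other unpacks it on the destination filesystem.
# No per-file process spawning, and directory metadata is written in bulk.
mkdir -p /mnt/dst
tar cf - -C /mnt/src . | tar xf - -C /mnt/dst
```

The -C flag makes tar change into the given directory first, so the archive carries relative paths and unpacks cleanly under the destination.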
