linux
tvoyadres, 2020-02-14 16:03:59

Why are files deleted slowly and with pauses on Samsung SSDs on a CentOS 6 server?

There are 2 disks:
SM961 - 256 GB, EXT4 file system, in use for 3 years
PM883 - 8 TB, EXT4 file system, brand new, in use for 10 days
The system is CentOS 6.10; the open-file limits have been raised to 65535.
I had not paid attention to this problem before, but yesterday I noticed it by accident.

When copying the MOD_TMP folder (about 15 million session files) from disk 1 (SM951 NVMe SSD) to disk 2 (PM883 SATA), everything flies with no slowdowns. But as soon as I start deleting the folder, no matter on which of the disks, the stalls begin: it deletes quickly for a few seconds, then pauses, then deletes quickly again, then pauses again, and so on. Meanwhile people cannot use the site normally; for example, uploading files to the server through the site becomes terribly slow.

It seems to me that I am hitting some limit when deleting a large number of small files. After all, writing and reading work without any problems, and the disks themselves are fine.


5 answers
Vadim Priluzkiy, 2020-02-14
@Oxyd

You have run into a limitation of rm. Use find instead:

find /path/to/folder -name "*" -type f -print | xargs /bin/rm -f

In this case find passes the files to rm in batches instead of one huge argument list, and the stalls will disappear. You can also try the xargs option -P x, where x is the number of rm processes to run at the same time; set it to the number of processor cores, for example. -P 0 will automatically start as many processes as possible.
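A minimal sketch of the same idea (assuming GNU find and xargs; the path is the same hypothetical one as above). -print0/-0 keeps file names with spaces from breaking the pipeline, -n limits how many names each rm invocation receives, and -P 4 runs four rm processes in parallel:

find /path/to/folder -type f -print0 | xargs -0 -n 1000 -P 4 /bin/rm -f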

mayton2019, 2020-02-15
@mayton2019

If a folder with a huge number of files has to be deleted regularly, this task can be reconsidered architecturally. For example, mount this folder as a separate volume and simply reformat it instead of deleting the files; that is faster. Deleting files one by one requires committing a transaction for each file, and these redundant actions just create a stream of IOPS against the ext4 data structures. It is similar to a database: what the author is doing is like deleting rows one at a time with a commit for each, while what I propose is, in effect, a TRUNCATE TABLE.
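A hedged sketch of that approach, assuming the session folder sits on its own partition (the device name /dev/sdb1 is hypothetical):

# stop whatever writes the sessions, then recreate the filesystem instead of rm -rf
umount /path/to/folder
mkfs.ext4 -F /dev/sdb1
mount /dev/sdb1 /path/to/folder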

zersh, 2020-02-18
@zersh

Most likely it is a combination of how many files you have, how full the disks are, and how an SSD deletes data: unlike an HDD, an SSD has to perform far more internal work when deleting.
Description: https://recovery-software.ru/blog/how-ssd-drives-p...
For the deletion, try running it with a lower I/O priority:

ionice -c2 -n7 COMMAND

You can also use cron to run it at the time of least load on the server, for example at night, for a limited amount of time. Something like this, combining the advice given earlier:
0 2 * * * /usr/bin/timeout 18000 /usr/bin/flock -n /tmp/remove_files.lock -c 'ionice -c2 -n7 find /path/to/folder -name "*" -type f -print | xargs -P 0 /bin/rm -f'

The deletion process will run every night from 2:00 to 7:00 (/usr/bin/timeout 18000 = 5 hours).

Puma Thailand, 2020-02-15
@opium

This is a common problem when there are a lot of files in one folder: scatter them across subfolders (a sketch of that follows), or store them in something like memcached instead.
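A minimal shell sketch of the "scatter into folders" idea (everything here is an assumption for illustration: the path, the sess_ file-name prefix, and two-character buckets). Each file is moved into a subdirectory named after the two characters that follow its prefix, so no single directory accumulates millions of entries:

# shard session files into two-character buckets
cd /path/to/folder || exit 1
for f in sess_*; do
    d=${f:5:2}      # the two characters right after the "sess_" prefix
    mkdir -p "$d" && mv "$f" "$d"/
done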

tvoyadres, 2020-02-20
@tvoyadres

Thank you all for your participation. In general, I think there was some problem with reading and deleting files at the same time: I shut down the web server and was then able to delete the entire folder through VC easily, in a few minutes, with almost no delays. The folder held 37 GB of session and temporary files. Most likely, the Samsung SM951 drive could not handle reading and deleting 15 million files at the same time.
