Why does file deletion on Samsung SSDs stall with pauses on a CentOS 6 server?
There are 2 disks:
SM961, 256 GB, ext4 file system, has worked for 3 years.
PM883, 8 TB, ext4 file system, brand new, has worked for 10 days.
The system is CentOS 6.10, with the open-file limit raised to 65535.
I had not paid attention to this problem before, but yesterday I noticed it by accident.
Copying the MOD_TMP folder with about 15 million session files from disk 1 (SM951, NVMe) to disk 2 (PM883, SATA) flies along with no slowdown. But when I start deleting the folder, the stalling begins, no matter which disk: it deletes quickly for a few seconds, then pauses, deletes quickly again, pauses again, and so on. Meanwhile people cannot use the site normally; for example, uploading files to the server through the site becomes terribly slow.
It seems to me that when deleting lots of small files I am running into some setting or limit. Writing and reading, after all, work without any problems. The disks themselves are fine, in case you ask.
You have run into the limitations of rm. Use find:
find /path/to/folder -type f -print0 | xargs -0 /bin/rm -f
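If your find is GNU findutils, the -delete action does the same job without spawning rm at all:

find /path/to/folder -type f -delete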
If a folder with upwards of a billion files gets deleted regularly, the task can be reconsidered architecturally: for example, mount that folder as a separate volume and reformat it instead of deleting. That is much faster, because deleting files one by one requires committing a transaction for each file, and those are redundant operations that just generate a stream of IOPS against the ext4 data structures. It is like a database: what the author is doing amounts to deleting every row from a table with a commit per row, while what I am proposing is, in effect, a TRUNCATE TABLE.
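A minimal sketch of that approach, assuming the session folder sits on its own dedicated partition (the device /dev/sdb1 and the mount point here are placeholders):

# stop whatever writes the sessions, then recreate the filesystem instead of rm -rf
umount /path/to/folder
mkfs.ext4 -q /dev/sdb1
mount /dev/sdb1 /path/to/folder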
Most likely it comes down to how many files you have, how full the disks are, and how an SSD deletes data: unlike an HDD, an SSD has to perform considerably more internal work.
Description: https://recovery-software.ru/blog/how-ssd-drives-p...
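A hedged way to check this side of things: if your kernel and util-linux build support fstrim (later CentOS 6 updates do), you can discard freed blocks manually and see how much gets trimmed (the mount point is an example):

fstrim -v /path/to/folder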
For the deletion, try running it at a lower I/O priority:
ionice -c2 -n7 COMMAND
0 2 * * * /usr/bin/timeout 18000 /usr/bin/flock -n /tmp/remove_files.lock -c 'ionice -c2 -n7 find /path/to/folder -type f -print0 | xargs -0 -P 0 /bin/rm -f'
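Note that ionice class 2 ("best effort") priorities only take effect under the CFQ I/O scheduler, which is the CentOS 6 default; you can check which scheduler a disk uses (sda here is just an example device):

cat /sys/block/sda/queue/scheduler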
A common problem when there are a lot of files in one folder: scatter them across subfolders, or store them in memcached instead.
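A minimal sketch of the scattering idea, assuming bash and PHP's default sess_<id> file naming (the path and the two-character bucket scheme are assumptions):

cd /path/to/folder
# move each session file into a bucket named after the first two characters of its id
find . -maxdepth 1 -type f -name 'sess_*' -print0 | while IFS= read -r -d '' f; do
  name=${f#./}
  dir=${name:5:2}
  mkdir -p "$dir" && mv "$name" "$dir/$name"
done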
Thank you all for your participation. In general, I think there was some problem with reading and deleting files at the same time. I shut down the web server and was then able to delete the entire folder easily via VC, in a few minutes, with almost no stalls; the folder held 37 GB of session files and temporary files. Most likely the Samsung SM951 drive could not handle reading and deleting 15 million files simultaneously.