nonvon, 2022-01-20 13:57:23
linux

When trying to list the /root directory with ls, the process freezes. How can I fix it?

In fact, about 15 million files were generated in this folder by a cron job running wget.

I fixed the original problem and cleaned out the folder, but now any attempt to read the directory hangs everything. SSH still works, and there is nothing suspicious in the logs. And yes, the hung process cannot be killed; only a reboot helps. Until the process is gone, the server does not respond to other commands from root. The permissions on the folder look normal, and the cd command works. How do I fix this? Maybe the filesystem is shot? Has anyone run into this?

And yes - at first everything was normal.

[screenshot: 61e949d4a3407759946592.png]
[screenshot: 61e949f00ae2b031463711.png]

What confuses me is the size of the directory itself: it is still 600 MB.


5 answer(s)
nonvon, 2022-01-20
@nonvon

In general, the problem is solved.
[screenshot: 61e950d2311dc988024137.png]
That's what was in the folder. It turns out this script (a panel installer) somehow got launched, possibly because of incorrect permissions on the folder (the permissions shown in the screenshots in the topic are correct).
Thanks to everyone who tried to help; I was shocked by the problem myself. Sometimes it works, sometimes it doesn't.
By the way, can anyone tell me why this happened?

shurshur, 2022-01-20
@shurshur

By default, ls not only reads the directory but also calls stat() on every file to color-code its type and executable bit, which slows things down enormously.
Work around it by calling ls without the options set in the alias:

$ alias ls
alias ls='ls --color=auto'
$ ls    # invokes the alias
$ 'ls'  # invokes plain ls without options

But it's better not to print the listing to the console at all (especially over a network connection), since that will take a very long time; redirect it to a file instead and view that:
'ls' /root > /tmp/list_of_files
less /tmp/list_of_files

The easiest way to delete files is with find:
find /root -name 'index.html.*' -delete
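A minimal sketch of that cleanup, run against a throwaway scratch directory instead of /root (the file names below just imitate wget's index.html.1, index.html.2, ... numbering):

```shell
#!/bin/sh
# Sketch: remove matching files with find -delete. A glob such as
# "rm index.html.*" would expand 15 million names on the command line
# and fail with "argument list too long"; find deletes them one by one.
dir=$(mktemp -d)                 # scratch stand-in for /root

# simulate the wget clutter, plus one unrelated file that must survive
for i in $(seq 1 100); do : > "$dir/index.html.$i"; done
: > "$dir/keep.txt"

find "$dir" -name 'index.html.*' -delete

ls "$dir"                        # prints: keep.txt
```

With -delete, find removes each match as it walks the directory, so nothing is ever accumulated in memory or on a command line.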

AVKor, 2022-01-20
@AVKor

ls /root

Actually, 15 million files were generated in this folder by cron using wget

Nobody faced?

No. root exists to administer the system, not for downloading trillions of files into its home directory.

Oleg Volkov, 2022-01-20
@voleg4u

Ext4 is good; I was afraid you had btrfs there. So everything is fine. "ls" hangs because there are still millions of entries to process. The operation causes frenzied disk I/O and uses a lot of memory (or, worse, swap), which is why everything freezes. If you look at the size of the directory itself with "ls -lhd /root", you'll be horrified. The best repair has already been suggested to you: save everything you need (usually .ssh; everything else can be re-acquired), delete the entire directory, then create it again and put the saved files back.

Saboteur, 2022-01-20
@saboteur_kiev

I suspect that the filesystem grows the directory structure to hold the list of 15 million files, but does not know how to shrink the directory back down.
That is, when the files were created, blocks were allocated for "/root" and stayed allocated, so ls now has to read through that entire volume even though only a few entries are still in use.
I recommend creating a new directory, moving all the visible contents into it, deleting the old one, and then renaming the new directory back to /root.
Although it seems that has already been done.
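A sketch of that repair on a scratch directory (the names are illustrative: on the real system the old directory would be /root, and the marker file stands in for things like .ssh that must be preserved):

```shell
#!/bin/sh
# Sketch: rebuild a directory whose inode has grown huge by moving its
# contents into a fresh directory and renaming it into place.
old=$(mktemp -d)                 # stands in for the bloated /root
: > "$old/.keep_me"              # stands in for .ssh and friends

new="$old.new"
mkdir "$new"

# Move the surviving contents, dotfiles included. In POSIX sh the two
# globs together cover normal names and .names (but not . or ..).
for f in "$old"/* "$old"/.[!.]*; do
    [ -e "$f" ] && mv "$f" "$new"/
done

rm -r "$old"                     # drops the oversized directory inode
mv "$new" "$old"                 # same name, fresh small directory

ls -a "$old"                     # .keep_me is back under the old name
```

On a live system the old directory must not be in use while this runs, and for /root specifically the rename should happen from a session whose working directory is elsewhere.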
