linux
PlatinumArcade, 2011-03-09 18:12:14

Linux: a large number of files in one folder - how many is too many?

My hosting provider scared me by saying that keeping more than 3000 files in one folder is highly undesirable because access to those files slows down. Yet after some googling it seems people keep hundreds of thousands of files in a single directory.

So where is the truth?


6 answers
javenue, 2011-03-09
@javenue

From my own experience:
10,000 is quite a normal number.
At 50 thousand and more, it is worth thinking about subfolders and a hierarchy of folders/documents.
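
A minimal bash sketch of such a hierarchy (the ./flat and ./sharded paths are invented for the example): each file is moved into a two-level subfolder derived from an md5 hash of its name, so no single directory grows too large.

for f in ./flat/*; do
    name=$(basename "$f")
    h=$(printf '%s' "$name" | md5sum | cut -c1-4)   # first four hex chars of the hash
    mkdir -p "./sharded/${h:0:2}/${h:2:2}"
    mv "$f" "./sharded/${h:0:2}/${h:2:2}/"
done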

Sergey Rogozhkin, 2011-03-09
@thecoder

The short answer is that it depends on the limitations of the file system.
The exact point at which the slowdown starts is specific to each system.
In practice, the more files in a folder, the slower the listing. Accessing a single file out of 30 thousand by its exact path may be just as fast as usual (as long as you never list the directory in a terminal), but listing it can take several minutes and load the CPU.
Personally, I split files by what is convenient to copy and archive: roughly a few hundred (up to a thousand) per folder.
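
A rough way to see the difference on your own machine (the directory and file names here are just examples): time a full listing against a direct lookup of one known name.

time ls -l /var/files/ > /dev/null      # full listing: reads and sorts every entry
time stat /var/files/known_name.dat     # direct access by name: a single lookup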

Sergey Mushtuk, 2011-03-09
@Osaka

Under Linux, different file systems are used and they have different limitations.
The most common are ext3 and ext4, but ReiserFS, XFS, etc. are not rare either.
For example, an ext3 partition gets a fixed number of inodes when it is created (on the order of 655,360), and once they run out, nothing more can be written.
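
You can check the inode situation on your own partitions with standard tools; nothing system-specific is assumed here:

df -i    # shows total, used and free inodes per mounted file system
# The inode count is chosen when the file system is created,
# e.g. mkfs.ext4 -N 2000000 /dev/sdb1 sets the total number of inodes explicitly.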

Jazzist, 2011-03-10
@Jazzist

The question cannot be answered in this form, because in any particular situation performance depends heavily on a number of other factors.
On the same machine, under different circumstances, you can see noticeable slowdowns when reading a directory with 50,000 files, or already with 1,000.

@sledopit, 2011-03-10

With insane numbers of files, various inconveniences appear, some of which throw inexperienced users into a panic (:
For example, df -h shows that 20 GB of space are still free, yet a file cannot be created and the system reports that space has run out (this happens when the inodes are exhausted). Or rm * and many other glob patterns stop working (the shell complains about too many arguments). A couple of workarounds are sketched below.
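
Two common ways around the "argument list too long" problem (the paths are just examples; both avoid building one huge argument list):

find . -maxdepth 1 -type f -delete                    # let find remove the files itself
find . -maxdepth 1 -type f -print0 | xargs -0 rm --   # or batch the names through xargs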

Anatoly, 2011-03-09
@taliban

A large number of files in a folder actually slows down access to them.
When I had the task of storing hundreds of thousands of files and then retrieving them quickly, I built a hierarchy, as described above. As far as I remember, each file system has its own limit on the number of entries in one folder, something like 65 thousand. So if the number of your files keeps growing, it is better to plan the hierarchy right away.
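
For completeness, here is the lookup side of a hash-based hierarchy like the one sketched earlier (the name and paths are invented for the example): the shard path is recomputed from the file name, so retrieval stays a single direct access with no directory listing.

name="report_2011.pdf"                           # example file name
h=$(printf '%s' "$name" | md5sum | cut -c1-4)    # same hash as when the file was stored
cat "./sharded/${h:0:2}/${h:2:2}/$name"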
