Maintaining minimal free disk space on Linux?
There is a server with a separate disk for the cache. The cache size can jump around sharply, for example:
- 30 Gb
- 40 Gb
- 35 Gb
- 120 Gb
- 70 Gb
At the moment, the cache is cleared by cron on a "delete files older than X days" basis.
In general everything works, but the disk space is not used optimally: on average only about 30% of the disk is occupied, and the rest is needed only at the peaks of the surges.
Is there some utility, or even a banal find . -delete, that could automatically delete the oldest files once the disk reaches, say, 90% full? A kind of preemptive cache eviction on the disk.
Only the storage of ordinary files is of interest, without cloud solutions and the like.
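
For reference, the current age-based cleanup is a cron entry along these lines (the cache path and the seven-day cutoff are illustrative, not the actual values):

# daily at 03:00: remove cache files untouched for more than 7 days
0 3 * * * find /srv/cache -type f -mtime +7 -delete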
A "banal find . -delete" wrapped in a small script will do. For example:
#!/bin/sh
FILESYSTEM=/dev/sda1 # or whatever filesystem to monitor
CAPACITY=95          # delete if FS usage is over 95%
CACHEDIR=/home/user/lotsa_cache_files/
# Proceed if filesystem usage is over the value of CAPACITY (using df POSIX syntax);
# using [ instead of [[ for portability and better error handling.
if [ "$(df -P "$FILESYSTEM" | awk '{ gsub("%",""); capacity = $5 }; END { print capacity }')" -gt "$CAPACITY" ]
then
    # Do a reasonably safe removal: if $CACHEDIR is empty or is not a directory,
    # find exits with an error, which protects against misruns.
    find "$CACHEDIR" -maxdepth 1 -type f -exec rm -f {} \;
    # Drop -maxdepth and -type (and switch to rm -rf) for a recursive removal
    # of files and directories:
    # find "$CACHEDIR" -mindepth 1 -exec rm -rf {} \;
fi
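
To make this preemptive rather than manual, the script can be scheduled from cron just like the existing age-based cleanup; the five-minute interval and the script path below are assumptions:

# check disk usage every five minutes (hypothetical script location)
*/5 * * * * /usr/local/bin/prune_cache.sh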
#!/bin/sh
DIR=/tmp
FREESPACE=1000000   # minimum free space to keep, in 1K blocks (df's default unit)

# Walk the files oldest-first (by mtime) and delete until enough space is free.
# find -printf piped to sort is more reliable than "xargs ls -1rt", which
# mis-sorts as soon as the file list is split across several xargs batches.
find "$DIR" -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2- |
while read -r f ; do
    if [ "$(df --output=avail "$DIR" | tail -1)" -ge "$FREESPACE" ] ; then
        break
    fi
    rm -f "$f"   # swap in "echo $f" for a dry run first
done
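
Combining the two ideas gives exactly what the question asks for: evict the oldest files only once usage crosses a threshold such as 90%. A minimal sketch, assuming GNU find and df and a hypothetical cache path:

#!/bin/sh
CACHEDIR=/srv/cache   # hypothetical cache location
LIMIT=90              # start evicting above this usage, in percent

# Current usage of the filesystem holding $CACHEDIR, as a bare number.
usage() { df -P "$CACHEDIR" | awk 'NR==2 { gsub("%",""); print $5 }'; }

# Walk files oldest-first; stop as soon as usage drops back under the limit.
find "$CACHEDIR" -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2- |
while read -r f ; do
    [ "$(usage)" -le "$LIMIT" ] && break
    rm -f "$f"
done

Calling df on every iteration is slow when the cache holds many small files, but it keeps the logic simple and stops exactly at the threshold.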