function df_func { local dfts; dfts=$(ssh "$1" "df -lP | tail -n +2 | sed 's/%//'"); if ! echo "$dfts" | awk '$5 > 90 {exit 1}'; then echo -n "$1 "; echo "$dfts" | awk '$5 > 90 {printf "%s %d%%\n", $6, $5}'; fi; }

Connects to the specified host over ssh and reports any local file systems whose disk usage exceeds 90%. (Note the quoting of "$dfts": without it, the multi-line df output collapses onto one line and the awk field tests break for all but the first file system.)


0
By: wytten12
2016-03-30 19:57:39
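
Usage is simply df_func hostname. A minimal sketch for sweeping several machines; the hostnames are placeholders, and key-based ssh login is assumed so the loop runs unattended:

    # Report any nearly-full file systems across a set of hosts.
    # web01, web02 and db01 are hypothetical hostnames.
    for h in web01 web02 db01; do
        df_func "$h"
    done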

These Might Interest You

  • The above command will send 4GB of data from one host to the next over the network, without consuming any unnecessary disk space on either the client or the host. This is a quick and dirty way to benchmark network speed. Change the byte size and count as necessary. The command also doesn't rely on any third-party utilities, since dd, ssh, cat, /dev/zero and /dev/null are installed on all major Unix-like operating systems. (A sketch that turns this into a throughput figure follows this list.)


    5
    dd if=/dev/zero bs=4096 count=1048576 | ssh user@host.tld 'cat > /dev/null'
    atoponce · 2010-06-08 18:49:51 5
  • A short alias that gives an at-a-glance report of disk usage, memory and swap on a Linux system.


    -9
    alias dfr='df;free'
    ximo88 · 2009-04-28 11:30:31 0
  • Reports all local partitions with more than 90% usage. Add it to a crontab and you'll get a mail whenever a disk is nearly full (mail delivery to the root user must be working for this). (A sample crontab entry follows this list.)


    2
    df -l | grep -e "9.%" -e "100%"
    dooblem · 2010-04-26 17:57:54 0
  • ncdu is a text-mode, ncurses-based disk usage analyzer, useful when you want to see where all your space is going. For a single flat directory it isn't more elaborate than du | sort, but it analyzes all directories below the one you specify, so space consumed by files inside subdirectories is taken into account and you get the full picture. Features: file deletion, file size or size on disk, and refresh as contents change. Homepage: http://dev.yorhel.nl/ncdu


    4
    ncdu directory_name
    bwoodacre · 2009-06-09 00:02:48 3
  • (WARNING) This will not help on all systems; it is aimed at large, high-speed hardware RAID arrays, for example Dell PERC 5/i SAS/SATA arrays. If you have a hardware RAID array, try it; it certainly won't hurt. You can test disk read speed with a large file before and after the change: time dd if=/tmp/disk.iso of=/dev/null bs=256k. To read the current readahead value of a block device: blockdev --getra /dev/sdb. Then try values such as 1024, 2048, 4096, 8192, maybe 16384; the right value depends on the number of hard disks, their speed, your RAID controller, etc. (A small benchmarking loop follows this list.)


    2
    blockdev --setra 1024 /dev/sdb
    starchox · 2009-02-18 16:27:01 1
  • Recursively scans the current directory and writes a sorted list of each directory's disk usage to a text file.


    6
    du | sort -gr > file_sizes
    chrisdrew · 2009-01-26 01:12:54 2
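
For the dd-over-ssh benchmark, dd itself prints bytes copied, elapsed time and an average rate on stderr when it finishes, so running the pipe is enough to get a throughput figure. A minimal sketch; user@host.tld is a placeholder as in the original, and the count is reduced here to 1GiB for a quicker run:

    # Push 1 GiB of zeros over ssh; dd's final stderr line reports the
    # average transfer rate, which approximates network throughput
    # (minus ssh encryption overhead).
    dd if=/dev/zero bs=4096 count=262144 | ssh user@host.tld 'cat > /dev/null'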
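
For the df/grep one-liner, a hedged example of the crontab entry described above, written /etc/crontab-style (note the user field; a user crontab omits it). The hourly schedule is just an illustration. One gotcha: cron treats an unescaped % as a newline, so the percent signs in the grep patterns must be backslash-escaped:

    # Check hourly; cron mails any matching df lines to root.
    # \% is required because cron turns a bare % into a newline.
    0 * * * * root df -l | grep -e "9.\%" -e "100\%"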
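
For the readahead tuning, a small sketch that sweeps a few values and times the same large sequential read after each change. /dev/sdb and /tmp/disk.iso are the placeholders from the original (the test file should live on the array being tuned), dropping the page cache is Linux-specific, and the whole loop must run as root:

    # Try several readahead values and time an identical sequential read.
    for ra in 256 1024 2048 4096 8192; do
        blockdev --setra "$ra" /dev/sdb            # readahead is in 512-byte sectors
        sync && echo 3 > /proc/sys/vm/drop_caches  # flush page cache so runs are comparable
        echo "readahead=$ra:"
        time dd if=/tmp/disk.iso of=/dev/null bs=256k
    done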
