Commands tagged df (14)

  • An Oracle DBA removed some logfiles which are still held open by the database, and he is complaining that the space has not been reclaimed? Use the command below to find out which PID needs to be stopped. Or alternatively recover the file via: cp /proc/pid/fd/filehandle /new/file.txt (a short recovery sketch follows this entry).


    12
    find -L /proc/*/fd -links 0 2>/dev/null
    res0nat0r · 2009-06-26 18:42:51 2
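
    A minimal sketch of the recovery workflow described above; the PID 12345 and fd number 7 are purely illustrative placeholders, while the find command and the /proc copy trick come from the entry itself:

        find -L /proc/*/fd -links 0 2>/dev/null   # list open-but-deleted files (link count 0)
        # suppose it prints /proc/12345/fd/7: either stop PID 12345 to release the space,
        # or copy the still-open data out first
        cp /proc/12345/fd/7 /new/file.txt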

  • 3
    df -h |awk '{a=$5;gsub(/%/,"",a);if(a > 50){print $0}}'
    kulldox · 2019-05-01 16:33:47 0
  • Reports all local partitions with 90% or more usage. Just add it to a crontab and you'll get a mail whenever a disk is nearly full (mail delivery to the root user must work for that). A sample crontab line follows this entry.


    2
    df -l | grep -e "9.%" -e "100%"
    dooblem · 2010-04-26 17:57:54 0
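
    A minimal sketch of the crontab line mentioned above; the daily 08:00 schedule and the system-wide /etc/crontab format (with its extra user field) are assumptions, not part of the original entry. Note that % is special to cron and must be escaped as \%:

        # /etc/crontab: run daily at 08:00 as root; cron mails any output (the nearly-full filesystems) to root
        0 8 * * * root df -l | grep -e "9.\%" -e "100\%"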
  • Show disk space info, grepping out the uninteresting lines beginning with ^none while we're at it. The main point of this submission is the way it keeps the header row in place by using a command grouping: the header is read off the pipeline before the rest gets fed into the sort command. (I'm surprised sort doesn't have an option to skip a header row, actually.) It took me a while to work out how to do this; I thought of it as I was drifting off to sleep last night! A generalized version of the trick follows this entry.


    2
    df -h | grep -v ^none | ( read header ; echo "$header" ; sort -rn -k 5)
    purpleturtle · 2011-03-16 14:25:45 1
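
    The header-preserving trick generalizes nicely; a small sketch of it as a reusable shell function (the function name is my own invention, the df pipeline is the one from the entry):

        # print the first line untouched, then sort the rest with whatever options are passed in
        sort_keep_header() { IFS= read -r header; printf '%s\n' "$header"; sort "$@"; }
        df -h | grep -v '^none' | sort_keep_header -rn -k 5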
  • Put it into some script file: each run appends the script's own last lines to itself, so the file keeps growing. No special purpose, just for fun...


    1
    tail $0 >> $0
    yooreck · 2009-12-07 12:33:41 1
  • Display the size (human-readable) of all the directories (and files) directly under your home directory (~).


    1
    du -sh ~/*
    unixmonkey13748 · 2010-11-05 10:20:16 2
  • Show off how big your disks are.


    1
    df -h --total | awk 'NR==1; END{print}'
    PROJAK_SX · 2014-03-01 19:52:50 0
  • With this command, you can check the difference between the volumes that are mounted and the volumes listed in /etc/fstab.


    0
    diff <(cat /etc/fstab | grep vol | grep -v "^#" | awk '{print $1}') <(df -h | grep vol)
    Koobiac · 2014-01-23 15:18:08 1
  • This will write a backup of files/folders to TAPE (LTO3-4 in my case). Could be changed to write to DVD/Blu-ray.

    Go to the directory where you want the output files written: cd /bklogs. Put a name in bkname="Backup1" and the folders/files to back up in tobk="/home /var/www". It will create a tar and write it to the tape drive on /dev/nst0. In the process it will:

    1) generate a sha512 sum of the tar to $bkname.sha512, so you can validate that your data is intact;
    2) generate a file list of the contents of the tar, with file sizes, to $bkname.lst;
    3) buffer the tar stream to prevent shoe-shining the tape (I use 4 GB for LTO3 at 80 MB/s and 8 GB for LTO4 at 120 MB/s; 3 TB USB3 disks support those speeds, otherwise I use 3x2 TB raidz);
    4) show the buffer's in/out speed and used space;
    5) show a progress bar with a time estimate using pv.

    ADD: To eject the tape, append: ; sleep 75; mt-st -f /dev/nst0 rewoffl

    TODO:
    1) When using old tapes, if the buffer fills up and the drive slows down, it means the tape is old and should be replaced instead of being wiped and recycled for another backup. Logging where and when it slows down could provide good information on the wear of the tape. I don't know how to get that information from the mbuffer output and trigger a message like "This tape slowed down X times at Y1gb, Y2gb, Y3gb down to Zmb/s for a total of 30sec. It would be wise to replace this tape next time you want to write to it."
    2) Fix the file size approximation.
    3) Save all the output to $bkname.log, with progress updates as new lines (anyone have an idea?).
    4) Support spanning across multiple tapes.
    5) Replace the tar format with something else (dar?); looking at xar right now (https://code.google.com/p/xar/): XML metadata could contain per-file checksums, the compression algorithm (bzip2, xz, gzip), GnuPG encryption, thumbnails, video previews, image EXIF... But that's another project.

    TIP:
    1) You can specify the width of the pv progress bar. If it is wider than the terminal, each refresh is written to a new line; that way you can see whether there were speed slowdowns during writing.
    2) Remove the v from the tar argument cvf to avoid listing every file added to the archive.
    3) You can get tarsum (http://www.guyrutenberg.com/2009/04/29/tarsum-02-a-read-only-version-of-tarsum/) and add >(tarsum --checksum sha256 > $bkname_list.sha256) after the tee to generate checksums of individual files!

    A sketch of reading the tape back to verify the checksum follows this entry.


    0
    bkname="test"; tobk="*" ; totalsize=$(du -csb $tobk | tail -1 | cut -f1) ; tar cvf - $tobk | tee >(sha512sum > $bkname.sha512) >(tar -tv > $bkname.lst) | mbuffer -m 4G -P 100% | pv -s $totalsize -w 100 | dd of=/dev/nst0 bs=256k
    johnr · 2014-07-22 15:47:50 1
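
    The entry says the sha512 sum lets you validate that the data on tape is intact; a hedged sketch of what that check might look like when reading the tape back (the device name and block size match the backup command, but the verification procedure itself is not part of the original entry):

        mt-st -f /dev/nst0 rewind                  # rewind to the start of the archive
        # recompute the sha512 of the stream coming off the tape and compare it to the stored sum;
        # both have the form "HASH  -" because each sha512sum read from stdin
        dd if=/dev/nst0 bs=256k | sha512sum | diff - "$bkname.sha512" && echo "checksum OK"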
  • To be OS-independent, try df -Pk first (Linux) and, if it does not work (that's what the || is for), use df -k (e.g. for Solaris, HP-UX, AIX). To get the output in a single line, use the additional cat.


    0
    (df -Pk 2>/dev/null|| df -k) | cat
    ffeldhaus · 2015-01-15 22:38:36 0

  • 0
    function df_func { local dfts=$(ssh $1 "df -lP | tail -n +2 | sed 's/%//'"); echo $dfts | awk '$5 > 90 {exit 1}' > /dev/null; if [ $? == 1 ]; then echo -n "$1 "; echo $dfts | awk '$5 > 90 {printf "%s %d%%\n", $6, $5}'; fi }
    wytten12 · 2016-03-30 19:57:39 0
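
    There is no description for this one, so here is a readable, commented sketch of the same idea (the 90% threshold and the "host mountpoint usage%" output come from the one-liner; quoting "$dfts" so that each filesystem stays on its own line, and the inverted exit-status test, are my adjustments):

        # Report filesystems above 90% usage on a remote host.
        df_func() {
            local host=$1
            # remote df: POSIX format, drop the header line, strip the % sign from Use%
            local dfts
            dfts=$(ssh "$host" "df -lP | tail -n +2 | sed 's/%//'")
            # only print anything if at least one filesystem is over the threshold
            if echo "$dfts" | awk '$5 > 90 {found=1} END {exit !found}'; then
                printf '%s ' "$host"
                echo "$dfts" | awk '$5 > 90 {printf "%s %d%%\n", $6, $5}'
            fi
        }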

  • -1
    df | awk '{if ($2!=dspace) print "different"; dspace=$2;}'
    adimania · 2013-03-08 12:09:35 1

  • -1
    df -h | sort -r -k 5 -i
    Mohammad · 2016-12-05 16:00:31 0
  • For disk-space-constraint testing. Leaves a little space available for creating temp files, etc. Easily free up the used disk space again by deleting the dummy00 file. You can tailor the test by building smaller 'blocks' to suit your needs. WARNING: do not do this to the '/' (root) filesystem unless you know what you are doing; on some systems it could crash the OS. A usage sketch follows this entry.


    -3
    dd if=/dev/zero of=/fs/to/fill/dummy00 bs=8192 count=$(df --block-size=8192 / | awk 'NR!=1 {print $4-100}')
    arcege · 2009-12-03 15:20:18 0
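
    A usage sketch of the fill-then-clean-up cycle the description outlines; /fs/to/fill is the placeholder path from the command itself, and computing the block count from that same filesystem (rather than from /) is my assumption about the intent:

        # fill the target filesystem, leaving roughly 100 blocks of 8192 bytes free
        dd if=/dev/zero of=/fs/to/fill/dummy00 bs=8192 \
           count=$(df --block-size=8192 /fs/to/fill | awk 'NR!=1 {print $4-100}')
        df -h /fs/to/fill          # confirm the filesystem is now nearly full
        rm /fs/to/fill/dummy00     # free the space again once testing is done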
