Probably only works with GNU du and modern Perls.
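The Perl command itself is not shown in this excerpt. A minimal sketch of this kind of GNU du plus Perl human-readable sort (the exact flags and conversion logic are assumptions) might look like:

du -sk * | sort -n | perl -ne '($s,$f)=split(/\t/,$_,2); for (qw(K M G T)) { if ($s < 1024) { printf "%.1f%s\t%s", $s, $_, $f; last } $s /= 1024 }'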
What happens here is we tell tar to create ("-c") an archive of all files in the current dir "." (recursively) and write the data to stdout ("-f -"). Next we pass pv the total size ("-s") of all files in the current dir: du -sb . | awk '{print $1}' returns the number of bytes in the current dir, and that gets fed as the -s parameter to pv. Finally we gzip the whole stream and write the result to the out.tgz file. This way pv knows how much data is still left to be processed and can show us, for example, that it will take another 4 mins 49 secs to finish. Credit: Peteris Krumins http://www.catonmat.net/blog/unix-utilities-pipe-viewer/
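Reconstructed from the description above (the exact command text is not in this excerpt, but every piece is spelled out):

tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | gzip > out.tgz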
I use this on Debian testing; it works like the other sorted du variants, but I like small numbers and suffixes :)
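The command is not quoted here; a likely shape (assuming GNU du and sort, which both support -h) is:

du -sh * | sort -h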
Directories listed in human-readable format
In this case I'm just grabbing the next level of subdirectories (and same-level regular files) with the --max-depth=1 flag; leaving out that flag will just give you finer resolution. Note that you have to use the -h switch with both 'du' and 'sort'.
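Putting the described flags together:

du -h --max-depth=1 | sort -h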
I'm using gawk; you may get varying mileage with other varieties. You might want to change the / after du to, say, /home/ or /var or something, otherwise this command might take quite some time to complete. Sorry it's so obfuscated; I had to turn a script into a one-liner under 255 characters for commandlinefu. Note: the bar ratio is relative, so the highest ratio of the total disk "anchors" the rest of the graph. EDIT: the math was slightly wrong, fixed it. Also made it compliant with older versions of df.
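The original one-liner is not reproduced here. A readable sketch of the same idea, with relative bars anchored to the largest entry (simplified, and assuming directory names without spaces), could be:

du -sk /*/ 2>/dev/null | sort -n | awk '{s[NR]=$1; n[NR]=$2; max=$1} END {for (i=1; i<=NR; i++) {bar=""; for (j=0; j<int(50*s[i]/max); j++) bar=bar"#"; printf "%10d KB  %-20s %s\n", s[i], n[i], bar}}'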
A more efficient way, with reversed order to put the focus on the big ones.
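Presumably something along these lines (an assumption, using GNU sort's -h):

du -sh * | sort -rh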
Use this as a quick and simple alternative to the slightly verbose "du -s --max-depth=1".
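The short form is not shown in this excerpt; the usual candidate (an assumption) is simply:

du -sh *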
Very useful when you need disk space. It calculates the disk usage of all files and dirs (descending into them) located in the current directory, including hidden ones. Then sort puts them in order.
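A sketch matching that description (the hidden-file glob is an assumption; .[!.]* catches most dotfiles):

du -sk .[!.]* * 2>/dev/null | sort -n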
Search for files and list the 20 largest.
find . -type f
gives us a list of files, recursively, starting from here (.)
-print0 | xargs -0 du -h
separates the names of files with NULL characters, so we're not confused by spaces
then xargs runs the du command to find their size (in human-readable form: 64M, not 64123456)
| sort -hr
use sort to arrange the list in size order. sort -h knows that 1M is bigger than 9K
| head -20
finally, select only the top twenty from the list
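Putting the pieces back together, the full pipeline is:

find . -type f -print0 | xargs -0 du -h | sort -hr | head -20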
use "watch" instead of while-loops in these simple cases
Why make it complicated? :] I just noticed someone else has posted this on this site before me (sorry, I am now a duplicate): http://www.commandlinefu.com/commands/view/4313
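For example (the watched command here is just an illustration):

watch -n 1 du -sh .
instead of:
while true; do clear; du -sh .; sleep 1; done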
This command will tell you the 20 biggest directories under your working directory, skipping directories on other filesystems. Useful for resolving disk space issues.
This allows the output to be sorted from largest to smallest in human-readable format.
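A plausible form (assuming GNU du and sort; -x keeps du on one filesystem, as described):

du -xh . | sort -rh | head -20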
Recursively searches the current directory and outputs a sorted list of each directory's disk usage to a text file.
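For instance (the output filename is made up):

du -k . | sort -nr > du_sorted.txt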
Finds all directories containing more than 99MB of files and prints them in human-readable format. The directories' sizes do not include their subdirectories, so it is very useful for finding any single directory with a lot of large files.
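A sketch of that (assuming GNU du; -S leaves subdirectories out of each directory's total, and -m reports megabytes):

du -Sm . | sort -n | awk -F'\t' '$1 > 99 {print $1 "M\t" $2}'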