Sorts the files by integer megabytes, which should be enough to (interactively) find the space wasters. Now you can
dush
for the above output,
dush -n 3
for only the 3 biggest files, and so on. It's always a good idea to have this line in your .profile or .bashrc.
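The line itself isn't quoted above; a minimal sketch of a shell function behaving as described (a reconstruction, not the original) would be:
dush() { du -sm -- * | sort -rn | head -n "${2:-10}"; }   # hypothetical: dush [-n N], default N=10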
What happens here is that we tell tar to create ("-c") an archive of all files in the current dir "." (recursively) and output the data to stdout ("-f -"). Next we specify the size ("-s") to pv of all files in the current dir. The "du -sb . | awk '{print $1}'" returns the number of bytes in the current dir, and it gets fed as the "-s" parameter to pv. Next we gzip the whole content and output the result to the out.tgz file. This way pv knows how much data is still left to be processed and shows us that it will take yet another 4 mins 49 secs to finish. Credit: Peteris Krumins http://www.catonmat.net/blog/unix-utilities-pipe-viewer/
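Reconstructed from the description above (the original command is not quoted here), the full pipeline would be:
tar -cf - . | pv -s $(du -sb . | awk '{print $1}') | gzip > out.tgz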
An ncurses version of the famous old 'du' Unix command
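This presumably refers to ncdu; if so, usage is as simple as:
ncdu /var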
In this case I'm just grabbing the next level of subdirectories (and same-level regular files) with the --max-depth=1 flag. Leaving out that flag will just give you finer resolution. Note that you have to use the -h switch with both 'du' and 'sort'.
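A minimal form consistent with this description (the exact command isn't quoted above):
du -h --max-depth=1 | sort -h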
Did an Oracle DBA remove some logfiles which are still open by the database, and is he complaining that the space has not been reclaimed? Use the above command to find out which PID needs to be stopped. Alternatively, recover the file via:
cp /proc/pid/fd/filehandle /new/file.txt
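The "above command" isn't quoted here; one common way to list files that are deleted but still held open (an assumption, not necessarily the original entry) is:
lsof +L1   # open files with a link count below 1, i.e. unlinked but still open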
A more efficient way, with reversed order to put the focus on the big ones.
Search for files and list the 20 largest.
find . -type f
gives us a list of files, recursively, starting from here (.)
-print0 | xargs -0 du -h
separate the names of files with NULL characters, so we're not confused by spaces
then xargs runs the du command to find their sizes (in human-readable form: 64M, not 64123456)
| sort -hr
use sort to arrange the list in size order. sort -h knows that 1M is bigger than 9K
| head -20
finally, select only the top twenty from the list
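Putting the pieces together, the whole pipeline reads:
find . -type f -print0 | xargs -0 du -h | sort -hr | head -20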
Finds all directories containing more than 99MB of files, and prints them in human-readable format. The directory sizes do not include their subdirectories, so it is very useful for finding any single directory with a lot of large files.
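A sketch of this behavior, assuming GNU du (the original command isn't shown; this prints plain megabytes rather than human-readable units):
du -Sm . | awk '$1 > 99' | sort -rn   # -S excludes subdirectory sizes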
Find the largest file in /var.
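One way to do this with GNU find (a sketch, not necessarily the original entry):
find /var -type f -printf '%s %p\n' 2>/dev/null | sort -nr | head -1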
A somewhat faster version to see the size of our directories. Sizes will be in kilobytes. To view smallest first, change '-k1nr' to '-k1n'.
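Reconstructed from the flags mentioned, the command would be something like:
du -sk * | sort -k1nr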
This command simply outputs the 10 files in the current directory that take up the most space on your disk, in human-readable form.
Using MB it's still readable. ;) A symbol variation: du -ms {,.[^.]}* | sort -nk1
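The main command being varied isn't quoted above; a plausible form matching the description (an assumption) is:
du -hs * | sort -rh | head -10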
Based on the MrMerry one; this just adds some visuals to differentiate files and directories.
This is easy to type if you are looking for a few (hundred) "missing" megabytes (and don't mind the occasional K slipping in)...
A variation without false positives and also finding gigabytes (but - depending on your keyboard setup - more painful to type):
du -hs * | grep -P '^(\d|,)+(M|G)' | sort -n
(NOTE: you might want to replace the ',' according to your locale!)
Don't forget that you can modify the globbing as needed! (e.g. '.[^\.]* *' to include hidden files and directories (with bash))
At its core this is similar to:
http://www.commandlinefu.com/commands/view/706/show-sorted-list-of-files-with-sizes-more-than-1mb-in-the-current-dir
Even simpler! Use du: the -s and -c flags summarize and print a grand total of all files recursively. The -b flag prints in byte format. You can use the -h flag instead to print in human-readable format.
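For example (a sketch of the flags described):
du -sbc *   # per-argument totals in bytes, plus a grand-total line
du -sch *   # the same, but human-readable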
I had the problem that our monitoring showed the "/" filesystem was >90% full. This command helped me find out quickly which subdirs are the biggest. The system has many NFS mounts, therefore the -x (stay on one filesystem).
List the top 15 folders by decreasing size in MB.
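A sketch consistent with both descriptions above (the exact commands aren't quoted; -x keeps du on one filesystem, so NFS mounts are skipped):
du -xm / | sort -rn | head -15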
Same result as with 'du -ks .[^.]* * | sort -n', but with sizes output in human-readable format (e.g., 1K, 234M, 2G).
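The human-readable variant would be something like (a reconstruction):
du -hs .[^.]* * | sort -h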
A little bit smaller and faster, and it should handle files with special characters in the name.
This deals nicely with filenames containing special characters and can handle more files than fit on a command line. It also avoids spawning du.
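One way to get this behavior with GNU find alone (an assumption; the original command isn't shown):
find . -type f -printf '%s\t%p\n' | sort -nr | head -20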
If you're only using -m or -k, you have to remember whether the sizes are in megabytes or kilobytes. Using -B instead includes the unit of measurement in the output, which helps you read the result faster. You can try -B K as well.
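For example (assuming GNU du's -B/--block-size option):
du -s -B M * | sort -n   # sizes all print with an explicit M suffix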