Often you need to find the files that are taking up the most disk space in order to free up space as soon as possible. This script can be run on the entire filesystem as root, or on a home directory, to find the largest files.
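The script itself isn't reproduced here; a minimal sketch matching the description (the path is illustrative, and you'd run it as root to scan the whole filesystem) is:

find / -type f -exec du -k {} + 2>/dev/null | sort -n | tail -n 10

Replace / with a home directory such as ~ to limit the scan.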
Greater than 500M and sorted by size.
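The command itself isn't shown; one way to express "greater than 500M, sorted by size", assuming GNU find and sort, is:

find . -type f -size +500M -exec du -h {} + 2>/dev/null | sort -hr

Note that -size +500M and sort's -h (human-numeric) are GNU extensions.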
This combines the above two commands into one. Note that you can leave off the last two commands and simply run it as "find /home/ -type f -exec du {} \; 2>/dev/null | sort -n | tail -n 10". The last two commands above just convert the output into human-readable format.
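Those "last two commands" aren't reproduced above; assuming they simply re-run du in human-readable mode on the ten paths, the full pipeline might look like:

find /home/ -type f -exec du {} \; 2>/dev/null | sort -n | tail -n 10 | cut -f2- | xargs -d '\n' du -h

(du separates size and path with a tab, so cut -f2- keeps the path; xargs -d '\n' is a GNU extension.)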
du -x option to not go across mounts (you usually want to run that command to find what to destroy in that partition), -a option to also list . files, -k to display in kilobytes; sort -n to sort in numerical order, biggest files last; tail -10 to only display the biggest 10.
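Putting those flags together, the command presumably reads something like:

du -xak . | sort -n | tail -10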
Show the top 10 files by size.
Counts the frequency of words in a file.
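The command isn't shown; a common shape for a word-frequency pipeline (FILE is a placeholder) is:

tr -cs '[:alpha:]' '\n' < FILE | tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -rn

This splits on non-letters, lowercases everything, then counts and ranks the words.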
OpenVZ: Get disk quota usage for your VEID.
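The exact command isn't reproduced; assuming you're on the hardware node with quotas enabled, vzquota's stat subcommand reports usage for a container ($VEID is a placeholder):

vzquota stat $VEID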
A shorter version.
If you use only -m or -k, you have to remember whether the sizes are in megabytes or kilobytes. Using -B prints the unit of measurement alongside each size, which lets you read the result faster. You can try -B K as well.
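For example, assuming GNU df, the megabyte form described here would be:

df -B M

and the kilobyte variant is df -B K.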
Original: https://bugzilla.redhat.com/show_bug.cgi?id=194342
Shows the full output of lsof.
We can get useful statistics from tcpdump with this simple command. Thanks to Babak Farrokhi for teaching me this ;)
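The command itself isn't shown; one hedged example of pulling quick statistics out of tcpdump (the interface, packet count, and the choice of counting source addresses are all illustrative):

tcpdump -tnn -c 1000 -i eth0 2>/dev/null | awk '{print $2}' | cut -d. -f1-4 | sort | uniq -c | sort -rn | head

This captures 1000 packets and ranks the busiest source IPv4 addresses.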
Useful for checking the number and state of TCP connections.
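A classic form of this check (the original command isn't reproduced) is:

netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn

which tallies connections by state (ESTABLISHED, TIME_WAIT, and so on).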
You can use any dictionary you want, in any language. Given the clippings file, this command outputs all single-word annotations that you have underlined on your Kindle device, filtered against a list of language-specific words. If you want to learn vocabulary, this command is ideal.
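The original command isn't reproduced; a rough sketch of the idea, assuming the Kindle clippings file is named 'My Clippings.txt' and /usr/share/dict/words serves as the word list, is:

grep -xE '[[:alpha:]]+' 'My Clippings.txt' | sort -u | comm -12 - <(sort /usr/share/dict/words)

i.e., keep the lines that consist of a single word and intersect them with the dictionary.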
Invoke from the command line as:

timeDNS commandlinefu.com

This isn't terribly practical, but it is a good code example of using subshells to run the queries in parallel and of an "anonymous function" (a.k.a. "inline group") to group I/O.

I'm assuming you have already defined your local DNS cache as ${local_DNS} (here, it's 192.168.0.1).

You do need to install `moreutils` to get `sponge`.

If you're willing to wait, a slower version without sponge (and without sorting) is this:

DNS () { for x in "192.168.0.1" "208.67.222.222" "208.67.220.220" "198.153.192.1" "198.153.194.1" "156.154.70.1" "156.154.71.1" "8.8.8.8" "8.8.4.4"; do (echo -n "$x "; dig @"$x" "$*"|grep Query) ; done ; }
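The parallel version the comment describes (subshells for concurrency, an inline group around the loop, sponge to collect the output) is not reproduced above; a reconstruction under those assumptions might look like:

timeDNS () { { for x in "${local_DNS}" "208.67.222.222" "208.67.220.220" "198.153.192.1" "198.153.194.1" "156.154.70.1" "156.154.71.1" "8.8.8.8" "8.8.4.4" ; do (echo -n "$x " ; dig @"$x" "$*" | grep Query) & done ; wait ; } | sponge | sort -n -k5 ; }

Here sort -n -k5 orders the resolvers by dig's reported query time.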
as per eightmillion's comment. Simply economical :)
Creates an associative array from Apache logs; assumes the "combined" log format or similar. Replace the awk column to suit your needs; bandwidth per IP is also useful. Have fun. I haven't found a more efficient way to do this as yet. (Fixed typo: the log file should obviously go after awk, which then pipes into sort.)
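A minimal sketch of the technique, assuming combined log format and counting hits per client IP (the log path is illustrative):

awk '{ hits[$1]++ } END { for (ip in hits) print hits[ip], ip }' /var/log/apache2/access.log | sort -rn | head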
commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):