commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive a minimum of 3 and 10 votes respectively - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Gives the same results as the command by putnamhill using nine fewer characters.
This command is primarily going to work on Linux boxes.
and needs to be changed, for example
Displays a connection histogram of active TCP connections. Works even better under an alias. Thanks @Areis1 for sharing this one.
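The exact command isn't reproduced here; a minimal sketch in the same spirit counts established connections per remote IP and draws a bar of asterisks for each (netstat column positions assume Linux output):

    netstat -an | awk '/ESTABLISHED/ {split($5, a, ":"); print a[1]}' \
      | sort | uniq -c | sort -rn \
      | awk '{printf "%-15s ", $2; for (i = 0; i < $1; i++) printf "*"; print ""}'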
Revised approach to and3k's version, using pipes and read rather than command substitution. This does not require fiddling with IFS when paths have whitespace, and does not risk hitting command-line size limits.
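As a hedged illustration of the difference (the *.log pattern and the gzip action are hypothetical, not from the original command):

    # Command substitution: word-splits paths on whitespace and can hit
    # the kernel's argument-length limit on large trees
    for f in $(find . -name '*.log'); do gzip "$f"; done

    # Piping into read: one path per line, whitespace-safe
    find . -name '*.log' | while IFS= read -r f; do gzip "$f"; done

Filenames containing newlines still defeat the read loop; find -print0 with a null-delimited read handles those too.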
It's less verbose about the missing files, but it stops iterating at the first missing file, so it should definitely be faster.
I expanded all the qlist options to be more self-describing.
Print the members present in both file1 and file2.
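The original command isn't shown here, but a standard way to achieve this with comm(1), assuming it is fine to sort the inputs, is:

    comm -12 <(sort file1) <(sort file2)

The -1 and -2 flags suppress the lines unique to file1 and file2 respectively, leaving only the lines common to both.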
Counts TCP states from netstat output and displays them in an ordered list.
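A minimal sketch of that pipeline (the two skipped header lines and the state in column 6 assume Linux netstat output):

    netstat -ant | awk 'NR > 2 {print $6}' | sort | uniq -c | sort -rn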
The cut should match the relevant timestamp part of the logfile; uniq will then count the number of occurrences within each interval.
Change the cut range for hits per 10 seconds, per minute and so on... grep can be used to filter on URL or source IP.
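For example, assuming the common log format, where field 4 holds a timestamp such as [10/Oct/2000:13:55:36, hits per minute can be counted with:

    awk '{print $4}' access.log | cut -c2-18 | uniq -c

Widening the range to -c2-20 buckets by 10 seconds; narrowing it to -c2-15 buckets by hour.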
awk is evil!
I created this command to give me a quick overview of how many files of each type a directory, and all its subdirectories, contains. It works based on file extension rather than file(1)'s magic output, because that ended up being more accurate and less confusing.
Files that don't have an extension (e.g. README) are generally not important for me to count, but you're free to customize this to fit your needs.
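A minimal sketch of the extension-counting idea (the exact original isn't reproduced here):

    find . -type f -name '*.*' | sed 's/.*\.//' | sort | uniq -c | sort -rn

The greedy .*\. strips everything up to the last dot, leaving just the extension; the -name '*.*' test is what skips extensionless files such as README.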
This searches the Apache error_log for each of the five most significant Apache error levels; if any are found, the date is cut from the output so that the entries can be sorted and the most common occurrence of each error printed.
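A hedged sketch of that loop (the log path, the date format assumed by the sed expression, and the exact five levels are assumptions):

    for level in emerg alert crit error warn; do
        grep "\[$level\]" error_log | sed 's/^\[[^]]*\] //' \
          | sort | uniq -c | sort -rn | head -1
    done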
This command will return a full list of Error 404 pages in the given access log. The following fields are passed to awk:
Hostname ($2), Error code ($9), Missing item ($7), Referrer ($11)
You can then send this into a file (>> /path/to/file), which you can open in OpenOffice as a CSV.
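Put together with the fields listed above (field positions depend on your LogFormat; the output filename is hypothetical):

    awk '$9 == 404 {gsub(/"/, "", $11); print $2 "," $9 "," $7 "," $11}' access.log >> 404s.csv

The gsub strips the surrounding quotes that the referrer field usually carries, keeping the CSV clean.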
Finds the top ten pages returning an HTTP response code of 404 in an Apache log.
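One way to do it, assuming the combined log format where $9 is the status code and $7 the requested path:

    awk '$9 == 404 {print $7}' access.log | sort | uniq -c | sort -rn | head -10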
Most of the "most used commands" approaches do not consider pipes and other complexities.
This approach considers pipes, command substitution with backticks or $(), process substitution with <(, and multiple commands separated by ;.
A Perl regular expression breaks up each line on |, <(, ;, ` or $( and picks the first word of each piece (excluding "do" in the case of for loops), as sketched below.
Note: if you use lots of Perl one-liners, the commands inside them will be counted as well with this approach, since the semicolon is used as a separator.
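The original one-liner isn't reproduced here; a sketch in the same spirit, run from an interactive shell so that history is available:

    history | perl -ne '
        s/^\s*\d+\s+//;                  # strip the history line number
        for my $seg (split /\||;|`|\$\(|<\(/) {
            $seg =~ s/^\s+//;
            my ($cmd) = split /\s+/, $seg;
            next unless defined $cmd and $cmd ne "do";
            print "$cmd\n";
        }' | sort | uniq -c | sort -rn | head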
Get a list of all the unique hostnames from the Apache configuration files. Handy to see what sites are running on a server.
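A hedged sketch (the Debian-style /etc/apache2 path is an assumption; Red Hat systems keep their configuration under /etc/httpd):

    grep -RihE '^[[:space:]]*server(name|alias)' /etc/apache2/ \
      | awk '{for (i = 2; i <= NF; i++) print $i}' | sort -u

ServerAlias lines can carry several hostnames, hence the loop over every field after the directive name.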