commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Make sure that find touches nothing other than regular files, and handles non-standard characters in filenames when passing them to xargs.
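A sketch of the pattern being described ("pattern" is a placeholder): -type f restricts find to regular files, and the NUL-separated -print0/-0 pair keeps spaces and newlines in filenames intact on the way to xargs.

```shell
# Only regular files; NUL-separated so odd filenames survive the trip.
find . -type f -print0 | xargs -0 grep -l "pattern"
```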
needs no GNU tools, as far as I can see
saves one command. Needs GNU grep though :-(
The grep switches eliminate the need for awk and sed. Invoking vim with -p will show all files in separate tabs; -o shows them in separate vim windows. Just wish it didn't hose my terminal once I exit vim!
This will drop you into vim to edit all files that contain your grep string.
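A minimal sketch of that workflow, with a placeholder search string: grep -rl collects the matching files, and vim -p (shown commented out here) would open them one per tab.

```shell
# Files under the current directory that contain the string:
files=$(grep -rl "search_string" .)
echo "$files"
# Then edit them all, one tab per file:
# vim -p $files        # or -o for split windows
```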
This will allow you to watch matches occur in real time. To filter for only ACCEPT, DROP, LOG, etc., run the following command: watch 'iptables -nvL | grep -v "0 0" | grep "ACCEPT"'. The -v does an inverted match, i.e. NOT "0 0".
Check which files are opened by Firefox, then sort by largest size (in MB). You can see all opened files by replacing the grep pattern with "/". Useful if you'd like to debug which extensions or files are taking too much memory in Firefox.
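Roughly, the size-sorting part could look like this. The printf line stands in for real lsof output (in practice you would pipe in lsof -p "$(pgrep -o firefox)"), and it assumes lsof's SIZE/OFF column is field 7 and NAME is field 9:

```shell
# Sample line standing in for:  lsof -p "$(pgrep -o firefox)"
printf 'firefox 123 u mem REG 8,1 5242880 42 /usr/lib/firefox/libxul.so\n' |
  awk '$7 ~ /^[0-9]+$/ {printf "%.1f MB\t%s\n", $7/1048576, $9}' |
  sort -rn
```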
Best to put it in a file somewhere in your path. (I call the file spath)
IFS=:; find $PATH | grep "$1"
Usage: $ spath php
-exec works better and faster than using a pipe
doesn't do case-insensitive filenames like -iname does, but is otherwise likely to be faster
to omit the "grep -v" step, put brackets around a single character of the pattern
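For instance (a simulated process listing; in real use you would pipe ps aux into the same grep):

```shell
# The grep's own entry contains the literal "[s]shd", which the regex
# [s]shd (i.e. "sshd") does not match -- so no "grep -v grep" needed.
printf 'user 100 grep [s]shd\nroot 200 sshd -D\n' | grep '[s]shd'
# Real use:  ps aux | grep '[s]shd'
```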
Shows all zombie processes; useful when building some massively forking script that could leave zombies when you don't have your waitpid()'s done just right.
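Assuming what's wanted is a list of zombies, one way is to select ps entries whose state field starts with Z (sample input shown; in real use feed it ps -eo stat,pid,comm):

```shell
# Zombies have state "Z" (possibly with flags appended, e.g. "Zs").
printf 'Ss 1 init\nZ 4242 myscript\n' | awk '$1 ~ /^Z/'
# Real use:  ps -eo stat,pid,comm | awk '$1 ~ /^Z/'
```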
Remove newlines from output.
One character shorter than awk /./ filename and doesn't use a superfluous cat.
To be fair though, I'm pretty sure fraktil was thinking being able to nuke newlines from any command is much more useful than just from one file.
Pipe any output to "grep ." and blank lines will not be printed.
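For example:

```shell
# "grep ." keeps only lines with at least one character,
# so empty lines vanish from any pipeline.
printf 'one\n\ntwo\n\n\nthree\n' | grep .
```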
Same thing as above, just uses fetch and ipchicken.com
xargs -P N spawns up to N worker processes. -n 40 means each grep command gets up to 40 file names each on the command line.
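A sketch of the parallel form (the pattern and directory are placeholders):

```shell
# Up to 4 concurrent greps (-P 4), each handed at most 40 filenames
# (-n 40); NUL separation keeps odd filenames intact.
find . -type f -print0 | xargs -0 -P 4 -n 40 grep -l "pattern"
```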
This one works a little better. The regular expression is not 100% accurate for XML parsing, but it will certainly suffice for any valid XML document.
I wanted all the 'hidden' .flv files from the http link in the command line; wget seemed appropriate, fed with output from lynx: grep the flv files, then normalise via sed (to remove the numeric bullet). Similar to the 'Grab mp3 files' fu. Replace the link with your own, and the grep arg with something more interesting ;) See here for something along the same lines...
Hope you find it useful! Improvements welcome, naturally.
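The bullet-stripping step could look like this (sample lynx -dump output shown; real use would pipe in lynx -dump with your URL and send the cleaned URLs on to wget):

```shell
# lynx -dump prefixes links with "  N. "; grep keeps the .flv lines
# and sed strips the numeric bullet, leaving bare URLs for wget.
printf '  1. http://example.com/a.flv\n  2. http://example.com/b.html\n' |
  grep '\.flv' | sed 's/^ *[0-9]*\. *//'
# Real use: lynx -dump "$url" | grep '\.flv' | sed 's/^ *[0-9]*\. *//' | xargs -r wget
```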
Find Word docs by filename in the current directory, convert each of them to plain text using antiword (taking care of spaces in filenames), then grep for a search term in the particular file.
(Of course, it's better to save your data as plain text to make for easier grepping, but that's not always possible.)
Requires antiword. Or you can modify it to use catdoc instead.
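A sketch of the loop form, assuming antiword is on the PATH ("term" is a placeholder):

```shell
# Print the name of each .doc whose extracted text contains the term;
# the NUL-delimited read handles spaces in filenames.
find . -name '*.doc' -print0 |
while IFS= read -r -d '' f; do
  antiword "$f" | grep -q "term" && printf '%s\n' "$f"
done
```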
grep -o puts each occurrence in a separate line
grep's -c outputs how many matching lines there are for a given file as "file:N"; cut takes the N's and awk does the sum.
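For instance, over a set of text files (the pattern and glob are placeholders):

```shell
# grep -c gives "file:N" per file (N = matching lines); cut keeps N,
# awk totals the column.
grep -c "pattern" *.txt | cut -d: -f2 | awk '{s += $1} END {print s}'
```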
I often use "vim -p" to open in tabs rather than buffers.