commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign-in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta, not in prime time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:
Show apps that are using the internet connection at the moment.
Can be used to discover which programs create internet traffic. Skip the part after awk to get more details, though it will then no longer show only unique processes.
This version will work with other languages such as Spanish and Portuguese, as long as the word for "ESTABLISHED" still contains the fragment "STAB" (e.g. "ESTABELECIDO").
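The language-independence can be checked with a quick pipe (the state names below are fixed sample strings, not live netstat output):

```shell
# Both the English and Portuguese connection-state names contain "STAB",
# so the same grep works in either locale:
printf 'ESTABLISHED\nESTABELECIDO\nLISTEN\n' | grep STAB
```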
This corrects duplicate output from the previous command.
Can be used to discover which programs create internet traffic. Skip the part after awk to get more details.
Does anyone have an idea why uniq doesn't work properly here (see sample output)?
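Most likely because uniq only collapses *adjacent* duplicate lines; on unsorted input, non-adjacent repeats survive. A quick demonstration with made-up process names:

```shell
# uniq only removes adjacent duplicates, so "firefox" still appears twice:
printf 'firefox\nsshd\nfirefox\n' | uniq
# sorting first makes duplicates adjacent; sort -u does both steps at once:
printf 'firefox\nsshd\nfirefox\n' | sort -u
```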
Handles everything except octets containing 255. Ran it through an IP generator with variable octet lengths.
This ensures that you don't match any broadcast or network addresses and stay between 220.127.116.11 - 254.254.254.254.
regex to match an ip
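One possible regex along these lines — an illustrative sketch that accepts octets 0-254, not necessarily the exact pattern from the command above:

```shell
# Alternation covers 250-254, 200-249, 100-199 and 0-99, so 255 is excluded.
octet='(25[0-4]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])'
echo '192.168.0.1' | grep -E "^$octet(\.$octet){3}$"
echo '255.1.1.1'   | grep -E "^$octet(\.$octet){3}$" || echo 'no match'
```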
Makes sure that find does not touch anything other than regular files, and that it handles non-standard characters in filenames when passing them to xargs.
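A sketch of that pattern; the directory and file names here are invented for the demo:

```shell
# -type f skips directories, sockets, etc.; -print0/-0 pass NUL-delimited
# names, so spaces (or even newlines) in filenames survive the trip to xargs.
mkdir -p /tmp/find_demo
printf 'ERROR\n' > '/tmp/find_demo/log with spaces.txt'
find /tmp/find_demo -type f -print0 | xargs -0 grep -l ERROR
```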
needs no GNU tools, as far as I see it
saves one command. Needs GNU grep though :-(
The grep switches eliminate the need for awk and sed. Invoking vim with -p will show all files in separate tabs, -o in separate vim windows. Just wish it didn't hose my terminal once I exit vim!!
This will drop you into vim to edit all files that contain your grep string.
This will allow you to watch as matches occur in real time. To filter for only ACCEPT, DROP, LOG, etc., run the following command: watch 'iptables -nvL | grep -v "0 0" && grep "ACCEPT"'. The -v does an inverted match, i.e. NOT "0 0".
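The inverted match can be seen on a couple of sample lines (fake counter values, not real iptables output):

```shell
# grep -v drops lines that match the pattern, so rules whose packet/byte
# counters are still "0     0" disappear and only active rules remain:
printf '    0     0 ACCEPT all\n   12  3456 DROP   all\n' | grep -v "0     0"
```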
Check which files are opened by Firefox, then sort by largest size (in MB). You can see all open files by replacing the grep pattern with "/". Useful if you'd like to debug and check which extensions or files are taking up too much memory in Firefox.
Best to put it in a file somewhere in your path. (I call the file spath)
IFS=:; find $PATH | grep $1
Usage: $ spath php
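As a script file, it might look like the sketch below. The demo uses a stand-in variable and invented directories instead of the real $PATH, so it can be run safely; the subshell keeps the IFS change from leaking out.

```shell
#!/bin/sh
# Setting IFS=: makes the unquoted expansion split on colons, so find
# receives each directory on the search path as a separate argument.
mkdir -p /tmp/spath_demo/bin1 /tmp/spath_demo/bin2
touch /tmp/spath_demo/bin1/php-config /tmp/spath_demo/bin2/perl
dirs="/tmp/spath_demo/bin1:/tmp/spath_demo/bin2"   # stand-in for $PATH
( IFS=:; find $dirs | grep php )
```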
-exec works better and faster than using a pipe.
doesn't do case-insensitive filenames like -iname, but is otherwise likely to be faster
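The case-sensitivity difference in one demo (directory and filename invented):

```shell
mkdir -p /tmp/iname_demo
touch /tmp/iname_demo/README.TXT
find /tmp/iname_demo -name  'readme.txt'   # -name is case-sensitive: no match
find /tmp/iname_demo -iname 'readme.txt'   # -iname matches README.TXT
```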
to omit "grep -v", put some brackets around a single character
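The trick works because the pattern [s]sh matches the string "ssh" but not the literal text "[s]sh", so grep's own entry in the ps output never matches itself. Simulated here with fixed strings instead of live ps output:

```shell
# In real use this would be: ps aux | grep '[s]sh'
printf 'user  grep [s]sh\nuser  sshd -D\n' | grep '[s]sh'
```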
Shows all those processes; useful when building some massively forking script that could lead to zombies when you don't have your waitpid()'s done just right.
Remove newlines from output.
One character shorter than awk /./ filename and doesn't use a superfluous cat.
To be fair though, I'm pretty sure fraktil was thinking that being able to nuke newlines from any command's output is much more useful than just from one file.
Pipe any output to "grep ." and blank lines will not be printed.
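For example:

```shell
# "grep ." matches any line containing at least one character,
# so empty lines are filtered out:
printf 'first\n\nsecond\n\n' | grep .
```

Note that lines containing only whitespace still get through, since `.` matches a space; something like `grep -v '^[[:space:]]*$'` would catch those too.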
Same thing as above, just uses fetch and ipchicken.com
xargs -P N spawns up to N worker processes. -n 40 means each grep command gets up to 40 file names each on the command line.
This one will work a little better; the regular expression is not 100% accurate for XML parsing, but it will suffice for any valid XML document.
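A regex of that flavour might look like the sketch below — an illustration of the approach, not the exact command under discussion:

```shell
# Naive tag extraction: fine for simple, well-formed markup, but not a real
# XML parser (it ignores CDATA, comments, and attributes containing '>'):
printf '<a href="x">link</a> and <b>bold</b>\n' | grep -oE '<[^>]+>'
```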