commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
You can find specific files (matching -iname "*sql*") across more than one directory (for example both /etc/ and /pentest/), and then grep the results for lines containing the word "map".
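A self-contained sketch of that idea; throwaway directories stand in for /etc/ and /pentest/, and "map" is the word being grepped for:

```shell
# Build a small demo tree, then search both directories for files
# named *sql* and print the lines in them that contain "map".
tmp=$(mktemp -d)
mkdir -p "$tmp/etc" "$tmp/pentest"
printf 'port map entry\n' > "$tmp/etc/mysql.conf"
printf 'nothing here\n'   > "$tmp/pentest/sqlmap-notes.txt"
find "$tmp/etc" "$tmp/pentest" -iname "*sql*" -type f -exec grep -H "map" {} \;
```

With real paths you would simply replace the two directories with /etc and /pentest.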
Tested in bash on AIX & Linux, used for WAS versions 6.0 & up. Sorts by node name.
Useful when you have vertically-stacked instances of WAS/Portal. Cuts out all the classpath/optional parameter clutter that makes a simple "ps -ef | grep java" so difficult to sort through.
Good for finding outdated timthumb.php scripts that need to be updated. Anything at 2.0 or above should be secure; below that, timthumb is vulnerable and can be used to compromise your website.
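A hedged sketch of such a check: the demo fabricates a timthumb.php, and the define ('VERSION', …) line is only the typical form, which varies by release:

```shell
# Demo: a fake timthumb.php carrying an old version string.
tmp=$(mktemp -d)
printf "<?php\ndefine ('VERSION', '1.34');\n" > "$tmp/timthumb.php"
# Find every timthumb.php and print its version line so
# outdated copies stand out:
find "$tmp" -name "timthumb.php" -exec grep -H -i "version" {} \;
```

Run against a web root, the find path would be the site's document root instead of the demo directory.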
Gets the current system user running a process with the specified pid
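For example, ps can print just the owning user of a pid; here $$ (the current shell's pid) stands in for the pid in question:

```shell
# -o user= selects only the user column, with no header;
# -p restricts output to the given process ID.
ps -o user= -p $$
```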
Hide comments and empty lines, including XML comments.
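A minimal sketch with grep, assuming the XML comments fit on one line (multi-line comments would need sed or awk):

```shell
# Demo file with a comment, a blank line, and a one-line XML comment.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
# a shell-style comment

<config>
<!-- an XML comment -->
<item>keep me</item>
</config>
EOF
# Show only the meaningful lines:
grep -v -e '^[[:space:]]*$' -e '^[[:space:]]*#' -e '<!--.*-->' "$tmp"
```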
This fixes a bug found in the other scripts which fail when a branch has the same name as a file or directory in the current directory.
Needs a grep that supports '--recursive'.
Uses sed with a regex to move the line numbers to the end of each line. The plain regex (without escapes) looks like this:
Calls grep on all non-binary files returned by find in its current working directory.
Since awk was already there, one can use it instead of the two greps. It might not be faster, but it's fast enough.
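To illustrate the find-plus-grep idea on non-binary files (the demo files are fabricated and "pattern" is a placeholder):

```shell
# One text file and one binary file; grep's -I flag makes it
# skip files that look binary.
tmp=$(mktemp -d)
printf 'pattern here\n'   > "$tmp/notes.txt"
printf 'pattern\0extra\n' > "$tmp/blob.bin"
find "$tmp" -type f -exec grep -I -H "pattern" {} \;
```

Only notes.txt is reported; the NUL byte marks blob.bin as binary, so -I suppresses it.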
Helps if you accidentally deleted files from an svn repo with plain rm and you would like to mark them for svn to delete too.
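The usual idiom looks like the commented one-liner below (it assumes svn's standard status format, where missing files are flagged with '!'); the text-processing stage is demonstrated on sample status output:

```shell
# Real one-liner (requires a working copy):
#   svn status | grep '^!' | awk '{print $2}' | xargs svn rm
# Demo of the filtering stage on fabricated status output:
printf '!       gone.txt\nM       kept.txt\n' | grep '^!' | awk '{print $2}'
```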
If both file1 and file2 are already sorted:
comm -13 file1 file2 > file-new
Tail is much faster than sed or awk because it doesn't check for regular expressions.
This command compares file2 with file1 and removes the lines that are in file1 from file2. Handy if file1 was the original and you want to remove the original data from file2.
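If the files aren't sorted, one way to get the same effect is grep (a sketch, not necessarily the command under discussion):

```shell
# Print the lines of file2 that do not appear verbatim in file1:
# -F fixed strings, -x whole-line matches, -v invert,
# -f read the patterns from file1.
tmp=$(mktemp -d)
printf 'alpha\nbeta\n'        > "$tmp/file1"
printf 'alpha\nbeta\ngamma\n' > "$tmp/file2"
grep -F -x -v -f "$tmp/file1" "$tmp/file2"
```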
This will recursively go through every file under the current directory showing all lines containing "TODO" as well as 10 lines after it. The output will be marked with line numbers to make it easier to find where the TODO is in the actual file.
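With GNU grep that maps to something like the following (the demo file is fabricated):

```shell
# -r recurse, -n prefix line numbers, -A 10 print up to ten
# lines of context after each match.
tmp=$(mktemp -d)
printf 'line one\nTODO: fix this\nnext line\nlast line\n' > "$tmp/app.c"
grep -rn -A 10 "TODO" "$tmp"
```

Matching lines use `file:lineno:` while context lines use `file-lineno-`, which makes the TODO itself easy to spot.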
Checks your Gmail account every 30 seconds and displays the number of new messages in the top right corner of the terminal.
A kind of CLI "Gmail notifier" if you will. :-)
This is a mashup of http://www.commandlinefu.com/commands/view/7916/put-a-console-clock-in-top-right-corner and http://www.commandlinefu.com/commands/view/3386/check-your-unread-gmail-from-the-command-line
If you have a bunch of small files that you want to cat and read, you can cat each one alone (boring); do a cat *, and you won't see which line belongs to which file; or do a grep . *. The pattern "." matches any non-empty line, and grep in multi-file mode puts a "filename:" before each matched line. With -r it works recursively too!!
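For example, in a directory of small files:

```shell
# grep . * prefixes every non-empty line with its file name.
tmp=$(mktemp -d); cd "$tmp"
printf 'hello\n' > a.txt
printf 'world\n' > b.txt
grep . *
```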
Fast and excludes words with apostrophes. For ubuntu, you can use wamerican or wbritish dictionaries, installable through aptitude.
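The apostrophe filter itself can be as simple as this (a tiny inline word list stands in for the real one, which on Ubuntu lives at /usr/share/dict/words):

```shell
# Keep only words that contain no apostrophe.
printf "cat\ndon't\ndog\n" | grep -v "'"
```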
-sl : show just the matching file names (-l) and suppress error messages (-s)
Ever ask yourself "How much data would be lost if I pressed the reset button?"
Scary, isn't it?
Expand a shortened URL by doing a HEAD request and following redirects to get the final URL. Copy this value to the clipboard.