commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for:
Needs a grep that supports '--recursive'.
Uses sed with a regex to move the line numbers to the end of each line. The plain regex (without escapes) looks like this:
Calls grep on all non-binary files returned by find in the current working directory.
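The one-liner itself isn't shown here, but a minimal sketch of the idea, using grep's -I flag to skip binaries (an assumption; the original may detect binaries differently) and 'pattern' as a placeholder search term:

```shell
# Demo file so the sketch has something to match:
mkdir -p demo && printf 'a pattern line\n' > demo/a.txt
# Hand every regular file under demo/ to grep; -I treats binary files
# as non-matching and -H forces the "filename:" prefix.
find demo -type f -exec grep -I -H 'pattern' {} +
```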
Since awk is already in the pipeline, it can be used instead of the two greps. It might not be faster, but it's fast enough.
Helps if you accidentally deleted files from an svn repo with plain rm and you would like to mark them for svn to delete too.
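The one-liner isn't shown here; a hedged sketch of the usual approach, assuming file names without embedded whitespace (svn status marks missing files with a leading '!'):

```shell
# Pull the paths of missing files out of "svn status" output.
extract_missing() { awk '/^!/ {print $2}'; }

# Typical use inside a working copy (the original command may differ):
#   svn status | extract_missing | xargs svn rm
```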
If both file1 and file2 are already sorted:
comm -13 file1 file2 > file-new
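For example, with throw-away files (both already sorted):

```shell
printf 'a\nb\nc\n' > file1
printf 'b\nc\nd\n' > file2
# -13 suppresses lines unique to file1 and lines common to both,
# leaving only the lines unique to file2.
comm -13 file1 file2 > file-new
cat file-new
# d
```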
Tail is much faster than sed or awk because it doesn't check for regular expressions.
This command compares file2 with file1 and removes the lines that are in file1 from file2. Handy if file1 was the original and you want to remove the original data from your file2.
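The original command isn't shown; one common way to do this with grep (no sorting needed) reads file1 as a list of fixed whole-line patterns and inverts the match:

```shell
printf 'a\nb\nc\n' > file1
printf 'b\nc\nd\n' > file2
# -F fixed strings, -x whole-line match, -f patterns from file1,
# -v keep only the lines of file2 that match none of them.
grep -vxFf file1 file2
# d
```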
This will recursively go through every file under the current directory showing all lines containing "TODO" as well as 10 lines after it. The output will be marked with line numbers to make it easier to find where the TODO is in the actual file.
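A hedged sketch assuming GNU grep, which can do all of this by itself; the demo file is just for illustration:

```shell
# Demo file with a TODO in it:
mkdir -p src && printf 'code\n# TODO: refactor\nmore code\n' > src/x.py
# -r recurses, -n prefixes line numbers, -A 10 prints the 10 lines
# following each match.
grep -rn -A 10 'TODO' src
```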
Checks your Gmail account every 30 seconds and displays the number of new messages in the top right corner of the terminal.
A kind of CLI "Gmail notifier" if you will. :-)
This is a mashup of http://www.commandlinefu.com/commands/view/7916/put-a-console-clock-in-top-right-corner and http://www.commandlinefu.com/commands/view/3386/check-your-unread-gmail-from-the-command-line
If you have a bunch of small files that you want to read, you can cat each one alone (boring), do a cat * and lose track of which line belongs to which file, or do a grep . *. The "." matches any non-empty line, and grep in multi-file mode puts "filename:" before each matched line. It works recursively too (with -r)!
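For example:

```shell
printf 'first\n'  > a.txt
printf 'second\n' > b.txt
# With more than one file argument, grep prefixes each match with its
# file name; "." matches every non-empty line.
grep . a.txt b.txt
# a.txt:first
# b.txt:second
```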
Fast and excludes words with apostrophes. For Ubuntu, you can use the wamerican or wbritish dictionaries, installable through aptitude.
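The command isn't shown here; a hedged sketch of the apostrophe filter, which is just an inverted grep over the word list (/usr/share/dict/words is the usual path, an assumption):

```shell
# Drop any line containing an apostrophe.
no_apostrophes() { grep -v "'"; }

# Typical use:
#   no_apostrophes < /usr/share/dict/words
```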
-sl : show just the matching file names (-l), suppressing error messages (-s)
Ever ask yourself "How much data would be lost if I pressed the reset button?"
Scary, isn't it?
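A hedged guess at the command behind this comment (Linux-only): the kernel's Dirty counter is data sitting in RAM that hasn't been written to disk yet.

```shell
# Dirty = modified pages not yet flushed to disk; this is roughly what
# you'd lose on a hard reset.
grep ^Dirty /proc/meminfo
```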
Expand a URL, i.e. do a HEAD request and get the final URL, then copy this value to the clipboard.
manswitch grep -o
This will take you to the relevant part of the man page, so you can see the description of the switch underneath.
Videos are found using their MIME type, so the video file doesn't need any particular extension.
This is an efficient version of jnash's command (4086). Thanks, jnash. This command shows only video files, while his shows any file with "video" anywhere in its path.
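The command itself isn't shown here; a hedged sketch of the MIME-type approach, split into a testable filter (the function name is an invention):

```shell
# Filter "path: mimetype" lines, as printed by file --mime-type, down
# to the paths whose type starts with video/.
videos_from_mime() { awk -F': *' '$2 ~ /^video\// {print $1}'; }

# Typical use (the original command may differ in detail):
#   find . -type f -exec file --mime-type {} + | videos_from_mime
```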
Find the usage of a switch without searching through the entire man page.
Usage: manswitch [cmd] [switch]
manswitch grep silent
In simple words:
man <cmd> | grep "\-<switch>"
man grep | grep "\-o"
This is not a standard method but works.
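A hedged reconstruction of such a function from the "In simple words" lines above (the real one may open a pager positioned at the switch instead of grepping):

```shell
# Grep the man page for lines mentioning the switch; pass the switch
# without its leading dash, e.g.: manswitch grep silent
manswitch() { man "$1" | grep -- "\-$2"; }
```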
No final count, but clean and simple output.
Much better alternatives: grep-alikes using Perl regexps, with more options and nicer output.
Grabs the Apache config file (its location reported by httpd) and returns the path specified as DocumentRoot.
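The one-liner isn't shown here; a hedged sketch of the extraction step as a filter (the function name is an invention, and the config path varies by distro):

```shell
# Pull the DocumentRoot value out of an Apache config read on stdin,
# stripping any surrounding quotes.
doc_root() { awk '$1 == "DocumentRoot" {gsub(/"/, "", $2); print $2}'; }

# Typical use (path often reported by httpd -V):
#   doc_root < /etc/httpd/conf/httpd.conf
```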
If you've ever tried "grep -P" you know how terrible it is. Even the man page describes it as "highly experimental". This function will let you 'grep' pipes and files using Perl syntax for regular expressions.
The first argument is the pattern, e.g. '/foo/'. The second argument is a filename (optional).
ls -F | grep /\$
but it will break on directory names containing newlines. Or the safe, POSIX sh way (which will miss dotfiles):
for i in *; do test -d "./$i" && printf "%s\n" "$i"; done