commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Checks the Gmail ATOM feed for your account, parses it and outputs a list of unread messages.
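The original sed-based command didn't survive intact here; modeled on the perl variants below, it was presumably along these lines (username:password is a placeholder for your own credentials):

curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | sed -n 's/<title>\(.*\)<\/title>.*<name>\(.*\)<\/name>.*/\2 - \1/p'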
For some reason sed gets stuck on OS X, so here's a Perl version for the Mac:
curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | perl -pe 's/^<title>(.*)<\/title>.*<name>(.*)<\/name>.*$/$2 - $1/'
If you want to see the name of the last person who added a message to the conversation, change the greediness of the operators like this:
curl -u username:password --silent "https://mail.google.com/mail/feed/atom" | tr -d '\n' | awk -F '<entry>' '{for (i=2; i<=NF; i++) {print $i}}' | perl -pe 's/^<title>(.*)<\/title>.*?<name>(.*?)<\/name>.*$/$2 - $1/'
This finds all the PowerPC apps recognized by OS X.
A better version is:
system_profiler SPApplicationsDataType 2> /dev/null | perl -wnl -e '$i=$j=$k=$p=0; @al=<>; $n=@al; while($j<$n) {$apps[$i].=$al[$j]; $i++ if ($al[$j]) =~ /^\s\s\s\s\S.*:$/; $j++} while($k<$i) {$_=$apps[$k++]; if (/Kind: PowerPC/s) {print; $p++;}} print "$i applications, $p PowerPC applications\n\n"'
but that is more than 255 characters...
When you have one of those (log) files that only use epoch for time (since no one will ever look at them as a date), this is a way to get the human-readable date/time and do further inspection.
Mostly perl-fu :-/
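The command for this entry isn't reproduced above; the usual perl idiom, which rewrites any 10-digit epoch value in place, looks like this (application.log is a placeholder):

perl -pe 's/(\d{10})/localtime($1)/ge' application.log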
time perl -e 'if(opendir D,"."){@a=readdir D;print $#a - 1,"\n"}'
205413
real 0m0.497s
user 0m0.220s
sys 0m0.268s
time { ls |wc -l; }
205413
real 0m3.776s
user 0m3.340s
sys 0m0.424s
*********
** EDIT: turns out this perl one-liner is mostly masturbation. This is slightly faster:
find . -maxdepth 1 | wc -l
sh-3.2$ time { find . -maxdepth 1|wc -l; }
205414
real 0m0.456s
user 0m0.116s
sys 0m0.328s
** EDIT: now a slightly faster perl version:
perl -e 'if(opendir D,"."){++$c foreach readdir D}print $c-1,"\n"'
sh-3.2$ time perl -e 'if(opendir D,"."){++$c foreach readdir D}print $c-1,"\n"'
205414
real 0m0.415s
user 0m0.176s
sys 0m0.232s
Substitute the URL with your private/public XML URL from the calendar sharing settings.
Substitute the dates (YYYY-mm-dd).
Adjust the perl parsing part for your needs; a sketch of the whole thing follows.
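The command itself is missing from this entry; a hypothetical sketch of its shape, where the feed URL, the dates and the title-extraction regex are all placeholders to adapt:

curl -s "https://www.google.com/calendar/feeds/YOUR_ID/private-YOUR_KEY/full?start-min=2024-01-01&start-max=2024-01-31" | perl -ne 'print "$1\n" while /<title[^>]*>([^<]+)<\/title>/g'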
Fetches the IPs and ONLY the IPs from ifconfig. Simplest, shortest, cleanest.
Perl is too good to be true...
(P.S.: credit should go to Peteris Krumins at catonmat.net)
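The one-liner isn't shown above; Krumins' version is usually quoted as something like this, keying off the "addr:" text in Linux ifconfig output:

ifconfig | perl -nle '/dr:(\S+)/ && print $1'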
This command will output 1 if the given argument is a valid IP address and 0 if it is not.
There's probably a more efficient way to do this rather than the relatively long perl program, but perl is my hammer, so text processing looks like a nail.
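The program isn't reproduced above; a hypothetical stand-in that behaves as described (IPv4 only):

perl -e '$ip = shift; print(($ip =~ /^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/ && !grep { $_ > 255 } ($1, $2, $3, $4)) ? 1 : 0, "\n")' 192.168.1.1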
This is of course a lot to type all at once. You can make it better by putting this somewhere:
clf () { (curl -d "q=$1" http://www.commandlinefu.com/search/autocomplete 2>/dev/null) | egrep 'autocomplete|votes|destination' | perl -pi -e 's/<a style="display:none" class="destination" href="//g;s/<[^>]*>//g;s/">$/\n\n/g;s/^ +|\([0-9]+ votes,//g;s/^\//http:\/\/commandlinefu.com\//g'; }
Then, to look up any command, you can do this:
clf diff
This is similar to http://www.colivre.coop.br/Aurium/CLFUSearch except that it's just one line, so more in the spirit of CLF, in my opinion.
OK, not the most useful but a good way to impress friends. Requires the "display" command from ImageMagick.
Good for summing the numbers embedded in text - a food journal entry, for example, with calories listed per food where you want the total calories. Use this to monitor and keep a running total of anything that outputs numbers.
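The command for this entry isn't shown; a hypothetical version of the idea (food-journal.txt is a placeholder):

perl -ne 'while (/(\d+(?:\.\d+)?)/g) { $sum += $1 } END { printf "%g\n", $sum }' food-journal.txt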
Print out list of all branches with last commit date to the branch, including relative time since commit and color coding.
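The command isn't reproduced above; the version of this one-liner that usually circulates looks like the following (note that it, too, leans on perl):

for k in $(git branch | perl -pe 's/^..//'); do echo -e "$(git show --pretty=format:'%Cgreen%ci %Cblue%cr%Creset' $k -- | head -n 1)\t$k"; done | sort -r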
This command might not be useful for most of us; I just wanted to share it to show the power of the command line.
It downloads the plain-text version of the novel David Copperfield from Project Gutenberg and then generates a single column of words, after which occurrences of each word are counted by the sort | uniq -c combination.
The command also removes numbers and single characters from the count. I'm sure you can write a shorter version.
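A sketch of such a pipeline, assuming David Copperfield is still ebook #766 at gutenberg.org:

curl -s https://www.gutenberg.org/files/766/766-0.txt | tr -cs '[:alpha:]' '\n' | tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -rn | perl -lane 'print unless $F[1] =~ /^(.|\d+)$/'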
There was another line that depended on having unnamed screen sessions. This just wouldn't do. This one works no matter what the name is. A possible improvement would be removing the perl dependence, but that doesn't affect me.
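The line itself isn't shown here; purely as an illustration of matching sessions by their pid.name identifier rather than by a hard-coded name, a sketch that lists every session screen knows about:

screen -ls | perl -ne 'print "$1\n" if /^\s+(\d+\.\S+)/'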
When ldapsearch queries an Active Directory server, all dates are shown as 18-digit timestamps (100-nanosecond intervals counted from the Windows epoch, 1601-01-01). This perl regexp decodes them into a more human-friendly notation; 11644473600 is the offset in seconds between that Microsoft epoch and the Unix epoch.
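The regexp isn't reproduced above; a sketch of the conversion (the ldapsearch arguments are illustrative only):

ldapsearch -x -b 'dc=example,dc=com' '(objectClass=user)' lastLogonTimestamp | perl -pe 's{(\d{18})}{scalar localtime($1/10000000 - 11644473600)}e'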
Converts a Windows line-ending (CRLF) file to Unix (LF).
See http://en.wikipedia.org/wiki/Newline
Can be used to convert linux2dos as well: just invert \r\n and \n.
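A common perl idiom for both directions (file.txt is a placeholder):

perl -pi -e 's/\r\n/\n/g' file.txt    # dos2unix
perl -pi -e 's/\n/\r\n/g' file.txt    # the linux2dos direction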
This command will replace all instances of 'foo' with 'bar' in all files in the current working directory and any sub-directories.
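The command isn't shown above; presumably something along these lines:

find . -type f -exec perl -pi -e 's/foo/bar/g' {} +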
Finds all directories containing more than 99MB of files, and prints them in human-readable format. The directories' sizes do not include their subdirectories, so it is very useful for finding any single directory with a lot of large files.
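The command is missing here; a sketch using du -S (so each directory is sized without its subdirectories) and a perl filter for sizes of 100M and up:

du -hS . | perl -ne 'print if /^(\d{3,}M|[\d.]+[GT])\s/'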