
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands tagged grep - 336 results
fetch -q -o - http://ipchicken.com | egrep -o '([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}'
2009-08-06 11:57:44
User: spackle
Functions: egrep
-1

Same thing as above, just uses fetch and ipchicken.com
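
If you don't have BSD's fetch, a curl-based sketch of the same idea (assuming ipchicken.com still embeds the address in its HTML):

curl -s http://ipchicken.com | egrep -o '([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}'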

find . -type f -print0 | xargs -0 -P 4 -n 40 grep -i foobar
2009-08-05 23:18:44
User: ketil
Functions: find grep xargs
4

xargs -P N spawns up to N worker processes. -n 40 means each grep invocation gets up to 40 file names on its command line.
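
A sketch that sizes the worker pool to your CPU count instead of a fixed 4 (nproc is GNU coreutils, so this assumes a Linux box):

find . -type f -print0 | xargs -0 -P "$(nproc)" -n 40 grep -i foobar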

grep -Eho '<[a-zA-Z_][a-zA-Z0-9_:-]*' * | sort -u | cut -c2-
2009-08-05 21:54:29
User: inkel
Functions: cut grep sort
Tags: sort grep cut xml
0

This one will work a little better; the regular expression is not 100% accurate for XML parsing, but it should handle any valid XML document.

wget `lynx -dump http://www.ebow.com/ebowtube.php | grep .flv$ | sed 's/[[:blank:]]\+[[:digit:]]\+\. //g'`
2009-08-02 14:09:53
User: spaceyjase
Functions: grep sed wget
3

I wanted all the 'hidden' .flv files from the http link on the command line; wget seemed appropriate, fed with output from lynx: grep the .flv files and normalise via sed (to remove the numeric bullets). Similar to the 'Grab mp3 files' fu. Replace the link with your own, and the grep argument with something more interesting ;) See here for something along the same lines...

http://www.commandlinefu.com/commands/view/1006/grab-mp3-files-from-your-favorite-netcasts-mp3blog-or-sites-that-often-have-good-mp3s

Hope you find it useful! Improvements welcome, naturally.

cat /proc/net/ip_conntrack | grep ESTABLISHED | grep -c -v ^#
find . -iname '*filename*.doc' | { while read line; do antiword "$line"; done; } | grep -C4 search_term;
2009-07-28 15:49:58
User: Ben
Functions: find grep read
3

Find Word docs by filename in the current directory, convert each of them to plain text using antiword (taking care of spaces in filenames), then grep for a search term in the particular file.

(Of course, it's better to save your data as plain text to make for easier grepping, but that's not always possible.)

Requires antiword. Or you can modify it to use catdoc instead.
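
A sketch of the catdoc variant mentioned above (catdoc likewise prints the document text to stdout):

find . -iname '*filename*.doc' | { while read line; do catdoc "$line"; done; } | grep -C4 search_term;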

grep -or string path/ | wc -l
grep -rc logged_in app/ | cut -d : -f 2 | awk '{sum+=$1} END {print sum}'
2009-07-15 14:16:44
User: terceiro
Functions: awk cut grep
-2

grep's -c outputs how many matching lines there are for a given file as "file:N"; cut takes the N's and awk does the sum.
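
A sketch that folds the cut step into awk by using ':' as the field separator ($NF picks up the count even if a file name itself contains a colon):

grep -rc logged_in app/ | awk -F: '{sum+=$NF} END {print sum}'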

vim $(grep -l test *)
2009-07-15 10:15:04
User: goatboy
Functions: grep test vim
Tags: vim grep
4

I often use "vim -p" to open in tabs rather than buffers.
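
Combining the two ideas, a sketch that opens each matching file in its own tab (-l prints only the matching file names):

vim -p $(grep -l test *)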

export LANG=C; grep string longBigFile.log
2009-07-14 12:48:02
User: ioggstream
Functions: export grep
Tags: grep LANG
0

Greps using only ASCII, skipping the overhead of matching UTF-8 characters.

Some stats:

$ export LANG=C; time grep -c Quit /var/log/mysqld.log
7432
real 0m0.191s
user 0m0.112s
sys 0m0.079s

$ export LANG=en_US.UTF-8; time grep -c Quit /var/log/mysqld.log
7432
real 0m13.462s
user 0m9.485s
sys 0m3.977s

Try strace-ing grep with and without LANG=C.
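
To keep the C locale confined to a single command instead of exporting it into your whole session, you can prefix the command (LC_ALL overrides LANG):

LC_ALL=C grep string longBigFile.log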

grep <pattern> -R . --exclude-dir='.svn'
fmiss() { grep -RL "$*" *; }
2009-07-13 18:30:54
User: inkel
Functions: grep
Tags: grep
1

This one would be much faster, as it's only one executed command.
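
Once the function is defined, usage is simply (the search string here is a placeholder) listing every file under the current directory that does not contain the string:

fmiss copyright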

(curl -d q=grep http://www.commandlinefu.com/search/autocomplete) | egrep 'autocomplete|votes|destination' | perl -pi -e 's/<a style="display:none" class="destination" href="//g;s/<[^>]*>//g;s/">$/\n\n/g;s/^ +//g;s/^\//http:\/\/commandlinefu.com\//g'
2009-07-08 22:10:49
User: isaacs
Functions: egrep perl
1

There's probably a more efficient way to do this rather than the relatively long perl program, but perl is my hammer, so text processing looks like a nail.

This is of course a lot to type all at once. You can make it better by putting this somewhere:

clf () { (curl -d "q=$@" http://www.commandlinefu.com/search/autocomplete 2>/dev/null) | egrep 'autocomplete|votes|destination' | perl -pi -e 's/<a style="display:none" class="destination" href="//g;s/<[^>]*>//g;s/">$/\n\n/g;s/^ +|\([0-9]+ votes,//g;s/^\//http:\/\/commandlinefu.com\//g'; }

Then, to look up any command, you can do this:

clf diff

This is similar to http://www.colivre.coop.br/Aurium/CLFUSearch except that it's just one line, so more in the spirit of CLF, in my opinion.

find . -not \( -name .svn -prune \) -type f -print0 | xargs --null grep <searchTerm>
2009-07-08 20:08:05
User: qazwart
Functions: find grep xargs
Tags: find xargs grep
7

By putting the "-not \( -name .svn -prune \)" in the very front of the "find" command, you eliminate the .svn directories in your find command itself. No need to grep them out.

You can even create an alias for this command:

alias svn_find="find . -not \( -name .svn -prune \)"

Now you can do things like

svn_find -mtime -3
echo alias grep=\'grep --color=auto\' >> ~/.bashrc ; . ~/.bashrc
2009-07-05 07:44:13
User: 0x2142
Functions: alias echo
Tags: color grep
7

This will create a permanent alias to colorize the search pattern in your grep output.

sed -n '/START/,${/STOP/q;p}'
2009-06-19 15:27:36
User: mungewell
Functions: sed
Tags: sed grep
3

GNU sed can 'address' between two regexes, but it continues parsing through to the end of the file. This slight alteration causes it to terminate reading the input file once the STOP match is made.

In my example I have included an extra '/START/d', as my 'start' marker line contains the 'stop' string (I'm extracting data between 'resets' and using the time stamp as the 'start').

My previous grep-based approach is slightly faster near the end of the file, but overall (extracting all the reset cycles in turn) the new sed method is quicker and a lot neater.
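
A sketch of the variant described above, where the extra /START/d drops the marker line itself ('logfile' is a placeholder):

sed -n '/START/,${/START/d;/STOP/q;p}' logfile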

grep -v "^\W$" <filename>
2009-06-18 08:17:22
User: nikc
Functions: grep
Tags: grep non-empty
0

I had some trouble removing empty lines from a file (perhaps due to UTF-8, the source of all evil); \W did the trick eventually.
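
If \W feels too magical, an explicit sketch that drops empty and whitespace-only lines in any locale:

grep -v '^[[:space:]]*$' <filename>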

grep -2 -iIr "err\|warn\|fail\|crit" /var/log/*
2009-06-17 19:41:04
User: miketheman
Functions: grep
6

Using grep, retrieve all lines (with two lines of context) from any log files in /var/log/ that contain one of the problem states.

grep -h -o '<[^/!?][^ >]*' * | sort -u | cut -c2-
2009-06-17 00:22:18
User: thebodzio
Functions: cut grep sort
Tags: sort grep cut
2

This set of commands was very convenient for me when I was preparing some XML files for typesetting a book. I wanted to check which styles I had to prepare but couldn't remember all the tags that I used. This one saved me from error-prone browsing of all my files. It should also be useful when processing XML files with XSL, or when building your own XML application.

for k in `git branch|perl -pe s/^..//`;do echo -e `git show --pretty=format:"%Cgreen%ci %Cblue%cr%Creset" $k|head -n 1`\\t$k;done|sort -r
2009-06-03 08:25:00
User: brunost
Functions: echo head perl sort
14

Print out list of all branches with last commit date to the branch, including relative time since commit and color coding.
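
If you'd rather avoid parsing git branch output, a sketch using git's built-in for-each-ref (newest first; no colour coding):

git for-each-ref --sort=-committerdate --format='%(committerdate:iso) %(committerdate:relative)%09%(refname:short)' refs/heads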

egrep -o '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' file.txt
svn log fileName|cut -d" " -f 1|grep -e "^r[0-9]\{1,\}$"|awk {'sub(/^r/,"",$1);print "svn cat fileName@"$1" > /tmp/fileName.r"$1'}|sh
2009-05-27 02:11:58
User: fizz
Functions: awk cut grep
Tags: bash svn awk grep
2

Exported files will get a .r23 extension (where 23 is the revision number).

curl -s checkip.dyndns.org | grep -Eo '[0-9\.]+'
2009-05-21 16:12:21
User: haivu
Functions: grep
4

The curl command retrieves the HTML text containing the IP address. The grep command picks out the IP address from that HTML text.

grep --color=always | less -R
2009-05-20 20:30:19
User: dinomite
Functions: grep less
30

Get your colorized grep output in less(1). This involves two things: forcing grep to output colors even though it's not going to a terminal and telling less to handle those properly.
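
For example (the pattern and file name are placeholders):

grep --color=always 'error' /var/log/syslog | less -R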

echo 2006-10-10 | grep -c '^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]$'
2009-05-11 22:18:43
User: rez0r
Functions: echo grep
-1

Quick and easy way of validating a yyyy-mm-dd date format and returning a boolean; the regex can easily be extended to handle "in-betweens" for mm and dd, or to validate other kinds of strings, e.g. an IP address.

Boolean output could easily be piped into a condition for a more complete one-liner.
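
A sketch of that: with -q, grep stays silent and only sets the exit status, which can drive a condition directly:

echo 2006-10-10 | grep -q '^[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]$' && echo valid || echo invalid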