What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they will appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands tagged grep - 343 results
find . -name "*.[ch]" -exec grep "TODO" {} +
2009-08-13 06:17:22
User: peshay
Functions: find grep
Tags: grep
3

-exec works better and faster than using a pipe.
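A rough sketch of why (standard find semantics): "+" batches many file names into each grep invocation, much like xargs, whereas "\;" would fork one grep per file:

find . -name "*.[ch]" -exec grep "TODO" {} \;   # one grep process per file

find . -name "*.[ch]" -exec grep "TODO" {} +    # few grep processes, many files each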

grep -r --include="*.[ch]" pattern .
2009-08-13 01:41:12
User: sitaram
Functions: grep
Tags: grep
10

Doesn't do case-insensitive filenames like -iname, but is otherwise likely to be faster.

find . -name "*.[ch]" | xargs grep "TODO"
watch "ps auxw | grep [d]efunct"
2009-08-12 08:11:16
User: alvinx
Functions: watch
6

to omit "grep -v", put some brackets around a single character

watch "ps auxw | grep 'defunct' | grep -v 'grep' | grep -v 'watch'"
2009-08-11 12:22:13
Functions: watch
5

Shows all those processes; useful when building some massively forking script that could lead to zombies when you don't have your waitpid()'s done just right.

grep . filename
2009-08-09 05:33:58
Functions: grep
Tags: Linux grep
7

Remove blank lines from output.

One character shorter than awk /./ filename and doesn't use a superfluous cat.

To be fair though, I'm pretty sure fraktil was thinking that being able to nuke blank lines from any command's output is much more useful than doing it for just one file.

cat filename | grep .
2009-08-09 01:00:59
User: fraktil
Functions: cat grep
Tags: cat Linux grep
2

Pipe any output to "grep ." and blank lines will not be printed.

fetch -q -o - http://ipchicken.com | egrep -o '([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}'
2009-08-06 11:57:44
User: spackle
Functions: egrep
-1

Same thing as above, just uses fetch and ipchicken.com

find . -type f -print0 | xargs -0 -P 4 -n 40 grep -i foobar
2009-08-05 23:18:44
User: ketil
Functions: find grep xargs
4

xargs -P N spawns up to N worker processes; -n 40 means each grep invocation gets up to 40 file names on its command line.
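A hedged variation on the same idea (assumes GNU xargs and coreutils' nproc): size the worker pool to the machine instead of hard-coding 4:

find . -type f -print0 | xargs -0 -P "$(nproc)" -n 40 grep -i foobar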

grep -Eho '<[a-zA-Z_][a-zA-Z0-9_:-]*' * | sort -u | cut -c2-
2009-08-05 21:54:29
User: inkel
Functions: cut grep sort
Tags: sort grep cut xml
0

This one works a little better; the regular expression is not 100% accurate for XML parsing, but it will cope with any valid XML document.
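For instance, given a hypothetical doc.xml containing <note><to>reader</to></note>, the pipeline lists the unique element names:

grep -Eho '<[a-zA-Z_][a-zA-Z0-9_:-]*' doc.xml | sort -u | cut -c2-

which prints "note" and "to", one per line.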

wget `lynx -dump http://www.ebow.com/ebowtube.php | grep .flv$ | sed 's/[[:blank:]]\+[[:digit:]]\+\. //g'`
2009-08-02 14:09:53
User: spaceyjase
Functions: grep sed wget
3

I wanted all the 'hidden' .flv files from the http link in the command line; wget seemed appropriate, fed with output from lynx: grep the .flv files and then normalise via sed (to remove the numeric bullet). Similar to the 'Grab mp3 files' fu. Replace the link with your own, and the grep argument with something more interesting ;) See here for something along the same lines...

http://www.commandlinefu.com/commands/view/1006/grab-mp3-files-from-your-favorite-netcasts-mp3blog-or-sites-that-often-have-good-mp3s

Hope you find it useful! Improvements welcome, naturally.

cat /proc/net/ip_conntrack | grep ESTABLISHED | grep -c -v ^#
find . -iname '*filename*.doc' | { while read line; do antiword "$line"; done; } | grep -C4 search_term;
2009-07-28 15:49:58
User: Ben
Functions: find grep read
3

Find Word docs by filename in the current directory, convert each of them to plain text using antiword (taking care of spaces in filenames), then grep for a search term in the particular file.

(Of course, it's better to save your data as plain text to make for easier grepping, but that's not always possible.)

Requires antiword. Or you can modify it to use catdoc instead.
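The catdoc variant mentioned above would look something like this (a sketch; catdoc writes the document text to stdout much as antiword does):

find . -iname '*filename*.doc' | { while read line; do catdoc "$line"; done; } | grep -C4 search_term;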

grep -or string path/ | wc -l
grep -rc logged_in app/ | cut -d : -f 2 | awk '{sum+=$1} END {print sum}'
2009-07-15 14:16:44
User: terceiro
Functions: awk cut grep
-2

grep's -c outputs how many matching lines there are for each file as "file:N"; cut takes the N's and awk does the sum.

vim $(grep -l test *)
2009-07-15 10:15:04
User: goatboy
Functions: grep test vim
Tags: vim grep
4

I often use "vim -p" to open in tabs rather than buffers.
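Combining that with -l (list matching file names only) gives a tab-per-file variant, roughly:

vim -p $(grep -l test *)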

export LANG=C; grep string longBigFile.log
2009-07-14 12:48:02
User: ioggstream
Functions: export grep
Tags: grep LANG
0

Greps using only ASCII, skipping the overhead of matching UTF-8 characters.

Some stats:

$ export LANG=C; time grep -c Quit /var/log/mysqld.log
7432

real    0m0.191s
user    0m0.112s
sys     0m0.079s

$ export LANG=en_US.UTF-8; time grep -c Quit /var/log/mysqld.log
7432

real    0m13.462s
user    0m9.485s
sys     0m3.977s

Try strace-ing grep with and without LANG=C.
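For example, strace's -c flag prints a per-syscall summary, which makes the two runs easy to compare (same log file as in the stats above):

LANG=C strace -c grep -c Quit /var/log/mysqld.log

LANG=en_US.UTF-8 strace -c grep -c Quit /var/log/mysqld.log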

grep <pattern> -R . --exclude-dir='.svn'
fmiss() { grep -RL "$*" *; }
2009-07-13 18:30:54
User: inkel
Functions: grep
Tags: grep
1

This one would be much faster, as it's only one executed command.
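Since -L lists the files that do not contain a match, a typical use would be finding files that are missing some required string, e.g.:

fmiss Copyright    # list files under the current directory lacking "Copyright"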

(curl -d q=grep http://www.commandlinefu.com/search/autocomplete) | egrep 'autocomplete|votes|destination' | perl -pi -e 's/<a style="display:none" class="destination" href="//g;s/<[^>]*>//g;s/">$/\n\n/g;s/^ +//g;s/^\//http:\/\/commandlinefu.com\//g'
2009-07-08 22:10:49
User: isaacs
Functions: egrep perl
1

There's probably a more efficient way to do this rather than the relatively long perl program, but perl is my hammer, so text processing looks like a nail.

This is of course a lot to type all at once. You can make it better by putting this somewhere:

clf () { (curl -d "q=$@" http://www.commandlinefu.com/search/autocomplete 2>/dev/null) | egrep 'autocomplete|votes|destination' | perl -pi -e 's/<a style="display:none" class="destination" href="//g;s/<[^>]*>//g;s/">$/\n\n/g;s/^ +|\([0-9]+ votes,//g;s/^\//http:\/\/commandlinefu.com\//g'; }

Then, to look up any command, you can do this:

clf diff

This is similar to http://www.colivre.coop.br/Aurium/CLFUSearch except that it's just one line, so more in the spirit of CLF, in my opinion.

find . -not \( -name .svn -prune \) -type f -print0 | xargs --null grep <searchTerm>
2009-07-08 20:08:05
User: qazwart
Functions: find grep xargs
Tags: find xargs grep
8

By putting the "-not \( -name .svn -prune \)" at the very front of the "find" command, you eliminate the .svn directories in the find command itself. No need to grep them out.

You can even create an alias for this command:

alias svn_find="find . -not \( -name .svn -prune \)"

Now you can do things like

svn_find -mtime -3
echo alias grep=\'grep --color=auto\' >> ~/.bashrc ; . ~/.bashrc
2009-07-05 07:44:13
User: 0x2142
Functions: alias echo
Tags: color grep
7

This will create a permanent alias to colorize the search pattern in your grep output.

sed -n '/START/,${/STOP/q;p}'
2009-06-19 15:27:36
User: mungewell
Functions: sed
Tags: sed grep
3

GNU sed can 'address' between two regexes, but it continues parsing through to the end of the file. This slight alteration causes it to terminate reading the input file once the STOP match is made.

In my example I have included an extra '/START/d' as my 'start' marker line contains the 'stop' string (I'm extracting data between 'resets' and using the time stamp as the 'start').

My previous coding using grep is slightly faster near the end of the file, but overall (extracting all the reset cycles in turn) the new SED method is quicker and a lot neater.
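A sketch of the variant described, with /START/d placed first so that a start line which also contains the stop string is discarded before the quit test runs (START, STOP and logfile are placeholders):

sed -n '/START/,${/START/d;/STOP/q;p}' logfile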

grep -v "^\W$" <filename>
2009-06-18 08:17:22
User: nikc
Functions: grep
Tags: grep non-empty
0

I had some trouble removing empty lines from a file (perhaps due to UTF-8, as it's the source of all evil); \W did the trick eventually.
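For comparison, the more conventional pattern for stripping empty and whitespace-only lines uses a POSIX character class, which also behaves sanely under UTF-8 locales:

grep -v '^[[:space:]]*$' <filename>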

grep -2 -iIr "err\|warn\|fail\|crit" /var/log/*
2009-06-17 19:41:04
User: miketheman
Functions: grep
6

Using grep, retrieve all lines from the log files in /var/log/ that contain one of the problem states, plus two lines of context either side (-2); -i makes the match case-insensitive, -I skips binary files, and -r recurses into subdirectories.