What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands tagged grep - 346 results
curl -s http://example.com | grep -o -P "<a.*href.*>" | grep -o "http.*.pdf" | xargs -d"\n" -n1 wget -c
2011-06-09 14:42:46
User: b_t
Functions: grep wget xargs
0

This example command fetches the example.com page and then downloads and saves every PDF file linked from it.

[*Note: of course there are no PDFs on example.com. This is just an example.]
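
The two greedy greps can mis-match when a page has several links on one line; anchoring on the href attribute is tighter (a sketch, assuming GNU grep's -P and absolute PDF links):

curl -s http://example.com | grep -oP 'href="\Khttp[^"]*\.pdf' | xargs -d '\n' -n1 wget -c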

find . -name "*.[ch]" -print | xargs grep -i -H "search phrase"
2011-06-05 23:27:30
User: jblaine
Functions: find grep xargs
Tags: find grep
-3

The original submitter's command spawns a grep process for every file found; this one spawns a single grep with the whole list of matching files to search. Learn xargs, everyone! It's a very powerful and always-available tool.
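
And if any filenames contain spaces or newlines, the null-delimited form is safer still (GNU find/xargs assumed):

find . -name "*.[ch]" -print0 | xargs -0 grep -i -H "search phrase"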

ifconfig | awk '/HWaddr/ { print $1, $5 }'
ifconfig | grep HWaddr | awk '{print $1,$5}'
grep -E '<DT><A|<DT><H3' bookmarks.html | sed 's/<DT>//' | sed '/Bookmarks bar/d' | sed 's/ ADD_DATE=\".*\"//g' | sed 's/^[ \t]*//' | tr '<A HREF' '<a href'
2011-05-26 22:21:01
User: chrismccoy
Functions: grep sed tr
Tags: sed grep chrome
-1

Chrome only lets you export bookmarks in HTML format, with a lot of markup junk; this command exports just the link titles and URLs without all that extra junk.
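
The grep and the four sed passes could be folded into a single sed program, something like this (a sketch, GNU sed assumed; note that the tr call in the original also downcases any A, H, R, E or F inside bookmark titles, which this avoids):

sed -nE '/<DT><(A|H3)/{/Bookmarks bar/d; s/<DT>//; s/ ADD_DATE="[^"]*"//g; s/^[[:space:]]*//; s/<A HREF/<a href/; p}' bookmarks.html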

grep -v ^# /etc/somefile.conf | grep .
apt-get install `ssh root@host_you_want_to_clone "dpkg -l | grep ii" | awk '{print $2}'`
domain=google.com; for ns in $(whois $domain | awk -F: '/Name Server/{print $2}'); do echo ">>> Nameservers for $domain from $ns <<<"; dig @$ns $domain ns +short; echo; done;
2011-05-08 04:46:34
User: laebshade
Functions: awk dig echo whois
2

Change the $domain variable to whichever domain you wish to query.

Works with the majority of whois output; for registrars where it doesn't, you may have to compromise:

domain=google.com; for a in $(whois $domain | grep "Domain servers in listed order:" --after 3 | grep -v "Domain servers in listed order:"); do echo ">>> Nameservers for $domain from $a <<<"; dig @$a $domain ns +short; echo; done

Note that this doesn't work as well as the first one; if they have more than 3 nameservers, it won't hit them all.

As the summary states, this can be useful for making sure the whois nameservers for a domain match the nameserver records (NS records) from the nameservers themselves.
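
To compare the two sources directly, bash process substitution can diff them (a sketch; whois output formats vary by registrar, so the awk field split may need adjusting):

domain=google.com; diff <(whois "$domain" | awk -F': *' '/Name Server/{print tolower($2)}' | sort) <(dig "$domain" ns +short | sed 's/\.$//' | sort)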

sudo aptitude remove -P $(dpkg -l|awk '/^ii linux-image-2/{print $2}'|sed 's/linux-image-//'|awk -v v=`uname -r` 'v>$0'|sed 's/-generic//'|awk '{printf("linux-headers-%s\nlinux-headers-%s-generic\nlinux-image-%s-generic\n",$0,$0,$0)}')
2011-04-25 05:19:57
User: Bonster
Functions: awk sed sudo
-1

Same as 7272, but that one was too dangerous, so I added -P to prompt the user to continue or cancel.

Note the double space: "...^ii␣␣linux-image-2..."

Like 5813, but fixes two bugs: [1]This leaves the meta-packages 'linux-headers-generic' and 'linux-image-generic' alone so that automatic upgrades work correctly in the future. [2]Kernels newer than the currently running one are left alone (this can happen if you didn't reboot after installing a new kernel).
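
Before running either version, it's worth a dry look at what's installed next to the running kernel (a sketch):

uname -r                                         # the kernel you're running now
dpkg -l 'linux-image-*' | awk '/^ii/{print $2}'  # every installed kernel image package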

find . -maxdepth 1 -type d | grep -Pv "^.$" | sort -rn --field-separator="-" | sed -n '3,$p' | xargs rm -rf
cgrep() { GREP_COLOR="1;3$((RANDOM%6+1))" grep --color=always "$@"; }
2011-03-04 18:45:58
User: derekschrock
Functions: grep
Tags: grep
-2

Randomize GNU grep's match color among ANSI codes 31-36, i.e. everything except black and white.
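
The 3$((RANDOM%6+1)) arithmetic picks one of the SGR color codes 31-36; a quick loop shows the palette:

for c in 31 32 33 34 35 36; do printf '\e[1;%smcolor %s\e[0m\n' "$c" "$c"; done  # print each code in its own color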

curl ifconfig.me
wget http://cmyip.com -O - -o /dev/null | grep -Po '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'
find . -type f | awk -F'.' '{print $NF}' | sort | uniq -c | sort -g
ls | grep -Eo "\..+" | sort -u
ls -Xp | grep -Eo "\.[^/]+$" | sort | uniq
2011-02-10 20:47:59
User: Amarok
Functions: grep ls sort
Tags: uniq ls grep
4

Works on current directory, with built-in sorting.

Command in description (Your command is too long - please keep it to less than 255 characters)
2011-02-03 08:25:42
User: __
Functions: command less
0
yt2mp3(){ for j in `seq 1 301`;do i=`curl -s gdata.youtube.com/feeds/api/users/$1/uploads\?start-index=$j\&max-results=1|grep -o "watch[^&]*"`;ffmpeg -i `wget youtube.com/$i -qO-|grep -o 'url_map"[^,]*'|sed -n '1{s_.*|__;s_\\\__g;p}'` -vn -ab 128k "`youtube-dl -e ${i#*=}`.mp3";done;}

Squeezed the monster (and nifty ☺) command from 7776 down from 531 characters to 284, but I don't see a way to get it under 255. This is definitely a kludge!

curl http://www.discogs.com/search?q=724349691704 2> /dev/null | grep \/release\/ | head -2 | tail -1 | sed -e 's/^<div>.*>\(.*\)<\/a><\/div>/\1/'
wget http://www.discogs.com/search?q=724349691704 -O foobar &> /dev/null ; grep \/release\/ foobar | head -2 | tail -1 | sed -e 's/^<div>.*>\(.*\)<\/a><\/div>/\1/' ; rm foobar
2011-01-30 23:34:54
User: TetsuyO
Functions: grep head rm sed tail wget
-1

Substitute that 724349691704 with the UPC of a CD you have at hand, and (hopefully) this one-liner should return the $Artist - $Title, querying discogs.com.

Yes, I know, all that head/tail/grep crap can be improved with a single sed command, feel free to send "patches" :D

Enjoy!
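
In that spirit, a single awk pass could replace the grep/head/tail/sed chain (a sketch; discogs.com's markup has changed since 2011, so the pattern likely needs updating):

curl -s 'http://www.discogs.com/search?q=724349691704' | awk '/\/release\//{if (++n == 2) {sub(/^<div>.*>/, ""); sub(/<\/a><\/div>.*/, ""); print; exit}}'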

rm *[!teste0,teste1,teste2]
2011-01-25 22:00:29
Functions: rm
Tags: grep rm
-2

Remove all files except the listed ones.

Can't have space between the commas.

rm *[!abc]
2011-01-25 19:41:41
User: Vilemirth
Functions: rm
Tags: grep rm
0

Bash method to remove all files but "abc".

It would be 'rm *~abc' in Zsh.
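
Note that [!…] in these two entries is a single-character class: it matches names whose last character is not one of the listed characters, so neither bracket pattern really excludes whole names. With bash's extglob option the intent can be expressed directly (a sketch):

shopt -s extglob  # enable !(...) patterns
rm !(abc)         # everything in the current directory except 'abc'

rm !(teste0|teste1|teste2) covers the earlier entry the same way; the zsh form rm *~abc likewise needs setopt extendedglob.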

curl -s http://www.last.fm/user/$LASTFMUSER | grep -A 1 subjectCell | sed -e 's#<[^>]*>##g' | head -n2 | tail -n1 | sed 's/^[[:space:]]*//g'
find . | grep -v svn
2011-01-16 03:51:57
User: gwchamb
Functions: find grep
Tags: grep
-1

Unless you have files that include 'svn' in their names, this should provide enough information to be useful. If you need to be certain, add the leading dot to the search pattern.
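
To skip the .svn directories precisely rather than any path containing 'svn', find can prune them itself:

find . -path '*/.svn' -prune -o -print  # everything except .svn directories and their contents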

nmap -oG - -T4 -p22 -v 192.168.0.254 | grep ssh
2011-01-11 16:12:23
User: SeeFor
Functions: grep
Tags: nmap grep
6

Uses nmap to check whether port 22 (SSH) is open on servers and network devices.
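
The same idea scales to a whole subnet, since -oG prints one host per line; filtering on 'open' keeps only hosts actually listening (a sketch):

nmap -oG - -T4 -p22 192.168.0.0/24 | grep '22/open'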

grep -l foo *cl*.log | xargs grep -lL bar
2011-01-10 20:18:30
User: dlebauer
Functions: grep xargs
Tags: xargs grep
0

Same as:

grep -lL "foo" $(grep -l bar *cl*.log)
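
Since the intent is "files that mention foo but not bar", plain -L on the second grep already expresses it; GNU xargs' -r flag stops it from reading stdin when nothing matched foo:

grep -l foo *cl*.log | xargs -r grep -L bar  # in files containing foo, list those without bar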