What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Commands tagged sed - 322 results
grep -E '<DT><A|<DT><H3' bookmarks.html | sed 's/<DT>//' | sed '/Bookmarks bar/d' | sed 's/ ADD_DATE=\".*\"//g' | sed 's/^[ \t]*//' | tr '<A HREF' '<a href'
2011-05-26 22:21:01
User: chrismccoy
Functions: grep sed tr
Tags: sed grep chrome
-1

Chrome only lets you export bookmarks in HTML format, with a lot of table junk. This command exports just the link titles and URLs without all that extra junk.
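
As a rough illustration (a hypothetical, minimal bookmark line; real Chrome exports may carry extra attributes), the pipeline turns

<DT><A HREF="http://example.com/" ADD_DATE="1300000000">My Bookmark</A>

into

<a href="http://example.com/">My Bookmark</a>

Note that tr translates characters rather than strings, so any other occurrence of A, H, R, E or F on the line is lowercased as well.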

ls * | while read fin;do fout=$(echo -n $fin | sed -e's/%\([0-9A-F][0-9A-F]\)/\\\\\x\1/g' | xargs echo -e);if [ "$fout" != "$fin" ];then echo "mv '$fin' '$fout'";fi;done | bash -x
2011-05-18 07:24:54
User: pawelb1973
Functions: bash echo ls read sed xargs
0

URL-decodes file names in the current directory.
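
For example, a hypothetical file named my%20report.pdf makes the loop emit

mv 'my%20report.pdf' 'my report.pdf'

which the trailing bash -x then executes (and echoes). Note the sed pattern only matches uppercase hex digits, so sequences like %2f are left untouched.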

tcpdump -w "$(sed 's/-//gi; s/ /_/gi'<<<"-vvv -s0 -ieth1 -c10 icmp").pcap"
links -dump "http://spaceflight.nasa.gov/realdata/sightings/cities/view.cgi?country=United_States&region=Wisconsin&city=Portage" | sed -n '/--/,/--/p'
2011-05-03 12:15:56
User: eightmillion
Functions: sed
Tags: sed links iss
5

This command outputs a table of sighting opportunities for the International Space Station. Find the URL for your city here: http://spaceflight.nasa.gov/realdata/sightings/

sed 's/^#\(.*DEBUG\)/\1/' $FILE
ls -1 | sort -R | sed -n 's/^/Selected /;1p'
sed "s/^ *//;s/ *$//;s/ \{1,\}/ /g" filename.txt
2011-03-09 10:35:02
User: EBAH
Functions: sed
Tags: sed
1

Removes leading and trailing spaces from each line and squeezes any run of spaces within a line down to a single space.
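
A quick illustration with made-up input:

echo "   foo    bar  " | sed "s/^ *//;s/ *$//;s/ \{1,\}/ /g"

prints "foo bar". Only literal spaces are handled; tabs are left alone.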

find . -maxdepth 1 -type d | grep -Pv "^.$" | sort -rn --field-separator="-" | sed -n '3,$p' | xargs rm -rf
echo -n "String to MD5" | md5sum | sed -e 's/../& /g' -e 's/ -//'
echo -n "String to MD5" | md5sum | sed -e 's/[0-9a-f]\{2\}/& /g' -e 's/ -//'
2011-03-05 11:47:08
User: saibbot
Functions: echo md5sum sed
Tags: sed md5sum
1

Generates the MD5 hash without the trailing " -" and with the output broken into pairs of hex digits.
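
To see the formatting on a well-known value (the MD5 of the empty string), using the second variant:

echo -n "" | md5sum | sed -e 's/[0-9a-f]\{2\}/& /g' -e 's/ -//'

prints d4 1d 8c d9 8f 00 b2 04 e9 80 09 98 ec f8 42 7e (plus some trailing whitespace).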

echo -n "String to get MD5" | md5sum | sed "s/ -//"
anagram(){ s(){ sed 's/./\n\0/g'<<<$1|sort;};cmp -s <(s $1) <(s $2)||echo -n "not ";echo anagram; }; anagram foobar farboo;
2011-02-17 15:10:43
User: bbbco
Functions: cmp echo sed
2

This is just a slight alternative that wraps all of #7917 in a function that can be executed
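
Example usage; the output follows directly from the cmp test (it prints "anagram" when the sorted character lists match, "not anagram" otherwise):

anagram listen silent
anagram hello world

The first prints "anagram", the second "not anagram".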

s(){ sed 's/./\n\0/g'<<<$1|sort;};cmp -s <(s foobar) <(s farboo)||echo -n "not ";echo anagram
2011-02-17 12:42:45
User: flatcap
Functions: cmp echo sed
-3

Are the two strings anagrams of one another?

sed splits up the strings into one character per line

the result is sorted

cmp compares the results

Note: This is not pretty. I just wanted to see if I could do it in bash.

Note: It uses fewer characters than the perl version :-)

seq 1 2 99999999 | sed 's!^!4/!' | paste -sd-+ | bc -l
2011-02-09 23:36:07
User: flatcap
Functions: bc paste sed seq
Tags: sed seq bc paste math
0

Calculate pi from the infinite series 4/1 - 4/3 + 4/5 - 4/7 + ...

This expansion was formulated by Gottfried Leibniz: http://en.wikipedia.org/wiki/Leibniz_formula_for_pi

I helped rubenmoran create the sum of a sequence of numbers and he replied with a command for the sequence: 1 + 2 -3 + 4 ...

This set me thinking. Transcendental numbers!

seq provides the odd numbers 1, 3, 5

sed turns them into 4/1 4/3 4/5

paste inserts - and +

bc -l does the calculation

Note: 100 million iterations takes quite a while. 1 billion and I run out of memory.
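
To watch the mechanics on a small scale, the same pipeline with only the first five terms:

seq 1 2 9 | sed 's!^!4/!' | paste -sd-+ | bc -l

expands to 4/1-4/3+4/5-4/7+4/9 and prints roughly 3.3397, already in the neighbourhood of pi.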

ifconfig eth3|sed 's/^eth3.*HWaddr //;q'
2011-02-09 00:22:40
User: avedis
Functions: ifconfig sed
Tags: sed
0

Just replace eth3 with the interface you want the MAC for.
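
This assumes the classic net-tools ifconfig output, where the first line starts with the interface name and contains "HWaddr" followed by the MAC; sed strips everything up to and including "HWaddr " and quits after the first line. A generic wrapper (hypothetical helper name, same assumption) could be:

mac() { ifconfig "$1" | sed "s/^$1.*HWaddr //;q"; }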

Command in description (Your command is too long - please keep it to less than 255 characters)
2011-02-03 08:25:42
User: __
Functions: command less
0
yt2mp3(){ for j in `seq 1 301`;do i=`curl -s gdata.youtube.com/feeds/api/users/$1/uploads\?start-index=$j\&max-results=1|grep -o "watch[^&]*"`;ffmpeg -i `wget youtube.com/$i -qO-|grep -o 'url_map"[^,]*'|sed -n '1{s_.*|__;s_\\\__g;p}'` -vn -ab 128k "`youtube-dl -e ${i#*=}`.mp3";done;}

Squeezed the monster (and nifty ☺) command from #7776 down from 531 characters to 284, but I don't see a way to get it under 255. This is definitely a kludge!

curl -s 'http://www.discogs.com/search?q=724349691704' | sed -n '\#/release/#{s/^<div>.*>\(.*\)<\/a><\/div>/\1/p}'
2011-01-30 23:49:22
User: infinull
Functions: sed
Tags: sed discogs UPC
1

I like curl better than wget; I just think that curl -s is a lot simpler than wget... see, I forget what you even have to do to get wget to pipe its output.

Anyway, all in one sed command as "requested"

curl http://www.discogs.com/search?q=724349691704 2> /dev/null | grep \/release\/ | head -2 | tail -1 | sed -e 's/^<div>.*>\(.*\)<\/a><\/div>/\1/'
wget http://www.discogs.com/search?q=724349691704 -O foobar &> /dev/null ; grep \/release\/ foobar | head -2 | tail -1 | sed -e 's/^<div>.*>\(.*\)<\/a><\/div>/\1/' ; rm foobar
2011-01-30 23:34:54
User: TetsuyO
Functions: grep head rm sed tail wget
-1

Substitute that 724349691704 with a UPC of a CD you have at hand, and (hopefully) this one-liner should return the $Artist - $Title, querying discogs.com.

Yes, I know, all that head/tail/grep crap can be improved with a single sed command, feel free to send "patches" :D

Enjoy!

curl -s http://www.last.fm/user/$LASTFMUSER | grep -A 1 subjectCell | sed -e 's#<[^>]*>##g' | head -n2 | tail -n1 | sed 's/^[[:space:]]*//g'
sed -i '/pattern/N; s/\n//' filename
println() { echo -n -e "\e[038;05;${2:-255}m"; printf "%$(tput cols)s" | sed "s/ /${1:-=}/g"; }
2011-01-09 18:08:18
User: joedhon
Functions: printf sed
Tags: sed tput printf
0

function for .bash_aliases that prints a line of the character of your choice in the color of your choice across the terminal.

Default character is "=", default color is white.
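
Usage sketch, assuming a 256-colour terminal (196 is a bright red in the xterm palette, 255 a near-white grey):

println
println - 196

The first draws a full-width line of "=" in colour 255, the second a line of "-" in red. Note the function does not reset the colour afterwards; echo -e "\e[0m" restores the default.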

The command is too big to fit here. :( Look at the description for the command, in readable form! :)
2011-01-05 02:45:28
User: hunterm
Functions: at command
-6

Yep, now you can finally google from the command line!

Here's a readable version "for your pleasure"(c):

google() {
    # search the web using google from the commandline
    # syntax: google google
    query=$(echo "$*" | sed "s:%:%25:g;s:&:%26:g;s:+:%2b:g;s:;:%3b:g;s: :+:g")
    data=$(wget -qO - "https://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=$query")
    title=$(echo "$data" | tr '}' '\n' | sed "s/.*,\"titleNoFormatting//;s/\":\"//;s/\",.*//;s/\\u0026/'/g;s/\\\//g;s/#39\;//g;s/'amp;/\&/g" | head -1)
    url="$(echo "$data" | tr '}' '\n' | sed 's/.*"url":"//;s/".*//' | head -1)"
    echo "${title}: ${url} | http://www.google.com/search?q=${query}"
}

Enjoy :)
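
Usage sketch (note that the ajax.googleapis.com web-search endpoint this relies on has since been deprecated, so it may no longer return results):

google commandlinefu

prints a single line of the form Title: http://first.result.url | http://www.google.com/search?q=commandlinefu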

svn log -r '{YYYY-MM-DD}:{YYYY-MM-DD}' | sed -n '1p; 2,/^-/d; /USERNAME/,/^-/p' | grep -E -v '^(r[0-9]|---|$)' | sed 's/^/* /g'
2010-12-22 17:52:19
User: antic
Functions: grep sed
1

* Replace USERNAME with the desired svn username

* Replace the first YYYY-MM-DD with the first date you want in the log (the range starts at midnight at the beginning of that date)

* Replace the second YYYY-MM-DD with the day after the last date you want (the range ends at midnight at the beginning of that day)

Example, if I want the log for December 10, 2010, I would put {2010-12-10}:{2010-12-11}
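
For example, with a hypothetical username jsmith, the log for December 10, 2010 would be:

svn log -r '{2010-12-10}:{2010-12-11}' | sed -n '1p; 2,/^-/d; /jsmith/,/^-/p' | grep -E -v '^(r[0-9]|---|$)' | sed 's/^/* /g'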

wget -qO - http://ngrams.googlelabs.com/datasets | grep -E 'href=(.+\.zip)' | sed -r "s/.*href='(.+\.zip)'.*/\1/" | uniq | while read line; do wget "$line"; done