
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using wget - 238 results
wget -O - -q http://www.azlyrics.com/lyrics/abba/takeachanceonme.html | sed -e 's/[cC]hance/dump/g' > ~/tdom.htm && firefox ~/tdom.htm
2009-12-04 22:56:00
User: tighe
Functions: sed wget
0

ABBA would be more entertaining if they sang this.

wget `lynx --dump http://xkcd.com/|grep png`
wget -t inf -k -r -l 3 -p -m http://apod.nasa.gov/apod/archivepix.html
2009-12-03 10:27:57
User: ninadsp
Functions: wget
3

Mirror the entire NASA Astronomy Picture of the Day archive, all the way from 1995. The archive is close to 2.5 GB, with lots of files, so give it some time. The logs can be redirected to a file using '-o somefile'. You might also want to try the '-nH' and '--cut-dirs' options, as sketched below.
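
For example, a sketch combining those options (the log file name and the --cut-dirs depth are just illustrative):

# -o writes the log to a file; -nH/--cut-dirs flatten the local directory layout
wget -t inf -k -r -l 3 -p -m -nH --cut-dirs=1 -o apod.log http://apod.nasa.gov/apod/archivepix.html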

xkcd(){ wget -qO- http://xkcd.com/|tee >(feh $(grep -Po '(?<=")http://imgs[^/]+/comics/[^"]+\.\w{3}'))|grep -Po '(?<=(\w{3})" title=").*(?=" alt)';}
2009-11-27 09:11:47
User: eightmillion
Functions: grep tee wget
24

This function displays the latest comic from xkcd.com. One of the best things about xkcd is the title text when you hover over the comic, so this function also displays that after you close the comic.

To get a random xkcd comic, I also use the following:

xkcdrandom(){ wget -qO- dynamic.xkcd.com/comic/random|tee >(feh $(grep -Po '(?<=")http://imgs[^/]+/comics/[^"]+\.\w{3}'))|grep -Po '(?<=(\w{3})" title=").*(?=" alt)';}
while true; do wget -r -l1 --no-clobber -A.txt http://911.wikileaks.org/files/index.html; done; cat *.txt | grep pass
wget -qO - http://www.sputnick-area.net/ip;echo
wget -q -O - `youtube-dl -b -g $url`| ffmpeg -i - -f mp3 -vn -acodec libmp3lame -| mpg123 -
wget -q -O - checkip.dyndns.org|sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
wget -O - -q ip.boa.nu
clfavs(){ URL="http://www.commandlinefu.com";wget -O - --save-cookies c --post-data "username=$1&password=$2&submit=Let+me+in" $URL/users/signin;for i in `seq 0 25 $3`;do wget -O - --load-cookies c $URL/commands/favourites/plaintext/$i >>$4;done;rm -f c;}
2009-09-30 16:43:08
User: suhasgupta
Functions: wget
24

Usage: clfavs username password num_favourite_commands file_in_which_to_backup
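
A hypothetical invocation, backing up the first 100 favourites (username, password and output file are placeholders):

clfavs alice s3cret 100 clf_favourites.txt   # placeholder credentials and file name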

wget 'link of a Picasa WebAlbum' -O - |perl -e'while(<>){while(s/"media":{"content":\[{"url":"(.+?\.JPG)//){print "$1\n"}}' |wget -w1 -i -
wget -q -O - 'http://wap.weather.gov.hk/' | sed -r 's/<[^>]+>//g;/^UV/q' | tail -n4
2009-09-25 02:36:46
User: dakunesu
Functions: sed tail wget
-1

"get Hong Kong weather infomation from HK Observatory

From Hong Kong Observatory wap site ;)"

other one showed alot of blank lines for me

wget -q -O - 'http://wap.weather.gov.hk/' | sed -r 's/<[^>]+>//g;/^UV/q' | grep -v '^$'
wget http://twitter.com/help/test.json -q -O -
2009-09-15 23:22:26
User: ninadsp
Functions: wget
3

Returns a JSON object by connecting to the 'test' endpoint of the Twitter API. Simplest way to check if you can connect to Twitter. Output is also available in XML; use '/help/test.xml' for that.
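
The XML variant mentioned above would then be:

wget http://twitter.com/help/test.xml -q -O -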

wget -q --spider http://server/cgi/script
2009-09-11 05:33:48
User: ashawley
Functions: wget
Tags: wget
0

I don't know if the --spider option works to execute a script, but it might be worth trying. Note that the Drupal project uses the following in a cron job.

wget -O - -q http://localhost/drupal/cron.php

The output is sent to standard out so it can be logged by cron.
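
A minimal crontab sketch along those lines (the hourly schedule and log path are only examples):

# runs at the top of every hour; log path is illustrative
0 * * * * wget -O - -q http://localhost/drupal/cron.php >> /var/log/drupal-cron.log 2>&1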

wget -O /dev/null http://www.google.com
2009-09-10 14:43:57
Functions: wget
Tags: wget
3

I have a remote PHP file that I want to run once an hour, so I set up cron to run this wget. I don't care about the contents of the file and don't want to save the results, so I use -O to send the output to /dev/null.
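
The matching crontab line might look like this (the URL is a placeholder for your own PHP file):

# placeholder URL; runs once an hour
0 * * * * wget -q -O /dev/null http://www.example.com/remote.php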

wget -nv http://en.wikipedia.org/wiki/Linux -O- | egrep -o "http://[^[:space:]]*.jpg" | xargs -P 10 -r -n 1 wget -nv
2009-08-31 18:37:33
User: syssyphus
Functions: egrep wget xargs
10

xargs can be used in this manner to download multiple files at a time; here it runs up to 10 wget processes in parallel and starts a new one whenever the number running falls below 10.
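
The same pattern works for a plain list of URLs, one per line (urls.txt is a hypothetical file):

xargs -P 10 -r -n 1 wget -nv < urls.txt   # urls.txt: one URL per line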

wget --reject html,htm --accept pdf,zip -rl1 url
2009-08-30 14:05:09
User: linuxswords
Functions: wget
16

If the site uses https, use:

wget --reject html,htm --accept pdf,zip -rl1 --no-check-certificate https-url
wget -U "QuickTime/7.6.2 (qtver=7.6.2;os=Windows NT 5.1Service Pack 3)" `echo http://movies.apple.com/movies/someHDmovie_720p.mov | sed 's/\([0-9][0-9]\)0p/h\10p/'`
2009-08-29 00:29:40
User: deadrabbit
Functions: sed wget
5

Copy the link to an HD movie trailer into this command. It's more elegant if it's put into a script, taking the URL as input (see the sketch below).
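
A sketch of such a script, taking the trailer URL as its first argument (file name and error handling are left minimal):

#!/bin/sh
# usage: ./hdtrailer.sh http://movies.apple.com/movies/someHDmovie_720p.mov
url=$(echo "$1" | sed 's/\([0-9][0-9]\)0p/h\10p/')
wget -U "QuickTime/7.6.2 (qtver=7.6.2;os=Windows NT 5.1Service Pack 3)" "$url"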

mirror=ftp://somemirror.com/with/alot/versions/but/no/latest/link; latest=$(curl -l $mirror/ 2>/dev/null | grep util | tail -1); wget $mirror/$latest
2009-08-24 15:58:31
User: peshay
Functions: grep tail wget
4

Downloads the latest version of "util"; insert a sort if the files aren't listed in the right order (see the sketch below).

curl lists all files on the mirror, grep picks out your util, tail -1 takes the one listed at the bottom, and wget fetches it.
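
If the listing isn't already in version order, a sort can be slotted in before the tail (this assumes a GNU sort with version-sort support):

latest=$(curl -l $mirror/ 2>/dev/null | grep util | sort -V | tail -1); wget $mirror/$latest   # sort -V = version sort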

for ((i=1; i<67; i++)) do wget http://www.phrack.org/archives/tgz/phrack${i}.tar.gz -q; done
2009-08-20 23:27:01
User: Abiden
Functions: wget
0

This will download all the phracks! Enjoy!

wget -q -O- PAGE_URL | grep -o 'WORD_OR_STRING' | wc -w
wget -O - -q icanhazip.com
2009-08-14 10:24:30
User: andrepuel
Functions: wget
5

I don't have curl or links installed, so I use wget, writing the output file to standard out.

wget -qO - http://www.commandlinefu.com/feed/tenup | xmlstarlet sel -T -t -o '<x>' -n -t -m rss/channel/item -o '<y>' -n -v description -o '</y>' -n -t -o '</x>' | xmlstarlet sel -T -t -m x/y -v code -n
2009-08-14 02:44:00
User: fsilveira
Functions: wget
0

This lengthy cryptic line will print the latest top 10 commandlinefu.com posts without their summaries. To also print their respective summaries, use the following (even bigger) command line:

wget -qO - http://www.commandlinefu.com/feed/tenup | xmlstarlet sel -T -t -o '<doc>' -n -t -m rss/channel/item -o '<item>' -n -o '<title>' -v title -o '</title>' -n -o '<description>' -v description -o '</description>' -n -o '</item>' -n -t -o '</doc>' | xmlstarlet sel -T -t -m doc/item -v description/code -n -v title -n -n

It is recommended to wrap this line in a shell script so it can be run easily, as I do myself (see the sketch at the end of this entry). You could also use the following URLs to browse the top 3 commands:

wget -qO - http://www.commandlinefu.com/feed/threeup | xmlstarlet ...

.. or all others:

wget -qO - http://feeds2.feedburner.com/Command-line-fu | xmlstarlet ...

PS: You need to install "xmlstarlet" to run it. It's available in the Debian APT repositories (apt-get install xmlstarlet) or at http://xmlstar.sourceforge.net/.
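
A minimal wrapper script along those lines (the file name is arbitrary):

#!/bin/sh
# clf-tenup.sh - print the latest top 10 commandlinefu.com commands (needs xmlstarlet)
wget -qO - http://www.commandlinefu.com/feed/tenup | xmlstarlet sel -T -t -o '<x>' -n -t -m rss/channel/item -o '<y>' -n -v description -o '</y>' -n -t -o '</x>' | xmlstarlet sel -T -t -m x/y -v code -n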

wget http://checkip.dyndns.org && clear && echo && echo My IP && egrep -o '([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}' index.html && echo && rm index.html