What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using wget - 234 results
wget -qO- www.commandlinefu.com/commands/by/PhillipNordwall | awk -F\> '/num-votes/{S+=$2; I++}END{print S/I}'
head -100000 /dev/urandom | strings|tr '[A-Z]' '[a-z]'|sort >temp.txt && wget -q http://www.mavi1.org/web_security/wordlists/webster-dictionary.txt -O-|tr '[A-Z]' '[a-z]'|sort >temp2.txt&&comm -12 temp.txt temp2.txt
wget -qO - http://ngrams.googlelabs.com/datasets | grep -E "href='(.+\.zip)'" | sed -r "s/.*href='(.+\.zip)'.*/\1/" | uniq | while read line; do wget "$line"; done
wget -q -nd http://www.biranchi.com/ip.php; echo "Your external ip is : `cat ip.php`"
2010-12-20 09:53:59
User: pebkac
Functions: echo wget
-10

This is a convenient way to do it in scripts. You'll also want to rm the ip.php file afterwards.
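
A variant that avoids the temporary file entirely, using command substitution so there is nothing left to rm (same endpoint as above):

echo "Your external IP is: $(wget -qO- http://www.biranchi.com/ip.php)"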

for ((;;)); do pgrep wget || shutdown -h now; sleep 5; done
wget -O gsplitter.crx "https://clients2.google.com/service/update2/crx?response=redirect&x=id%3Dlnlfpoefmdfplomdfppalohfbmlapjjo%26uc%26lang%3Den-US&prod=chrome&prodversion=8.0.552.224" ; google-chrome --load-extension gsplitter.crx
2010-12-14 19:12:18
User: strzel_a
Functions: wget
-3

Download the Gsplitter extension and open it with Chrome!

Or download it here:

https://chrome.google.com/extensions/detail/lnlfpoefmdfplomdfppalohfbmlapjjo

function 4get () { curl "$1" | grep -i "File<a href" | awk -F '<a href="' '{print $4}' | awk -F '" ' '{print $1}' | xargs wget; }
2010-12-11 09:01:32
User: gml
Functions: awk grep wget xargs
2

Useful for ripping wallpaper from 4chan.org/wg
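
Example usage (the thread URL is hypothetical; any /wg/ thread should work):

4get http://boards.4chan.org/wg/res/1234567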

cd /usr/src ; wget http://www.rarlab.com/rar/unrarsrc-4.0.2.tar.gz ; tar xvfz unrarsrc-4.0.2.tar.gz ; cd unrar ; ln -s makefile.unix Makefile ; make clean ; make ; make install
wget -qO - http://www.commandlinefu.com/commands/random/plaintext | sed -n '1d; /./p'
2010-12-05 15:32:14
User: dramaturg
Functions: sed wget
7

Since we get plain text back anyway, we don't need lynx. The sed part also removes the credit line.
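
If you use it a lot, it drops naturally into an alias (the name fu is arbitrary):

alias fu='wget -qO - http://www.commandlinefu.com/commands/random/plaintext | sed -n "1d; /./p"'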

wget -qO- "VURL" | grep -o "googleplayer.swf?videoUrl\\\x3d\(.\+\)\\\x26thumbnailUrl\\\x3dhttp" | grep -o "http.\+" | sed -e's/%\([0-9A-F][0-9A-F]\)/\\\\\x\1/g' | xargs echo -e | sed 's/.\{22\}$//g' | xargs wget -O OUPUT_FILE
2010-12-03 17:27:08
Functions: echo grep sed wget xargs
2

Download Google Video with wget, or, if you wish, pass the video URL to e.g. mplayer to view it as a stream.

1. VURL: replace with the page URL, e.g. http://video.google.com/videoplay?docid=12312312312312313#

2. OUTPUT_FILE: optionally change to a more suitable name; this is the downloaded file, e.g. foo.flv

# Improvements greatly appreciated. (close to my first linux command after ls -A :) )

Breakdown, pipe by pipe (a combined script sketch follows the list):

1. wget: html from google, pass to stdout

2. grep: grab the video URL up to thumbnailUrl (the thumbnail itself is not needed)

3. grep: Strip off everything before http://

4. sed: urldecode

5. echo: hex escapes

6. sed: strip the trailing part before thumbnailUrl

7. wget: download. Here one could instead use e.g. mplayer or another player...
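
The same pipeline written out as a script, one stage per line (a sketch; VURL and OUTPUT_FILE are the placeholders described above):

#!/bin/bash
VURL="http://video.google.com/videoplay?docid=12312312312312313#"   # 1. the video page URL
OUTPUT_FILE="foo.flv"                                               # 2. name of the downloaded file
wget -qO- "$VURL" |                                # fetch the HTML from Google
  grep -o "googleplayer.swf?videoUrl\\\x3d\(.\+\)\\\x26thumbnailUrl\\\x3dhttp" |  # video URL up to thumbnailUrl
  grep -o "http.\+" |                              # strip everything before http://
  sed -e 's/%\([0-9A-F][0-9A-F]\)/\\\\\x\1/g' |    # urldecode: turn %XX into \xXX escapes
  xargs echo -e |                                  # expand the hex escapes
  sed 's/.\{22\}$//g' |                            # strip the trailing part before thumbnailUrl
  xargs wget -O "$OUTPUT_FILE"                     # download the video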

wget -O LICENSE.txt http://www.gnu.org/licenses/gpl-3.0.txt
wget --auth-no-challenge --server-response -O- $url 2>&1 | grep "Cookie" | sed "s/^ Set-//g" > cookie.txt; wget --auth-no-challenge --server-response --http-user="user" --http-password="pw" --header="$(cat cookie.txt)" -O- $url
2010-12-01 11:24:35
User: glaudiston
Functions: grep sed wget
1

I have a server with a PHP page requiring basic authentication, like this:

header('WWW-Authenticate: Basic realm="do auth"');

header('HTTP/1.0 401 Unauthorized');

...?>

And basic authentication in wget did not work:

wget --auth-no-challenge --http-user="username" --http-password="password" -O- "http://url"

wget --keep-session-cookies --save-cookies=cookies.txt --load-cookies=cookies.txt --http-user="username" --http-password="password" -O- "http://url"

I always received the 401 Authorization failed.

The saved cookie is always empty.

With my approach, I receive the header from the server and save the cookie, then resend the session cookie along with the authentication data.
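
When the server does accept standard Basic auth, a simpler sketch is to send the credentials pre-encoded yourself and skip the cookie round-trip (the Authorization header carries exactly the base64 of "user:password"; user, pw and $url are placeholders):

auth=$(printf '%s:%s' "user" "pw" | base64)
wget --header="Authorization: Basic $auth" -O- "$url"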

cat video.ogg | nc -l -p 4232 & wget http://users.bshellz.net/~bazza/?nombre=name -O - & sleep 10; mplayer http://users.bshellz.net/~bazza/datos/name.ogg
wget -q -O- --header="Accept-Encoding: gzip" <url> | gunzip > out.html
2010-11-27 22:14:42
User: ashish_0x90
Functions: gunzip wget
1

Get a gzip-compressed web page using wget.

Caution: the command will fail if the website doesn't return gzip-encoded content, though most websites support gzip these days.
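
A more defensive sketch checks for the gzip magic bytes (1f 8b) before decompressing, so a plain response still ends up in out.html ($url is a placeholder):

wget -q -O out.raw --header="Accept-Encoding: gzip" "$url"
if [ "$(head -c2 out.raw | od -An -tx1 | tr -d ' ')" = "1f8b" ]; then
  gunzip -c out.raw > out.html   # server honoured the gzip request
else
  mv out.raw out.html            # server sent plain content
fi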

sudo apt-add-repository 'deb http://archive.offensive-security.com pwnsauce main microverse macroverse restricted universe multiverse' && wget -q http://archive.offensive-security.com/backtrack.gpg -O- | sudo apt-key add -
2010-11-16 18:23:48
User: kzh
Functions: sudo wget
Tags: Debian
3

Add the BackTrack repositories to your Debian-based GNU/Linux distribution. Thanks to http://it-john.com/home/technology/linux-technology/add-back-track-4-repo-to-ubuntu/

wget `youtube-dl -g 'http://www.youtube.com/watch?v=-S3O9qi2E2U'` -O - | tee -a parachute-ending.flv | mplayer -cache 8192 -
2010-10-28 13:51:59
User: artagnon
Functions: tee wget
0

Watch a video while it's downloading. It's additionally saved to the disk for later viewing.

wget -O xkcd_$(date +%y-%m-%d).png `lynx --dump http://xkcd.com/|grep png`; eog xkcd_$(date +%y-%m-%d).png
diff <(wget -q -O - URL1) <(wget -q -O - URL2)
svn co http://simile.mit.edu/repository/crowbar/trunk && cd ./trunk/xulapp && xulrunner --install-app && Xvfb :1 & DISPLAY=:1 xulrunner application.ini 2>/dev/null 1>/dev/null & wget -O- "127.0.0.1:10000/&url=http://www.facebook.com"
2010-10-16 05:12:11
User: argv
Functions: cd wget
-1

Some other options:

&delay=1000

&mode=links

Much more is possible with Piggy Bank as the scraper.

Works well with your favourite curses or non-curses HTTP clients.

ip2loc() { wget -qO - www.ip2location.com/$1 | grep "<span id=\"dgLookup__ctl2_lblICountry\">" | sed 's/<[^>]*>//g; s/^[\t]*//; s/&quot;/"/g; s/&lt;/</g; s/&gt;/>/g; s/&amp;/\&/g'; }
2010-10-13 00:19:35
User: bkuri
Functions: grep sed wget
0

Grabs the ip2location page and removes everything but the span tag containing the country value. Place it inside your .bashrc or .bash_aliases file.
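
Usage (the exact output depends on the markup ip2location.com currently serves, so treat it as illustrative):

ip2loc 8.8.8.8   # prints the country for the given IP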

wget -qO - http://i18n.counter.li.org/ | grep 'users registered' | sed 's/.*\<font size=7\>//g' | tr '\>' ' ' | sed 's/<br.*//g' | tr ' ' '\0'
wget -O - 'https://USERNAMEHERE:PASSWORDHERE@mail.google.com/mail/feed/atom' --no-check-certificate
2010-09-26 14:47:13
User: PLA
Functions: wget
-3

I use this command in my Conky script to display the number of messages in my Gmail inbox and to list the from: and subject: fields.
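
If you only want the unread count, the feed exposes the total in a <fullcount> element (a sketch; same credential placeholders as above):

wget -qO- --no-check-certificate 'https://USERNAMEHERE:PASSWORDHERE@mail.google.com/mail/feed/atom' | sed -n 's/.*<fullcount>\([0-9]*\)<\/fullcount>.*/\1/p'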

wget -q $(lynx --dump 'http://geekandpoke.typepad.com/' | grep '\/.a\/' | grep '\-pi' | head -n 1 | awk '{print $2}') -O geekandpoke.jpg
wget -O- -q http://www.nomachine.com/download-package.php?Prod_Id=2067 | sed -n -e 'H;${x;s/\n/ /g;p;}' | sed -e "s/[Hh][Rr][Ee][Ff]=\"/\n/g" | cut -d "\"" -f1 | sort -u | grep deb$
wget -k -r -l 5 http://gentoo-install.com
2010-09-03 01:42:50
User: fecub
Functions: wget
1

Download a website 5 levels deep and browse it offline! (A fuller variant follows the example below.)

-k -> convert-links (to browse offline)

-r -> recursive download

-l 5 -> level 5

Example:

http://gentoo-install.com

:-)