
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands tagged wget - 84 results
wget -q -O- --header="Accept-Encoding: gzip" <url> | gunzip > out.html
2010-11-27 22:14:42
User: ashish_0x90
Functions: gunzip wget
2

Get a gzip-compressed web page using wget.

Caution: the command will fail if the website doesn't return gzip-encoded content, though most websites support gzip these days.
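
If you'd rather not depend on the server honouring the header, curl's --compressed flag requests gzip and decompresses the response only when it actually is compressed (a curl-based alternative, not part of the original tip):

curl -s --compressed -o out.html <url>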

wget -O xkcd_$(date +%y-%m-%d).png `lynx --dump http://xkcd.com/|grep png`; eog xkcd_$(date +%y-%m-%d).png
Check the Description below.
2010-10-07 04:22:32
User: hunterm
-1

The command was too long for the command box, so here it is:

echo $(( `wget -qO - http://i18n.counter.li.org/ | grep 'users registered' | sed 's/.*\<font size=7\>//g' | tr '\>' ' ' | sed 's/<br.*//g' | tr ' ' '\0'` + `curl --silent http://www.dudalibre.com/gnulinuxcounter?lang=en | grep users | head -2 | tail -1 | sed 's/.*<strong>//g' | sed 's/<\/strong>.*//g'` ))

This took me about an hour to do. It uses both wget and curl because dudalibre.com blocks wget, while wget worked nicely for the li.org counter.

wget -qO - http://i18n.counter.li.org/ | grep 'users registered' | sed 's/.*\<font size=7\>//g' | tr '\>' ' ' | sed 's/<br.*//g' | tr ' ' '\0'
wget -O - 'https://USERNAMEHERE:PASSWORDHERE@mail.google.com/mail/feed/atom' --no-check-certificate
2010-09-26 14:47:13
User: PLA
Functions: wget
-3

I use this command in my Conky script to display the number of messages in my Gmail inbox and to list the from: and subject: fields.

wget -k $URL
2010-08-21 17:39:53
User: minnmass
Functions: wget
Tags: wget
-2

The "-k" flag will tell wget to convert links for local browsing; it works with mirroring (ie with "-r") or single-file downloads.

wget randomfunfacts.com -O - 2>/dev/null | grep \<strong\> | sed "s;^.*<i>\(.*\)</i>.*$;\1;" | while read FUNFACT; do notify-send -t $((1000+300*`echo -n $FUNFACT | wc -w`)) -i gtk-dialog-info "RandomFunFact" "$FUNFACT"; done
2010-04-02 09:43:32
User: mtron
Functions: grep read sed wc wget
2

Extension to tali713's random fact generator. It takes the output and sends it to notify-osd. Display time is proportional to the length of the fact.

wget randomfunfacts.com -O - 2>/dev/null | grep \<strong\> | sed "s;^.*<i>\(.*\)</i>.*$;\1;"
2010-03-30 23:49:30
User: tali713
Functions: grep sed wget
13

Without infinite time and knowledge of how the site will be designed in the future, this may eventually stop working; still, it serves as a simple, straightforward starting point.

This uses the observation that the only item marked as strong on the page is the single logical line that includes the italicized fact.

If future revisions of the page show failure, or intermittent failure, one may simply alter the above to read:

wget randomfunfacts.com -O - 2>/dev/null | tee lastfact | grep \<strong\> | sed "s;^.*<i>\(.*\)</i>.*$;\1;"

The file lastfact can then be examined whenever the command fails.

if [ "$(ping -q -c1 google.com)" ];then wget -mnd -q http://www.google.com/intl/en_ALL/images/logo.gif ;fi
2010-03-23 04:15:03
User: alf
Functions: wget
-1

Bash script to test if a server is up; you can use this before wget'ing a file to make sure a blank one isn't downloaded.
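
A variation on the same idea that probes the actual URL rather than just the host, using wget's --spider mode (my sketch, not the original author's version):

wget -q --spider http://www.google.com/intl/en_ALL/images/logo.gif && wget -mnd -q http://www.google.com/intl/en_ALL/images/logo.gif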

pronounce(){ wget -qO- $(wget -qO- "http://dictionary.reference.com/browse/$@" | grep 'soundUrl' | head -n 1 | sed 's|.*soundUrl=\([^&]*\)&.*|\1|' | sed 's/%3A/:/g;s/%2F/\//g') | mpg123 -; }
pronounce(){ wget -qO- $(wget -qO- "http://www.m-w.com/dictionary/$@" | grep 'return au' | sed -r "s|.*return au\('([^']*)', '([^'])[^']*'\).*|http://cougar.eb.com/soundc11/\2/\1|") | aplay -q; }
2010-03-12 17:44:16
User: matthewbauer
Functions: aplay grep sed wget
5

The original was a little bit too complicated for me. This one does not use any variables.

cmd=$(wget -qO- "http://www.m-w.com/dictionary/$(echo "$@"|tr '[A-Z]' '[a-z]')" | sed -rn "s#return au\('([^']+?)', '([^'])[^']*'\);.*#\nwget -qO- http://cougar.eb.com/soundc11/\2/\1 | aplay -q#; s/[^\n]*\n//p"); [ "$cmd" ] && eval "$cmd" || exit 1
2010-03-12 13:56:41
User: hackerb9
Functions: eval exit sed wget
3

Looks up a word on merriam-webster.com, does a screen scrape for the FIRST audio pronunciation and plays it.

USAGE: Put this one-liner into a shell script (e.g., ~/bin/pronounce) and run it from the command line giving it the word to say:

pronounce lek

If the word isn't found in merriam-webster, no audio is played and the script returns an error value. However, M-W is a fairly complete dictionary (better than howjsay.com which won't let you hear how to pronounce naughty words).

ASSUMPTIONS: GNU's sed (which supports -r for extended regular expressions) and Linux's aplay. Aplay can be replaced by any program that can play .WAV files from stdin.

KNOWN BUGS: only the FIRST pronunciation is played, which is problematic if you wanted a particular form (plural, adjectival, etc) of the word. For example, if you run this:

pronounce onomatopoetic

you'll hear a voice saying "onomatopoeia".

Playing the correct form of the word is possible, but doing so might make the screen scraper even more fragile than it already is. (The slightest change to the format of m-w.com could break it).
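
To install it as suggested in the usage note above (the path is only an example):

echo '#!/bin/bash' > ~/bin/pronounce   # then append the one-liner above to the file
chmod +x ~/bin/pronounce
pronounce lek   # assuming ~/bin is on your PATH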

{ u="http://twitter.com/commandlinefu"; echo "Subject: $u"; echo "Mime-Version: 1.0"; echo -e "Content-Type: text/html; charset=utf-8\n\n"; curl $u ; } | sendmail recipient@example.com
2010-02-24 04:18:30
User: pascalv
Functions: echo sendmail
-1

This will send the web page at $u to recipient@example.com. To send the web page to oneself, recipient@example.com can be replaced by $(whoami).

The "charset" is UTF-8 here, but any alternative charset of your choice would work.

`wget -O - -o /dev/null $u` may be considered instead of `curl $u`.

On some systems the complete path to sendmail may be necessary, for instance /sys/pkg/libexec/sendmail/sendmail for some NetBSD.

set-proxy () { P=webproxy:1234; DU="fred"; read -p "username[$DU]:" USER; UN=${USER:-$DU}; read -s -p "password:" PASS; printf "%b" "\n"; export http_proxy="http://${UN}:${PASS}@$P/"; export ftp_proxy="http://${UN}:${PASS}@$P/"; }
2010-02-04 13:12:59
User: shadycraig
Functions: export printf read set
1

Prompts the user for a username and password, which are then exported as http_proxy for use by wget, yum etc.

The default user, web proxy and port are used.

Using this function prevents the cleartext username and password from appearing in your bash_history and on-screen.
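
A matching helper to undo it could look like this (my addition, not part of the original snippet):

unset-proxy () { unset http_proxy ftp_proxy; }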

wget -q -U busybox -O- "http://www.google.com/search?ie=UTF8&q=define%3A$1" | tr '<' '\n' | sed -n 's/^li>\(.*\)/\1\n/p'
2010-02-01 13:01:47
User: hackerb9
Functions: sed tr wget
0

This is a minimalistic version of the ubiquitous Google definition screen scraper. This version was designed not only to run fast, but to work using BusyBox. BusyBox is a collection of basic Unix tools that have been compiled into a single binary to save space on tiny installations of Unix. For example, although my phone doesn't have perl or the GNU utilities, it does have BusyBox's stripped down versions of wget, tr, and sed. It turns out that those tools suffice for many tasks.

Known Bugs: This script does not handle HTML entities at all. I don't think there's an easy way to do that within BusyBox, but I'd love to see it if someone could do it. Also, this script can only define a single word, not phrases. (Well, you could if you typed in %20, but that'd be gross.) Lastly, this script does not show the URL where definitions were found. Given the randomness of the Net, that last bit of information is often key.
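
As a partial fix for the entity bug, a small sed filter appended to the pipeline can translate the five most common HTML entities, and it works with BusyBox's sed too (a sketch; note that &amp; must be decoded last):

... | sed 's/&lt;/</g; s/&gt;/>/g; s/&quot;/"/g; s/&#39;/'\''/g; s/&amp;/\&/g'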

wget -q http://dynamic.xkcd.com/comic/random/ -O-| sed -n '/<img src="http:\/\/imgs.xkcd.com\/comics/{s/.*\(http:.*\)" t.*/\1/;p}' | awk '{system ("wget -q " $1 " -O- | display -title $(basename " $1") -write /tmp/$(basename " $1")");}'
2009-12-09 13:41:25
User: laugg
Functions: awk sed wget
Tags: sed awk wget comic
0

Only the ImageMagick package needs to be installed.

Displays an xkcd comic with its title and saves it in the /tmp directory.

If you prefer to view the newest xkcd, use this command:

wget -q http://xkcd.com/ -O-| sed -n '/<img src="http:\/\/imgs.xkcd.com\/comics/{s/.*\(http:.*\)" t.*/\1/;p}' | awk '{system ("wget -q " $1 " -O- | display -title $(basename " $1") -write /tmp/$(basename " $1")");}'
wget `lynx --dump http://xkcd.com/|grep png`
alias whatismyip="wget -q -O - http://whatismyip.com/automation/n09230945.asp"
2009-10-30 15:42:52
User: gibboris
Functions: alias
-3

The preferred way for scripts (and easier to parse).
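
In a script, that might look like this (MYIP is just an illustrative name):

MYIP=$(wget -q -O - http://whatismyip.com/automation/n09230945.asp); echo "External IP: $MYIP"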

check_dns_no() { for i in $* ; do if `wget -O - -q http://www.norid.no/domenenavnbaser/whois/?query=$i.no | grep "no match" &>/dev/null` ; then echo $i.no "available" ; fi ; sleep 1 ;done }
2009-09-30 21:17:33
User: xeor
Functions: echo grep sleep
Tags: wget dig dns
0

Mostly for Norwegians, but easily adaptable to other TLDs. Very handy if you are brainstorming for a new domain name.

Will only display the available ones.

You can usually do this better with dig, but if you don't have dig, or the TLD only has an online service to check with, this will be useful.
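
For comparison, a dig-based sketch of the same idea, treating a domain with no NS records as available (a rough heuristic, and my adaptation rather than the author's code):

check_dns_no2() { for i in "$@"; do [ -z "$(dig +short NS "$i.no")" ] && echo "$i.no available"; sleep 1; done; }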

clfavs(){ URL="http://www.commandlinefu.com";wget -O - --save-cookies c --post-data "username=$1&password=$2&submit=Let+me+in" $URL/users/signin;for i in `seq 0 25 $3`;do wget -O - --load-cookies c $URL/commands/favourites/plaintext/$i >>$4;done;rm -f c;}
2009-09-30 16:43:08
User: suhasgupta
Functions: c++ wget
24

Usage: clfavs username password num_favourite_commands file_in_which_to_backup
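
For example (placeholder credentials; backs up up to 100 favourites into favourites.txt):

clfavs myuser mypass 100 favourites.txt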

display "$(wget -q http://dynamic.xkcd.com/comic/random/ -O - | grep -Po '(?<=")http://imgs.xkcd.com/comics/[^"]+(png|jpg)')"
2009-09-14 18:40:14
User: syssyphus
Tags: wget xkcd
6

I sorta stole this from

http://www.shell-fu.org/lister.php?id=878#MTC_form

but it didn't work, so here it is, fixed.

---

Updated to work with jpegs, and to use a fancy positive look-behind assertion.
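
For systems whose grep lacks -P (and therefore look-behind support), a GNU sed equivalent might look like this (my adaptation, not the original):

display "$(wget -q http://dynamic.xkcd.com/comic/random/ -O - | sed -n 's/.*"\(http:\/\/imgs\.xkcd\.com\/comics\/[^"]*\.\(png\|jpg\)\)".*/\1/p' | head -1)"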

wget -q --spider http://server/cgi/script
2009-09-11 05:33:48
User: ashawley
Functions: wget
Tags: wget
0

I don't know if the --spider option works to execute a script, but it might be worth trying. Note that the Drupal project uses the following in a cron job.

wget -O - -q http://localhost/drupal/cron.php

The output is sent to standard out so it can be logged by cron.
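
As a crontab entry, that might look like this (the hourly schedule is an example; cron captures the standard output):

0 * * * * wget -O - -q http://localhost/drupal/cron.php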

wget -O /dev/null http://www.google.com
2009-09-10 14:43:57
Functions: wget
Tags: wget
3

I have a remote php file that I want to run once an hour. I set up cron to run this wget. I don't really care what's in the file, and I don't want to save the results, so I use -O to send it to /dev/null.
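
The matching crontab entry could be something like this (hourly on the hour; the URL is a placeholder, and -q keeps wget's progress output out of cron's mail):

0 * * * * wget -q -O /dev/null http://www.example.com/script.php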

while [ -n "`pgrep wget`" ]; do sleep 2; done; [ -e "/tmp/nosleep" ] || echo mem >/sys/power/state
2009-09-06 05:51:20
User: kamathln
Functions: echo sleep
1

[Note: This command needs to be run as root].

If you are downloading something large at night, you can start wget as a normal user and issue the above command as root. When the download is done, the computer will automatically go to sleep. If at any time you feel the computer should not go to sleep automatically (for example, if you find the download still continuing in the morning), just create an empty file called nosleep in the /tmp directory.
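
One way to run the whole thing as root in a single line (using pgrep -x to match wget exactly, which is my tweak):

sudo sh -c 'while pgrep -x wget >/dev/null; do sleep 2; done; [ -e /tmp/nosleep ] || echo mem > /sys/power/state'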

wget `lynx -dump http://www.ebow.com/ebowtube.php | grep .flv$ | sed 's/[[:blank:]]\+[[:digit:]]\+\. //g'`
2009-08-02 14:09:53
User: spaceyjase
Functions: grep sed wget
3

I wanted all the 'hidden' .flv files from the http link in the command line; wget seemed appropriate, fed with output from lynx, grepping for the flv files, then normalising via sed (to remove the numeric bullet). Similar to the 'Grab mp3 files' fu. Replace the link with your own, and the grep arg with something more interesting ;) See here for something along the same lines...

http://www.commandlinefu.com/commands/view/1006/grab-mp3-files-from-your-favorite-netcasts-mp3blog-or-sites-that-often-have-good-mp3s

Hope you find it useful! Improvements welcome, naturally.