
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).

News

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Terminal - Commands using wget - 253 results
curl -s http://example.com | grep -o -P "<a.*href.*>" | grep -o "http.*.pdf" | xargs -d"\n" -n1 wget -c
2011-06-09 14:42:46
User: b_t
Functions: grep wget xargs
0

This example command fetches the example.com webpage and then downloads and saves every PDF file linked from it.

[*Note: of course there are no PDFs on example.com. This is just an example]
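
A slightly stricter variant (just a sketch; it assumes the PDF links are absolute and the href values are double-quoted) pulls the URLs straight out of the href attributes:

curl -s http://example.com | grep -oP 'href="\Khttp[^"]*\.pdf' | xargs -d"\n" -n1 wget -c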

wget -U Mozilla -qO - "http://thepiratebay.org/search/your_querry_here/0/7/0" | grep -o 'http\:\/\/torrents\.thepiratebay\.org\/.*\.torrent'
2011-04-15 15:01:16
User: sairon
Functions: grep wget
3

This one-liner greps the first 30 direct URLs for .torrent files matching your search query, ordered by number of seeds (descending; determined by the second number after your query, in this case 7; for other options just check the site in your favourite web browser).

You don't have to care about grepping the torrent names as well, because they are already included in the .torrent URL (except for spaces and some other characters replaced by underscores, but still human-readable).

Be sure to have some http://isup.me/ macro handy (someone often kicks the ethernet cables out of their servers ;) ).

I've also coded a more user-friendly ash (should be BASH compatible) script, which also lists the total size of download and number of seeds/peers (available at http://saironiq.blogspot.com/2011/04/my-shell-scripts-4-thepiratebayorg.html - may need some tweaking, as it was written for a router running OpenWrt and transmission).

Happy downloading!

wget -O/dev/null -q URLtoCheck && echo exists || echo not exist
2011-04-07 20:55:33
User: xeonproject
Functions: echo wget
1

Replace URLtoCheck with the URL you want to test; the command prints whether the remote file exists.
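
For example, with a placeholder URL, or wrapped in a small function for reuse (the url_exists name is made up):

wget -O/dev/null -q http://example.com/file.iso && echo exists || echo not exist

url_exists() { wget -O/dev/null -q "$1" && echo exists || echo "does not exist"; }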

wget --spider -o wget.log -e robots=off --wait 1 -r -p http://www.example.com
2011-04-05 13:42:14
User: lele
Functions: wget
-1

This will recursively visit all URLs linked from the specified one. It won't save anything locally and it will produce a detailed log.

Useful to find broken links in your site. It ignores robots.txt, so just use it on a site you own!
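
To pull the failing URLs out of the log afterwards, something along these lines can help (a rough sketch; the exact log wording depends on your wget version):

grep -B 2 '404 Not Found' wget.log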

http_proxy=<proxy.server:port> wget <url>
2011-03-30 13:06:19
User: rdc
Functions: wget
0

On a machine behind a firewall, you can pass the proxy server address as a prefix to wget instead of setting it as an environment variable first.
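
For example, with a hypothetical proxy host and port:

http_proxy=proxy.example.com:3128 wget http://www.example.com/index.html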

down4me() { wget -qO - "http://www.downforeveryoneorjustme.com/$1" | sed '/just you/!d;s/<[^>]*>//g' ; }
2011-03-11 14:38:38
User: vando
Functions: sed wget
13

Check if a site is down with downforeveryoneorjustme.com
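
Usage, once the function is defined (example.com is just a placeholder):

down4me example.com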

wget -qO - "http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=steering+wheel&sl=en&tl=en&restrict=pr,de&client=te" | sed 's/dict_api\.callbacks.id100.//' | sed 's/,200,null)//'
2011-03-08 15:00:39
User: sairon
Functions: sed wget
1
wget -qO - "http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=steering+wheel&sl=en&tl=en&restrict=pr,de&client=te"

this does the actual Google Dictionary query and returns a JSON string wrapped in a JSONP callback tag

sed 's/dict_api\.callbacks.id100.//'

here we remove the beginning of that wrapper tag

sed 's/,200,null)//'

and here we remove its end

There are also some special characters which could cause problems with some JSON parsers, so if you get some errors, this is probably the case (sed is your friend).

I also like to trim the "webDefinitions" part, because it (sometimes) contains misleading information.

sed 's/\,\"webDefinitions.*//'

(but remember to append a "}" at the end, because otherwise the JSON string will be invalid)
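
A sketch of that trim which keeps the JSON valid by replacing the removed tail with a closing brace instead of appending one separately:

sed 's/,"webDefinitions.*/}/'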

The output also contains links to mp3 files with pronunciation.

As of now, this is only usable for English. If you choose a language other than English, you will only get webDefinitions (which are crap).

wget -q -U Mozilla -O output.mp3 "http://translate.google.com/translate_tts?ie=UTF-8&tl=en&q=hello+world"
2011-03-08 14:05:36
User: sairon
Functions: wget
35

EDIT: command updated to support accented characters!

Works in any of the 58 languages Google supports (some sound like crap; English is the best IMO).

You get an mp3 file containing your query in spoken language. There is a limit of 100 characters for the "q" parameter, so be careful. The "tl" parameter contains the target language.
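
For example, the same request in French (tl=fr); spaces in the q parameter have to be written as + or %20:

wget -q -U Mozilla -O bonjour.mp3 "http://translate.google.com/translate_tts?ie=UTF-8&tl=fr&q=bonjour+le+monde"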

wget -q -U "Mozilla/5.0" --post-file speech.flac --header="Content-Type: audio/x-flac; rate=16000" -O - "http://www.google.com/speech-api/v1/recognize?lang=en-us&client=chromium"
2011-03-08 13:39:01
User: sairon
Functions: wget
3

The FLAC audio must be encoded at 16000Hz sampling rate (SoX is your friend).

Outputs a short JSON string, the actual speech is in the hypotheses->utterance, the accuracy is stored in hypotheses->confidence (ranging from 0 to 1).

Google also accepts audio in a special Speex format (audio/x-speex-with-header-byte), which is much smaller in comparison with lossless FLAC, but I haven't been able to encode such a sample.
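
A sketch of preparing the input with SoX (input.wav is a placeholder name); it converts the audio to 16000Hz mono FLAC:

sox input.wav -r 16000 -c 1 speech.flac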

wget -U "Mozilla/5.0" -qO - "http://translate.google.com/translate_a/t?client=t&text=translation+example&sl=auto&tl=fr" | sed 's/\[\[\[\"//' | cut -d \" -f 1
2011-03-06 13:46:16
User: sairon
Functions: cut sed wget
5

substitute "example" with desired string;

tl = target language (en, fr, de, hu, ...);

you can leave sl parameter as-is (autodetection works fine)
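
For example, auto-detecting the source language and translating to German (tl=de):

wget -U "Mozilla/5.0" -qO - "http://translate.google.com/translate_a/t?client=t&text=good+morning&sl=auto&tl=de" | sed 's/\[\[\[\"//' | cut -d \" -f 1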

wget http://cmyip.com -O - -o /dev/null | grep -Po '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+'
site="http://gifbin.com/"; for i in $(wget -qO- "$site"random| sed -r "s/^.*(bin\/.+\.gif).*$/\1/m" | grep "^bin"); do wget -c "$site$i"; filename=`basename $i`; [ `identify $filename | wc -l` -gt 1 ] || rm -f $filename; done
2011-02-15 10:05:37
User: az
Functions: grep rm sed wc wget
-2

Download a bunch of random animated gifs from http://gifbin.com/

wget http://cmyip.com -O - -o /dev/null | awk '/\<title/ {print $4}'
wget --mirror -A.jpg http://www.xs4all.nl/~dassel/wall/
wget http://www.discogs.com/search?q=724349691704 -O foobar &> /dev/null ; grep \/release\/ foobar | head -2 | tail -1 | sed -e 's/^<div>.*>\(.*\)<\/a><\/div>/\1/' ; rm foobar
2011-01-30 23:34:54
User: TetsuyO
Functions: grep head rm sed tail wget
-1

Substitute that 724349691704 with the UPC of a CD you have at hand, and (hopefully) this one-liner should return $Artist - $Title, querying discogs.com.

Yes, I know, all that head/tail/grep crap can be improved with a single sed command, feel free to send "patches" :D

Enjoy!

mplayer $(wget -q -O - "http://europarse.real.com/hurl/gratishurl.ram?pid=eu_aljazeera&amp;file=al_jazeera_en_lo.rm" | sed -e 's#lo.rm#hi.rm#')
2011-01-30 14:36:37
User: torrid
Functions: sed wget
3

One cannot call the high-quality livestream directly, but this command gives you a session ID and the high-quality stream. #egypt #jan25

wget http://www.youtube.com/watch?v=dQw4w9WgXcQ -qO- | sed -n "/fmt_url_map/{s/[\'\"\|]/\n/g;p}" | sed -n '/^fmt_url_map/,/videoplayback/p' | sed -e :a -e '$q;N;5,$D;ba' | tr -d '\n' | sed -e 's/\(.*\),\(.\)\{1,3\}/\1/' | wget -i - -O surprise.flv
2011-01-25 04:19:06
User: Eno
Functions: sed tr wget
37

Nothing special required, just wget, sed & tr!

wget http://URL/FILE.tar.gz -O - | tar xfz -
2011-01-18 12:17:16
Functions: tar wget
16

This will uncompress the file while it's being downloaded, which makes it much faster.

wget -e robots=off -E -H -k -K -p http://<page>
wget -qO- www.commandlinefu.com/commands/by/PhillipNordwall | awk -F\> '/num-votes/{S+=$2; I++}END{print S/I}'
head -100000 /dev/urandom | strings|tr '[A-Z]' '[a-z]'|sort >temp.txt && wget -q http://www.mavi1.org/web_security/wordlists/webster-dictionary.txt -O-|tr '[A-Z]' '[a-z]'|sort >temp2.txt&&comm -12 temp.txt temp2.txt
wget -qO - http://ngrams.googlelabs.com/datasets | grep -E "href='.+\.zip'" | sed -r "s/.*href='(.+\.zip)'.*/\1/" | uniq | while read line; do wget "$line"; done
wget -q -nd http://www.biranchi.com/ip.php; echo "Your external ip is : `cat ip.php`"
2010-12-20 09:53:59
User: pebkac
Functions: echo wget
-10

This is a convenient way to do it in scripts. You also want to rm the ip.php file afterwards.
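
A sketch of the same thing with the cleanup built in, so ip.php doesn't get left behind:

wget -q -nd http://www.biranchi.com/ip.php && echo "Your external ip is : $(cat ip.php)" && rm -f ip.php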

for ((;;)) do pgrep wget ||shutdown -h now; sleep 5; done
wget -O gsplitter.crx "https://clients2.google.com/service/update2/crx?response=redirect&x=id%3Dlnlfpoefmdfplomdfppalohfbmlapjjo%26uc%26lang%3Den-US&prod=chrome&prodversion=8.0.552.224" ; google-chrome --load-extension gsplitter.crx
2010-12-14 19:12:18
User: strzel_a
Functions: wget
-3

Download the Gsplitter extension and load it into Chrome!

Or download it here:

https://chrome.google.com/extensions/detail/lnlfpoefmdfplomdfppalohfbmlapjjo