What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands tagged wget - 80 results
wget -nd -r -l 2 -A jpg,jpeg,png,gif http://website-url.com
wget -A mp3,mpg,mpeg,avi -r -l 3 http://www.site.com/
expandurl() { wget -S "$1" 2>&1 | grep ^Location; }
2011-10-18 18:50:54
User: atoponce
Functions: grep wget
Tags: wget
0

This shell function uses wget(1) to show what site a shortened URL is pointing to, even if there are many nested shortened URLs. This is a great way to test whether or not the shortened URL is sending you to a malicious site, or somewhere nasty that you don't want to visit. The sample output is from:

expandurl http://t.co/LDWqmtDM
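
A similar check can be done with curl if you have it installed (just a sketch; the function name is only for illustration): curl follows every redirect itself and prints nothing but the final URL.

expandurl2() { curl -sIL -o /dev/null -w '%{url_effective}\n' "$1" ; }
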
isgd() { /usr/bin/wget -qO - "http://is.gd/create.php?format=simple&url=$1" ;}
wgetall () { wget -r -l2 -nd -Nc -A.$@ $@ ; }
2011-09-28 09:43:25
Functions: wget
0

Recursively download all files of a certain type down to two levels, ignoring directory structure and local duplicates.

Usage:

wgetall mp3 http://example.com/download/
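
A slightly more defensive rewrite of the same idea (an untested sketch; the first argument is the extension, the rest are the URLs, all quoted):

wgetall () { local ext=$1; shift; wget -r -l2 -nd -Nc -A."$ext" "$@" ; }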

wget -O - -q http://checkip.dyndns.org/ | cut -d':' -f2 | cut -d'<' -f1 | cut -c2-
2011-09-17 13:42:01
User: ztank1013
Functions: cut wget
-2

This is just a "cut"-addicted variant of the previous command by unixmonkey24730...

wget http://checkip.dyndns.org/ -q -O - | grep -Eo '\<[[:digit:]]{1,3}(\.[[:digit:]]{1,3}){3}\>'
curl -sm1 http://www.website.com/ | grep -o 'http://[^"]*jpg' | sort -u | wget -qT1 -i-
wget -q -O - http://www.perl.org/get.html | grep -m1 '\.tar\.gz' | sed 's/.*perl-//; s/\.tar\.gz.*//'
for i in `seq -w 1 50`; do wget --continue http://commandline.org.uk/images/posts/animal/$i.jpg; done
echo ".mode tabs select host, case when host glob '.*' then 'TRUE' else 'FALSE' end, path, case when isSecure then 'TRUE' else 'FALSE' end, expiry, name, value from moz_cookies;" | sqlite3 ~/.mozilla/firefox/*.default/cookies.sqlite
2011-08-15 14:49:47
User: euridice
Functions: echo
6

Useful afterwards with the --load-cookies option of wget, since it exports Firefox's cookies in a format wget can read.
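
A minimal usage sketch (URL and file name are placeholders): redirect the output of the command above to cookies.txt, then hand that file to wget:

wget --load-cookies cookies.txt http://members.example.com/protected/page.html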

wget --spider $URL 2>&1 | awk '/Length/ {print $2}'
2011-07-03 00:14:58
User: d3Xt3r
Functions: awk wget
5

- Where $URL is the URL of the file.

- Replace $2 with $3 at the end to get a human-readable size.

Credits to svanberg @ ArchLinux forums for original idea.

Edit: Replaced command with better version by FRUiT. (removed unnecessary grep)
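
Roughly the same thing can be done with curl, if you prefer it (a sketch; it sends a HEAD request and reads the Content-Length header):

curl -sI "$URL" | tr -d '\r' | awk 'tolower($1) == "content-length:" {print $2}'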

wget -r -A .pdf -l 5 -nH --no-parent http://example.com
2011-06-09 17:17:03
User: houghi
Functions: wget
Tags: wget pdf
7

See man wget if you want linked files and not only those hosted on the website.
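
If you also want the PDFs that are merely linked to but hosted on other domains, a sketch of the host-spanning variant (see -H/--span-hosts and --domains in man wget to keep it from wandering too far):

wget -r -H -A .pdf -l 5 -nH --no-parent http://example.com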

curl -s http://example.com | grep -o -P "<a.*href.*>" | grep -o "http.*.pdf" | xargs -d"\n" -n1 wget -c
2011-06-09 14:42:46
User: b_t
Functions: grep wget xargs
0

This example command fetches the example.com webpage and then fetches and saves every PDF file linked from it.

[*Note: of course there are no PDFs on example.com. This is just an example]

wget --spider -o wget.log -e robots=off --wait 1 -r -p http://www.example.com
2011-04-05 13:42:14
User: lele
Functions: wget
-1

This will recursively visit all URLs linked from the specified one. It won't save anything locally, and it produces a detailed log.

Useful to find broken links in your site. It ignores robots.txt, so just use it on a site you own!
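
To list the URLs that came back 404 afterwards, something along these lines works (a sketch that assumes wget's default English log format):

awk '/^--/ { url = $3 } /awaiting response.*404/ { print url }' wget.log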

http_proxy=<proxy.server:port> wget <url>
2011-03-30 13:06:19
User: rdc
Functions: wget
0

On a machine behind a firewall, it's possible to pass the proxy server address in as a prefix to wget to avoid having to set it as an environment variable first.
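
For example (the proxy host and port are placeholders; set https_proxy the same way for https URLs):

http_proxy=http://proxy.example.com:3128 wget http://www.example.com/index.html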

wget --mirror -A.jpg http://www.xs4all.nl/~dassel/wall/
Command in description (Your command is too long - please keep it to less than 255 characters)
2011-02-03 08:25:42
User: __
Functions: command less
0
yt2mp3(){ for j in `seq 1 301`;do i=`curl -s gdata.youtube.com/feeds/api/users/$1/uploads\?start-index=$j\&max-results=1|grep -o "watch[^&]*"`;ffmpeg -i `wget youtube.com/$i -qO-|grep -o 'url_map"[^,]*'|sed -n '1{s_.*|__;s_\\\__g;p}'` -vn -ab 128k "`youtube-dl -e ${i#*=}`.mp3";done;}

Squeezed the monster (and nifty ☺) command from #7776 down from 531 characters to 284, but I don't see a way to get it under 255. This is definitely a kludge!

wget http://www.discogs.com/search?q=724349691704 -O foobar &> /dev/null ; grep \/release\/ foobar | head -2 | tail -1 | sed -e 's/^<div>.*>\(.*\)<\/a><\/div>/\1/' ; rm foobar
2011-01-30 23:34:54
User: TetsuyO
Functions: grep head rm sed tail wget
-1

Substitute that 724349691704 with a UPC of a CD you have at hand, and (hopefully) this one-liner should return the $Artist - $Title, querying discogs.com.

Yes, I know, all that head/tail/grep crap can be improved with a single sed command; feel free to send "patches" :D

Enjoy!

The command is too big to fit here. :( Look at the description for the command, in readable form! :)
2011-01-05 02:45:28
User: hunterm
Functions: at command
-6

Yep, now you can finally google from the command line!

Here's a readable version "for your pleasure"(c):

google() {
    # search the web using google from the commandline
    # syntax: google <search terms>
    query=$(echo "$*" | sed "s:%:%25:g;s:&:%26:g;s:+:%2b:g;s:;:%3b:g;s: :+:g")
    data=$(wget -qO - "https://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=$query")
    title=$(echo "$data" | tr '}' '\n' | sed "s/.*,\"titleNoFormatting//;s/\":\"//;s/\",.*//;s/\\u0026/'/g;s/\\\//g;s/#39\;//g;s/'amp;/\&/g" | head -1)
    url="$(echo "$data" | tr '}' '\n' | sed 's/.*"url":"//;s/".*//' | head -1)"
    echo "${title}: ${url} | http://www.google.com/search?q=${query}"
}

Enjoy :)
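
Once the function is defined, usage is just google followed by your search terms, e.g.:

google commandlinefu wget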

wget -qO - http://ngrams.googlelabs.com/datasets | grep -E "href='(.+\.zip)'" | sed -r "s/.*href='(.+\.zip)'.*/\1/" | uniq | while read line; do wget "$line"; done
wget -q -O- --header\="Accept-Encoding: gzip" <url> | gunzip > out.html
2010-11-27 22:14:42
User: ashish_0x90
Functions: gunzip wget
1

Get a gzip-compressed web page using wget.

Caution: the command will fail if the website doesn't return gzip-encoded content, though most websites support gzip these days.
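
If all you want is the decompressed page and curl is available, a simpler sketch is to let curl negotiate and unpack the encoding itself:

curl -s --compressed <url> > out.html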

wget -O xkcd_$(date +%y-%m-%d).png `lynx --dump http://xkcd.com/|grep png`; eog xkcd_$(date +%y-%m-%d).png
Check the Description below.
2010-10-07 04:22:32
User: hunterm
-1

The command was too long for the command box, so here it is:

echo $(( `wget -qO - http://i18n.counter.li.org/ | grep 'users registered' | sed 's/.*\<font size=7\>//g' | tr '\>' ' ' | sed 's/<br.*//g' | tr ' ' '\0'` + `curl --silent http://www.dudalibre.com/gnulinuxcounter?lang=en | grep users | head -2 | tail -1 | sed 's/.*<strong>//g' | sed 's/<\/strong>.*//g'` ))

This took me about an hour to do. It uses both wget and curl because dudalibre.com blocks wget, and wget worked nicely for me on the other site.

wget -qO - http://i18n.counter.li.org/ | grep 'users registered' | sed 's/.*\<font size=7\>//g' | tr '\>' ' ' | sed 's/<br.*//g' | tr ' ' '\0'