
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they will appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using wget - 246 results
wget --reject html,htm --accept pdf,zip -rl1 url
2009-08-30 14:05:09
User: linuxswords
Functions: wget
16

If the site uses https, use:

wget --reject html,htm --accept pdf,zip -rl1 --no-check-certificate https-url
wget -U "QuickTime/7.6.2 (qtver=7.6.2;os=Windows NT 5.1Service Pack 3)" `echo http://movies.apple.com/movies/someHDmovie_720p.mov | sed 's/\([0-9][0-9]\)0p/h\10p/'`
2009-08-29 00:29:40
User: deadrabbit
Functions: sed wget
5

Copy the link to an HD movie trailer into this command. It's more elegant if it's put into a script, taking the URL as input.
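
A minimal wrapper sketch along those lines (the script name and the $1 argument are illustrative, not part of the original post):

#!/bin/bash
# Usage: ./hdtrailer.sh http://movies.apple.com/movies/someHDmovie_720p.mov
# Rewrite e.g. 720p to h720p in the URL, then fetch it with a QuickTime user agent.
url=$(echo "$1" | sed 's/\([0-9][0-9]\)0p/h\10p/')
wget -U "QuickTime/7.6.2 (qtver=7.6.2;os=Windows NT 5.1Service Pack 3)" "$url"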

mirror=ftp://somemirror.com/with/alot/versions/but/no/latest/link; latest=$(curl -l $mirror/ 2>/dev/null | grep util | tail -1); wget $mirror/$latest
2009-08-24 15:58:31
User: peshay
Functions: grep tail wget
4

To download the latest version of "util"; you may need to insert a sort if the listing doesn't come out in the right order.

curl lists all files on the mirror, grep picks out your util, tail -1 takes the last one listed, and wget downloads it.
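
A hedged variant with the sort suggested above (sort -V assumes GNU coreutils; the mirror URL is a placeholder, as in the original):

mirror=ftp://somemirror.com/with/alot/versions/but/no/latest/link
latest=$(curl -l "$mirror/" 2>/dev/null | grep util | sort -V | tail -1)   # sort -V orders version strings numerically
wget "$mirror/$latest"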

for ((i=1; i<67; i++)) do wget http://www.phrack.org/archives/tgz/phrack${i}.tar.gz -q; done
2009-08-20 23:27:01
User: Abiden
Functions: wget
0

This will download all the phracks! Enjoy!

wget -q -O- PAGE_URL | grep -o 'WORD_OR_STRING' | wc -w
wget -O - -q icanhazip.com
2009-08-14 10:24:30
User: andrepuel
Functions: wget
5

I don't have curl or links installed, so I use wget with the file written to standard out.
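
If you want the address in a variable or behind a small helper, a sketch (the function name myip is just illustrative):

myip() { wget -qO- icanhazip.com; }
IP=$(myip)
echo "Current public IP: $IP"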

wget -qO - http://www.commandlinefu.com/feed/tenup | xmlstarlet sel -T -t -o '<x>' -n -t -m rss/channel/item -o '<y>' -n -v description -o '</y>' -n -t -o '</x>' | xmlstarlet sel -T -t -m x/y -v code -n
2009-08-14 02:44:00
User: fsilveira
Functions: wget
0

This lengthy cryptic line will print the latest top 10 commandlinefu.com posts without their summaries. To also print their respective summaries, use the following (even bigger) command line:

wget -qO - http://www.commandlinefu.com/feed/tenup | xmlstarlet sel -T -t -o '<doc>' -n -t -m rss/channel/item -o '<item>' -n -o '<title>' -v title -o '</title>' -n -o '<description>' -v description -o '</description>' -n -o '</item>' -n -t -o '</doc>' | xmlstarlet sel -T -t -m doc/item -v description/code -n -v title -n -n

It is recommended to put this line in a shell script so it can be run easily, as I do myself. You could also use the following URLs to browse the top 3 commands:

wget -qO - http://www.commandlinefu.com/feed/threeup | xmlstarlet ...

.. or all others:

wget -qO - http://feeds2.feedburner.com/Command-line-fu | xmlstarlet ...

PS: You need to install "xmlstarlet" to run it. It is found in Debian APT repositories (apt-get install xmlstarlet) or under the http://xmlstar.sourceforge.net/ URL.
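
A sketch of such a shell script wrapping the command above (the script name is illustrative; it assumes xmlstarlet is installed, as noted in the PS):

#!/bin/bash
# cf-top10.sh - print the latest top-10 commandlinefu commands with their summaries
feed=${1:-http://www.commandlinefu.com/feed/tenup}   # pass /feed/threeup or the feedburner URL instead
wget -qO - "$feed" \
  | xmlstarlet sel -T -t -o '<doc>' -n -t -m rss/channel/item -o '<item>' -n -o '<title>' -v title -o '</title>' -n -o '<description>' -v description -o '</description>' -n -o '</item>' -n -t -o '</doc>' \
  | xmlstarlet sel -T -t -m doc/item -v description/code -n -v title -n -n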

wget http://checkip.dyndns.org && clear && echo && echo My IP && egrep -o '([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}' index.html && echo && rm index.html
wget -O - http://checkip.dyndns.org|sed 's/[^0-9.]//g'
wget --spider -v http://www.server.com/path/file.ext
wget `lynx -dump http://www.ebow.com/ebowtube.php | grep .flv$ | sed 's/[[:blank:]]\+[[:digit:]]\+\. //g'`
2009-08-02 14:09:53
User: spaceyjase
Functions: grep sed wget
3

I wanted all the 'hidden' .flv files from the http link in the command line; wget seemed appropriate, fed with output from lynx: grep the .flv files and then normalise via sed (to remove the numeric bullet). Similar to the 'Grab mp3 files' fu. Replace the link with your own, and the grep argument with something more interesting ;) See here for something along the same lines...

http://www.commandlinefu.com/commands/view/1006/grab-mp3-files-from-your-favorite-netcasts-mp3blog-or-sites-that-often-have-good-mp3s

Hope you find it useful! Improvements welcome, naturally.
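
A hedged, parameterised sketch of the same idea (page URL and file extension pulled into variables; the sed expression is unchanged):

page=http://www.ebow.com/ebowtube.php   # replace with your own link
ext=flv                                 # or whatever you are after
wget $(lynx -dump "$page" | grep "\.$ext\$" | sed 's/[[:blank:]]\+[[:digit:]]\+\. //g')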

sudo wget -c "http://nmap.org/dist/nmap-5.00.tar.bz2" && bzip2 -cd nmap-5.00.tar.bz2 | tar xvf - && cd nmap-5.00 && ./configure && make && sudo make install
2009-07-26 11:36:53
User: hemanth
Functions: bzip2 cd make sudo tar wget
-6

Just copy and paste the code in your terminal.

Note: on Debian-based systems the build dependencies can be installed with sudo apt-get; change as per your requirements.

Source: www.h3manth.com
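
A hedged sketch of the same steps with the version pulled into a variable (the build-essential package is an assumed prerequisite on Debian-like systems; adjust for your distro):

v=5.00
sudo apt-get install build-essential   # assumed prerequisite: compiler and make
sudo wget -c "http://nmap.org/dist/nmap-$v.tar.bz2" && bzip2 -cd "nmap-$v.tar.bz2" | tar xvf - && cd "nmap-$v" && ./configure && make && sudo make install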

wget -q -O - 'URL/full?orderby=starttime&singleevents=true&start-min=2009-06-01&start-max=2009-07-31' | perl -lane '@m=$_=~m/<title type=.text.>(.+?)</g;@a=$_=~m/startTime=.(2009.+?)T/g;shift @m;for ($i=0;$i<@m;$i++){ print $m[$i].",".$a[$i];}';
2009-07-23 14:48:54
Functions: perl wget
0

Substitute the URL with your private/public XML URL from the calendar sharing settings.

Substitute the dates (YYYY-mm-dd).

Adjust the perl parsing part for your needs.
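
A sketch with those substitutions pulled into variables (the URL value is a placeholder; note the year 2009 is also hard-coded inside the perl regex, so adjust that along with the dates):

cal_url='URL'        # your private/public XML calendar URL from the sharing settings
start=2009-06-01     # YYYY-mm-dd
end=2009-07-31       # YYYY-mm-dd
wget -q -O - "$cal_url/full?orderby=starttime&singleevents=true&start-min=$start&start-max=$end" \
  | perl -lane '@m=$_=~m/<title type=.text.>(.+?)</g;@a=$_=~m/startTime=.(2009.+?)T/g;shift @m;for ($i=0;$i<@m;$i++){ print $m[$i].",".$a[$i];}'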

wget <URL> -O- | wget -i -
wget -r --wait=5 --quota=5000m --tries=3 --directory-prefix=/home/erin/Documents/erins_webpages --limit-rate=20k --level=1 -k -p -erobots=off -np -N --exclude-domains=del.icio.us,doubleclick.net -F -i ./delicious-20090629.htm
2009-07-02 01:46:21
User: bbelt16ag
Functions: wget
-1

Just an alternative using a saved HTML file of all of my bookmarks. Works well, although it takes a while.

wget -q --user=<username> --password=<password> 'https://updates.opendns.com/nic/update?hostname=your_opendns_hostname&myip=your_ip' -O -
2009-06-22 18:08:42
User: Alanceil
Functions: wget
5

Intended for dynamic ip OpenDNS users, this command will update your OpenDNS network IP.

For getting your IP, you can use one of the many one-liners here on commandlinefu.

Example:

I use this in a script which is run by kppp after it has successfully connected to my ISP:

---

#!/bin/bash

IP="`curl -s http://checkip.dyndns.org/ | grep -o '[[:digit:].]\+'`"

PW="hex-obfuscated-pw-here"

if [ "$IP" == "" ] ; then echo 'Not online.' ; exit 1

else

wget -q --user=topsecret --password="`echo $PW | xxd -ps -r`" 'https://updates.opendns.com/nic/update?hostname=myhostname&myip='"$IP" -O -

/etc/init.d/ntp-client restart &

fi

---

PS: DynDNS should use a similar method; if you know the URL, please post a comment. (Something with members.dyndns.org, if I recall correctly.)

wget -qO- "$URL" | htmldoc --webpage -f "$URL".pdf - ; xpdf "$URL".pdf &
wget -H -r -nv --level=1 -k -p -erobots=off -np -N --exclude-domains=del.icio.us,doubleclick.net --exclude-directories=
wget -q -O- http://www.gutenberg.org/dirs/etext96/cprfd10.txt | sed '1,419d' | tr "\n" " " | tr " " "\n" | perl -lpe 's/\W//g;$_=lc($_)' | grep "^[a-z]" | awk 'length > 1' | sort | uniq -c | awk '{print $2"\t"$1}'
2009-05-04 16:00:39
User: alperyilmaz
Functions: awk grep perl sed sort tr uniq wget
-4

This command might not be useful for most of us; I just wanted to share it to show the power of the command line.

Download the plain-text version of the novel David Copperfield from Project Gutenberg and then generate a single column of words, after which the occurrences of each word are counted by the sort | uniq -c combination.

This command removes numbers and single characters from the count. I'm sure you can write a shorter version.
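
For comparison, a hedged shorter sketch of the same idea, using tr's -cs options in place of the perl step (output here is count first, then word, sorted by frequency):

wget -q -O- http://www.gutenberg.org/dirs/etext96/cprfd10.txt \
  | sed '1,419d' \
  | tr -cs 'A-Za-z' '\n' \
  | tr 'A-Z' 'a-z' \
  | awk 'length > 1' \
  | sort | uniq -c | sort -rn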

wget -q -O - "$@" <url>
wget --server-response --spider http://www.example.com/
2009-03-31 18:49:14
User: penpen
Functions: wget
5

Let me suggest using wget for obtaining the HTTP header only as a last resort, because it generates considerable textual overhead. The first ellipsis of the sample output stands for

Spider mode enabled. Check if remote file exists.

--2009-03-31 20:42:46-- http://www.example.com/

Resolving www.example.com... 208.77.188.166

Connecting to www.example.com|208.77.188.166|:80... connected.

HTTP request sent, awaiting response...

and the second one stands for

Length: 438 [text/html]

Remote file exists and could contain further links,

but recursion is disabled -- not retrieving.
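
If you do end up using wget and only want the header lines, a hedged sketch that filters them out of that verbose output (wget writes the server response to stderr, with each header line indented by two spaces):

wget --server-response --spider http://www.example.com/ 2>&1 | sed -n 's/^  //p'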

wget --http-user=YourUsername --http-password=YourPassword http://YourWebsiteUrl:2082/getbackup/backup-YourWebsiteUrl-`date +"%-m-%d-%Y"`.tar.gz
2009-03-31 17:50:41
User: nadavkav
Functions: wget
4

This will connect to your hosted website service through the cPanel interface and use its backup tool to back up and download the entire website locally.

(Do not forget to replace YourUsername, YourPassword and YourWebsiteUrl for it to work.)
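
A sketch with those three values pulled into variables (the values are placeholders, as above):

user=YourUsername
pass=YourPassword
site=YourWebsiteUrl
wget --http-user="$user" --http-password="$pass" "http://$site:2082/getbackup/backup-$site-$(date +%-m-%d-%Y).tar.gz"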

wget -c -t 1 --load-cookies ~/.cookies/rapidshare <URL>
2009-03-28 09:13:35
User: cammarin
Functions: wget
6

The content-download part.

NOTE: the '-c' option seems not to work very well and the download sometimes gets stuck at 99%, but wget just finishes with no problem. Also, the download may restart after completing; you can cancel it then. I don't know if it is a wget or a Rapidshare glitch, since I don't have problems with Megaupload, for example.

UPDATE: as pointed out by roebek, the restart glitch can be solved by the "-t 1" option. Thanks a lot.

wget --save-cookies ~/.cookies/rapidshare --post-data "login=USERNAME&password=PASSWORD" -O - https://ssl.rapidshare.com/cgi-bin/premiumzone.cgi > /dev/null
2009-03-28 09:12:02
User: cammarin
Functions: wget
4

In order to do that, first you need to save a cookie file with your account info. These commands do it (you may need to create the '.cookies' dir beforehand). Also, you need to check the "Direct downloads" option on the Premium Zone >> Settings tab.

You only need to do this once (as long as you keep the file and your Rapidshare Premium account).
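
Putting it together, a hedged sketch of the one-time setup (USERNAME and PASSWORD are placeholders; mkdir -p covers the case where the '.cookies' directory does not exist yet):

mkdir -p ~/.cookies
wget --save-cookies ~/.cookies/rapidshare --post-data "login=USERNAME&password=PASSWORD" -O - https://ssl.rapidshare.com/cgi-bin/premiumzone.cgi > /dev/null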

wget -qO- whatismyip.org