What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Commands tagged url - 19 results
MYURL=http://test.example.com ; awk -F/ '{ print $3 }' <<< $MYURL | awk -F. '{ if ( $(NF-1) == "co" || $(NF-1) == "com" ) printf $(NF-2)"."; print $(NF-1)"."$(NF); }'
2014-05-26 07:31:40
User: snafu
Functions: awk printf
Tags: bash url domain
0

Extracts the domain and subdomain from a given URL. See the examples below.
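
For illustration (inputs of my own choosing; tracing the awk logic gives these results):

MYURL=http://test.example.com     # pipeline prints: example.com
MYURL=http://www.example.co.uk    # pipeline prints: example.co.uk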

dig +short <domain>
host example.com | head -1 | awk '{print $4}'
nslookup www.example.com | tail -2 | head -1 | awk '{print $2}'
2013-09-05 20:26:45
User: wsams
Functions: awk head nslookup tail
1

I'm not sure how reliable this command is, but it works for my needs. Here's also a variant using grep.

nslookup www.example.com | grep "^Address: " | awk '{print $2}'
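
For example (example.com is used purely as an illustration; the address returned will change over time):

dig +short example.com
93.184.216.34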

sh -c 'url="http://youtu.be/MejbOFk7H6c"; vid="`for i in ".*youtu\.be/\([^\/&?#]\+\)" ".*youtu.\+v[=/]\([^\/&?#]\+\)" ".*youtu.\+embed/\([^\/&?#]\+\)"; do expr "${url}" : "${i}"; done`"; if [ -n "${vid}" ]; then echo ${vid}; else echo "${url}"; fi'
2013-09-04 19:33:09
User: qwertyroot
Functions: echo sh
2

The url can be any one of the following:

url="MejbOFk7H6c"
url="http://youtu.be/MejbOFk7H6c"
url="https://youtube.com/watch?feature=player_embedded&v=MejbOFk7H6c#t"
url="//www.youtube.com/v/MejbOFk7H6c?hl=ru_RU&version=3&rel=0"
url="http://www.youtube.com/embed/MejbOFk7H6c?feature=player_embedded"

If the URL doesn't match any of the patterns, the whole URL is returned.
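
Running the command exactly as given (url="http://youtu.be/MejbOFk7H6c"), the first expr pattern matches and only the video ID is printed:

MejbOFk7H6c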

convert_path2uri () { echo -n 'file://'; echo -n "$1" | perl -pe 's/([^a-zA-Z0-9_\/.])/sprintf("%%%.2x", ord($1))/eg' ;} # convert_path2uri '/tmp/a b' ### convert file path to URI
2013-07-01 08:54:45
User: totti
Functions: echo file perl
Tags: encoding PATH url
1

Really helpful when playing with files whose names contain spaces and other awkward characters. It makes it easy to store and retrieve names and paths as a single field when saving them to a file.

This format (a file:// URI) is directly supported by Nautilus and Firefox (and other browsers).
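
Usage, as in the inline comment (the space becomes %20):

convert_path2uri '/tmp/a b'
file:///tmp/a%20b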

python -c 'import googl; print googl.Googl("<your_google_api_key>").shorten("'$someurl'")[u"id"]'
2012-05-31 17:14:17
User: shr386
Functions: python
1

(1) Required: python-googl (install with: pip install python-googl)

(2) Get an API key from the Google API console: https://code.google.com/apis/console/
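
Usage sketch (the key and the short code below are placeholders, and goo.gl has since been retired, so treat this as historical):

someurl=http://www.commandlinefu.com/
python -c 'import googl; print googl.Googl("<your_google_api_key>").shorten("'$someurl'")[u"id"]'
http://goo.gl/XXXXX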

curl $URL -s -L -o /dev/null -w '%{url_effective}'
2012-01-25 16:11:24
User: labynocle
Tags: curl redirect url
0

Sometimes it is very useful to obtain the final URL you will end up at after several redirects.

(I use this command in my automated tests to check that every redirection is OK.)
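
For example (domain chosen for illustration; http://github.com redirects to its HTTPS address):

URL=http://github.com; curl $URL -s -L -o /dev/null -w '%{url_effective}'
https://github.com/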

tpb() { wget -U Mozilla -qO - $(echo "http://thepiratebay.org/search/$@/0/7/0" | sed 's/ /\%20/g') | grep -o 'http\:\/\/torrents\.thepiratebay\.org\/.*\.torrent' | tac; }
2011-10-26 12:15:55
User: Bonster
Functions: echo grep sed wget
3

usage: tpb searchterm

example: tpb the matrix trilogy

This searches thepiratebay for torrents and displays the top results in reverse order, so the first result is at the bottom instead of the top -- which is better for command-line users.

isgd () { curl 'http://is.gd/create.php?format=simple&url='"$1" ; printf "\n" ; }
2011-08-14 23:31:39
User: dbbolton
Functions: printf
Tags: curl shorturl url
1

Check the API. You shouldn't need sed. The print-newline at the end is to prevent zsh from inserting a % after the end-of-output.

Also works with http://v.gd
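
Usage sketch (the short code shown is a placeholder):

isgd http://www.commandlinefu.com/
http://is.gd/XXXXX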

wget -U Mozilla -qO - "http://thepiratebay.org/search/your_querry_here/0/7/0" | grep -o 'http\:\/\/torrents\.thepiratebay\.org\/.*\.torrent'
2011-04-15 15:01:16
User: sairon
Functions: grep wget
3

This one-liner greps the first 30 direct URLs for .torrent files matching your search query, ordered by number of seeds (descending; determined by the second number after your query, in this case 7; for other options just check the site in your favourite web browser).

You don't have to worry about grepping the torrent names as well, because they are already included in the .torrent URL (except that spaces and some other characters are replaced by underscores, but they are still human-readable).

Be sure to have some http://isup.me/ macro handy (someone often kicks the ethernet cables out of their servers ;) ).

I've also coded a more user-friendly ash (should be BASH compatible) script, which also lists the total size of download and number of seeds/peers (available at http://saironiq.blogspot.com/2011/04/my-shell-scripts-4-thepiratebayorg.html - may need some tweaking, as it was written for a router running OpenWrt and transmission).

Happy downloading!

for file in `cat urls.txt`; do echo -n "$file " >> log.txt; curl --head $file >> log.txt ; done
2010-10-19 02:54:13
User: Glutnix
Functions: echo file
-1

urls.txt should contain a fully qualified URL on each line.

Prefix with

rm log.txt;

to clear the log.

Change the curl command to

curl --head $file | head -1 >> log.txt

to get just the HTTP status.
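
A slightly more robust variant (my sketch, not the original author's): read the file line by line so URLs are never word-split, and log only the status line:

while read -r url; do printf '%s ' "$url" >> log.txt; curl -sI "$url" | head -1 >> log.txt; done < urls.txt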

googl () { curl -s -d "url=${1}" http://goo.gl/api/url | sed -n "s/.*:\"\([^\"]*\).*/\1\n/p" ;}
curl -s -d'&url=URL' http://goo.gl/api/url | sed -e 's/{"short_url":"//' -e 's/","added_to_history":false}/\n/'
2010-10-01 23:20:08
User: Soubsoub
Functions: sed
5

Use curl and sed to shorten a URL using goo.gl without any other API.
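
The sed expressions mirror the raw JSON response, which (as the patterns above imply) looks like this, with the short code being a placeholder:

{"short_url":"http://goo.gl/XXXXX","added_to_history":false}

Stripping the prefix and the suffix leaves just the short URL.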

curl -s 'http://ggl-shortener.appspot.com/?url='"$1" | sed -e 's/{"short_url":"//' -e 's/"}/\n/g'
2010-03-26 22:31:06
User: mvrilo
Functions: sed
3

Use curl and sed to shorten a URL via goo.gl.

curl -s "http://is.gd/api.php?longurl=[long_url]"
2009-12-07 18:52:04
User: Josso
0

// This is the description for the old command:

Unfortunately we have to URL-encode the long URL.

It can't be done with bash alone (without building the encoder ourselves), so I used Perl.

Example with Perl:

curl -s http://is.gd/api.php?longurl=`perl -MURI::Escape -e "print uri_escape('http://www.google.com/search?hl=en&source=hp&q=commandlinefu&aq=0&oq=commandline');"`

Example without Perl:

curl http://is.gd/api.php?longurl=http://www.google.com

Most URLs don't use & and ? anymore (SEO, etc.), so in most cases you can just use the simple version. :)
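
For the record, the encoding can be done in bash itself; here is a minimal sketch of a percent-encoder (my own addition, assuming bash 3.1+ and ASCII input):

urlencode () { local s=$1 c out= i; for (( i=0; i<${#s}; i++ )); do c=${s:i:1}; case $c in [a-zA-Z0-9._~-]) out+=$c ;; *) printf -v c '%%%02X' "'$c"; out+=$c ;; esac; done; printf '%s\n' "$out" ;}

curl -s "http://is.gd/api.php?longurl=$(urlencode 'http://www.google.com/search?hl=en&q=commandlinefu')"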

egrep 'https?://([[:alpha:]]([-[:alnum:]]+[[:alnum:]])*\.)+[[:alpha:]]{2,3}(:[[:digit:]]+)?(/([-\w/_\.]*(\?\S+)?)?)?'
2009-11-28 15:41:42
User: putnamhill
Functions: egrep
5

For the record: I didn't build this. Just shared what I found that worked. Apologies to the original author!

I decided I should fix the case where http://example.com is not matched, for the next time I need this. So I read RFC 1035 and formalized the host name regex.

If anyone finds any more holes, please comment.
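
A quick check (input of my own; -o added so only the matches are printed):

echo "see https://example.com:8080/path?q=1 and http://foo.org" | egrep -o 'https?://([[:alpha:]]([-[:alnum:]]+[[:alnum:]])*\.)+[[:alpha:]]{2,3}(:[[:digit:]]+)?(/([-\w/_\.]*(\?\S+)?)?)?'
https://example.com:8080/path?q=1
http://foo.org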

cho "(Something like http://foo.com/blah_blah)" | awk '{for(i=1;i<=NF;i++){if($i~/^(http|ftp):\/\//)print $i}}'
2009-11-28 03:31:41
Functions: awk
-1

It doesn't have to be that complicated.

echo "(Something like http://foo.com/blah_blah)" | grep -oP "\b(([\w-]+://?|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))"