For the record: I didn't build this; I just shared what I found that worked. Apologies to the original author! I decided to fix the case where http://example.com is not matched, for the next time I need this, so I read RFC 1035 and formalized the hostname regex. If anyone finds any more holes, please comment.
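For illustration only (not the exact regex from the command), an RFC 1035-style hostname pattern for grep -E could look like this, assuming labels start with a letter and end with a letter or digit:
echo "http://example.com/path" | grep -Eo '([A-Za-z]([A-Za-z0-9-]*[A-Za-z0-9])?\.)+[A-Za-z]([A-Za-z0-9-]*[A-Za-z0-9])?'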
Use curl and sed to shorten a URL using goo.gl, without any other API.
Tracing redirects for a given URL shortener.
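One minimal way to peek at where a shortened link points (a sketch; $short_url is just a placeholder, and it assumes the service answers HEAD requests) is to read the Location header:
curl -sI "$short_url" | grep -i '^location:'
Adding -L follows the whole redirect chain instead of just the first hop.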
Shorter, and made into a function.
This one-liner greps the first 30 direct URLs for .torrent files matching your search query, ordered by number of seeds (descending; determined by the second number after your query, in this case 7; for other options just check the site via your favorite web browser). You don't have to care about grepping the torrent names as well, because they are already included in the .torrent URL (except for spaces and some other characters replaced by underscores, but still human-readable). Be sure to have some http://isup.me/ macro handy (someone often kicks the ethernet cables out of their servers ;) ). I've also written a more user-friendly ash (should be Bash-compatible) script, which also lists the total download size and the number of seeds/peers (available at http://saironiq.blogspot.com/2011/04/my-shell-scripts-4-thepiratebayorg.html - may need some tweaking, as it was written for a router running OpenWrt and transmission). Happy downloading!
Usage: tpb searchterm
Example: tpb the matrix trilogy
This searches for torrents from thepiratebay and displays the top results in reverse order, so the 1st result is at the bottom instead of the top -- which is better for command-line users.
Use curl and sed to shorten a URL via goo.gl.
url can be any one of the following:
url="MejbOFk7H6c"
url="http://youtu.be/MejbOFk7H6c"
url="https://youtube.com/watch?feature=player_embedded&v=MejbOFk7H6c#t"
url="//www.youtube.com/v/MejbOFk7H6c?hl=ru_RU&version=3&rel=0"
url="http://www.youtube.com/embed/MejbOFk7H6c?feature=player_embedded"
If the URL doesn't match any of these forms, the whole URL is returned. A rough sketch of this kind of extraction is shown below.
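As an illustration (not the original command), a sed expression along these lines pulls the 11-character video ID out of any of the forms above and leaves the input untouched when nothing matches:
echo "$url" | sed -E 's!.*(youtu\.be/|[?&/]v[=/]|embed/)([A-Za-z0-9_-]{11}).*!\2!'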
(1) Required: python-googl (install with: pip install python-googl). (2) Get your API key from the Google API console: https://code.google.com/apis/console/
Check the API; you shouldn't need sed. The printed newline at the end is to prevent zsh from inserting a % after the end of the output. Also works with http://v.gd
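Putting that together with the is.gd endpoint used elsewhere on this page, a newline-terminated call might look like this (a sketch, assuming ${url} is already URL-encoded):
printf '%s\n' "$(curl -s "http://is.gd/api.php?longurl=${url}")"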
Really helpful when working with files that have spaces or other awkward characters in their names. It makes it easy to store and access names and paths in a single field when saving them to a file. This format (URL) is directly supported by Nautilus and Firefox (and other browsers).
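A rough sketch for generating such URLs from a directory listing (assuming the names don't contain characters that would need percent-encoding beyond what your file manager tolerates):
for f in *; do printf 'file://%s/%s\n' "$PWD" "$f"; done > files.txt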
I'm not sure how reliable this command is, but it works for my needs. Here's also a variant using grep: nslookup www.example.com | grep "^Address: " | awk '{print $2}'
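If dig is installed, a comparable (though not identical) alternative is:
dig +short www.example.com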
Shorter (thus better ;-)
This command line assumes that "${url}" is the URL of the web resource. It can be useful to check the "freshness" of a download URL before a GET request.
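For example, a sketch of such a check (assuming the server actually sends a Last-Modified header) is a HEAD request:
curl -sI "${url}" | grep -i '^last-modified:'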
From http://daringfireball.net/2009/11/liberal_regex_for_matching_urls - thought it would be useful to commandlinefuers.
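If you only need something quick and far less liberal than that regex, a simple pattern for pulling http(s) URLs out of a file (file.txt is just a placeholder) is:
grep -Eo 'https?://[^[:space:]"<>]+' file.txt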
// This is the description for the old command:
Unfortunately we have to encode the URL.
It can't be done with bash (without building it ourselves), so I used Perl.
Example with Perl:
curl -s http://is.gd/api.php?longurl=`perl -MURI::Escape -e "print uri_escape('http://www.google.com/search?hl=en&source=hp&q=commandlinefu&aq=0&oq=commandline');"`
Example without Perl:
curl http://is.gd/api.php?longurl=http://www.google.com
Most URLs don't use & and ? anymore (SEO etc.), so in most cases you can just use the simple version. :)
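Another option (a sketch, not part of the original command): let curl do the encoding itself with -G and --data-urlencode, which appends the encoded value to the query string:
curl -s -G http://is.gd/api.php --data-urlencode "longurl=http://www.google.com/search?hl=en&q=commandlinefu"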
Sometimes it can be very useful to obtain the final URL you'll end up at after several redirects. (I use this command line in my automated tests to check that every redirection is OK.)
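A compact sketch using curl's -w format variables (not necessarily the command being described):
curl -sIL -o /dev/null -w '%{url_effective}\n' "$url"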
Extracts the domain and subdomain from a given URL. See examples.
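A hand-rolled sketch (not the command itself) that strips the scheme and anything after the host part:
echo "$url" | sed -E 's|^[^/]*//||; s|[/:?#].*$||'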
This command uses grep to read the shortcut (which in the above example is file.url) and filter out all but the one important line, which contains the website URL plus some extra characters that need to be removed (for example, URL=http://example.com). The cut command is then used to get rid of the URL= at the beginning. The output is then passed to Firefox, which should interpret it as a web URL to be opened. Of course, you can replace Firefox with any other browser. Tested in bash and sh.
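A sketch of the same idea (assuming the shortcut is named file.url; the exact original command may differ):
firefox "$(grep -m1 '^URL=' file.url | cut -d'=' -f2-)"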
Poor man's Clipular.com
Doesn't have to be that complicated.
urls.txt should have a fully qualified URL on each line.
Prefix the command with
rm log.txt;
to clear the log.
Change the curl command to
curl --head $file | head -1 >> log.txt
to just get the HTTP status. A rough reconstruction of the whole loop is sketched below.
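The loop being described might look roughly like this (a sketch; the variable name $file and the exact curl flags are assumptions):
while read -r file; do curl --head -s "$file" | head -1 >> log.txt; done < urls.txt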