commandlinefu.com is the place to record those command-line gems that you return to again and again.
You can sign-in using OpenID credentials, or register a traditional username and password.
Subscribe to the feed for:
Extracts the domain and subdomain from a given URL. See examples.
shorter (thus better ;-)
The URL can be in any of the following forms:
If the URL doesn't match, the whole URL is returned.
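The idea can be sketched with a single sed substitution (the regex below is my own illustration, not necessarily this entry's exact one-liner): capture everything between the scheme and the first `/`, `:`, `?`, or `#`; if the pattern doesn't match, sed leaves the line alone, so the whole URL comes back unchanged.

```shell
# Extract the host (subdomain + domain) from a URL; non-matching input
# passes through untouched.
url="http://www.example.com:8080/path?q=1"
printf '%s\n' "$url" | sed -E 's|^[a-zA-Z]+://([^/:?#]+).*|\1|'
```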
Really helpful when working with files that have spaces or other awkward characters in their names. Makes it easy to store and access names and paths in a single field when saving them to a file.
This format (URL) is directly supported by Nautilus and Firefox (and other browsers).
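Assuming the format in question is a file:// URL, one way to build it is sketched below (my own illustration; it assumes GNU `readlink -f` for the absolute path, and percent-encodes spaces since browsers expect `%20`):

```shell
# Turn a local path (possibly with spaces) into a file:// URL.
f="some file.txt"
abs=$(readlink -f "$f")              # absolute path (GNU coreutils)
printf 'file://%s\n' "$abs" | sed 's/ /%20/g'
```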
(1) required: python-googl (install with: pip install python-googl)
(2) get an API key from the Google APIs Console: https://code.google.com/apis/console/
Sometimes it can be very useful to obtain the final URL you'll end up at after several redirects.
(I use this command line in my automated tests to check that every redirect is OK.)
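A common way to do this with curl alone (a sketch, not necessarily the exact command behind this entry): `-L` follows redirects, `-o /dev/null` discards the body, and `-w '%{url_effective}'` prints only the URL you finally land on. The file:// target below is just a network-free stand-in.

```shell
# Follow redirects and print only the final URL reached.
tmp=$(mktemp)                        # stand-in target; in real use: url="http://your.short/link"
curl -sL -o /dev/null -w '%{url_effective}\n' "file://$tmp"
```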
usage: tpb searchterm
example: tpb the matrix trilogy
This searches for torrents on thepiratebay and displays the top results in reverse order,
so the 1st result is at the bottom instead of the top, which is handier for command-line users.
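The reverse ordering is typically just a matter of piping the result list through `tac` (a sketch of the trick, assuming GNU coreutils), so the best hit ends up right above your prompt:

```shell
# Reverse a list of results so the top hit prints last.
printf 'result 1 (best)\nresult 2\nresult 3\n' | tac
```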
This one-liner greps the first 30 direct URLs to .torrent files matching your search query, ordered by number of seeds in descending order (determined by the second number after your query, in this case 7; for other options, just check the site in your favorite web browser).
You don't have to bother grepping the torrent names as well, because they are already included in the .torrent URL (except for spaces and some other characters, which are replaced by underscores, but it's still human-readable).
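The grep step can be sketched like this (the HTML sample and regex are illustrative assumptions, not the site's real markup): pull every direct .torrent URL out of the page and keep the first 30.

```shell
# Extract direct .torrent URLs from an HTML page, first 30 only.
html='<a href="http://torrents.example.com/Some_Movie.torrent">dl</a>
<a href="http://torrents.example.com/Another_File.torrent">dl</a>'
printf '%s\n' "$html" | grep -oE 'http[^"]*\.torrent' | head -30
```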
Be sure to have an http://isup.me/ macro handy (someone often kicks the ethernet cables out of their servers ;) ).
I've also written a more user-friendly ash script (should be bash-compatible), which also lists the total download size and the number of seeds/peers (available at http://saironiq.blogspot.com/2011/04/my-shell-scripts-4-thepiratebayorg.html - it may need some tweaking, as it was written for a router running OpenWrt and Transmission).
urls.txt should have a fully qualified URL on each line
to clear the log
change the curl command to
curl --head "$file" | head -1 >> log.txt
to get just the HTTP status
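Putting those notes together, the whole check might look like this (a sketch, not the original entry; it uses curl's `%{http_code}` write-out instead of parsing the status line, and the sample urls.txt with a file:// URL is only there to make the snippet self-contained):

```shell
# urls.txt: one fully qualified URL per line (sample created here for demo).
printf 'file://%s\n' "$(mktemp)" > urls.txt
: > log.txt                                  # clear the log
while IFS= read -r url; do
  code=$(curl -s -o /dev/null --head -w '%{http_code}' "$url")
  printf '%s %s\n' "$code" "$url" >> log.txt
done < urls.txt
```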
Shorter and made into a function.
Use curl and sed to shorten a URL using goo.gl without any other API
use curl and sed to shorten a URL via goo.gl
// This is the description for the old command:
Unfortunately we have to encode the URL.
It can't be done with bash alone (without building it ourselves), so I used Perl.
Example with Perl:
curl -s http://is.gd/api.php?longurl=`perl -MURI::Escape -e "print uri_escape('http://www.google.com/search?hl=en&source=hp&q=commandlinefu&aq=0&oq=commandline');"`
Example without Perl:
Most URLs don't use & and ? anymore (SEO, etc.), so in most cases you can just use the simple version. :)
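For what it's worth, the "can't be done with bash" part is only mostly true: a small bash function can percent-encode a URL with `printf` alone (a sketch of the technique, bash-specific; the function name is my own):

```shell
# Percent-encode a string for use as a URL query value: RFC 3986
# unreserved characters pass through, everything else becomes %XX.
uri_escape() {
  local s=$1 out= c i
  for (( i = 0; i < ${#s}; i++ )); do
    c=${s:i:1}
    case $c in
      [a-zA-Z0-9.~_-]) out+=$c ;;
      *) printf -v c '%%%02X' "'$c"; out+=$c ;;
    esac
  done
  printf '%s\n' "$out"
}

uri_escape 'http://www.google.com/search?q=commandlinefu&hl=en'
```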
For the record: I didn't build this. Just shared what I found that worked. Apologies to the original author!
I decided I should fix the case where http://example.com is not matched, for the next time I need this. So I read RFC 1035 and formalized the hostname regex.
If anyone finds any more holes, please comment.
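For reference, an RFC 1035-style hostname regex can be formalized roughly like this (my own rendering, not necessarily this entry's exact regex): each dot-separated label starts with a letter, ends with a letter or digit, and may contain hyphens in between.

```shell
# Validate a hostname against an RFC 1035-style grammar.
host_re='^([A-Za-z]([A-Za-z0-9-]*[A-Za-z0-9])?\.)*[A-Za-z]([A-Za-z0-9-]*[A-Za-z0-9])?$'
printf '%s\n' example.com | grep -Eq "$host_re" && echo match
```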
Doesn't have to be that complicated.
Thought it would be useful to commandlinefuers.