Useful for downloading multiple files.
Activate the first alert and the next ones are activated automatically.
If you are downloading a big file (or even a small one) and the connection breaks or times out, use this command to RESUME the download where it failed, instead of having to start downloading from the beginning. This is a real win for downloading Debian ISO images over a buggy DSL modem. Take the partially downloaded file and cat it into the STDIN of curl, as shown. Then use the "-C -" option followed by the URL of the file you were originally downloading.
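Not necessarily the exact command being described, but the common way to use "-C -" against an already-existing partial file looks like this (the filename and URL are placeholders):
curl -C - -o debian-netinst.iso http://example.com/debian-netinst.iso   # curl checks the existing file's size and resumes from that offset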
Uses curl to download the page listing the membership of the US Congress, sed to strip the HTML, and then perl to print the lines starting with two tabs (the lines with a representative).
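A rough sketch of that pipeline; the URL is a placeholder and the tag-stripping sed is generic, since the original command is not shown here:
curl -s http://example.gov/congress-members.html | sed -e 's/<[^>]*>//g' | perl -ne 'print if /^\t\t/'   # keep only the lines that begin with two tabs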
Same thing, just a different way to get there. You will need lynx.
Bulk downloads the comic strip JPG files for the adult cartoon Savitabhabhi, storing each set in its own folder. Requires manual removal of "non-image" files that may be created, because each series may differ in length. The command can be easily adapted for UNIX flavours. You need to have cURL in your path.
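A hedged sketch of that kind of bulk fetch, using curl's URL globbing and --create-dirs; the URL pattern and ranges are invented, since each series differs in length:
curl --create-dirs -o "set-#1/page-#2.jpg" "http://example.com/comics/set-[1-20]/page-[1-40].jpg"   # #1 and #2 are replaced by the values from the bracket ranges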
This will send the web page at $u to recipient@example.com. To send the web page to yourself, recipient@example.com can be replaced by $(whoami) or $USER. The charset is UTF-8 here, but any alternative charset of your choice would work. `wget -O - -o /dev/null $u` may be used instead of `curl $u`. On some systems the complete path to sendmail may be necessary, for instance /sys/pkg/libexec/sendmail/sendmail on some NetBSD installations.
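A minimal sketch of the idea, assuming a local sendmail; $u, the recipient and the subject line are placeholders:
u=http://example.com/; { echo "To: recipient@example.com"; echo "Subject: $u"; echo "MIME-Version: 1.0"; echo "Content-Type: text/html; charset=UTF-8"; echo; curl -s "$u"; } | sendmail -t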
Request all information about my IP address in XML format.
Output the HTML from xkcd's index.html, filter out the HTML tags, and then view the result in gwenview.
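One possible reading of that, as a sketch; the image-URL pattern is an assumption about xkcd's markup, and /tmp/xkcd.png is just a scratch file:
curl -s https://xkcd.com/ | grep -o 'https://imgs.xkcd.com/comics/[^"]*\.png' | head -1 | xargs curl -s -o /tmp/xkcd.png && gwenview /tmp/xkcd.png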
Shorter version with curl and awk
The command was too long for the command box, so here it is:
echo $(( `wget -qO - http://i18n.counter.li.org/ | grep 'users registered' | sed 's/.*\<font size=7\>//g' | tr '\>' ' ' | sed 's/<br.*//g' | tr ' ' '\0'` + `curl --silent http://www.dudalibre.com/gnulinuxcounter?lang=en | grep users | head -2 | tail -1 | sed 's/.*<strong>//g' | sed 's/<\/strong>.*//g'` ))
This took me about an hour to do. It uses both wget and curl because dudalibre.com blocks wget, while wget worked nicely for the other counter.
urls.txt should have a fully qualified URL on each line.
Prefix with
rm log.txt;
to clear the log.
Change the curl command to
curl --head $file | head -1 >> log.txt
to get just the HTTP status.
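A minimal sketch of the loop being described; urls.txt, log.txt and the $file variable are as above, and the original command itself is not shown here:
while read -r file; do curl -s --head "$file" | head -1 >> log.txt; done < urls.txt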
Just added viewing with the eog viewer.
Additionally, it may give your geolocation if it's known by hostip.info.
XML version. Additionally, it may give your geolocation if it's known by hostip.info.
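For example, a sketch against hostip.info's XML endpoint (assuming the service is still reachable):
curl -s http://api.hostip.info/   # returns an XML document for the requesting IP, with country, city and sometimes coordinates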
This uses curl to find out the access times of a web service.
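One common way to get such timings is curl's --write-out variables; a sketch with a placeholder URL:
curl -s -o /dev/null -w 'lookup: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' http://example.com/service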
This command queries the delicious API, then runs the XML through xml2, grabs the URLs, cuts out the first two columns, passes the result through uniq to remove any duplicates, and then feeds it into linkchecker, which checks the links. The links go into the blacklist in ~/.linkchecker/blacklist. Please see the manual pages for further info. It took me a few days to figure this one out; I hope you enjoy it. Also, don't hit the API more than once every few seconds or you can get banned by delicious; see their site for info. Updated: no longer recursive.
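Very roughly, the shape of that pipeline might have looked like this; the del.icio.us API is long gone and the xml2/cut fields are guesses, so treat it purely as a sketch:
curl -s -u user:password 'https://api.del.icio.us/v1/posts/all' | xml2 | grep '@href=' | cut -d '=' -f 2- | sort -u | xargs linkchecker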
The only prerequisite is jq (and curl, obviously). The other version used grep, but jq is much more suited to JSON parsing than that.
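As an illustration of the jq style (the endpoint here is only an example, not necessarily the one the original command queries):
curl -s https://httpbin.org/ip | jq -r '.origin'   # prints the caller's public IP from the JSON response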
you can use xmlstarlet to parse output instead of perl
Instead of having someone else read you the Digg headlines, have OS X do it. Requires curl + sed + say. This could probably be easily modified to use espeak for Linux.
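A hedged sketch of the idea; the feed URL is an assumption and Digg's markup has long since changed:
curl -s http://digg.com/rss/index.xml | sed -n 's/.*<title>\([^<]*\)<\/title>.*/\1/p' | say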
With a lolcat favicon if you access it from your browser.