Usage: clfavs username password num_favourite_commands file_in_which_to_backup
This one uses dictionary.com
Without infinite time and knowledge of how the site will be designed in the future, this may stop working at some point, but it will still serve as a simple, straightforward starting point.
This uses the observation that the only item marked as strong on the page is the single logical line that includes the italicized fact.
If future revisions of the page show failure, or intermittent failure, one may simply alter the above to read:
wget randomfunfacts.com -O - 2>/dev/null | tee lastfact | grep \<strong\> | sed "s;^.*<i>\(.*\)</i>.*$;\1;"
The file lastfact can then be examined whenever the command fails.
See man wget if you want linked files and not only those hosted on the website.
If the username includes an @, you can use this one: wget -r --user=username_here --password=pass_here ftp://ftp.example.com
This is the part that downloads the content. NOTE: '-c' does not seem to work very well here; the download sometimes gets stuck at 99% even though wget finishes without complaining. Also, the download may restart after completing; you can simply cancel it. I don't know whether it is a wget or a Rapidshare glitch, since I don't have the problem with Megaupload, for example. UPDATE: as pointed out by roebek, the restart glitch can be fixed with the "-t 1" option. Thanks a lot.
I sort of stole this from http://www.shell-fu.org/lister.php?id=878#MTC_form but it didn't work, so here it is, fixed. --- Updated to work with JPEGs and to use a fancy positive look-behind assertion.
Useful to use afterwards with wget's --load-cookies option.
I used to use the Firefox "View page info" feature a lot to determine how stale the web page I was looking at was. Now that I mostly use Chrome I miss that feature, so here is a command-line alternative using wget. The -S says to display the server response, and --spider says not to download any files/pages, just fetch the header. The output goes to stderr, so to grep it you use 2>&1 to combine the stderr stream with stdout, then pipe that to grep for Last-Modified.
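A minimal sketch of the wget invocation described above, using the same example URL as the curl alternative below:
wget -S --spider http://osswin.sourceforge.net 2>&1 | grep Last-Modified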
You can use curl instead if you have it installed, like this:
curl --head -s http://osswin.sourceforge.net | grep Mod
The original was a little bit too complicated for me. This one does not use any variables.
- Where $URL is the URL of the file.
- Replace $2 at the end with $3 to get a human-readable size.
Credits to svanberg @ ArchLinux forums for the original idea. Edit: replaced the command with a better version by FRUiT (removed an unnecessary grep).
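The command itself isn't reproduced above; a plausible reconstruction, assuming the size is read from the "Length:" line that wget --spider prints to stderr:
wget --spider "$URL" 2>&1 | awk '/^Length/ {print $2}'   # print $3 instead for the human-readable size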
When we add a new package source for aptitude (the Debian package manager), we need to add its GPG key as well, otherwise it will show a warning/error about a missing key.
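A sketch of the usual pattern; the key URL is a placeholder, not a real repository:
wget -q -O - https://example.com/repo-key.gpg | sudo apt-key add -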
In order to do that, first you need to save a cookie file with your account info. These commands do it (you may need to create the '.cookies' dir first). Also, you need to check the "Direct downloads" option on the Premium Zone >> Settings tab. You only need to do this once (as long as you keep the file and your Rapidshare Premium account).
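The cookie-saving commands aren't reproduced here; a rough sketch of the general pattern, where the login URL and form field names are placeholders rather than Rapidshare's real ones:
mkdir -p ~/.cookies
# placeholder login URL and form fields; adjust to the site's actual login form
wget -q --save-cookies ~/.cookies/rapidshare --post-data "login=USERNAME&password=PASSWORD" -O /dev/null https://example.com/premiumzone/login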
This will connect to your hosted website service through the cPanel interface and use its backup tool to back up and download the entire website locally. (Do not forget to replace YourUsername, YourPassword and YourWebsiteUrl for it to work.)
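The exact command isn't shown above; a sketch of what it might look like (the :2083 port and the backup path are assumptions, since cPanel URLs vary between versions):
# port and /getbackup/ path are guesses; check your cPanel version
wget --http-user=YourUsername --http-password=YourPassword "https://YourWebsiteUrl:2083/getbackup/backup-YourWebsiteUrl-$(date +%-m-%-d-%Y).tar.gz"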
This will download all files of the type specified after "-A" from a website. Here is a breakdown of the options (the full command is sketched below):
-r turns on recursion and downloads all links on the page
-l1 goes only one level of links into the page (this is really important when using -r)
-H spans domains, meaning it will download links to sites that don't have the same domain
-nd puts all the downloads in the current directory instead of recreating the directory structure of the path
-A mp3 filters to only download links that are mp3s (this can be a comma-separated list of different file formats to search for multiple types)
-e robots=off just means to ignore the robots.txt file, which stops programs like wget from crashing the site... sorry http://example/url lol..
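Putting the options above together, the whole thing looks like this:
wget -r -l1 -H -nd -A mp3 -e robots=off http://example/url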
website recursive offline mirror with wget
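The command itself isn't shown above; one common form of a recursive offline mirror with wget looks like this (the URL is a placeholder, and --adjust-extension needs a reasonably recent wget):
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent http://example.com/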
Writes a hybrid ISO directly to a USB stick; replace /dev/sdb with the USB device in question and the ISO image link with the link of your choice.
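A sketch of the idea, with a placeholder ISO URL (anything already on /dev/sdb will be destroyed):
wget -O - http://example.com/image-hybrid.iso | sudo dd of=/dev/sdb bs=4M && sync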
I wanted all the 'hidden' .flv files from the http link in the command line; wget seemed appropriate, fed with output from lynx, with grep picking out the flv files and sed normalising them (removing the numeric bullet). Similar to the 'Grab mp3 files' fu. Replace the link with your own, and the grep argument with something more interesting ;) See here for something along the same lines: http://www.commandlinefu.com/commands/view/1006/grab-mp3-files-from-your-favorite-netcasts-mp3blog-or-sites-that-often-have-good-mp3s Hope you find it useful! Improvements welcome, naturally.
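A rough sketch of that pipeline; the URL is a placeholder, and the sed expression only strips lynx's numeric bullets:
wget $(lynx -dump -listonly 'http://example.com/page' | grep '\.flv$' | sed 's/^ *[0-9]*\. *//')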
I have a remote php file that I want to run once an hour. I set up cron to run this wget. I don't really care about what's in the file, though; since I don't want to save the results, I use -O to send them to /dev/null.
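The crontab entry might look like this (the URL is a placeholder):
0 * * * * wget -q -O /dev/null http://example.com/script.php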
Looks up a word on merriam-webster.com, does a screen scrape for the FIRST audio pronunciation and plays it.
USAGE: Put this one-liner into a shell script (e.g., ~/bin/pronounce) and run it from the command line, giving it the word to say (a rough sketch of the script appears after the notes below):
pronounce lek
If the word isn't found in merriam-webster, no audio is played and the script returns an error value. However, M-W is a fairly complete dictionary (better than howjsay.com which won't let you hear how to pronounce naughty words).
ASSUMPTIONS: GNU's sed (which supports -r for extended regular expressions) and Linux's aplay. Aplay can be replaced by any program that can play .WAV files from stdin.
KNOWN BUGS: only the FIRST pronunciation is played, which is problematic if you wanted a particular form (plural, adjectival, etc) of the word. For example, if you run this:
pronounce onomatopoetic
you'll hear a voice saying "onomatopoeia".
Playing the correct form of the word is possible, but doing so might make the screen scraper even more fragile than it already is. (The slightest change to the format of m-w.com could break it).
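The one-liner itself isn't reproduced above; a minimal sketch of such a scraper, where the dictionary URL and the assumption that the page links .wav files directly are guesses about m-w.com's layout, and grep -o stands in for the sed -r extraction mentioned in the assumptions:
# $1 is the word to pronounce; play the first .wav linked from the entry page
wget -qO- "https://www.merriam-webster.com/dictionary/$1" | grep -o 'http[^"]*\.wav' | head -n 1 | xargs -r wget -qO- | aplay -q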
Get a gzip-compressed web page using wget. Caution: the command will fail if the website doesn't return gzip-encoded content, though most websites support gzip nowadays.
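A sketch of the pattern (the URL is a placeholder); gunzip will fail if the server ignores the header and sends the page uncompressed:
wget -q --header='Accept-Encoding: gzip' -O - http://example.com/ | gunzip > page.html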
Returns the current price of a troy ounce of gold, in USD. Requires the "jq" JSON parser.
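The original command and its data source aren't shown here; a sketch of the general pattern, with an entirely hypothetical JSON endpoint and field name:
# api.example.com and .price are placeholders for whatever gold-price API you use
wget -qO- 'https://api.example.com/v1/spot/gold?currency=USD' | jq -r '.price'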
Extension to tali713's random fact generator. It takes the output and sends it to notify-osd. Display time is proportional to the length of the fact.
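A sketch of how this might be wired up, reusing the randomfunfacts pipeline shown earlier (the notification title is arbitrary; notify-osd itself keeps longer text on screen longer):
fact=$(wget randomfunfacts.com -O - 2>/dev/null | grep '<strong>' | sed "s;^.*<i>\(.*\)</i>.*$;\1;")
notify-send "Random fun fact" "$fact"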
commandlinefu.com is the place to record those command-line gems that you return to again and again. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Subscribe to the feed for: