Download all JPGs in a webpage

curl -sm1 http://www.website.com/ | grep -o 'http://[^"]*jpg' | sort -u | wget -qT1 -i-

By: kev · 2011-09-10 19:21:13
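
A more tolerant sketch of the same idea (my variation, not the author's command): match http or https, accept .jpg/.jpeg in any case, and drop the one-second curl/wget timeouts:

curl -s http://www.website.com/ | grep -oiE 'https?://[^"]*\.jpe?g' | sort -u | wget -qi -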

These Might Interest You

  • This function prints the HTTP status code of a webpage using curl (a quick usage sketch follows this list).

    e() { curl -o /dev/null --silent --head --write-out '%{http_code}\n' "$1"; }
    leprechuanese · 2014-08-11 20:51:45 0
  • This example fetches the 'example.com' webpage and then downloads and saves every PDF file linked from it. (Note: there are of course no PDFs on example.com; it is just a placeholder. A wget-only alternative is sketched after this list.)

    curl -s http://example.com | grep -oP '<a[^>]*href[^>]*>' | grep -o 'http[^"]*\.pdf' | xargs -d"\n" -n1 wget -c
    b_t · 2011-06-09 14:42:46 1
  • Updated for the new version of the MW webpage (MW seems to no longer use cougar, so the older commands do not work nowadays), and using Xidel to parse the page with an HTML parser instead of regexes. Example usage: pronounce onomatopoetic. I'm not sure how well Xidel works with binary streams (although it seems to work great in tests), so using wget to download the actual wav file might be safer, i.e.: pronounce(){ wget -qO- $(xidel "http://www.m-w.com/dictionary/$*" -f "replace(css('.au')[1]/@onclick,\".*'([^']+)', *'([^']+)'.*\", '/audio.php?file=\$1&word=\$2')" -e 'css("embed")[1]/@src') | aplay -q;} Xidel is not a standard CLI tool and has to be downloaded from xidel.sourceforge.net.

    pronounce(){ xidel "http://www.m-w.com/dictionary/$*" -f "replace(css('.au')[1]/@onclick,\".*'([^']+)', *'([^']+)'.*\", '/audio.php?file=\$1&word=\$2')" -f 'css("embed")[1]/@src' --download - | aplay -q;}
    BeniBela · 2013-04-18 13:03:16 0
  • On archive.org, the only zipped version of an album available for download is the lossy MP3 one; because of their size, the lossless files must be downloaded individually. This command scrapes the page for all the FLAC files (a variant that also accepts SHN is sketched after this list).

    wget -rc -A.flac --tries=5 http://archive.org/the/url/of/the/album
    meunierd · 2010-01-20 07:36:25 0
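
As a quick usage sketch for the status-code function above (output depends on the target; a reachable page typically prints 200):

    e http://example.com    # prints e.g. 200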
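
The wget-only alternative mentioned in the PDF item is roughly (a sketch; it assumes the PDFs are linked directly from the page, and example.com is still just a placeholder):

    wget -r -l1 -nd -np -A.pdf http://example.com/    # follow links one level deep, keep only *.pdf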
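
And the SHN-aware variant of the archive.org command is simply the same flags with .shn added to the accept list:

    wget -rc -A.flac,.shn --tries=5 http://archive.org/the/url/of/the/album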

What Others Think

The command has a few limitations:
- it will not work for any images that use relative URLs
- it will not download images with other filename extensions such as 'jpeg' or 'JPG'
- it will not download images that do not have CP/M- or MS-DOS-style filenames but are still JPEG images (i.e. images hosted by tumblr or mlkshk)

wget -nd -p http://www.site.com/ # --no-directories --page-requisites

will download all the images in the page (regardless of type or filename) to the current directory. You can then pick through them to find the images you wish to keep.
Mozai · 353 weeks and 3 days ago
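
A sketch of the approach Mozai describes (the wget flags are the ones named in the comment; the file(1) filter is my own assumption for spotting JPEGs whatever their extension):

wget -nd -p http://www.site.com/    # --no-directories --page-requisites: fetch every page requisite
file * | grep -i 'JPEG image'       # list the downloads that really are JPEG images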

What do you think?

Any thoughts on this command? Does it work on your machine? Can you do the same thing with only 14 characters?

