Download just the HTML of a whole website

wget --mirror --random-wait --recursive -e robots=off -U mozilla -R gif,jpg,pdf --reject-regex '(.*)\?(.*)' -c [URLGOESHERE]
--mirror >>> download the whole site; implies --recursive, timestamping, and infinite recursion depth
--random-wait >>> randomizes the delay between requests so the traffic looks less like a bot
--recursive >>> follow links to all pages (already implied by --mirror, so it is redundant here)
-e robots=off >>> ignore the site's robots.txt restrictions (note the -e: robots=off is a wgetrc setting, so it has to be passed with -e, not as a bare argument)
-U mozilla >>> send a browser-like user agent so the request looks like a real user in a browser, not the command line
-R gif,jpg,pdf >>> reject these file types; we only want the HTML
-c >>> continue; in case you had to stop the wget, you can pick it right back up!
--reject-regex '(.*)\?(.*)' >>> skip URLs with query parameters, so the same page isn't downloaded a million times under different query strings (the original pattern, ((.*)\?(.*))|(.*), contained a bare (.*) alternative that matches every URL and would have rejected everything)
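As a concrete sketch, here is a run against a stand-in target (https://example.com is hypothetical, and the --domains and --no-parent flags are illustrative additions that keep wget from wandering off onto other hosts, not part of the original one-liner):

wget --mirror --random-wait -e robots=off -U mozilla \
     -R gif,jpg,pdf --reject-regex '(.*)\?(.*)' -c \
     --domains example.com --no-parent \
     https://example.com/

wget saves everything under a directory named after the host (example.com/), mirroring the site's path structure, so you can watch progress with something like: find example.com -name '*.html' | wc -l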

By: shwaydogg
2017-01-05 02:12:59
