No curl and no sed, just wget :)
In place of "output-filename.mp4", put the name you want the file to be saved as.
In place of "youtube-video-link", put the link of the video page, e.g.: http://www.youtube.com/watch?v=AclA-7YntvE
In place of "format-number", put the number of the file format you would like.
How to get the "format-number"
To get the format number, run the command below before running this one:
youtube-dl -F "youtube-video-link"
It will list all the available formats with their format numbers; for example, to download 360p MP4, use the number "18".
To let it automatically fetch the best quality available, just remove -f "format-number" and you are good to go.
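A hedged sketch of the kind of invocation described above, assuming youtube-dl's -g flag is used to resolve the direct media URL so plain wget can do the actual download (the filename, link and format number are the same placeholders as above):
youtube-dl -F "youtube-video-link"
wget -O "output-filename.mp4" "$(youtube-dl -g -f "format-number" "youtube-video-link")"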
Optionally, pipe the output into http://sed.sourceforge.net/grabbag/scripts/html2iso.sed
Or: wget -qO - http://www.asciiartfarts.com/random.cgi | sed -n '//,//p'
The difference between the original version provided and this one is that this one works rather than outputting a wget error.
[Note: This command needs to be run as root]. If you are downloading something large at night, you can start wget as a normal user and issue the above command as root. When the download is done, the computer will automatically go to sleep. If at any time you feel the computer should not go to sleep automatically (like if you find the download still continuing in the morning), just create an empty file called nosleep in the /tmp directory.
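A hedged sketch (run as root) of the kind of watchdog loop described above; the one-minute poll interval and the use of pm-suspend are assumptions, not taken from the original command:
while pgrep -x wget > /dev/null; do sleep 60; done; [ -e /tmp/nosleep ] || pm-suspend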
Prompts the user for a username and password, which are then exported to http_proxy for use by wget, yum, etc. The default user, webproxy and port are used. Using this script prevents the cleartext user and pass from ending up in your bash_history and on-screen.
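A minimal sketch of such a prompt, assuming "webproxy" as the default proxy host and 3128 as the default port (both placeholders, not taken from the original script):
read -p 'Proxy username: ' user; read -s -p 'Proxy password: ' pass; echo; export http_proxy="http://$user:$pass@webproxy:3128/"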
To learn more about Google Ngram Viewer: http://ngrams.googlelabs.com/info
This example command fetches the 'example.com' webpage and then fetches and saves all PDF files listed (linked to) on that webpage. [*Note: of course there are no PDFs on example.com. This is just an example.]
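A hedged sketch of one way to do this with wget alone; the single-level recursion and the -A filter are assumptions about how the original command works:
wget -r -l 1 -nd -A pdf http://example.com/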
A simple script for downloading all the MegaTokyo strips, from the first to the last one.
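A hedged sketch of such a loop; the strip URL pattern, file extension and upper bound are assumptions, not taken from the original script:
for i in $(seq -w 1 1500); do wget "http://megatokyo.com/strips/$i.gif"; done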
-I is for headers only, -s is for silence; curl -Is outputs ONLY headers, and the pipe to grep filters them down to the Modified line only.
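For example, checking when a page was last modified (the URL is a placeholder):
curl -Is http://example.com | grep -i Modified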
If you have to deal with MS Sharepoint, which is (rarely, let's hope) used in e.g. certain corporate environments, this uses Cntlm. For single files, just use cURL -- its NTLM authentication works quite well.
# /etc/cntlm.conf:
# Username account
# Domain domain
# Password ############
# Proxy 10.20.30.40 (IP of the sharepoint site)
# NoProxy *
# Listen 3128
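With Cntlm running against a config like the one above and listening on port 3128, a hedged sketch of fetching a file through it (the target URL is a placeholder):
http_proxy=http://127.0.0.1:3128 wget http://sharepoint.example.com/sites/docs/file.docx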
alias speedtest='wget --output-document=/dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip'
First grep all the image hrefs, then sed out the URL part, then wget each one.
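A hedged sketch of that pipeline, assuming the images are linked via href attributes ending in .jpg (the page URL and the extension are placeholders):
wget -qO - http://example.com/gallery | grep -o 'href="[^"]*\.jpg"' | sed 's/^href="//;s/"$//' | xargs -n 1 wget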
First (and only) argument should be a 4chan thread URL.
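A hedged sketch of what such a script might do; the image-CDN host pattern is an assumption and may be out of date:
wget -qO - "$1" | grep -oE '//i\.4cdn\.org/[^"]+' | sort -u | sed 's|^|http:|' | xargs -n 1 wget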
Returns your external IP address to the command line using only wget.
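For example, against one of the common plain-text IP services (the choice of service is an assumption):
wget -qO - http://icanhazip.com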
Download the latest NVIDIA GeForce x64 Windows 7-8 driver from NVIDIA's website. Pulls the latest download version (which includes beta). This is the "English" version; the following command includes a 'sed' line to replace "english" with "international" if needed. You can also replace the starting subdomain with "eu.", "uk." and others. Enjoy this one-liner! 1 character under the max :)
wget "us.download.nvidia.com$(wget -qO- "$(wget -qO- "nvidia.com/Download/processFind.aspx?psid=95&pfid=695&osid=19&lid=1&lang=en-us" | awk '/driverResults.aspx/ {print $4}' | cut -d "'" -f2 | head -n 1)" | awk '/url=/ {print $2}' | sed -e "s/english/international/" | cut -d '=' -f3 | cut -d '&' -f1)"
Directly sends the content of a URL to standard output. This command is most convenient for piping the output of a download directly into another command.
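For example, streaming a downloaded archive straight into tar (the URL is a placeholder):
wget -qO - http://example.com/archive.tar.gz | tar xzf -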
Like the original command, but the -f allows this one to succeed even if the website returns uncompressed data. From gzip(1) on the -f flag: "If the input data is not in a format recognized by gzip, and if the option --stdout is also given, copy the input data without change to the standard output: let zcat behave as cat."
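A hedged sketch of the pattern, assuming the server may or may not honor the gzip request (the URL is a placeholder):
wget -qO - --header='Accept-Encoding: gzip' http://example.com/ | zcat -f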