This is the command line I use to get my IP address in order to update my zoneedit account. Full script on my blog http://akim.sissaoui.com/linux-attitude/script-de-mise-a-jour-ddns-zoneedit-com-en-bashsh/
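The command itself isn't shown here; a minimal sketch of one common approach, assuming the checkip.dyndns.org responder (the blog script may use a different endpoint), is:

wget -q -O - http://checkip.dyndns.org | sed -e 's/[^[:digit:].]//g'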
Just pulls a quote for each day and displays it in a notification bubble...
Or you can change it a bit and just have it run in the terminal:
wget -q -O "quote" https://www.goodreads.com/quotes_of_the_day;echo "Quote of the Day";cat quote | grep '“\|/author/show' | sed -e 's/<[a-zA-Z\/][^>]*>//g' | sed 's/“//g' | sed 's/”//g'; rm -f quote
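A rough sketch of the notification-bubble variant, assuming a desktop with notify-send available (the original bubble command isn't shown here):

notify-send "Quote of the Day" "$(wget -q -O - https://www.goodreads.com/quotes_of_the_day | grep '“\|/author/show' | sed -e 's/<[^>]*>//g' -e 's/[“”]//g')"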
Change *.ext to the appropriate extension
Downloads the raw script https://github.com/git/git/blob/master/contrib/completion/git-completion.bash from GitHub, copies it to your home directory, autoloads it in ~/.bashrc, and sources it.
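A sketch of the one-liner, assuming the raw.githubusercontent.com URL for that script (the target filename is illustrative, not necessarily the original):

wget -q -O ~/.git-completion.bash https://raw.githubusercontent.com/git/git/master/contrib/completion/git-completion.bash && echo "source ~/.git-completion.bash" >> ~/.bashrc && source ~/.git-completion.bash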
Easiest way to get the external IP address.
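One common form, assuming the ifconfig.me service (any plain-text IP responder works):

wget -qO- http://ifconfig.me/ip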
This command crawls a domain with the typical wget output. It logs everything to a WGET-LOG file, with any errors repeated at the end. It also has the added benefit of not flooding your terminal with output, so it is safe to run in the background.
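A plausible shape for such a crawl (the exact flags aren't shown here; --spider, which only checks links instead of saving files, is an assumption):

wget --spider --recursive --output-file=WGET-LOG http://example.com/ &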
The only requirement to make this work is some research on what your local NOAA "zone" is. One place to take a look is: http://www.nws.noaa.gov/mirs/public/prods/maps/pfzones_list.htm
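A heavily hedged sketch: once you know your zone ID, zone forecast text products have historically been retrievable from NOAA's tgftp server. The URL pattern below is an assumption; substitute your own state and zone:

wget -q -O - https://tgftp.nws.noaa.gov/data/forecasts/zone/ca/caz017.txt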
Use this command to execute the contents of http://www.example.com/automation/remotescript.sh in the local environment. The parameters are optional.
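The wget form isn't shown above; a likely equivalent (parameters optional) would be:

wget -q -O - http://www.example.com/automation/remotescript.sh | bash /dev/stdin param1 param2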
Alternatives to wget:
CURL:
curl -s http://www.example.com/automation/remotescript.sh | bash /dev/stdin param1 param2
W3M:
w3m -dump http://www.example.com/automation/remotescript.sh | bash /dev/stdin [param1] [param2]
LYNX:
lynx -source http://www.example.com/automation/remotescript.sh | bash /dev/stdin [param1] [param2]
Returns the global weighted BTC rate in EUR. Requires the "jq" JSON parser.
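A sketch of one way to do this, assuming the bitcoincharts.com weighted-prices endpoint (the actual API used isn't shown here):

wget -q -O - https://api.bitcoincharts.com/v1/weighted_prices.json | jq -r '.EUR."24h"'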
Just added a little URL encoding with sed, since URLs with spaces don't work well. This also works against <link> instead of <enclosure>, and adds a sample to show that you can filter against links at a certain domain.
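A rough sketch under those assumptions (the feed URL and domain filter are placeholders): pull the feed, extract <link> URLs, percent-encode spaces with sed, keep only links at one domain, and hand the list to wget:

wget -q -O - http://www.example.com/feed.rss | grep -o '<link>[^<]*</link>' | sed -e 's/<[^>]*>//g' -e 's/ /%20/g' | grep 'example.com' | wget -i - -nc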
Replace thread_link with the link of the thread you want to download images of.
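A hedged sketch (flags are typical for this job, not necessarily the original): recurse one level from the thread, span hosts, ignore robots.txt, drop directories, and keep only image files:

wget -nd -r -l1 -H -A jpg,jpeg,png,gif -e robots=off thread_link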
Get all files of particular type (say, mp3) listed on some web page (say, audio.org)
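For example, a minimal sketch (single level, no directories, keep only .mp3):

wget -r -l1 -nd -A mp3 http://audio.org/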
--mirror: all pages
--random-wait: makes it look like not a bot
--recursive: all pages, follow links
-e robots=off: ignore the site's no-robots request
-U mozilla: makes it look like a real user on a browser, not the command line
-R: reject these file types; we only want HTML
-c: continue, so if you had to stop the wget you can pick it right back up
--reject-regex '(.*)\?(.*)': skip URLs with parameters (don't download the same page a million times)
Mirroring / copying a whole website for offline use locally with wget.
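Putting the options above together, one plausible full command (the reject list is an assumption; put whatever non-HTML types you don't want there):

wget --mirror --recursive --random-wait -c -e robots=off -U mozilla -R "*.jpg,*.png,*.gif,*.css,*.js" --reject-regex '(.*)\?(.*)' http://example.com/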
I couldn't find the movie library in any of the SQLite Stremio databases, but in ~/.config/stremio/backgrounds2 the background image filenames correspond to IMDb URLs. So I loop over the files, wget the HTML title of each movie, and save it to a file. This will retrieve all movie names, not just the Library.
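A sketch of the loop, assuming each background filename starts with the IMDb title id (e.g. tt0133093.jpg); the output path is illustrative:

for f in ~/.config/stremio/backgrounds2/*; do id="$(basename "$f" | cut -d. -f1)"; wget -q -U mozilla -O - "https://www.imdb.com/title/$id/" | grep -o '<title>[^<]*</title>' | sed 's/<[^>]*>//g' >> ~/movie_titles.txt; done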