commandlinefu.com is the place to record those command-line gems that you return to again and again.
Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.
If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/
You can sign in using OpenID credentials, or register a traditional username and password.
First-time OpenID users will be automatically assigned a username which can be changed after signing in.
Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.
Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):
Returns a JSON object by connecting to the 'test' endpoint of the Twitter API. It's the simplest way to check whether you can connect to Twitter. The output is also available in XML; use '/help/test.xml' for that.
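The command in question is presumably something along these lines (a sketch only; this unversioned endpoint has long since been retired by Twitter):
curl -s http://twitter.com/help/test.json
Swap .json for .xml in the path to get the XML output instead.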
I don't know if the --spider option works to execute a script, but it might be worth trying. Note that the Drupal project uses the following in a cron job.
wget -O - -q http://localhost/drupal/cron.php
The output is sent to standard out so it can be logged by cron.
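A crontab entry built around that command might look like this (the hourly schedule is an assumption):
0 * * * * wget -O - -q http://localhost/drupal/cron.php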
I have a remote PHP file that I want to run once an hour, so I set up cron to run this wget. I don't really care what's in the file and I don't want to save the results, so I use -O to send the output to /dev/null.
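The resulting crontab entry would be something like this sketch (the URL is a placeholder):
0 * * * * wget -q -O /dev/null http://example.com/cron.php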
xargs can be used in this manner to download multiple files at a time: in this case it runs 10 processes at once and starts a new one whenever the number running falls below 10.
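A minimal sketch of that pattern, assuming a file urls.txt with one URL per line:
xargs -n 1 -P 10 wget -q < urls.txt
-P 10 keeps ten wget processes running and -n 1 hands each one a single URL.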
If the site uses https, use:
wget --reject html,htm --accept pdf,zip -rl1 --no-check-certificate https-url
Copy the link to an HD movie trailer into this command. It's more elegant if it's put into a script that takes the URL as input.
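A tiny wrapper script along those lines might look like this (the QuickTime-style User-Agent string is an assumption; some trailer servers refuse other clients):
#!/bin/sh
wget -U 'QuickTime/7.6.2' "$1"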
To download the latest version of "util"; maybe insert a sort if the files aren't listed in the right order.
curl lists all files on the mirror, grep picks out your util, tail -1 gets the one listed at the bottom, and wget fetches it.
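A sketch of the whole pipeline, with a hypothetical mirror URL and package name, and sort -V added so the versions come out in numeric order:
base=http://mirror.example.com/pub/util/
file=$(curl -s "$base" | grep -o 'util-[0-9][0-9.]*\.tar\.gz' | sort -V | tail -1)
wget "$base$file"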
This will download all the phracks! Enjoy!
Can be used to help with some SEO work.
I don't have curl or links installed, so I use wget, writing the file to standard out.
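In other words, the wget equivalent of curl's default behaviour (hypothetical URL):
wget -qO - http://example.com/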
This lengthy cryptic line will print the latest top 10 commandlinefu.com posts without their summaries. To also print their respective summaries, use the following (even bigger) command line:
wget -qO - http://www.commandlinefu.com/feed/tenup | xmlstarlet sel -T -t -o '<doc>' -n -t -m rss/channel/item -o '<item>' -n -o '<title>' -v title -o '</title>' -n -o '<description>' -v description -o '</description>' -n -o '</item>' -n -t -o '</doc>' | xmlstarlet sel -T -t -m doc/item -v description/code -n -v title -n -n
I recommend putting this line in a shell script so it can be run easily, as I do myself. You could also use the following URLs to browse the top 3 commands:
wget -qO - http://www.commandlinefu.com/feed/threeup | xmlstarlet ...
.. or all others:
wget -qO - http://feeds2.feedburner.com/Command-line-fu | xmlstarlet ...
PS: You need to install "xmlstarlet" to run it. It is found in the Debian APT repositories (apt-get install xmlstarlet) or at http://xmlstar.sourceforge.net/.
I wanted all the 'hidden' .flv files from the http link in the command line; wget seemed appropriate, fed with output from lynx: grep picks out the .flv files and sed normalises them (removing the numeric bullet). Similar to the 'Grab mp3 files' fu. Replace the link with your own, and the grep argument with something more interesting ;) See here for something along the same lines...
Hope you find it useful! Improvements welcome, naturally.
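Something along these lines, with a placeholder URL (lynx -dump prints links as a numbered list, so sed strips the leading bullet):
wget $(lynx -dump 'http://example.com/videos.php' | grep '\.flv$' | sed 's/^ *[0-9]*\. *//')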
Just copy and paste the code into your terminal.
Note: sudo apt-get is for Debian-based systems; change as per your requirements.
Source: www.h3manth.com
Substitute the URL with your private/public XML URL from the calendar sharing settings.
Substitute the dates (YYYY-mm-dd).
Adjust the perl parsing part to your needs.
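A rough sketch of the shape such a command takes; the feed URL is a placeholder, the start-min/start-max parameters are an assumption about the old GData feed format, and the perl part only pulls out entry titles:
wget -qO - 'https://www.google.com/calendar/feeds/PRIVATE-XML-URL/basic?start-min=2009-01-01&start-max=2009-01-31' | perl -ne 'print "$1\n" while /<title[^>]*>([^<]+)<\/title>/g'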
Just an alternative using a saved HTML file of all of my bookmarks. It works well, although it takes a while.
Intended for dynamic ip OpenDNS users, this command will update your OpenDNS network IP.
For getting your IP, you can use one of the many one-liners here on commandlinefu.
I use this in a script which is run by kppp after it has successfully connected to my ISP:
# grab the current public IP from checkip.dyndns.org
IP="`curl -s http://checkip.dyndns.org/ | grep -o '[[:digit:].]\+'`"
if [ "$IP" == "" ] ; then echo 'Not online.' ; exit 1 ; fi
# $PW holds the hex-encoded password; xxd -ps -r decodes it before the update request
wget -q --user=topsecret --password="`echo $PW | xxd -ps -r`" 'https://updates.opendns.com/nic/update?hostname=myhostname&myip='"$IP" -O -
/etc/init.d/ntp-client restart &
PS: DynDNS should use a similar method; if you know the URL, please post a comment. (Something with members.dyndns.org, if I recall correctly.)
Uses htmldoc to perform the conversion.
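Assuming htmldoc's classic single-page invocation, the conversion looks roughly like this:
htmldoc --webpage -f output.pdf input.html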
This command might not be useful for most of us, I just wanted to share it to show power of command line.
Download the plain-text version of the novel David Copperfield from Project Gutenberg and then generate a single column of words, after which the occurrences of each word are counted by the sort | uniq -c combination.
This command removes numbers and single characters from the count. I'm sure you can write a shorter version.
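A sketch of such a pipeline (the Gutenberg URL is an assumption; tr splits the text into one lowercase word per line, grep drops single-character words, and sort | uniq -c does the counting):
wget -qO - 'http://www.gutenberg.org/files/766/766.txt' | tr -cs 'A-Za-z' '\n' | tr 'A-Z' 'a-z' | grep -v '^.$' | sort | uniq -c | sort -rn | head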
Let me suggest using wget to obtain only the HTTP header as a last resort, because it generates considerable textual overhead. The first ellipsis of the sample output stands for
Spider mode enabled. Check if remote file exists.
--2009-03-31 20:42:46-- http://www.example.com/
Resolving www.example.com... 184.108.40.206
Connecting to www.example.com|184.108.40.206|:80... connected.
HTTP request sent, awaiting response...
and the second one stands for
Length: 438 [text/html]
Remote file exists and could contain further links,
but recursion is disabled -- not retrieving.
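The wget form under discussion is presumably something like wget --spider --server-response on the URL; if all you want is the header, a quieter alternative (assuming curl is available) is:
curl -sI http://www.example.com/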