Commands by iamarchimedes (2)

  • Note: you need to replace the email address with your private Instapaper email address. There are a number of possible improvements: it could avoid writing a temp file, it doesn't strip tags (though Instapaper thankfully does), and it shouldn't require two curls. A sketch of these improvements follows the entry.


    for url in `cat urls `; do title=`curl $url 2>&1 | grep -i '<title>.*</title>'` && curl $url > /tmp/u && mail -s "$title" your-private-instapaper-address@instapaper.com < /tmp/u ; done
    iamarchimedes · 2010-10-16 19:10:19
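
  A minimal sketch of those improvements, assuming GNU grep/sed, a urls file with one URL per line, and the same mail setup as the original. Each page is fetched once, the title text is pulled out of its tags, and the body is piped straight to mail, so there is no temp file and only one curl per URL:

    # Sketch only: single fetch per URL, no temp file, title tags stripped
    while read -r url; do
        page=$(curl -s "$url")
        title=$(printf '%s\n' "$page" | grep -io '<title>[^<]*</title>' | head -n 1 | sed 's/<[^>]*>//g')
        printf '%s\n' "$page" | mail -s "$title" your-private-instapaper-address@instapaper.com
    done < urls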
  • Only useful for really flaky connections (but I'm stuck with one for now). If you're in this situation, I've found this to be a good way to run autossh: it does a pretty good job of detecting when the session is down and restarting it. Combined with -t and screen, this pops you back into your working session lickety-split, with as few headaches as possible. And if autossh is a bit slow at detecting the downed SSH connection, just run this in another tab/terminal window to notify autossh that it should drop the session and start over (basically, for when polling is too slow): kill -SIGUSR1 `pgrep autossh`. A convenience wrapper is sketched after this entry.


    AUTOSSH_POLL=1 autossh -M 21010 hostname -t 'screen -Dr'
    iamarchimedes · 2009-10-11 06:04:29
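
  For convenience, the reconnect command and the restart nudge can be wrapped in aliases. A sketch for ~/.bashrc; the alias names and the host name myhost are placeholders, and the -M monitoring port (plus the next port up, which autossh also uses) must be free on both machines:

    # Reattach to the remote screen session, restarting ssh automatically
    alias resume='AUTOSSH_POLL=1 autossh -M 21010 myhost -t "screen -Dr"'
    # Nudge autossh from another terminal when it is slow to notice a dead link
    alias rekick='kill -SIGUSR1 $(pgrep autossh)'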
