Commands using jobs (4)

  • make, find and a lot of other programs can take a lot of time, or almost none at all. Suppose you write a long, complicated command and wonder whether it will finish in 3 seconds or in 20 minutes. Just add the suffix R (without quotes) and you can do other things: zsh will notify you when the results are ready. You can replace zenity with any other X Window dialog program. (A usage sketch follows the command.)


    alias -g R=' &; jobs | tail -1 | read A0 A1 A2 cmd; echo "running $cmd"; fg "$cmd"; zenity --info --text "$cmd done"; unset A0 A1 A2 cmd'
    pipeliner · 2010-12-13 17:44:36
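
  For example, a usage sketch (assuming an interactive zsh session with zenity installed and the alias above already defined):

    # Suffix any slow command with R; the global alias expands in place,
    # turning the line into:  make -j8 &; jobs | tail -1 | read A0 A1 A2 cmd; ...
    # (make -j8 is just a hypothetical stand-in for any long-running command.)
    make -j8 R
    # make runs in the background, its job-table line is parsed to recover
    # the command text, fg brings it back to the foreground, and when it
    # exits zenity pops up a "make -j8 done" dialog.
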
  • List background jobs, grep out their job numbers - not process IDs - and kill each one; anchoring on the leading [N] keeps digits inside command names from matching. (A PID-based variant is sketched below.)


    jobs | grep -o "^\[[0-9]*\]" | tr -d '[]' | while read j; do kill %$j; done
    haggen · 2012-04-12 17:29:58
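
  If the job numbers themselves aren't needed, a shorter bash sketch kills every background job by process ID instead (bash's jobs -p prints one PID per job; zsh formats its -p output differently, so this variant is bash-specific):

    # Kill all background jobs of the current shell by PID; stderr is
    # silenced to cover the case where no jobs exist.
    kill $(jobs -p) 2>/dev/null
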
  • Execute commands serially on a list of hosts. Each ssh connection is made in the background so that if it hasn't closed after five seconds, it is killed and the script goes on to the next system. Maybe there's an easier way to set a timeout in the ssh options... (one is sketched below).


    for host in $MYHOSTS; do ping -q -c3 $host >/dev/null 2>&1 && ssh -o 'AllowedAuthentications publickey' $host 'command1; command2' & for count in 1 2 3 4 5; do sleep 1; jobs | wc -l | grep -q ^0\$ && break; done; kill %1; done
    a8ksh4 · 2012-11-13 23:12:27
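
  On that last point: OpenSSH does have one. ConnectTimeout bounds the connection attempt (though not the run time of the remote commands), which removes the need for the polling loop. A sketch over the same $MYHOSTS list:

    for host in $MYHOSTS; do
        # ConnectTimeout=5 gives up on unreachable hosts after 5 seconds;
        # BatchMode=yes makes ssh fail instead of prompting for a password.
        ssh -o ConnectTimeout=5 -o BatchMode=yes "$host" 'command1; command2'
    done
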
  • Cleaned up and silent, with &>/dev/null at the end. (A timeout(1)-based alternative is sketched below.)


    for host in $HOSTNAMES; do ping -q -c3 $host && ssh $host 'command' & for count in {1..15}; do sleep 1; jobs | wc -l | grep -q ^0\$ && break; done; kill %1; done &>/dev/null
    somaddict · 2012-11-16 02:31:27
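
  Where GNU coreutils is available, timeout(1) can replace the jobs polling and kill %1 bookkeeping entirely, and it bounds the whole ssh run rather than just the connection phase. A sketch under the same assumptions as the command above:

    for host in $HOSTNAMES; do
        # timeout sends SIGTERM to ssh after 15 seconds, so no background
        # job or manual kill is needed.
        ping -q -c3 "$host" && timeout 15 ssh "$host" 'command'
    done &>/dev/null
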

