Wait for processes, even if they are not children of the current shell

wait 536; anywait 536; anywaitd 537; anywaitp 5562 5563 5564
Silent:

anywait () { for pid in "$@"; do while kill -0 "$pid" >/dev/null 2>&1; do sleep 0.5; done; done }

Prints dots:

anywaitd () { for pid in "$@"; do while kill -0 "$pid" >/dev/null 2>&1; do sleep 0.5; echo -n '.'; done; done }

Prints process ids:

anywaitp () { for pid in "$@"; do while kill -0 "$pid" >/dev/null 2>&1; do sleep 0.5; echo -n "$pid "; done; echo; done }

You cannot anywait for other users' processes.
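A minimal poll-free alternative, assuming GNU coreutils: tail's --pid option makes it exit once an arbitrary process terminates (the anywait_tail name is made up here, and --pid is GNU-specific, so BSD tail won't work). Note also that sleep 0.5 above relies on sleep accepting fractional seconds, which POSIX does not guarantee.

# tail -f on /dev/null never prints anything; --pid makes it
# exit as soon as the given process terminates.
anywait_tail () { for pid in "$@"; do tail --pid="$pid" -f /dev/null; done; }
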
Sample Output
$ wait 536
-bash: wait: pid 536 is not a child of this shell

$ anywait 536
^C

$ anywaitd 537
...............................^C

$ anywaitp 5562 5563 5564
5562 5562 5562 5562 5562 5562 5562 5562 5562 5562
5563 5563 5563 5563 5563 5563 5563 5563
5564 5564 5564 5564 5564 5564 5564 5564

By: colemar
2014-10-22 06:31:47

These Might Interest You

  • Referring to the original post: if you are using $!, then the process is a child of the current shell, so you can just use `wait $!`. If you are trying to wait for a process created outside of the current shell, then the loop on `kill -0 $PID` is good, although you can't get the exit status of the process (see the combined sketch after this list).


    wait $!
    noahspurrier · 2010-06-07 21:56:36 · 3 votes
  • This command shows how to manage several asynchronous processes from a parent script: it launches four background jobs and then waits for each of them (see the combined sketch after this list). The asynchronous jobs are simulated by a time.sh script. More info: http://code-esperluette.blogspot.fr/2012/03/bash-gestion-de-processus-asynchrones.html and http://www.youtube.com/watch?v=TxsPyAtD70I


    sh time.sh 1 20 & var1="$!" ; sh time.sh 2 10 & var2="$!" ; sh time.sh 3 40 & var3="$!" ; sh time.sh 4 30 & var4="$!" ; wait $var1 && wait $var2 && wait $var3 && wait $var4
    julnegre · 2012-03-31 10:03:58 · 0 votes
  • There is a limit to how many processes each user can run at the same time, especially with web hosts. If the maximum number of processes for your user is 200, then the following sets OPTIMUM_P to roughly 100:

    OPTIMUM_P=$(( (`ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l`) / 2 ))

    This is very useful in scripts because it is a fast, low-resource way (compared to ps, who, lsof, etc.) to determine how many processes are currently running for a given user: the count of that user's entries in /proc is subtracted from the limit set up for the account (see limits.conf, pam, initscript), and the result is halved.

    An easy-to-understand example: this searches the current directory for shell scripts, running up to $OPTIMUM_P 'file' commands at the same time, which greatly speeds up the search:

    find . -type f | xargs -P $OPTIMUM_P -iFNAME file FNAME | sed -n '/shell script text/p'

    I am using it in my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html especially for the xargs command. xargs has a -P option that lets you specify how many processes to run at the same time. For instance, if you have 1000 URLs in a text file and want to download them all quickly with curl, you could download 100 at a time (check ps output on a separate [pt]ty for proof) like this:

    cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'

    I like to do things as fast as possible on my servers. I have several types of servers and hosting environments, some with very restrictive jailed shells with a 20-process limit, some with 200, some with 8000, so on the jailed shells a hard-coded xargs -P10 would kill my shell or dump core. Using the above, I can set the -P value dynamically so xargs always works. If you were building a process killer (very common on cheap hosting) this would also be handy. Note that if you are only allowed about 20 processes, you should just use -P1 with xargs.


    echo $(( `ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l` ))
    AskApache · 2010-03-12 08:42:49 · 1 vote
  • When dealing with system resource limits like the max number of processes and open files per user, it can be hard to tell exactly what's happening. The /etc/security/limits.conf file defines the ceiling for the values, but not what they currently are, while ulimit -a shows the current values for your shell. You can set them for new logins in /etc/profile and/or ~/.bashrc with a command like:

    ulimit -S -n 100000 >/dev/null 2>&1

    But with the variability in when those files get read (login vs. any shell startup, interactive vs. non-interactive), it can be difficult to know for sure which values apply to processes that are already running, like database or app servers. Just find the PID via "ps aux | grep programname", then look at that PID's "limits" file in /proc; then you'll know for sure what actually applies to that process (a pgrep shortcut is sketched after this list).


    cat /proc/PID/limits
    dmmst19 · 2011-12-14 16:49:06 · 11 votes
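
The first two comments describe the same pattern: when the background jobs are children of the current shell, collect their PIDs and wait on each one, which also recovers exit statuses. A minimal sketch, reusing the time.sh helper from julnegre's comment (any long-running command would do):

    # Collect the PID of each background job in an array, then wait
    # for each one in turn; $? after `wait PID` is that job's status.
    pids=()
    for args in "1 20" "2 10" "3 40" "4 30"; do
        sh time.sh $args & pids+=("$!")
    done
    for pid in "${pids[@]}"; do
        wait "$pid"; echo "pid $pid exited with status $?"
    done

For dmmst19's tip, pgrep can replace the "ps aux | grep" step; a sketch assuming the process of interest is a daemon named mysqld:

    # -o selects the oldest matching process, i.e. the original daemon
    cat "/proc/$(pgrep -o mysqld)/limits"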
