Commands tagged process (48)

  • top accepts a comma-separated list of PIDs.


    15
    top -p $(pgrep -d , foo)
    michelsberg · 2012-06-27 20:59:09 6
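
    A quick illustration of the expansion (the PIDs below are hypothetical, borrowed from the sister entry further down):
        pgrep -d , foo                 # might print e.g. 11427,12576,12577
        top -p "$(pgrep -d , foo)"     # equivalent to: top -p 11427,12576,12577
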
  • Very useful in shell scripts, because you can run a task nicely in the background using job control and print progress until it completes. Here's an example of how I use it in backup scripts to run gpg in the background to encrypt an archive file (which I create in this same way). $! is the process ID of the last command run, which is saved here as the variable PI; sleeper is then called with the PID of the gpg task (PI), and sleeper is also told to output ":" instead of the default "." and to do so every 3 seconds instead of the default 1. So a shorter version would be sleeper $!. The wait is also used here, though it may not be needed on your system.
    echo ">>> ENCRYPTING SQL BACKUP"
    gpg --output archive.tgz.asc --encrypt archive.tgz 1>/dev/null & PI=$!; sleeper $PI ":" 3; wait $PI && rm archive.tgz &>/dev/null
    Previously, to get around $! not always being available, I would instead check for the existence of the process ID by checking whether the directory /proc/$PID existed, but not everyone uses proc anymore. That version is currently the one at http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html but I plan on upgrading to this new version soon.


    14
    sleeper(){ while `ps -p $1 &>/dev/null`; do echo -n "${2:-.}"; sleep ${3:-1}; done; }; export -f sleeper
    AskApache · 2009-09-21 07:36:25 8
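
    A minimal usage sketch, assuming the sleeper function above is loaded; a gzip job stands in for the gpg example, and the file name, output character and 3-second interval are illustrative:
        gzip -9 bigdump.sql &            # any long-running background job (hypothetical file)
        PI=$!                            # PID of the background job
        sleeper "$PI" ":" 3              # print ":" every 3 seconds until it exits
        wait "$PI" && echo " finished"
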
  • Define a function:
    vert () { echo $1 | grep -o '.'; }
    Use it to print some column headers:
    paste <(vert several) <(vert parallel) <(vert vertical) <(vert "lines of") <(vert "text can") <(vert "be used") <(vert "for labels") <(vert "for columns") <(vert "of numbers")


    12
    echo "vertical text" | grep -o '.'
    dennisw · 2009-09-11 03:45:04 10
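
    A tiny demonstration of how the vertical text combines with paste to make column labels (the sample words are arbitrary):
        vert () { echo "$1" | grep -o '.'; }
        paste <(vert foo) <(vert 123)
        # output (tab-separated):
        # f 1
        # o 2
        # o 3
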
  • It identifies the parents of the zombie processes and kills them. The new parent of the orphaned zombies will then be the init process, which is already waiting to reap them. Be careful! It may also kill useful processes simply because they are not waiting on their children (bad parents!).


    11
    kill -9 `ps -xaw -o state -o ppid | grep Z | grep -v PID | awk '{print $2}'`
    khashmeshab · 2010-10-27 07:29:14 12
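
    Before sending any signals it can be worth reviewing the candidates; a read-only sketch (Linux procps assumed) that lists each zombie with its parent's PID, and then the parents themselves:
        ps -e -o pid,ppid,state,comm | awk '$3 == "Z"'                                        # zombies and their parents' PIDs
        ps -o pid,comm -p $(ps -e -o state,ppid | awk '$1 == "Z" {print $2}') 2>/dev/null     # the parent processes
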
  • This command is useful when you want to know which process is responsible for a certain GUI application and what command you would need to issue to launch it in a terminal.


    9
    xprop | awk '/PID/ {print $3}' | xargs ps h -o pid,cmd
    jackhab · 2009-02-16 07:55:19 83
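
    A small wrapper sketch around the same idea, assuming the clicked window sets _NET_WM_PID (most modern toolkits do); the function name is made up:
        winpid() { xprop _NET_WM_PID | awk '{print $3}' | xargs -r ps -o pid=,cmd= -p; }
        winpid    # click a window, get its PID and command line
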
  • pgrep foo may return several PIDs (for processes foobar, footy01, etc.), like this: 11427 12576 12577. sed puts "-p " in front of each one, and we pass the resulting list to top: top -p 11427 -p 12576 -p 12577


    5
    top $(pgrep foo | sed 's|^|-p |g')
    michelsberg · 2012-06-14 15:13:00 3
  • Nethogs groups bandwidth by process.


    5
    sudo nethogs eth0
    totti · 2013-01-25 08:20:44 6
  • An easy function to get a very detailed process tree listing for all the processes of any given user. This function is also in my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html


    4
    psu(){ command ps -Hcl -F S f -u ${1:-$USER}; }
    AskApache · 2009-11-13 06:10:33 4
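
    Usage is just the function name with an optional user (the user name below is hypothetical):
        psu              # detailed process tree for $USER
        psu www-data     # ...or for another user
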
  • When trying to play a sound you may sometimes get an error saying that your sound card is already in use, but not by which process. This lists all processes playing sound; it's useful for killing processes that you no longer need but that keep hold of your sound card.


    4
    lsof | grep pcm
    Miles · 2010-05-16 12:12:01 3
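
    A slightly narrower sketch that asks lsof for the ALSA device files directly and prints just the owning processes; device paths can vary between distributions:
        lsof +D /dev/snd 2>/dev/null | awk 'NR>1 {print $2, $1}' | sort -u   # PID and command of each sound user
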

  • 4
    ps -ef --sort=-%cpu
    aguslr · 2011-10-14 21:57:51 3

  • 3
    echo "vertical text" | fold -1
    zude · 2009-10-05 23:20:14 4

  • 3
    ps -o thcount -p <process id>
    lukasz · 2009-12-04 14:39:53 4
  • Referring to the original post, if you are using $! then the process is a child of the current shell, so you can just use `wait $!`. If you are trying to wait for a process created outside of the current shell, then the loop on `kill -0 $PID` is good, although you can't get the exit status of the process that way.


    3
    wait $!
    noahspurrier · 2010-06-07 21:56:36 4
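
    A sketch of the kill -0 polling loop mentioned above, for a process that is not a child of the current shell; the PID and the one-second interval are placeholders:
        PID=12345                          # hypothetical PID of an external process
        while kill -0 "$PID" 2>/dev/null; do
            sleep 1                        # still alive; check again in a second
        done
        echo "process $PID has exited"     # note: its exit status is not available this way
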
  • Like command 10870, but no need for sed


    3
    top '-p' $(pgrep -d ' -p ' foo)
    __ · 2012-06-27 18:32:03 6
  • Check out the usage of 'trap' here; you may not have seen this one much. This command provides a way to schedule commands for a certain time by running them after sleep finishes sleeping. In the example, 'sleep 2h' sleeps for 2 hours. What is cool is that it uses the 'trap' builtin so the command ignores SIGHUP, which would normally terminate everything started by the shell when you log out; the 'trap 1' at the end then restores normal SIGHUP behaviour. It also uses 'nice -n 19', which runs the sleep with minimal CPU priority. Further, it runs all the commands within the second set of parentheses in the background, which is sweet because you can fire off as many of these as you want. Very helpful for shell scripts.


    2
    ( trap '' 1; ( nice -n 19 sleep 2h && command rm -v -rf /garbage/ &>/dev/null && trap 1 ) & )
    AskApache · 2009-10-10 04:43:44 7
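
    For comparison, a rough equivalent that leans on nohup instead of the trap trick, with the same illustrative 2-hour delay and target directory; it also survives logout but skips the nice and trap-restore details:
        nohup sh -c 'sleep 2h && rm -rf /garbage/' >/dev/null 2>&1 &
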
  • I've wanted this for a long time and finally just sat down and came up with it. This shows you the sorted output of ps in a pretty format, perfect for cron or startup scripts. You can change the sort key by replacing k -vsz with k -pmem, for example, to sort by memory instead. If you want a function, here's one from my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html
    aa_top_ps(){ local T N=${1:-10};T=${2:-vsz}; ps wwo pid,user,group,vsize:8,size:8,sz:6,rss:6,pmem:7,pcpu:7,time:7,wchan,sched=,stat,flags,comm,args k -${T} -A|sed -u "/^ *PID/d;${N}q"; }


    2
    command ps wwo pid,user,group,vsize:8,size:8,sz:6,rss:6,pmem:7,pcpu:7,time:7,wchan,sched=,stat,flags,comm,args k -vsz -A|sed -u '/^ *PID/d;10q'
    AskApache · 2010-05-18 18:41:38 6
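
    Assuming the aa_top_ps function quoted above is loaded, usage looks like this (the arguments are a line count and a ps sort key):
        aa_top_ps            # roughly the top 10 processes by virtual size (the defaults)
        aa_top_ps 5 rss      # roughly the top 5 by resident set size
        aa_top_ps 10 pcpu    # ...or by CPU usage
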
  • Add that and "cont () { ps -ec | grep $@ | kill -SIGCONT `awk '{print $1}'`; }" (without the quotes) to your bash profile, and then use the pair to pause and resume processes safely.


    1
    stop () { ps -ec | grep $@ | kill -SIGSTOP `awk '{print $1}'`; }
    iridium172 · 2009-12-27 19:40:09 7
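
    Where procps is available, pkill can do the same job with less plumbing; a sketch, noting that its name matching differs from the grep-based functions above (the process name is hypothetical):
        pkill -STOP firefox    # pause every process named firefox
        pkill -CONT firefox    # resume them
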
  • There is a limit to how many processes you can run at the same time for each user, especially with web hosts. If the maximum number of processes for your user is 200, then the following sets OPTIMUM_P to 100:
    OPTIMUM_P=$(( (`ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l`) / 2 ))
    This is very useful in scripts because it is such a fast, low-resource way (compared to ps, who, lsof, etc.) to determine how many processes are currently running for whichever user. The number of currently running processes is subtracted from the upper limit set up for the account (see limits.conf, pam, initscript). An easy-to-understand example: this searches the current directory for shell scripts and runs up to 100 'file' commands at the same time, greatly speeding up the command.
    find . -type f | xargs -P $OPTIMUM_P -iFNAME file FNAME | sed -n '/shell script text/p'
    I am using it in my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html especially for the xargs command. xargs has a -P option that lets you specify how many processes to run at the same time. For instance, if you have 1000 URLs in a text file and want to download all of them fast with curl, you could download 100 at a time (check ps output on a separate [pt]ty for proof) like this:
    cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'
    I like to do things as fast as possible on my servers. I have several types of servers and hosting environments, some with very restrictive jail shells with a 20-process limit, some with 200, some with 8000, so for the jailed shells my xargs -P10 would kill my shell or dump core. Using the above, I can set the -P value dynamically so xargs always works. If you were building a process killer (very common for cheap hosting) this would also be handy. Note that if you are only allowed 20 or so processes, you should just use -P1 with xargs.


    1
    echo $(( `ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l` ))
    AskApache · 2010-03-12 08:42:49 6
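
    A sketch combining the pieces above into one snippet, assuming ulimit -u reports a numeric limit; GROUPNAME is taken to be your primary group, and the guard keeps xargs happy if the arithmetic lands at zero:
        GROUPNAME=$(id -gn)
        OPTIMUM_P=$(( ( $(ulimit -u) - $(find /proc -maxdepth 1 \( -user "$USER" -o -group "$GROUPNAME" \) -type d | wc -l) ) / 2 ))
        [ "$OPTIMUM_P" -lt 1 ] && OPTIMUM_P=1                      # guard for lightly-loaded or odd limits
        xargs -P "$OPTIMUM_P" -I '{}' curl -O '{}' < url-list.txt  # url-list.txt as in the example above
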
  • Grabs the cmdline used to execute the process, and the environment that the process is being run under. This is much different from the 'env' command, which only lists the environment of the current shell. This is very useful (to me at least) for debugging various processes on my server. For example, it lets me see the environment that my apache, mysqld, bind, and other server processes have. Here's a function I use:
    aa_ps_all () { ( cd /proc && command ps -A -opid= | xargs -I'{}' sh -c 'test $PPID -ne {}&&test -r {}/cmdline&&echo -e "\n[{}]"&&tr -s "\000" " "<{}/cmdline&&echo&&tr -s "\000\033" "\nE"<{}/environ|sort&&cat {}/limits' ); }
    From my .bash_profile at http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html


    1
    cd /proc&&ps a -opid=|xargs -I+ sh -c '[[ $PPID -ne + ]]&&echo -e "\n[+]"&&tr -s "\000" " "<+/cmdline&&echo&&tr -s "\000\033" "\nE"<+/environ|sort'
    AskApache · 2010-10-22 02:34:33 14
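
    For a single process the same information can be read straight out of /proc (a sketch; 1234 is a placeholder PID):
        PID=1234
        tr '\0' ' '  < /proc/$PID/cmdline; echo      # the command line, NUL-separated on disk
        tr '\0' '\n' < /proc/$PID/environ | sort     # its environment, one variable per line
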
  • Tested on FreeBSD 8.1 with csh. The script runs correctly, but the zombies do not die! I hope it will run and work as expected on Linux and other systems.


    1
    kill -9 `ps -xaw -o state -o pid | grep Z | grep -v PID | awk '{print $2}'`
    khashmeshab · 2010-10-27 07:19:52 3
  • Execute a command (or pipeline of commands) twice, separated by a short interval (3 seconds here), and output the difference between the two runs.


    1
    diffprocess () { diff <($*) <(sleep 3; $*); }
    totti · 2013-01-25 08:46:41 5
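
    For example, with the function above loaded, this shows which processes appeared or disappeared over the 3-second window:
        diffprocess ps -e -o pid,comm
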
  • Run "ps -x" (process status) in the background every hour (in this example). The outputs of both "nohup" and "ps -x" are sent to the e-mail (instead of nohup.out and stdout and stderr). If you like it, replace "ps -x" by the command of your choice, replace 3600 (1 hour) by the period of your choice. You can run the command in the loop any time by killing the sleep process. For example ps -x 2925 ? S 0:00.00 sh -c unzip E.zip >/dev/null 2>&1 11288 ? O 0:00.00 unzip E.zip 25428 ? I 0:00.00 sleep 3600 14346 pts/42- I 0:00.01 bash -c while true; do ps -x | mail (...); sleep 3600; done 643 pts/66 Ss 0:00.03 -bash 14124 pts/66 O+ 0:00.00 ps -x kill 25428 You have mail in /mail/(...) Show Sample Output


    1
    nohup bash -c "while true; do ps -x | mail $USER ; sleep 3600; done" | mail $USER &
    pascalvaucheret · 2013-08-19 17:21:37 19
  • Sometimes we install programs and then forget about them, and they stay there wasting RAM. This one-liner tries to find them.


    1
    ps -eo cmd | awk '{print $1}'| sort -u | grep "^/" | xargs dpkg -S 2>/dev/null | awk -F: '{print $1}' | sort -u | xargs apt-mark showmanual
    pabloab · 2020-03-26 06:16:45 81

  • 1
    mem() { ps -C "$1" -O rss | awk '{ count ++; sum += $2 }; END {count --; print "Number of processes:\t",count; print "Mem. usage per process:\t",sum/1024/count, "MB"; print "Total memory usage:\t", sum/1024, "MB" ;};'; }
    mikhail · 2022-01-23 16:14:24 358
  • This will email user@example.com a message with the body "rsync done" once no rsync processes are left running. It can be adapted for other uses by changing $(pgrep rsync) to some other test, and echo "rsync done" | mailx user@example.com to another command.


    0
    $(while [ ! -z "$(pgrep rsync)" ]; do echo; done; echo "rsync done" | mailx user@example.com) > /dev/null &
    matthewbauer · 2009-08-14 19:46:59 4
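
    The bare while loop above spins the CPU as fast as it can; a gentler sketch polls once a minute instead (the address and interval are illustrative):
        ( while pgrep -x rsync >/dev/null; do sleep 60; done
          echo "rsync done" | mailx user@example.com ) >/dev/null 2>&1 &
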