It identifies the parents of the Zombie processes and kills them, so the new parent of the orphaned Zombies becomes the Init process, which is already waiting to reap them. Be careful! It may also kill your useful processes just because they are not waiting on their children (bad parents!).
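The one-liner being described looks something like this (a sketch; it SIGTERMs every parent of a zombie, so double-check before running):
kill $(ps -A -ostat,ppid | awk '$1 ~ /^[Zz]/ {print $2}')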
or "Execute a command with a timeout"
Run a command in background, sleep 10 seconds, kill it.
$! is the process ID of the most recently executed background command.
You can test it with:
find / & sleep 10; kill $!
This will kill all processes with the given name, e.g. killall firefox
Send signal 0 to the process. The return status ($?) can be used to determine if the process is running. 0 if it is, non-zero otherwise.
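For example (a minimal sketch; MYPID is a placeholder for the PID you want to test):
kill -0 $MYPID 2>/dev/null && echo running || echo "not running"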
This is a more accurate way to watch the progress of a dd process. The DDPID=$! is needed so that you don't get the PID of the sleep. The sleep 1 is needed because, in my testing at least, if you run kill -USR1 against dd too quickly, it will kill it off instead of displaying the status. So you need to wait a second, probably so that it can set up its trap for the USR1 signal.
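A sketch of the pattern (if/of are placeholder arguments; the loop prints progress every 5 seconds until dd exits):
dd if=/dev/zero of=out.img bs=1M count=1024 & DDPID=$!
sleep 1; while kill -USR1 $DDPID 2>/dev/null; do sleep 5; done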
Sometimes one wants to kill a process not by the name of its executable, but by one of its parameters. In such cases killall is not a suitable method.
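A classic workaround greps the full command line out of ps (a sketch; 'my-flag' is a made-up parameter, and the [m] trick stops grep from matching itself):
ps ax | grep '[m]y-flag' | awk '{print $1}' | xargs kill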
The 'dd' command doesn't report progress while writing data, but sending the "USR1" signal to the process makes it print its progress as it writes. This command is superior to others on the site, as it doesn't require you to know the PID of the dd command beforehand.
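Something in this spirit avoids hunting for the PID at all (a sketch; -x matches the process name exactly):
pkill -USR1 -x dd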
If you don't want to alias it either, you can do: killall rapidly_spawning_process ; !! ; !! ; !!
See also: killall
For when a program is hogging the sound output. Finds, and kills. Add -9 to the end for wedged processes. Add 'grep ^program' after lsof to filter.
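The idea in one line (a sketch assuming an OSS-style /dev/dsp device; ALSA systems expose /dev/snd/* instead):
kill $(lsof -t /dev/dsp)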
A timeout is great, but what if the command is taking longer than expected because it's hung or has run into some other problem? That's where the -k option comes in. Run "some_command" and time out after 30s. If the command is still running a minute after that, it will receive a KILL signal.
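With GNU coreutils timeout that looks like this (some_command is a placeholder; TERM is sent at 30s, KILL one minute later if it is still alive):
timeout -k 1m 30s some_command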
Explanation:
pgrep -- displays process IDs
-v -- negates the matching, displaying all processes except those specified by the other options
-u -- specifies the user to display, or in this case negate
The loop goes through all PIDs found by pgrep and sends a forced kill to each in numerical order, effectively killing the parent processes first, including the shells in use, which forces the users to log out. Tested on Slackware Linux 12.2 and Slackware-current.
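A sketch of that loop (destructive! here root stands in for whichever user you want to spare):
for pid in $(pgrep -v -u root); do kill -9 $pid; done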
There is a limit to how many processes each user can run at the same time, especially with web hosts. If the maximum number of processes for your user is 200, then the following sets OPTIMUM_P to roughly 100.
OPTIMUM_P=$(( (`ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l`) / 2 ))
This is very useful in scripts because it is a fast, low-resource way (compared to ps, who, lsof, etc.) to determine how many processes are currently running for whichever user. The number of currently running processes is subtracted from the limit set up for the account (see limits.conf, pam, initscripts).
An easy-to-understand example: this searches the current directory for shell scripts and runs up to 100 'file' commands at the same time, greatly speeding up the command.
find . -type f | xargs -P $OPTIMUM_P -iFNAME file FNAME | sed -n '/shell script text/p'
I am using it in my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html, especially for the xargs command. Xargs has a -P option that lets you specify how many processes to run at the same time. For instance, if you had 1000 URLs in a text file and wanted to download them all fast with curl, you could download 100 at a time (check ps output on a separate [pt]ty for proof) like this:
cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'
I like to do things as fast as possible on my servers. I have several types of servers and hosting environments, some with very restrictive jail shells with a 20-process limit, some with 200, some with 8000, so for the jailed shells my xargs -P10 would kill my shell or dump core. Using the above I can set the -P value dynamically, so xargs always works, like this:
cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'
If you were building a process-killer (very common for cheap hosting) this would also be handy.
Note that if you are only allowed 20 or so processes, you should just use -P1 with xargs.
Another way of counting the line output of tail over 10s, not requiring pv. Use cut to get the average per-second rate:
tail -n0 -f access.log>/tmp/tmp.log & sleep 10; kill $! ; wc -l /tmp/tmp.log | cut -c-2
You can also enclose it in a loop and send stderr to /dev/null:
while true; do tail -n0 -f access.log>/tmp/tmp.log & sleep 2; kill $! ; wc -l /tmp/tmp.log | cut -c-2; done 2>/dev/null
dcfldd is a forensic version of dd that shows a progress indicator by default.
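For example (a sketch; dcfldd takes the same basic operands as dd, and if/of here are placeholders):
dcfldd if=/dev/zero of=out.img bs=1M count=1024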
Tested on FreeBSD 8.1 with csh. The script works correctly but the Zombies do not die! I hope it will run and function as expected on Linux and others.
Look mah! All pipes
Alternative if "Lazy unmount" (umount -l) doesn't do the trick.
Alternative for NFS:
umount -f /media/sdb1
Use with caution: forcibly unmounting a busy partition can cause data loss!
A nice way to interrupt a sleep with a signal.
Useful command to get information about a running Java process and its threads. To see the output, look in the default log for your Java application.
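The usual trigger is SIGQUIT, which makes the JVM dump its thread stacks (a sketch; 'myapp' is a placeholder pattern):
kill -QUIT $(pgrep -f myapp)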
Using -f treats the process name as a pattern so you don't have to include the full path in the command. Thus 'pkill -f firefox' works, even with iceweasel.
Just a leaner, smaller version. Love the original idea!
Useful signals are:
pkill -SIGSTOP mpg321 #pause
pkill -SIGCONT mpg321 #resume
pkill -SIGHUP mpg321 #stop
pkill -SIGKILL mpg321 #force exit
TIP: use aliases or shortcuts to control mpg321 from the Desktop Manager
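For example (a sketch; the alias names are made up):
alias mpg-pause='pkill -SIGSTOP mpg321'
alias mpg-resume='pkill -SIGCONT mpg321'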