
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.
Terminal - Commands tagged proc - 18 results
grep VmHWM /proc/$(pgrep -d '/status /proc/' FOO)/status
2014-11-05 15:06:29
User: michelsberg
Functions: grep
2

Show the maximum amount of memory that was needed by a process at any time. My use case: Having a long-running computation job on $BIG_COMPUTER and judging whether it will also run on $SMALL_COMPUTER.

http://man7.org/linux/man-pages/man5/proc.5.html

VmHWM: Peak resident set size ("high water mark")
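For a quick self-contained check (assuming a Linux /proc; FOO above is a placeholder process name), the same field can be read for the current shell itself:

```shell
# Peak resident set size of the current shell itself (Linux /proc assumed)
grep VmHWM "/proc/$$/status"
```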

sed -e 's/ .*//' -e 's/\.//' -e 's/^0*//' /proc/loadavg
2014-04-18 19:12:05
User: flatcap
Functions: sed
5

Show the current load of the CPU as a percentage.

Read the load from /proc/loadavg and convert it using sed:

Strip everything after the first whitespace:

sed -e 's/ .*//'

Delete the decimal point:

sed -e 's/\.//'

Remove leading zeroes:

sed -e 's/^0*//'
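A worked example of the three steps composed, using a hypothetical loadavg line: the first expression keeps the 1-minute average, the second drops the dot, the third strips leading zeroes.

```shell
# Hypothetical /proc/loadavg content; a 1-minute load of 0.75 becomes 75 (%)
line='0.75 0.60 0.45 2/345 12345'
printf '%s\n' "$line" | sed -e 's/ .*//' -e 's/\.//' -e 's/^0*//'   # prints 75
```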
watch -d "ls -l /proc/$!/fd"
2014-01-31 23:51:17
User: flatcap
Functions: watch
1

You're running a program that reads LOTS of files and takes a long time.

But it doesn't tell you about its progress.

First, run a command in the background, e.g.

find /usr/share/doc -type f -exec cat {} + > output_file.txt

Then run the watch command.

"watch -d" highlights the changes as they happen

In bash: $! is the process id (pid) of the last command run in the background.

You can change this to $(pidof my_command) to watch something in particular.
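Since watch itself is interactive, here is just the $! mechanics in isolation, sketched with a throwaway background job (Linux /proc assumed):

```shell
# Start a disposable background job and capture its pid with $!
sleep 5 &
pid=$!
ls "/proc/$pid/fd"    # the job's open file descriptors (Linux /proc assumed)
kill "$pid"
```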

echo 0$(awk '/Pss/ {printf "+"$2}' /proc/$PID/smaps)|bc
2013-09-26 18:20:22
User: atoponce
Functions: awk echo
Tags: Linux awk echo bc proc
5

The "proportional set size" is probably the closest representation of how much active memory a process is using in the Linux virtual memory stack. This number should also closely represent the %mem found in ps(1), htop(1), and other utilities.

ls -l /proc/*/fd/* | grep 'deleted' | grep '/proc.*file-name-part'
2012-09-13 09:54:16
User: totti
Functions: grep ls
0

Accidentally deleted a file that is still in use by a program? (E.g. a song.)

Use this command to find the file handle and recover using

cp /proc/pid/fd/filehandle /new/recovered-file.ext
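The whole rescue can be rehearsed safely with a throwaway file (Linux only; every path below is illustrative):

```shell
# Simulate the accident: open a file, delete it, then pull the data
# back out through /proc/<pid>/fd
echo "precious data" > /tmp/song.txt
exec 3< /tmp/song.txt                # keep a file descriptor open on it
rm /tmp/song.txt                     # "accidentally" delete it
ls -l "/proc/$$/fd" | grep deleted   # fd 3 still points at the data
cp "/proc/$$/fd/3" /tmp/recovered.txt
exec 3<&-
cat /tmp/recovered.txt
```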
r="readlink /proc/`pgrep -o mplayer`/fd/3";while [ -e "`$r`" ];do if [ "$f" = "`$r`" ];then sleep 1;else f="`$r`";notify-send " $f";fi;done
2012-02-26 06:49:16
User: aix
Functions: sleep
0

Will finish automagically when mplayer quits. Can be run from any directory.

Occasionally it seems to finish by itself, probably because of some timing issue; there is probably a way around that.

PID=`pgrep -o <process_name>`; grep -A 2 heap /proc/$PID/smaps
PID=`ps | grep process_name | grep -v grep | head -n 1 | awk '{print $1}'`; cat /proc/$PID/smaps | grep heap -A 2
grep "cpu " /proc/stat | awk -F ' ' '{total = $2 + $3 + $4 + $5} END {print "idle \t used\n" $5*100/total "% " $2*100/total "%"}'
2012-01-21 04:12:50
User: Goez
Functions: awk grep
0

This command displays the CPU idle and used time as percentages, computed from the cumulative counters in /proc/stat (so it reflects averages since boot).
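Because the counters are cumulative, sampling the cpu line twice and differencing gives usage over the interval instead of since boot. A sketch using only the first four fields, like the command above:

```shell
# Two samples of /proc/stat's cpu line, one second apart (Linux only);
# the difference in ticks gives current usage over the interval
read -r _ u1 n1 s1 i1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _ < /proc/stat
total=$(( (u2 + n2 + s2 + i2) - (u1 + n1 + s1 + i1) ))
idle=$(( i2 - i1 ))
echo "used: $(( 100 * (total - idle) / total ))%  idle: $(( 100 * idle / total ))%"
```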

ps -ef --sort=-%cpu
grep ^Dirty /proc/meminfo
2011-08-24 08:48:49
User: h3xx
Functions: grep
30

Ever ask yourself "How much data would be lost if I pressed the reset button?"

Scary, isn't it?

echo $(( $(ulimit -u) - $(pgrep -u $USER | wc -l) ))
zgrep CONFIG_MAGIC_SYSRQ /proc/config.gz
grep '^MemFree:' /proc/meminfo | awk '{ mem=($2)/(1024) ; printf "%0.0f MB\n", mem }'
2010-06-30 18:33:29
User: dbbolton
Functions: awk grep printf
4

This will show the amount of physical RAM that is left unused by the system.
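The grep step is not strictly needed; awk can match and convert kB to MB in one pass. Sketched as a hypothetical helper so the meminfo-format file is a parameter:

```shell
# Hypothetical helper: print the MemFree line of a meminfo-format file in MB
memfree_mb() {
    awk '/^MemFree:/ { printf "%0.0f MB\n", $2/1024 }' "$1"
}
# e.g. memfree_mb /proc/meminfo
```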

echo $(( `ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l` ))
2010-03-12 08:42:49
User: AskApache
Functions: echo wc
1

There is a limit to how many processes each user can run at the same time, especially with web hosts. If the maximum number of processes for your user is 200 and almost none are running, the following sets OPTIMUM_P to roughly 100.

OPTIMUM_P=$(( (`ulimit -u` - `find /proc -maxdepth 1 \( -user $USER -o -group $GROUPNAME \) -type d|wc -l`) / 2 ))

This is very useful in scripts because it is a fast, low-overhead way (compared to ps, who, lsof, etc.) to determine how many processes are currently running for a given user. The number of currently running processes is subtracted from the high limit set up for the account (see limits.conf, pam, initscript).

An easy-to-understand example: this searches the current directory for shell scripts, and runs up to 100 'file' commands at the same time, greatly speeding up the task.

find . -type f | xargs -P $OPTIMUM_P -iFNAME file FNAME | sed -n '/shell script text/p'

I am using it in my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html especially for the xargs command. xargs has a -P option that lets you specify how many processes to run at the same time. For instance, if you have 1000 URLs in a text file and want to download all of them quickly with curl, you could download 100 at a time (check ps output on a separate [pt]ty for proof) like this:

cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'

I like to do things as fast as possible on my servers. I have several types of servers and hosting environments, some with very restrictive jail shells with a 20-process limit, some with 200, some with 8000. For the jailed shells, xargs -P10 would kill my shell or dump core; using the above I can set the -P value dynamically, so xargs always works, like this:

cat url-list.txt | xargs -I '{}' -P $OPTIMUM_P curl -O '{}'

If you were building a process-killer (very common for cheap hosting) this would also be handy.

Note that if you are only allowed 20 or so processes, you should just use -P1 with xargs.
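The headroom arithmetic can be wrapped in a small hypothetical helper, with a floor of 1 for the very restrictive shells mentioned above (the limit and running count are passed in as arguments):

```shell
# optimum_p LIMIT RUNNING -> half the remaining process headroom, at least 1
optimum_p() {
    p=$(( ($1 - $2) / 2 ))
    [ "$p" -lt 1 ] && p=1
    echo "$p"
}
# e.g. xargs -P "$(optimum_p "$(ulimit -u)" "$(pgrep -u "$USER" | wc -l)")" ...
```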

sleeper(){ while `ps -p $1 &>/dev/null`; do echo -n "${2:-.}"; sleep ${3:-1}; done; }; export -f sleeper
12

Very useful in shell scripts because you can run a task nicely in the background using job-control and output progress until it completes.

Here's an example of how I use it in backup scripts to run gpg in the background to encrypt an archive file (which I create the same way). $! is the process ID of the last command run in the background; here it is saved as the variable PI. sleeper is then called with the process ID of the gpg task (PI), and told to output : instead of the default . and to poll every 3 seconds instead of every 1. So a shorter version would be sleeper $!;

The wait is also used here, though it may not be needed on your system.

>> ENCRYPTING">
echo ">>> ENCRYPTING SQL BACKUP"; gpg --output archive.tgz.asc --encrypt archive.tgz 1>/dev/null & PI=$!; sleeper $PI ":" 3; wait $PI && rm archive.tgz &>/dev/null

Previously, to get around $! not always being available, I would instead check for the existence of the process ID by checking whether the directory /proc/$PID existed, but not everyone uses proc anymore. That version is currently the one at http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html but I plan on upgrading to this new version soon.
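For reference, a sketch of the same sleeper without the backtick trick — testing ps directly is a bit cleaner and behaves the same (long_task below is a placeholder):

```shell
# Poll until the given pid exits, printing a progress character each interval
sleeper() {
    while ps -p "$1" > /dev/null 2>&1; do
        printf '%s' "${2:-.}"
        sleep "${3:-1}"
    done
}
# e.g.  long_task & sleeper $! ':' 3; wait
```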

echo 1 > /proc/sys/kernel/sysrq; echo b > /proc/sysrq-trigger
2009-07-31 19:07:40
User: tiagocruz
Functions: echo
Tags: proc
2

Useful when something is wrong on a server (NFS freeze / immortal process). Note that writing b to sysrq-trigger reboots the machine immediately, without syncing or unmounting filesystems.

find -L /proc/*/fd -links 0 2>/dev/null
2009-06-26 18:42:51
User: res0nat0r
Functions: find
Tags: du proc df
12

Did an Oracle DBA remove some logfiles that are still open by the database, and is he complaining the space has not been reclaimed? Use the above command to find out which PID needs to be stopped. Or alternatively recover the file via:

cp /proc/pid/fd/filehandle /new/file.txt