What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that get a minimum of 3 and of 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the three Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).
May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Terminal - All commands - 12,301 results
qemu-img create ubuntu.qcow 10G
for i in *.txt; do tar -c -v -z -f "$i.tar.gz" "$i" && rm -v "$i"; done
qemu -cdrom /dev/cdrom -hda ubuntu.qcow -boot d -net nic -net user -m 196 -localtime
2011-10-15 09:21:49
User: anhpht

Boot without CD-ROM:

qemu fedora.qcow -boot c -net nic -net user -m 196 -localtime

echo "Set Twitter Status" ; read STATUS; curl -u user:pass -d status="$STATUS" http://twitter.com/statuses/update.xml
2009-02-16 14:34:05
User: ronz0
Functions: echo read

Modify the command with your username and password, save it as a script (e.g. ./tweet), then run it and enjoy.

LC_ALL=C svn info | grep Revision | awk '{print $2}'
2009-02-16 14:53:52
Functions: awk grep info

This prints just the revision number on stdout, which can be fed to any useful/fun script of yours. Setting LC_ALL is useful if you use another locale, in which case "Revision" is translated and would not be found. I use this with doxygen to insert my source files' revisions into the doc. An example in Doxyfile:

FILE_VERSION_FILTER = "function svn_filter { LC_ALL=C svn info $1 | grep Revision | awk '{print $2}'; }; svn_filter"

Share your ideas about what to do with the revision number!
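As a quick sanity check, the extraction step can be exercised on canned `svn info` output (the path, URL, and revision value below are made up):

```shell
# Canned `svn info` output (values are made up); grep/awk pull out the revision.
printf 'Path: .\nURL: http://example.com/repo\nRevision: 4242\n' |
  grep Revision | awk '{print $2}'
# prints 4242
```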

killall rapidly_spawning_process ; killall rapidly_spawning_process ; killall rapidly_spawning_process
2010-05-20 00:26:10
Functions: killall
Tags: Linux unix kill

Use this if you can't type repeated killall commands fast enough to kill rapidly spawning processes.

If a process keeps spawning copies of itself too rapidly, it can do so faster than a single killall can catch and kill them. Retyping the command at the prompt can be too slow as well, even with command-history retrieval.

Chaining a few killalls on a single command line starts each subsequent killall more quickly. The first killall gets most of the processes, except for some that were starting up in the meantime; the second gets most of the rest, and the third mops up.

grep -o "\(new \(\w\+\)\|\w\+::\)" file.php | sed 's/new \|:://' | sort | uniq -c | sort
2009-01-26 12:08:47
User: root
Functions: grep sed sort uniq

This grabs all lines that make an instantiation or static call, filters out the cruft, and displays a summary of each class called and the frequency.
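To see what the pipeline produces, here is a run against a tiny made-up PHP sample (the file name and class names are hypothetical):

```shell
# Hypothetical sample: two instantiations of Foo, one static call on Bar.
printf 'new Foo();\nBar::baz();\nnew Foo();\n' > /tmp/sample.php
grep -o "\(new \(\w\+\)\|\w\+::\)" /tmp/sample.php | sed 's/new \|:://' | sort | uniq -c | sort
# last line of output:  2 Foo
```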

find -type l -xtype l
df | grep -w '/media/mountpoint' | cut -d " " -f 1
2011-01-21 05:38:05
User: ntropia
Functions: cut df grep

Identical output but a different way without having to shoot with the Awk cannon :)
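For comparison, the awk version this sidesteps would presumably look something like:

```shell
# Same output via awk alone: match the mountpoint, print the device column.
df | awk '/\/media\/mountpoint/ {print $1}'
```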

takeown.exe /F "FILE_or_DIR" /A /R /D O
egrep '(\[error\])+.*(PHP)+' /var/log/apache2/error.log
pwgen 30
df | grep -w '/media/armadillo' | cut -d " " -f 1
S=$SSH_TTY && (sleep 3 && echo -n 'Peace... '>$S & ) && (sleep 5 && echo -n 'Love... '>$S & ) && (sleep 7 && echo 'and Intergalactic Happiness!'>$S & )
2009-08-19 07:57:16
User: AskApache
Functions: echo sleep

Ummmm.. saw that gem on some dead-head hippie's VW bus at Phish this summer. It's actually one of my favorite ways of using bash, very clean. It shows what you can do with the cool advanced features like job control, redirection, and combining commands that don't wait for each other. The thing I like most is the use of ( ) to create the process hierarchy below, which comes in very handy when using FIFOs to optimize your scripts, or for commands with similar acrobatics.


1 gplovr 30667     1 wait 1324 1 -bash
0 gplovr 30672 30667 -     516 3  \_ sleep 3
1 gplovr 30669     1 wait 1324 1 -bash
0 gplovr 30673 30669 -     516 0  \_ sleep 5
1 gplovr 30671     1 wait 1324 1 -bash
0 gplovr 30674 30671 -     516 1  \_ sleep 7

grep q= /var/log/httpd/access_log | awk '{print $11}' | awk -F 'q=' '{print $2}' | sed 's/+/ /g;s/%22/"/g;s/q=//' | cut -d "&" -f 1 | mail [email protected] -s "[your-site] search strings for `date`"
2009-11-22 03:03:06
User: isma
Functions: awk cat grep sed strings

It's not a big pipeline, and it *may not* work for everybody; I guess it depends on the details of the access_log configuration in your httpd.conf. I use it as a prerotate command for logrotate in the httpd section, so it executes before access_log rotation, every day at midnight.
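The extraction steps (everything before the mail) can be tried on a single canned combined-log line; the sample line below is made up:

```shell
# Made-up combined-log line whose referer (field 11) carries a Google query string.
echo '127.0.0.1 - - [22/Nov/2009:03:03:06 +0000] "GET / HTTP/1.1" 200 1234 "http://www.google.com/search?q=foo+bar&ie=utf8" "Mozilla"' |
  grep q= | awk '{print $11}' | awk -F 'q=' '{print $2}' |
  sed 's/+/ /g;s/%22/"/g;s/q=//' | cut -d "&" -f 1
# prints: foo bar
```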

find . -type f -printf %s\\n | numsum
2011-06-27 12:39:16
User: Strawp
Functions: find
Tags: find numsum

To get the size in GB, pipe the output into:

| sed "s/$/\/(1024\*1024\*1024)/" | bc
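numsum comes from the num-utils package; if it isn't installed, awk can do the summing just as well:

```shell
# Sum file sizes without numsum; awk accumulates the byte counts.
find . -type f -printf '%s\n' | awk '{s += $1} END {print s}'
```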

for i in $(seq 1 11) 13 14 15 16; do man iso-8859-$i; done
2009-03-31 19:40:15
User: penpen
Functions: man seq
Tags: Linux unix

Depending on the installation, only some of these man pages are installed. 12 is left out on purpose, because ISO/IEC 8859-12 does not exist. To also access the man pages that are not installed, use opera (or any other browser that supports all the character sets involved) to display the online versions hosted at kernel.org:

for i in $(seq 1 11) 13 14 15 16; do opera http://www.kernel.org/doc/man-pages/online/pages/man7/iso_8859-$i.7.html; done
for file in /usr/bin/*; do man -w "${file##*/}" 2>> nomanlist.txt >/dev/null; done
2010-07-26 19:39:53
User: camocrazed
Functions: file ls man
Tags: man

This takes quite a while on my system. You may want to test it out with /bin first, or background it and keep working.

If you want to get rid of the "No manual entry for [whatever]" and just have the [whatever], use the following sed command after this one finishes.

sed -n 's/^No manual entry for \(.*\)/\1/p' nomanlist.txt
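For example, on two made-up lines of nomanlist.txt, the sed keeps only the trailing names:

```shell
# Made-up "No manual entry" lines; sed prints just the command name from each.
printf 'No manual entry for frobnicate\nNo manual entry for zorp\n' |
  sed -n 's/^No manual entry for \(.*\)/\1/p'
# prints:
# frobnicate
# zorp
```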
sed "s/\([a-zA-Z]*\:\/\/[^,]*\),\(.*\)/\<a href=\"\1\"\>\2\<\/a\>/"
2012-01-06 13:55:05
User: chrismccoy
Functions: sed
Tags: sed html link

An extension of command 9986 by c3w that allows for link text.

http://google.com,search engine

This turns the URL into a hyperlink whose link text is the text after the comma, instead of using the URL itself as the link text.
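Feeding that example line through the sed shows the resulting anchor tag:

```shell
# URL before the comma becomes the href; text after it becomes the link text.
echo 'http://google.com,search engine' |
  sed "s/\([a-zA-Z]*\:\/\/[^,]*\),\(.*\)/\<a href=\"\1\"\>\2\<\/a\>/"
# prints: <a href="http://google.com">search engine</a>
```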

pgrep -lf
find . -name "*.php" -print0 | xargs -0 grep -i "search phrase"
2010-07-27 20:52:37
User: randy909
Functions: find grep xargs

xargs avoids having to remember the "{} \;" syntax (although that's definitely a useful thing to know, I always forget it). The xargs version also runs about 2x faster in my tests, FWIW.

Edit: fixed to handle spaces in filenames correctly.
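If you'd rather stay within find, its `+` terminator (instead of `\;`) batches file names onto as few grep invocations as possible, much like xargs:

```shell
# Same search using find's built-in batching; -exec ... + passes many files per grep.
find . -name "*.php" -exec grep -il "search phrase" {} +
```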

ps -eo pid,args | grep -v grep | grep catalina | awk '{print $1}'
sed -i 's/\t/ /g' yourfile
xe vm-import -h <host ip> -pw <yourpass> filename=./Ubuntu-9.1032bitPV.xva sr-uuid=<your SR UUID>
2010-09-28 18:31:22
User: layer8

Imports a backed-up or exported virtual machine into XenServer.

dumpe2fs -h /dev/sdX
2011-01-22 23:50:03
User: dmmst19
Functions: dumpe2fs

You are probably aware that some percentage of disk space on an ext2/ext3 file system is reserved for root (typically 5%). As documented elsewhere, this can be reduced to 1% with

tune2fs -m 1 /dev/sdX (where X = drive/partition, like /dev/sda1)

but how do you check to see what the existing reserved block percentage actually is before making the change? You can find that with

dumpe2fs -h /dev/sdX

You get a raw block count and a reserved block count, from which you can calculate the percentage. In the example here you can easily see that it's currently 1%, so you won't gain any more available space by setting it to 1% again.
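The percentage can be computed directly from those two fields; here the dumpe2fs output is replaced by canned sample values (the block counts are made up):

```shell
# Canned "dumpe2fs -h" fields (values assumed); awk computes the reserved percentage.
printf 'Block count:              2621440\nReserved block count:     26214\n' |
  awk -F: '/^Block count/ {b=$2} /^Reserved block count/ {r=$2} END {printf "%.1f%%\n", 100*r/b}'
# prints 1.0%
```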

FYI If your disks are IDE instead of SCSI, your filesystems will be /dev/hdX instead of /dev/sdX.