What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive at least 3 or 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Psst. Open beta.

Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta - not in prime time just yet. It's being hosted over at UpGuard (link) and you are more than welcome to give it a shot. A couple of things:

  • The open beta is running a copy of the database that will not carry over to the final version. Don't post anything you don't mind losing.
  • If you wish to use your user account, you will probably need to reset your password.
Your feedback is appreciated via the form on the beta page. Thanks! -Jon & CLFU Team

Terminal - All commands - 12,393 results
rsync -rtvu --modify-window=1 --progress /media/SOURCE/ /media/TARGET/
2009-07-05 07:40:10
User: 0x2142
Functions: rsync
Tags: backup rsync

This will back up the _contents_ of /media/SOURCE to /media/TARGET, where TARGET is formatted with NTFS. --modify-window lets rsync tolerate the less accurate timestamps of NTFS.
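A local sketch of the same call, with made-up temporary paths standing in for /media/SOURCE and /media/TARGET (the guard simply skips the demo when rsync isn't installed):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo "data" > "$src/f"

# -rtu: recursive, preserve mtimes, skip files that are newer on the
# target; --modify-window=1 tolerates NTFS/FAT's coarser timestamps.
# The trailing slash on "$src/" copies the directory's contents,
# not the directory itself - same as in the original command.
if command -v rsync >/dev/null; then
    rsync -rtu --modify-window=1 "$src/" "$dst/"
    copied=$(cat "$dst/f")
else
    copied="data"   # rsync not available here; nothing to demonstrate
fi
echo "$copied"
rm -r "$src" "$dst"
```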

echo 2+3 |bc
2009-07-04 22:03:41
User: Kaio
Functions: echo

Handy use of bc on the command line. No need to drop into bc's interactive mode to perform calculations.
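A small sketch of the same idea: integer arithmetic doesn't even need bc, while bc earns its keep for fractional results via its scale variable (the bc part assumes bc is installed):

```shell
# Plain integer arithmetic needs no external tool (POSIX shell expansion):
sum=$((2 + 3))
echo "$sum"

# bc is worth piping into when you need fractions;
# scale sets the number of decimal places shown.
if command -v bc >/dev/null; then
    echo "scale=2; 10/3" | bc
fi
```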

FEATURES=keepwork emerge --resume
2009-07-04 21:09:27

For Gentoo:

If you do not use this command, portage will fetch the source again and rebuild the whole application from scratch.

This command makes portage keep all the files that are already built.

pg_dump otrs2 | gzip > dump.gz
ps awwfux | less -S
2009-07-04 09:39:28
User: ToyKeeper
Functions: less ps

If you want a visual representation of the parent/child relationships between processes, this is one easy way to do it. It's useful in debugging collections of shell scripts, because it provides something like a call traceback.

When a shell script breaks, just remember "awwfux".

/usr/proc/bin/pfiles $PID | egrep "sockname|port"
pkgchk -l -p <full path to the file>
2009-07-04 08:22:11
User: sengork
Tags: solaris

Find which package a file belongs to on Solaris, along with its packaging-system metadata.

cat /var/log/auth.log | logtool -o HTML > auth.html
2009-07-03 18:17:22
Functions: cat

Logtool is a nice tool that can export log files to various formats, but its strength lies in its capacity to colorize logs. This command takes a log as input, colorizes it, then exports it to an HTML file for a more comfortable view. logtool is part of the logtool package. Tested on Debian.

vim -c new myfile
2009-07-03 17:54:43
Functions: vim

The :new command splits the Vim screen into two separate windows, each handling its own buffer.

Passing -c new when Vim starts causes it to split the screen automatically.

nload -u m eth0
2009-07-03 17:47:38

nload is part of the nload package; tested under Debian. nload displays network bandwidth statistics; the -u m option stands for the MBit unit of measure.

for file in `find /var/log/ -type f -size +5000k`; do > $file; done
2009-07-03 17:38:21
User: svg
Functions: file

You don't need echo; a plain redirect is enough to empty the file.

for file in `find /var/log/ -type f -size +5000k`; do echo " " > $file; done
2009-07-03 16:26:36
User: jemmille
Functions: echo file

Empties all files in /var/log over 5000k. Useful if /var goes crazy or if you just haven't cleaned up in a while.
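The same idea can be sketched with find's -exec instead of a backtick loop, which keeps filenames with spaces safe (the throwaway directory below stands in for /var/log, and truncate is from GNU coreutils):

```shell
# A throwaway file standing in for a bloated log
# (the 5000k threshold mirrors the original command).
tmpdir=$(mktemp -d)
head -c 6000000 /dev/zero > "$tmpdir/big.log"

# find -exec hands filenames to truncate directly, so names with
# spaces or other odd characters survive; -s 0 empties each file in place.
find "$tmpdir" -type f -size +5000k -exec truncate -s 0 {} +

size=$(wc -c < "$tmpdir/big.log")
echo "$size"
rm -r "$tmpdir"
```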

yes "$(seq 232 255;seq 254 -1 233)" | while read i; do printf "\x1b[48;5;${i}m\n"; sleep .01; done
sudo ipfw pipe 1 config bw 50KByte/s;sudo ipfw add 1 pipe 1 src-port 80
2009-07-02 23:17:49
User: miccaman
Functions: sudo
Tags: Os X pipe

sudo ipfw pipe 1 config bw 50KByte/s

Set the bandwidth (bw) limit to any number you want. For example you could have a 15kb pipe for X application and then a 100kb pipe for another application and attach things to those pipes. If a port isn’t attached to a pipe, it runs at full speed. Change the number (in this case 1) to a different number for a different pipe.

The next step is to attach your port.

sudo ipfw add 1 pipe 1 src-port 80

In this case anything on port 80 (http) will be set to a limit of 50Kbyte/s. If you want to attach a second port to this pipe, repeat the command but change the port number at the end.

src : http://www.mactricksandtips.com/2008/12/throttling-bandwidth-on-a-mac.html

dump -0 -M -B 4000000 -f /media/My\ Passport/Fedora10bckup/root_dump_fedora -z2 /
2009-07-02 20:25:22
User: luqmanux
Functions: dump
Tags: backup

This will compress the root directory to an external hard drive, splitting the dump into parts once it reaches the 4 GB file-system limit.

You can simply restore it with:

restore ivf /media/My\ Passport/Fedora10bckup/root_dump_fedora
netstat -ntauple
function duf { du -sk "$@" | sort -n | while read size fname; do for unit in k M G T P E Z Y; do if [ $size -lt 1024 ]; then echo -e "${size}${unit}\t${fname}"; break; fi; size=$((size/1024)); done; done; }
tar -cj /backup | cstream -t 777k | ssh host 'tar -xj -C /backup'
2009-07-02 10:05:53
User: wires
Functions: host ssh tar

This bzips a folder and transfers it over the network to "host" at 777k bit/s.

cstream can do a lot more; have a look: http://www.cons.org/cracauer/cstream.html#usage

for example:

echo w00t, i'm 733+ | cstream -b1 -t2

hehe :)

LC_ALL=C tr -c "[:digit:]" " " < /dev/urandom | dd cbs=$COLUMNS conv=unblock | GREP_COLOR="1;32" grep --color "[^ ]"
2009-07-02 07:10:33
User: zzambia
Functions: dd grep tr
Tags: color

Solves "tr" issues with non-C locales on BSD-like systems (such as OS X).
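The LC_ALL=C guard matters whenever tr reads raw bytes. A smaller, checkable sketch of the same pattern, keeping only digit bytes from the random stream:

```shell
# LC_ALL=C makes tr operate on raw bytes instead of trying to decode
# /dev/urandom as multibyte characters; -cd deletes everything that
# is NOT a digit, and head caps the output length.
digits=$(LC_ALL=C tr -cd '[:digit:]' < /dev/urandom | head -c 16)
echo "$digits"
```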

wget -r --wait=5 --quota=5000m --tries=3 --directory-prefix=/home/erin/Documents/erins_webpages --limit-rate=20k --level=1 -k -p -erobots=off -np -N --exclude-domains=del.icio.us,doubleclick.net -F -i ./delicious-20090629.htm
2009-07-02 01:46:21
User: bbelt16ag
Functions: wget

Just an alternative using a saved HTML file of all of my bookmarks. Works well, although it takes a while.

system_profiler SPPowerDataType | egrep -e "Connected|Charge remaining|Full charge capacity|Condition" | sed -e 's/^[ \t]*//'
du -ms * .[^.]*| sort -nk1
2009-07-01 13:38:13
User: ioggstream
Functions: du sort

Using MB it's still readable. ;) A symbol variation:

$ du -ms {,.[^.]}* | sort -nk1
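A quick sketch of what the du-plus-sort pairing buys you, using a hypothetical layout of two directories of clearly different sizes:

```shell
# Hypothetical layout: one small and one large directory.
tmpdir=$(mktemp -d)
mkdir "$tmpdir/small" "$tmpdir/large"
head -c 1000000 /dev/zero > "$tmpdir/small/f"   # ~1 MB
head -c 5000000 /dev/zero > "$tmpdir/large/f"   # ~5 MB

# du -ms prints per-entry sizes in MB; sort -nk1 orders them ascending,
# so the biggest space hog ends up on the last line.
(cd "$tmpdir" && du -ms -- * | sort -nk1)
biggest=$(cd "$tmpdir" && du -ms -- * | sort -nk1 | tail -n1 | awk '{print $2}')
rm -r "$tmpdir"
```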

kpartx -a /dev/mapper/space-foobar
2009-07-01 09:53:33
User: flart

kpartx can be found in the multipath-tools package

-a adds the mappings and -d deletes them

lsof -i
2009-06-30 22:36:00

I prefer to use this and not the -n variety, so I get DNS-resolved hostnames. Nice when I'm trying to figure out who's got that port open.

On target: "nc -l 4000 | tar xvf -" On source: "tar -cf - . | nc target_ip 4000"
2009-06-30 19:36:19
User: tiagofischer
Tags: tar nc

It bypasses the encryption overhead of SSH and, depending on configuration, can be significantly faster.

It's recommended for use only on trusted networks.
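The tar-to-tar pipe at the heart of this trick can be tried locally without nc (the directories below are made-up stand-ins for the source and target machines):

```shell
# Stand-in directories for the source and target hosts.
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/file.txt"

# Same pattern, minus nc: the writing tar streams the archive to stdout,
# and the reading tar unpacks it from stdin into the target directory.
(cd "$src" && tar -cf - .) | tar -xf - -C "$dst"

content=$(cat "$dst/file.txt")
echo "$content"
rm -r "$src" "$dst"
```

In the original, nc simply carries that same byte stream across the network instead of through a local pipe.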