
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…):


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.
Terminal - All commands - 11,930 results
echo alias grep=\'grep --color=auto\' >> ~/.bashrc ; . ~/.bashrc
2009-07-05 07:44:13
User: 0x2142
Functions: alias echo
Tags: color grep
7

This will create a permanent alias that colorizes the matched pattern in your grep output.
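
For example (assuming your ~/.bashrc contains a line with the word "alias"), the match should now show up highlighted:

grep alias ~/.bashrc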

rsync -rtvu --modify-window=1 --progress /media/SOURCE/ /media/TARGET/
2009-07-05 07:40:10
User: 0x2142
Functions: rsync
Tags: backup rsync
12

This will back up the _contents_ of /media/SOURCE to /media/TARGET, where TARGET is formatted with NTFS. The --modify-window option lets rsync tolerate the less accurate timestamps of NTFS.
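
To preview what would be copied before committing, the same command can be run as a dry run by adding -n (paths are just the example ones from above):

rsync -rtvun --modify-window=1 --progress /media/SOURCE/ /media/TARGET/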

echo 2+3 |bc
2009-07-04 22:03:41
User: Kaio
Functions: echo
0

Handy use of bc on the command line. No need to drop into bc interactively to perform calculations.
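
For non-integer results, set the scale first, e.g.:

echo 'scale=4; 10/3' | bc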

FEATURES=keepwork emerge --resume
2009-07-04 21:09:27
2

For Gentoo:

If you do not use this command, portage will fetch the source again and rebuild the whole application from scratch.

This command makes portage keep all files that are already built.
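
If you want this behaviour permanently, FEATURES can also be set in /etc/portage/make.conf (assuming a standard portage setup):

FEATURES="keepwork"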

pg_dump otrs2 | gzip > dump.gz
ps awwfux | less -S
2009-07-04 09:39:28
User: ToyKeeper
Functions: less ps
39

If you want a visual representation of the parent/child relationships between processes, this is one easy way to do it. It's useful in debugging collections of shell scripts, because it provides something like a call traceback.

When a shell script breaks, just remember "awwfux".
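
A similar tree view is available from pstree (assuming the psmisc package is installed), with PIDs and full arguments:

pstree -pa | less -S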

/usr/proc/bin/pfiles $PID | egrep "sockname|port"
pkgchk -l -p <full path to the file>
2009-07-04 08:22:11
User: sengork
Tags: solaris
0

Find which package a file belongs to on Solaris, along with its packaging-system metadata.
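
For example, to look up an arbitrary file (the path here is just an illustration):

pkgchk -l -p /usr/bin/ls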

cat /var/log/auth.log | logtool -o HTML > auth.html
2009-07-03 18:17:22
Functions: cat
3

Logtool is a nice tool that can export log files to various formats, but its strength lies in its ability to colorize logs. This command takes a log as input, colorizes it, then exports it to an HTML file for a more comfortable view. Logtool is part of the logtool package. Tested on Debian.
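
The same approach works for any other plain-text log, e.g. (file name is just an example):

cat /var/log/syslog | logtool -o HTML > syslog.html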

vim -c new myfile
2009-07-03 17:54:43
Functions: vim
1

Vim's :new command splits the screen into two separate windows; each window can handle its own buffer.

Passing the -c new option when Vim starts causes it to split the screen automatically.
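
A vertical split works the same way, using Vim's :vnew command instead:

vim -c vnew myfile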

nload -u m eth0
2009-07-03 17:47:38
6

Nload is part of the nload package (tested under Debian). Nload displays network bandwidth statistics; the -u m option sets the unit of measure to MBit.
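
Other units follow the same pattern, for instance k for kBit (see the nload man page for the full list):

nload -u k eth0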

for file in `find /var/log/ -type f -size +5000k`; do > $file; done
2009-07-03 17:38:21
User: svg
Functions: file
0

You don't need echo; a plain redirect is enough to empty the file.
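
An equivalent that also copes with spaces in file names is to let find call truncate directly (truncate is part of GNU coreutils):

find /var/log/ -type f -size +5000k -exec truncate -s 0 {} \;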

for file in `find /var/log/ -type f -size +5000k`; do echo " " > $file; done
2009-07-03 16:26:36
User: jemmille
Functions: echo file
0

Empties all files in /var/log over 5000k. Useful if /var goes crazy or if you just haven't cleaned up in a while.

yes "$(seq 232 255;seq 254 -1 233)" | while read i; do printf "\x1b[48;5;${i}m\n"; sleep .01; done
sudo ipfw pipe 1 config bw 50KByte/s;sudo ipfw add 1 pipe 1 src-port 80
2009-07-02 23:17:49
User: miccaman
Functions: sudo
Tags: Os X pipe
1

sudo ipfw pipe 1 config bw 50KByte/s

Set the bandwidth (bw) limit to any number you want. For example you could have a 15kb pipe for X application and then a 100kb pipe for another application and attach things to those pipes. If a port isn’t attached to a pipe, it runs at full speed. Change the number (in this case 1) to a different number for a different pipe.

The next step is to attach your port.

sudo ipfw add 1 pipe 1 src-port 80

In this case anything on port 80 (http) will be set to a limit of 50Kbyte/s. If you want to attach a second port to this pipe, repeat the command but change the port number at the end.

src : http://www.mactricksandtips.com/2008/12/throttling-bandwidth-on-a-mac.html
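
To check or undo the throttle later, the rule and the pipe can be listed and removed by number (1 in this example):

sudo ipfw list

sudo ipfw delete 1

sudo ipfw pipe 1 delete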

dump -0 -M -B 4000000 -f /media/My\ Passport/Fedora10bckup/root_dump_fedora -z2 /
2009-07-02 20:25:22
User: luqmanux
Functions: dump
Tags: backup
2

This will compress the root directory to an external hard drive and split it into parts once it reaches the 4 GB file-size limit of the file system.

You can simply restore it with:

restore ivf /media/My\ Passport/Fedora10bckup/root_dump_fedora
netstat -ntauple
function duf { du -sk "$@" | sort -n | while read size fname; do for unit in k M G T P E Z Y; do if [ $size -lt 1024 ]; then echo -e "${size}${unit}\t${fname}"; break; fi; size=$((size/1024)); done; done; }
tar -cj /backup | cstream -t 777k | ssh host 'tar -xj -C /backup'
2009-07-02 10:05:53
User: wires
Functions: host ssh tar
24

This bzips a folder and transfers it over the network to "host" at 777k bit/s.

cstream can do a lot more; have a look at http://www.cons.org/cracauer/cstream.html#usage

for example:

echo w00t, i'm 733+ | cstream -b1 -t2

hehe :)

LC_ALL=C tr -c "[:digit:]" " " < /dev/urandom | dd cbs=$COLUMNS conv=unblock | GREP_COLOR="1;32" grep --color "[^ ]"
2009-07-02 07:10:33
User: zzambia
Functions: dd grep tr
Tags: color
6

Solves "tr" issues with non C-locales under BSD-like systems (like OS X)

wget -r --wait=5 --quota=5000m --tries=3 --directory-prefix=/home/erin/Documents/erins_webpages --limit-rate=20k --level=1 -k -p -erobots=off -np -N --exclude-domains=del.icio.us,doubleclick.net -F -i ./delicious-20090629.htm
2009-07-02 01:46:21
User: bbelt16ag
Functions: wget
-1

Just an alternative using a saved HTML file of all of my bookmarks. Works well, although it takes a while.

system_profiler SPPowerDataType | egrep -e "Connected|Charge remaining|Full charge capacity|Condition" | sed -e 's/^[ \t]*//'
du -ms * .[^.]*| sort -nk1
2009-07-01 13:38:13
User: ioggstream
Functions: du sort
3

Using MB it's still readable ;) A variation using brace expansion:

$ du -ms {,.[^.]}* | sort -nk1

kpartx -a /dev/mapper/space-foobar
2009-07-01 09:53:33
User: flart
1

kpartx can be found in the multipath-tools package.

-a adds the mappings and -d deletes them
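
To list the partition mappings without creating them, or to remove them again:

kpartx -l /dev/mapper/space-foobar

kpartx -d /dev/mapper/space-foobar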

lsof -i
2009-06-30 22:36:00
13

I prefer to use this and not the -n variety, so I get DNS-resolved hostnames. Nice when I'm trying to figure out who's got that port open.
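
To narrow the output to a single port, append the port number, e.g. for http:

lsof -i :80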