What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username, which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that reach a minimum of 3 or 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Terminal - All commands - 12,230 results
echo `date +%m/%d/%y%X |awk '{print $1;}' `" => "` cat /proc/acpi/thermal_zone/THRM/temperature | awk '{print $2, $3;}'` >> datetmp.log
2009-08-24 21:26:29
User: ninadsp
Functions: awk cat echo

Uses the data in /proc, provided by acpid, to find out the CPU temperature. It can be run on systems without lm-sensors installed as well.
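
On newer kernels the ACPI procfs interface may be gone; a rough equivalent (a sketch assuming the sysfs thermal interface, and that zone 0 is the CPU, which varies by machine) is:

awk '{print $1/1000 " C"}' /sys/class/thermal/thermal_zone0/temp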

find /dir | awk '{print length, $0}' | sort -nr | sed 's/^[[:digit:]]* //' | while read dirfile; do outfile="$(echo "$(basename "$dirfile")" | unaccent UTF-8)"; mv "$dirfile" "$(dirname "$dirfile")/$outfile"; done
2009-08-24 21:24:18
User: Patola
Functions: awk basename find mv read sed sort

This command renames all files and directories within a directory tree to unaccented names. I had to do this to 'sanitize' some samba-exported trees. The reason it works might seem a little difficult to see at first: it first reverse-sorts by pathname length, then renames only the basename of each path. This way it always proceeds in the right order to rename everything.

Some notes:

1. You'll have to have the 'unaccent' command. On Ubuntu, just aptitude install unaccent.

2. In this case, the encoding of the tree was UTF-8 - but you might be using another one, just adjust the command to your encoding.

3. The program might spit a few harmless errors saying the files are the same - not to fear.
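
If the unaccent command isn't available, a similar effect is possible with iconv's ASCII transliteration (a sketch: swap this into the outfile assignment above; the exact transliterations depend on your locale):

outfile="$(basename "$dirfile" | iconv -f UTF-8 -t ASCII//TRANSLIT)"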

tar -cf - /home/user/test | gzip -c | ssh user@sshServer 'cd /tmp; tar xfz -'
2009-08-24 18:35:38
User: esplinter
Functions: gzip ssh tar
Tags: ssh file move

Useful for moving many files (thousands or millions) over ssh. It is faster than scp because you save a lot of TCP connection establishments (SYN/ACK packets).

On a fast LAN (I have only tested gigabit ethernet) it is faster not to compress the data, so the command becomes:

tar -cf - /home/user/test | ssh user@sshServer 'cd /tmp; tar xf -'
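
If rsync is installed on both ends, it achieves much the same thing and can resume interrupted transfers (-a preserves attributes, -z compresses in transit):

rsync -az /home/user/test user@sshServer:/tmp/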

mirror=ftp://somemirror.com/with/alot/versions/but/no/latest/link; latest=$(curl -l $mirror/ 2>/dev/null | grep util | tail -1); wget $mirror/$latest
2009-08-24 15:58:31
User: peshay
Functions: grep tail wget

Downloads the latest version of "util". You may need to insert a sort if the listing isn't already in the right order (see the sketch below).

curl lists all the files on the mirror, grep picks out your util, tail -1 takes the last one listed, and wget fetches it.
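
With GNU sort, -V sorts by version number, so the last entry really is the latest:

latest=$(curl -l $mirror/ 2>/dev/null | grep util | sort -V | tail -1); wget $mirror/$latest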

tar dfz horde-webmail-1.2.3.tar.gz
ssh root@pyramid \ "tcpdump -nn -i eth1 -w -" | snort -c /etc/snort/snort.conf -r -
2009-08-24 14:04:06
User: omish_man
Functions: ssh

I have a small embedded Linux device that I wanted to use for sniffing my external network, but I didn't want to recompile/cross-compile snort for the embedded platform. So I use tcpdump over ssh to pass all the traffic as pcap data to a "normal" Linux system, which then feeds the pcap data to snort for processing.

sniff_host: tcpdump -nn -i eth1 -w - | nc 666
jot -s '' -r -n 8 0 9
2009-08-24 13:35:20
User: Hal_Pomeranz
Tags: random jot rs

You don't need to pipe the output into rs if you just tell jot to use a null separator character.
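
jot is a BSD tool; on GNU systems a rough equivalent (a sketch assuming coreutils shuf with the -r repeat option) is:

shuf -r -n 8 -i 0-9 | tr -d '\n'; echo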

curl -s "http://services.digg.com/stories?link=$NEWSURL&appkey=http://www.whatever.com&type=json" | python -m simplejson.tool | grep diggs
find /backup/directory -name "FILENAME_*" -mtime +15 -exec rm -vf {} \;
mount -t ntfs-3g -o ro,loop,uid=user,gid=group,umask=0007,fmask=0117,offset=0x$(hd -n 1000000 image.vdi | grep "eb 52 90 4e 54 46 53" | cut -c 1-8) image.vdi /mnt/vdi-ntfs
tweet(){ curl -u "$1" -d status="$2" "http://twitter.com/statuses/update.xml"; }
2009-08-23 16:56:24
User: Code_Bleu

Type the command in the terminal and press enter to create the tweet() function. Then run as follows:

tweet MyTwitterAccount "My message goes here"

It will prompt you for your password. Make sure you use the escape character "\" in the message when it contains shell variables or markup.
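
For example, to stop the shell from expanding $HOME inside the double quotes:

tweet MyTwitterAccount "my \$HOME is where the heart is"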

awk '!a[$0]++' file
2009-08-23 15:28:43
User: voyeg3r
Functions: awk

This builds an array 'a' keyed on whole lines and prints each line only on its first occurrence: the pattern !a[$0]++ is true only while the line's count is still zero, and the ++ increments it afterwards.
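
A quick demonstration: the duplicate 'foo' is dropped (output: foo, then bar) while the original order is preserved:

printf 'foo\nbar\nfoo\n' | awk '!a[$0]++'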

mount -t unionfs -o dirs=/tmp/unioncache=rw:/mnt/readonly=ro unionfs /mnt/unionfs
2009-08-23 14:16:13
User: Cowboy
Functions: mount

First check /etc/modules to see whether you have unionfs (or squashfs) support; if not, add the modules. UnionFS combines two filesystems: writes go to /tmp/unioncache (create that directory first), and reads come from whichever branch the file is found in first.
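
To prepare the write branch and check for support (a sketch; on most kernels the filesystem shows up in /proc/filesystems once its module is loaded):

mkdir -p /tmp/unioncache
grep -E 'unionfs|squashfs' /proc/filesystems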


locate -e somefile | xargs ls -l
2009-08-23 13:16:59
User: nadavkav
Functions: locate ls xargs

Use the locate command to find files on the system, verify that they still exist (-e), then display each one in full detail.
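
If your filenames may contain spaces, the null-separated variant is safer (assuming an mlocate/GNU locate that supports -0):

locate -e0 somefile | xargs -0 ls -l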

rm -vf /backup/directory/**/FILENAME_*(m+15)
awk '!($0 in a) {a[$0];print}' file
structcp(){ ( mkdir -pv "$2"; f="$(realpath "$1")"; t="$(realpath "$2")"; cd "$f"; find * -type d -exec mkdir -pv "$t/{}" \; ); }
2009-08-23 11:26:38
User: frozenfire
Functions: mkdir
Tags: copy

Copies a directory structure without the files in it.
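
If rsync is available, it can do the same in one step by including every directory and excluding every file:

rsync -a --include='*/' --exclude='*' sourcedir/ destdir/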

shutdown -h 60
2009-08-23 11:02:39
User: MikeTheGreat
Functions: shutdown
Tags: Shutdown

Replace 60 with the number of minutes until you want the machine to shut down.

Alternatively give an absolute time in the format hh:mm (shutdown -h 9:30)

Or shutdown right away (shutdown -h now)
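
On Linux you can also cancel a pending timed shutdown before it fires:

shutdown -c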

nautilus -q
2009-08-23 10:38:30
User: deasc

Quit Nautilus, the GNOME file manager (see man 1 nautilus).

P.S. If it hangs, you should kill it with -9, of course.

cp -pr olddirectory newdirectory
2009-08-22 22:11:24
User: stanishjohnd
Functions: cp

cp options:

-p will preserve the file mode, ownership, and timestamps

-r will copy files recursively

Also, if you want to preserve symlinks in addition to the above, use the -a/--archive option.
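
So the symlink-preserving version of the same copy is simply:

cp -a olddirectory newdirectory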

y=http://www.youtube.com;for i in $(curl -s $f|grep -o "url='$y/watch?v=[^']*'");do d=$(echo $i|sed "s|url\='$y/watch?v=\(.*\)&.*'|\1|");wget -O $d.flv "$y/get_video.php?video_id=$d&t=$(curl -s "$y/watch?v=$d"|sed -n 's/.* "t": "\([^"]*\)",.*/\1/p')";done
2009-08-22 21:31:29
User: matthewbauer
Functions: echo grep sed

This will download a YouTube playlist, and most other feeds listed at http://code.google.com/apis/youtube/2.0/reference.html#Video_Feeds

The files will be saved as $id.flv

yes n
tar -C /olddirectory -cvpf - . | tar -C /newdirectory -xvf -
2009-08-22 20:05:49
User: Cowboy
Functions: tar

It does the same as 'cp -p', where that is available. It's faster over networks than scp. If you have to copy gigabytes of data you can also use netcat together with tar's -z option; on the receiving end do:

# nc -l 7000 | tar -xzvpf -

...and on the sending end do:

# tar -czf - * | nc otherhost 7000
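
If pv is installed, you can drop it into the sending pipe to watch throughput:

# tar -czf - * | pv | nc otherhost 7000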