What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).


May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!


Terminal - Commands using ls - 463 results
ls -1 | while read a; do mv "$a" `echo $a | sed -e 's/\ //g'`; done
bar() { foo=$(ls -rt|tail -1) && read -ep "cat $foo? <y/n> " a && [[ $a != "n" ]] && eval "cat $foo" ;}
2015-10-21 20:09:33
User: knoppix5
Functions: eval ls read tail

This command will display the file, but you can change 'cat' to anything else.

(Type 'n' at the prompt to cancel; anything else proceeds.)


Some hints for newbies:

unset bar

removes the 'bar' function.

For permanent use you can put this bar() function in your .bashrc (for bash) or in .profile (for sh). After sourcing the file with

. ~/.bashrc

any newly added functions in .bashrc (so 'bar', or whatever name you choose) become available immediately.
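As a sketch, a session that defines the function and then removes it again (assuming bash; `unset -f` is the explicit function-only form of the plain `unset bar` above):

```shell
# Define 'bar' for the current session: prompt before cat-ing the newest file
bar() { foo=$(ls -rt | tail -1) && read -ep "cat $foo? <y/n> " a && [[ $a != "n" ]] && eval "cat $foo" ;}

# Remove the function again when done experimenting
unset -f bar
```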

ls -lt --time=atime *.txt
2015-05-21 21:03:44
User: miccaman
Functions: ls
Tags: ls

List all .txt files ordered by access time, newest first.

ls -l /dev/disk/by-id |gawk 'match($11, /[a-z]{3}$/) && match($9, /^ata-/) { gsub("../", ""); print $11,"\t",$9 }' |sort
2015-05-18 15:42:33
User: lig0n
Functions: gawk ls
Tags: zfs disk info

Scrap everything and use `gawk` to do all the magic, since it's like the future or something.

gawk 'match($11, /[a-z]{3}$/) && match($9, /^ata-/) { gsub("../", ""); print $11,"\t",$9 }'

Yank out only ata- lines that have a drive letter (ignore lines with partitions). Then strip ../../ and print the output.

Yay awk. Be sure to see the alternatives as my initial command is listed there. This one is a revision of the original.

while [ "$(ls -l --full-time TargetFile)" != "$a" ] ; do a=$(ls -l --full-time TargetFile); sleep 10; done
2015-05-09 03:19:49
User: dmmst19
Functions: ls sleep

Here's a way to wait for a file (a download, a logfile, etc) to stop changing, then do something. As written it will just return to the prompt, but you could add a "; echo DONE" or whatever at the end.

This just compares the full output of "ls" every 10 seconds, and keeps going as long as that output has changed since the last interval. If the file is being appended to, the size will change, and if it's being modified without growing, the timestamp from the "--full-time" option will have changed. The output of just "ls -l" isn't sufficient since by default it doesn't show seconds, just minutes.

Waiting for a file to stop changing is not a very elegant or reliable way to measure that some process is finished - if you know the process ID there are much better ways. This method will also give a false positive if the changes to the target file are delayed longer than the sleep interval for any reason (network timeouts, etc). But sometimes the process that is writing the file doesn't exit, rather it continues on doing something else, so this approach can be useful if you understand its limitations.
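For the process-ID case the comment alludes to, a minimal sketch (the background `sleep` here stands in for the real writer, whose PID you would supply yourself):

```shell
sleep 2 & pid=$!            # stand-in for the writing process's PID
# kill -0 sends no signal; it only tests whether the process still exists
while kill -0 "$pid" 2>/dev/null; do
  sleep 1
done
echo DONE
```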

for a in $(ls /usr/sbin /usr/bin); do ps -fC $a;done|grep -v PPID
2015-04-27 18:15:56
User: knoppix5
Functions: grep ls ps

Thanks to pooderbill for the idea :-)

ls | while read line; do ln -s "$(pwd)/$line" "/usr/bin/$line"; done
ls -l /dev/disk/by-id |grep -v "wwn-" |egrep "[a-zA-Z]{3}$" |sed 's/\.\.\/\.\.\///' |sed -E 's/.*[0-9]{2}:[0-9]{2}\s//' |sed -E 's/->\ //' |sort -k2 |awk '{print $2,$1}' |sed 's/\s/\t/'
2015-01-25 19:29:40
User: lig0n
Functions: awk egrep grep ls sed sort
Tags: zfs disk info

This is much easier to parse and do something else with (eg: automagically create ZFS vols) than anything else I've found. It also helps me keep track of which disks are which, for example when I want to replace a disk or image headers in different scenarios. Being able to match the kernel's mapping of a drive (sda, sdb, …) to that disk's serial number is very helpful.

ls -l /dev/disk/by-id

Normal `ls` command to list contents of /dev/disk/by-id

grep -v "wwn-"

Perform an inverse search - that is, only output non-matches to the pattern 'wwn-'

egrep "[a-zA-Z]{3}$"

A regex grep, looking for three letters at the end of a line (to filter out fluff)

sed 's/\.\.\/\.\.\///'

Utilize sed (stream editor) to remove all occurrences of "../../"

sed -E 's/.*[0-9]{2}:[0-9]{2}\s//'

Strip out all user and permission fluff. The -E option lets us use extended (modern) regex notation (larger control set)

sed -E 's/->\ //'

Strip out the ASCII arrow "-> "

sort -k2

Sort the resulting information alphabetically, on column 2 (the disk letters)

awk '{print $2,$1}'

Swap the order of the columns so the output is easier to read and use

sed 's/\s/\t/'

Replace the space between the two columns with a tab character, making the output more friendly
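The last three stages can be tried on fabricated input (the ids are made up), which is roughly what the earlier stages hand them:

```shell
printf 'ata-FAKE1 sdb\nata-FAKE2 sda\n' |
  sort -k2 |               # order by disk letter
  awk '{print $2,$1}' |    # letter first, id second
  sed 's/\s/\t/'           # tab-separate the two columns (GNU sed)
```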

For large ZFS pools, this made creating my vdevs immeasurably easier. By keeping track of which disks were in which slot (spreadsheet) via their serial numbers, I was able to create my vols simply by copying the full id of each disk (not the letter) and pasting it into my command, so I knew exactly which disk, in which slot, was going into the vdev. Example command below.

zpool create tank raidz2 -o ashift=12 ata-... ata-... ata-... ata-... ata-... ata-...
cd() - do an ls (or whatever you can imagine) after a cd; function too long, please refer to the description
2015-01-01 20:50:19
User: Xk2c
Functions: cd ls

Some people on the net already use a cd(), but most of them break 'cd -' functionality (that is, "go back where you have been previously") or plain 'cd' ("go back home").

This cd() copes with both. Also, when given a file name, it changes to the directory the file is in.



cd ()
{
    if [[ -n ${*} ]]; then
        if [[ s${*}e == s-e ]]; then
            builtin cd -
        elif [[ ! -d ${*} ]]; then
            builtin cd "${*%/*}"
        else
            builtin cd "${*}"
        fi
    else
        builtin cd ~
    fi
    ls -la
}


ls -l | head -n 65535 | awk '{if (NR > 1) total += $5} END {print total/(1024*1024*1024)}'
ls *.png | cut -d . -f 1 | xargs -L1 -i convert -strip -interlace Plane -quality 80 {}.png {}.jpg
ls | tr '[[:punct:][:space:]]' '\n' | grep -v "^\s*$" | sort | uniq -c | sort -bn
2014-10-14 09:52:28
User: qdrizh
Functions: grep ls sort tr uniq
Tags: sort uniq ls grep tr

I'm sure there's a more elegant sed version for the tr + grep section.
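A possible sed version of the tr + grep stage (a guess, not a verified improvement; GNU sed assumed, shown on a canned sample instead of `ls` output):

```shell
printf 'foo-bar.txt foo\n' |
  sed 's/[[:punct:][:space:]]\+/\n/g' |   # split tokens onto their own lines
  sed '/^[[:space:]]*$/d' |               # drop empty lines (the grep -v step)
  sort | uniq -c | sort -bn
```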

ls -la | grep ^l
ls /EMRCV5/
find . -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
for d in `ls -d *`; do svn status $d | awk '{print $2}'; done | xargs ls -l
2014-05-27 19:07:45
User: dronamk
Functions: awk ls xargs

Find all uncommitted files in SVN workspace directories, then list them with their attributes.

cd <mntpoint>; find . -xdev -size +10000000c -exec ls -l {} \; | sort -n -k 5
2014-05-20 14:13:54
User: deritchie
Functions: cd find ls sort

This is a quick way to find what is hogging disk space when you get a full disk alert on your monitoring system. This won't work as-is with filesystems that allow embedded spaces in user names or groups (read "Mac OS X attached to a Windows Domain"). In those cases, you will need to change the -k 5 to something that works in your situation.

cdn() { cd $(ls -1d */ | sed -n $@p); }
watch ls -lh /path/to/folder
2014-03-27 10:51:36
User: vonElfensenf
Functions: ls watch
Tags: pv

Useful when you forgot to use pv or rsync and want to know how much has been copied so far.

ls -lF -darth `find . -mmin -3`
2014-03-22 16:52:20
User: UncleLouie
Functions: ls

Provides a recursive time ordered list of the current directory over the last 3 minutes.

Excluding zero byte files:

ls -lF -darth `find . -size +0 -mmin -3`

For the last day's files, change "-mmin -3" to "-mtime -1":

ls -lF -darth `find . -size +0 -mtime -1`
find ./ -type l -print0 | xargs -0 ls -plah
2014-03-20 20:36:39
Functions: find ls xargs

shows you the symlinks in the current directory, recursively, but without following them

2014-03-12 18:00:21
User: pdxdoughnut
Functions: ls xargs

xargs will automatically determine how many args are too many and only pass a reasonable number of them at a time. In the example, 500,002 file names were split across 26 instantiations of the command "echo".
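The batching is easy to reproduce; each output line below is one `echo` invocation, and the exact count depends on the system's argument-length limit:

```shell
# With 500000 arguments the list exceeds any single command line,
# so xargs must split it into several echo invocations
seq 1 500000 | xargs echo | wc -l
```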

ls | grep ".txt$" | xargs -i WHATEVER_COMMAND {}
find . \( -iname "*.doc" -o -iname "*.docx" \) -type f -exec ls -l --full-time {} +|sort -k 6,7
find . -type d| while read i; do echo $(ls -1 "$i"|wc -m) $(du -s "$i"); done|sort -s -n -k1,1 -k2,2 |awk -F'[ \t]+' '{ idx=$1$2; if (array[idx] == 1) {print} else if (array[idx]) {print array[idx]; print; array[idx]=1} else {array[idx]=$0}}'
2014-02-25 22:50:09
User: knoppix5
Functions: awk du echo find ls read sort wc

Very quick! Based only on the content sizes and the character counts of the filenames. If both numbers are equal, the two (or more) directories are most likely identical.

If in doubt, apply:

diff -rq path_to_dir1 path_to_dir2
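A fabricated case showing why the heuristic needs the diff check: two directories with identical sizes and filename lengths but different content (paths hypothetical):

```shell
mkdir -p /tmp/clfu_a /tmp/clfu_b
printf 'xx' > /tmp/clfu_a/f          # same size...
printf 'yy' > /tmp/clfu_b/f          # ...different content
diff -rq /tmp/clfu_a /tmp/clfu_b || true   # reports that f differs
```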

AWK function taken from here: