What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Psst. Open beta.

Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta - not in prime time just yet. It's being hosted over at UpGuard and you are more than welcome to give it a shot. A couple of things:

» The open beta is running a copy of the database that will not carry over to the final version. Don't post anything you don't mind losing.
» If you wish to use your user account, you will probably need to reset your password.
Your feedback is appreciated via the form on the beta page. Thanks! -Jon & CLFU Team

Terminal - Commands using sort - 668 results
sudo netselect -v -s3 $(curl -s http://dns.comcast.net/dns-ip-addresses2.php | egrep -o '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort | uniq)
2010-01-27 00:03:44
User: hackerb9
Functions: egrep sort sudo

Comcast is an ISP in the United States that has started hijacking DNS requests as a "service" for its customers. For example, in Firefox, one used to be able to do a quick "I'm Feeling Lucky" Google search by typing a single word into the URL field, assuming the word is not an existing domain when surrounded by www.*.com. Comcast customers never receive the correct NX (non-existent domain) error from DNS. Instead, they are shown a page full of advertising. There is a way to "opt out" from their service, but that requires having the account password and the MAC address of your modem handy. For me, it was easier just to set static DNS servers. But the problem is, which ones to choose? That's what this command answers. It'll show you the three _non-hijacked_ Comcast DNS servers that are the shortest distance away.

Perhaps you don't have Comcast (lucky you!), but hopefully this command can serve as an example of using netselect to find the fastest server from a list. Note that, although this example doesn't show it, netselect will actually perform the uniq and DNS resolution for you.

Requires: netselect, curl, sort, uniq, grep
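
As a sketch of that more general use, netselect can rank any list of hosts and print the closest few. The resolvers below are my own illustrative picks, not part of the original entry:

sudo netselect -v -s3 8.8.8.8 208.67.222.222 4.2.2.2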

nmap -sP <subnet>.* | egrep -o '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' > results.txt ; for IP in {1..254} ; do echo "<subnet>.${IP}" ; done >> results.txt ; cat results.txt | sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 | uniq -u
for i in $(ps -ef | awk '{print $2}') ; { swp=$( awk '/Swap/{sum+=$2} END {print sum}' /proc/$i/smaps ); if [[ -n $swp && 0 != $swp ]] ; then echo -n "\n $swp $i "; cat /proc/$i/cmdline ; fi; } | sort -nr
paste -d "." <(curl http://.../dist.female.first http://.../dist.male.first | cut -d " " -f 1 | sort -uR) <(curl http://..../dist.all.last | cut -d " " -f 1 | sort -R | head -5163) | tr "[:upper:]" "[:lower:]" | sed [email protected]/g'
2010-01-21 19:52:28
User: connorsy
Functions: cut head paste sed sort tr

** Replace the ... in the URLs with the appropriate paths from the census.gov page linked below; the full command couldn't fit in the site's 256-character limit.

Created on Ubuntu 9.10, but nothing out of the ordinary; it should work anywhere with a little tweaking. 5163 is the number of unique first names you get when you combine the male and female first-name files from http://www.census.gov/genealogy/www/data/1990surnames/names_files.html

cat -n <file> | sort -k 2 | uniq -f 1 | sort -n | cut -f 2-
2010-01-21 18:55:58
User: fpunktk
Functions: cat cut sort uniq

I wanted to delete all duplicate lines from .bash_history and keep the order of the other lines.

The command cats the file and adds line numbers, then sorts by the content (the second field onwards). Afterwards uniq omits repeated lines, but skips the first field (the line number). Then it sorts by the line numbers and at the end cuts the numbers off.
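
Applied to the .bash_history case from above (write to a temporary file and move it back; redirecting straight onto the file being read would truncate it):

cat -n ~/.bash_history | sort -k 2 | uniq -f 1 | sort -n | cut -f 2- > ~/.bash_history.tmp && mv ~/.bash_history.tmp ~/.bash_history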

lynx --width=200 --dump 'http://quake.usgs.gov/recenteqs/Maps/San_Francisco_eqs.htm'|sed -ne '/MAG.*/,/^References/{;s/\[[0-9][0-9]*\]//;1,/h:m:s/d;/Back to map/,$d;/^$/d;/^[ \t][ \t]*[3-9]\.[0-9][0-9]*[ \t][ \t]*/p; }'|sort -k1nr
2010-01-08 20:52:28
User: KevinM
Functions: sed sort

To see only earthquakes for today, add another pipe to egrep "`date '+%Y/%m/%d'`"
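
Put together, that becomes (same pipeline as above, plus the date filter):

lynx --width=200 --dump 'http://quake.usgs.gov/recenteqs/Maps/San_Francisco_eqs.htm'|sed -ne '/MAG.*/,/^References/{;s/\[[0-9][0-9]*\]//;1,/h:m:s/d;/Back to map/,$d;/^$/d;/^[ \t][ \t]*[3-9]\.[0-9][0-9]*[ \t][ \t]*/p; }'|sort -k1nr|egrep "`date '+%Y/%m/%d'`"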

git log --format='%aN' | sort -u
2010-01-07 18:55:21
User: jedahan
Functions: sort
Tags: git

This should work even if the output format changes.
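
A closely related variation (my addition, not from the original entry) uses the same format string to count commits per author:

git log --format='%aN' | sort | uniq -c | sort -rn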

(bzcat BZIP2_FILES && cat TEXT_FILES) | grep -E "Invalid user|PAM" | grep -o -E "from .+" | awk '{print $2}' | sort | uniq >> /etc/hosts.deny
2010-01-03 04:41:51
User: jayhawkbabe
Functions: awk cat grep sort uniq

Searches all log files (including archived bzip2 files) for invalid user and PAM authentication errors, both of which are indicative of brute-force attempts at logging into the computer. A list of all unique IP addresses and domain names is appended to hosts.deny. The command (and the grep error messages) will work on Mac OS X 10.6; small adjustments may be needed for other OSs.
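
For instance, with typical Linux auth-log paths substituted for the placeholders (the paths are an assumption; adjust them for your system):

(bzcat /var/log/auth.log.*.bz2 && cat /var/log/auth.log) | grep -E "Invalid user|PAM" | grep -o -E "from .+" | awk '{print $2}' | sort | uniq >> /etc/hosts.deny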

find . \! -type d | rev | sort | rev | tar c --files-from=- --format=ustar | bzip2 --best > a.tar.bz2
2009-12-20 14:04:39
User: pornel
Functions: bzip2 find rev sort tar

Avoids creating useless directory entries in the archive, and sorts files by (roughly) extension, which is likely to group similar files together for better compression. A 1%-5% improvement.
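
The rev | sort | rev trick is easy to see on a toy list (file names purely illustrative): reversing each name makes the extension the leading sort key, so files with the same extension come out adjacent:

printf 'notes.txt\nphoto.jpg\ntodo.txt\n' | rev | sort | rev

This prints photo.jpg, then todo.txt, then notes.txt - the two .txt files together.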

zcat access_log.*.gz | awk '{print $7}' | sort | uniq -c | sort -n | tail -n 20
du -ms * 2>/dev/null |sort -nr|head
find . -maxdepth 1 -type f -size +1M -printf "%f:%s\n" | sort -t":" -k2
ls -l | awk '$5 > 1000000' | sort -k5n
du | sort -nr | cut -f2- | xargs du -hs
cat *.c | { printf "se te du\nplot '-' t '' w dots\n"; tr '[[:upper:]]' '[[:lower:]]' | tr -s [[:punct:][:space:]] '\n' | sort | uniq -c | sort -nr | head -n 100 | awk '{print $1}END{print "e"}'; } | gnuplot
2009-11-20 14:53:26
User: taliver
Functions: awk cat head printf sort tr uniq

Uses the dumb terminal option in gnuplot to plot a graph of frequencies. In this case, we are looking at a frequency analysis of words in all of the .c files.
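
gnuplot accepts unambiguously abbreviated keywords, so the "se te du" emitted by printf is shorthand. Written out, the script fed to gnuplot is:

set terminal dumb
plot '-' t '' w dots

where t '' suppresses the title and w dots draws with dots; the word counts piped in afterwards (terminated by the final "e") are the inline data for plot '-'.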

find -name '*.avi' | while read i ; do echo $(mplayer -identify -frames 0 -vo null -nosound "$i" 2>&1 | grep ID_LENGTH | cut -d= -f2)" ""$i" ;done | sort -k1 -r -n | sed 's/^\([^\ ]*\)\ \(.*\)$/\2:\1/g'
2009-11-09 17:14:59
User: ZungBang
Functions: cut echo find grep read sed sort

Handles file names with spaces and colons, fixes the sort (numeric!), uses mplayer, and keeps the same output format as the other alternatives.

for i in *.avi; do echo -n "$i:";mediainfo $i|head | grep PlayTime | cut -d: -f2 ; done | sort -t: -k2 -r
2009-11-09 12:42:20
User: yooreck
Functions: cut echo grep head sort

Similar, but using mediainfo instead of the totem-based indexer.

for i in *.avi; do echo -n "$i:";totem-gstreamer-video-indexer $i | grep DURATION | cut -d "=" -f 2 ; done | sort -t: -k2 -r
2009-11-09 02:59:53
User: iadbungler
Functions: cut echo grep sort

Sort .avi movies by time length, print the longest first, and so on...

ps -eo pcpu,user,pid,cmd | sort -r | head -5
dh() { du -ch --max-depth=1 "${@-.}" | sort -h; }
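
Usage sketch (path purely illustrative):

dh /var

This prints human-readable per-subdirectory totals plus a grand total for /var, sorted; plain dh does the same for the current directory.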
git ls-files | while read i; do git blame $i | sed -e 's/^[^(]*(//' -e 's/^\([^[:digit:]]*\)[[:space:]]\+[[:digit:]].*/\1/'; done | sort | uniq -ic | sort -nr
2009-10-25 09:40:01
User: pipping
Functions: read sed sort uniq
Tags: statistics git

You'll run into trouble if you have files w/ missing newlines at the end. I tried to use

PAGER='sed \$q' git blame

and even

PAGER='sed \$q' git -p blame

to force a newline at the end, but as soon as the output is redirected, git seems to ignore the pager.

dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -n
git ls-files | xargs -n1 -d'\n' -i git-blame {} | perl -n -e '/\s\((.*?)\s[0-9]{4}/ && print "$1\n"' | sort -f | uniq -c -w3 | sort -r
2009-10-25 01:44:03
User: askedrelic
Functions: perl sort uniq xargs
Tags: statistics git

Figures out total line contribution per author for an entire Git repo. Includes binary files, which kind of mess up the true count.

If it crashes or takes too long, adjust the ls-files options at the start:

git ls-files -x "*pdf" -x "*psd" -x "*tif" (to remove really random binary files)

git ls-files "*.py" "*.html" "*.css" (to only include specific file types)

Based off my original SVN version: http://www.commandlinefu.com/commands/view/2787/prints-total-line-count-contribution-per-user-for-an-svn-repository

shuf -i 1-49 | head -n6 | sort -n| xargs
2009-10-22 12:54:08
User: ioggstream
Functions: head sort

Note the xargs at the end: head and sort emit one number per line, and xargs joins them onto a single line. The echo $(...) alternative below does the same.

echo $(shuf -i 1-49 | head -n6 | sort -n)