What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.

Get involved!

You can sign-in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!


Commands using sort - 674 results
grep -R usepackage * | cut -d']' -f2 | cut -s -d'{' -f 2 | sed s/"}"/.sty"}"/g | cut -d'}' -f1 | sort | uniq | xargs dpkg -S | cut -d':' -f1 | sort | uniq
sortwc () { local L;while read -r L;do builtin printf "${#L}@%s\n" "$L";done|sort -n|sed -u 's/^[^@]*@//'; }
2010-05-20 20:13:52
User: AskApache
Functions: printf read sed sort

This provides a way to sort output based on line length, so that shorter lines appear before longer lines. It's an add-on to sort that I've wanted for years; sometimes it's very useful. Taken from my http://www.askapache.com/linux-unix/bash_profile-functions-advanced-shell.html
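
A simpler sketch of the same idea (my own, not from the entry): use awk to prefix each line with its length as a numeric sort key, sort, then strip the key.

awk '{ print length($0), $0 }' | sort -n | sed 's/^[0-9]* //'

For example, ls | awk '{ print length($0), $0 }' | sort -n | sed 's/^[0-9]* //' lists filenames shortest-first.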

grep -hIr -m 1 :name ~/.mozilla/firefox/*.$USER/extensions | tr '<>=' '"""' | cut -f3 -d'"' | sort -u
2010-05-18 14:49:44
User: new_user
Functions: grep sort tr

1.) My profile ends with $USER, not with .default.

2.) Only grep the first occurrence, because some extensions also include the translated name inside install.rdf.

grep -hIr :name ~/.mozilla/firefox/*.default/extensions | tr '<>=' '"""' | cut -f3 -d'"' | sort -u
du -s * | sort -nr | head
git log --all --pretty=format:" " --name-only | sort -u
2010-05-11 16:06:42
Functions: sort
Tags: history git

What was the name of that module we wrote and deleted about 3 months ago? windowing-something?

git log --all --pretty=format:" " --name-only | sort -u | grep -i window
tail -n2000 /var/www/domains/*/*/logs/access_log | awk '{print $1}' | sort | uniq -c | sort -n | awk '{ if ($1 > 20)print $1,$2}'
netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c
2010-05-06 17:04:37
User: Kered557
Functions: awk netstat sort uniq

Counts TCP states from Netstat and displays in an ordered list.
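
Note that sort here orders the output alphabetically by state name; to rank states by count instead, a variant (my addition) appends a reverse numeric sort:

netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c | sort -rn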

find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) -exec wc -l {} + | sort -n
2010-05-03 00:16:02
User: hackerb9
Functions: find sort wc

The same as the other two alternatives, but now less forking! Instead of using '\;' to mark the end of an -exec command in GNU find, you can simply use '+' and it'll run the command only once with all the files as arguments.

This has two benefits over the xargs version: it's easier to read, and spaces in the filenames work automatically (no -print0 needed). [Oh, and there's one less fork, if you care about such things. But, then again, one is equal to zero for sufficiently large values of zero.]
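
For illustration (my example, not from the entry), the difference between the two terminators:

find . -iname '*.c' -exec wc -l {} \;   # runs wc once per file, so no grand total
find . -iname '*.c' -exec wc -l {} +    # batches files into one wc call, which prints a total line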

history | awk '{a[$'$(echo "1 2 $HISTTIMEFORMAT" | wc -w)']++}END{for(i in a){print a[i] " " i}}' | sort -rn | head
2010-05-02 21:48:53
User: bandie91
Functions: awk echo sort wc
Tags: history awk wc

If you use the HISTTIMEFORMAT environment variable (e.g. to timestamp typed commands), $(echo "1 2 $HISTTIMEFORMAT" | wc -w) gives the index of the first field on each history line that belongs to the command itself, skipping the history number and any timestamp fields.

This should make the command work whether or not HISTTIMEFORMAT is set.
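
For example (an illustrative setting, not from the entry), with a typical timestamp format the command word lands in field 4, which is exactly what the wc -w expression computes:

export HISTTIMEFORMAT='%F %T '
echo "1 2 $HISTTIMEFORMAT" | wc -w   # prints 4: fields 1-3 are the history number, date and time; field 4 is the command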

find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) | xargs wc -l | sort -n
2010-04-30 12:21:28
User: rbossy
Functions: find sort wc xargs
Tags: find count

find -exec with \; is evil since it launches a process for each file. You get the total line count as a bonus.

Also, without -n, sort sorts in lexical order (that is, 9 after 10).
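
A quick demonstration of the difference (my own):

printf '9\n10\n' | sort      # lexical order: 10 before 9
printf '9\n10\n' | sort -n   # numeric order: 9 before 10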

ps hax -o user | sort | uniq -c
zenity --list --width 500 --height 500 --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer
2010-04-28 23:45:35
User: polaco
Functions: awk sort xargs

This is a very simple and lightweight way to play DI.FM stations.

For a more complete version of the command, with proper strings in the menu, try the following (it couldn't fit in the command field above):

zenity --list --width 500 --height 500 --title 'DI.FM' --text 'Pick a Radio' --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer

This command line parses the html returned from http://di.fm and displays all the radio stations in a nice graphical menu. After a radio is chosen, its url is passed to mplayer so the music can start.

Requirements:

- X11 with a GTK environment

- zenity: a simple app for displaying GTK menus (sudo apt-get install zenity on Ubuntu)

- mplayer: a simple audio player (sudo apt-get install mplayer on Ubuntu)

find -name `egrep -s '.' * | awk -F":" '{print $1}' | sort -u` -exec stat {} \;
2010-04-26 20:01:44
Functions: awk find sort stat

This will run stat on each file in the directory.
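
The backtick substitution is fragile (it breaks on multiple matches and on spaces in filenames); a simpler sketch with roughly the same effect for plain files in the current directory (my suggestion, assuming GNU find) would be:

find . -maxdepth 1 -type f -exec stat {} +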

find ./ -name *.h -exec egrep -cH "// | /\*" {} \; | awk -F':' '{print $2 ":" $1}' | sort -gr
2010-04-23 19:00:07
User: blocky
Functions: awk egrep find sort

This shows you which files are most in need of commenting (one line of output per file).
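
Note that the unquoted *.h glob can be expanded by the shell before find sees it; a quoted variant (my adjustment) is safer:

find ./ -name '*.h' -exec egrep -cH "// | /\*" {} \; | awk -F':' '{print $2 ":" $1}' | sort -gr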

rpm -q -a --qf '%10{SIZE}\t%{NAME}\n' | sort -k1,1n
printf "\n%25s%10sTOTAL\n" 'FILE TYPE' ' '; for ext in $(find . -iname \*.* | egrep -o '\.[^[:space:].]+$' | egrep -v '\.svn*' | sort -f | uniq -i); do count=$(find . -iname \*$ext | wc -l); printf "%25s%10s%d\n" $ext ' ' $count; done
2010-04-16 21:12:11
User: rkulla
Functions: egrep find printf sort uniq wc

I created this command to give me a quick overview of how many file types a directory, and all its subdirectories, contains. It works based on file extension, rather than file(1)'s magic output, because that ended up being more accurate and less confusing.

Files that don't have an extension (README) are generally not important for me to count, but you're free to customize this to fit your needs.

grep ^lease /var/lib/dhcp/dhcpd.leases | cut -d ' ' -f 2 | sort -t . -k 1,1n -k 2,2n -k 3,3n -k 4,4n | uniq
for i in emerg alert crit error warn ; do awk '$6 ~ /^\['$i'/ {print substr($0, index($0,$6)) }' error_log | sort | uniq -c | sort -n | tail -1; done
2010-04-15 21:47:18
User: zlemini
Functions: awk sort tail uniq

This searches the Apache error_log for each of the 5 most significant Apache error levels; if any are found, the date is cut from the output so the entries can be sorted, and the most common occurrence of each error is printed.

rpm -qa --qf "%-30{NAME} %-10{SIZE}\n" | sort -n | less
rpm -qa --qf "%-10{SIZE} %-30{NAME}\n" | sort -nr | less
sudo awk '($9 ~ /404/)' /var/log/httpd/www.domain-access_log | awk '{print $2,$9,$7,$11}' | sort | uniq -c
2010-04-09 10:31:50
User: ninjasys
Functions: awk sort sudo uniq
Tags: log error apache

This command will return a full list of Error 404 pages in the given access log. The following fields are passed to awk:

Hostname ($2), Error Code ($9), Missing Item ($7), Referrer ($11)

You can then send this into a file (>> /path/to/file), which you can open with OpenOffice as a CSV.
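
For example, appending the report to a hypothetical file (the filename is my own):

sudo awk '($9 ~ /404/)' /var/log/httpd/www.domain-access_log | awk '{print $2,$9,$7,$11}' | sort | uniq -c >> 404-report.csv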

awk '$9 == 404 {print $7}' access_log | uniq -c | sort -rn | head
2010-04-08 21:40:53
User: zlemini
Functions: awk sort uniq

Finds the top ten pages returning an http response code of 404 in an apache log.
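
One caveat (my note, not from the entry): uniq -c only collapses adjacent duplicates, so unless identical pages happen to be grouped in the log, adding a sort before uniq gives accurate counts:

awk '$9 == 404 {print $7}' access_log | sort | uniq -c | sort -rn | head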

cat /etc/apache2/sites-enabled/* | egrep 'ServerAlias|ServerName' | tr -s ' ' | sed 's/^\s//' | cut -d ' ' -f 2 | sed 's/www.//' | sort | uniq
2010-04-08 15:50:34
User: chronosMark
Functions: cat cut egrep sed sort tr

Get a list of all the unique hostnames from the Apache configuration files. Handy to see what sites are running on a server. This is a slightly shorter version of similar commands.

history | perl -F"\||<\(|;|\`|\\$\(" -alne 'foreach (@F) { print $1 if /\b((?!do)[a-z]+)\b/i }' | sort | uniq -c | sort -nr | head
2010-04-08 13:46:09
User: alperyilmaz
Functions: perl sort uniq

Most "most used commands" approaches do not consider pipes and other complexities.

This approach considers pipes, process substitution by backticks or $(), and multiple commands separated by ;.

The Perl regular expression breaks up each line using |, <(, ;, `, or $( and picks the first word (excluding "do", in the case of for loops).

Note: if you are using lots of Perl one-liners, the perl commands will be counted as well in this approach, since the semicolon is used as a separator.