What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).

News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using sort - 593 results
tail -n2000 /var/www/domains/*/*/logs/access_log | awk '{print $1}' | sort | uniq -c | sort -n | awk '{ if ($1 > 20)print $1,$2}'
netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c
2010-05-06 17:04:37
User: Kered557
Functions: awk netstat sort uniq
1

Counts TCP connection states from netstat output and displays them in an ordered list.
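
On newer systems where netstat may not be installed, a comparable sketch using ss from iproute2 (the state is the first column, with one header line to skip) would be:

ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c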

find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) -exec wc -l {} + | sort -n
2010-05-03 00:16:02
User: hackerb9
Functions: find sort wc
4

The same as the other two alternatives, but now less forking! Instead of using '\;' to mark the end of an -exec command in GNU find, you can simply use '+' and it'll run the command only once with all the files as arguments.

This has two benefits over the xargs version: it's easier to read and spaces in the filenames work automatically (no -print0). [Oh, and there's one less fork, if you care about such things. But, then again, one is equal to zero for sufficiently large values of zero.]
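
For comparison, this is roughly what the xargs version has to look like to survive spaces in filenames (a sketch of the alternative the note above refers to, using the -print0/-0 pair):

find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) -print0 | xargs -0 wc -l | sort -n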

history | awk '{a[$'$(echo "1 2 $HISTTIMEFORMAT" | wc -w)']++}END{for(i in a){print a[i] " " i}}' | sort -rn | head
2010-05-02 21:48:53
User: bandie91
Functions: awk echo sort wc
Tags: history awk wc
0

If you use the HISTTIMEFORMAT environment variable (e.g. to timestamp typed commands), then $(echo "1 2 $HISTTIMEFORMAT" | wc -w) gives the number of leading columns per history line that contain non-command parts (the line number plus the timestamp fields).

This should make the command work whether or not HISTTIMEFORMAT is set.
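
As an illustration (the format string here is hypothetical, not from the original post): with HISTTIMEFORMAT='%F %T ' a history line looks like "  123  2010-05-02 21:48:53 ls -l", echo "1 2 $HISTTIMEFORMAT" | wc -w yields 4, and the command name is indeed in field 4; with HISTTIMEFORMAT unset it yields 2.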

find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) | xargs wc -l | sort -n
2010-04-30 12:21:28
User: rbossy
Functions: find sort wc xargs
Tags: find count
0

find -exec is evil since it launches a process for each file. You get the total as a bonus.

Also, without -n, sort will sort in lexical order (which puts 9 after 10).
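
A quick illustration of the sort difference (a minimal sketch, not part of the command above):

printf '9\n10\n' | sort     # lexical order: prints 10 before 9
printf '9\n10\n' | sort -n  # numeric order: prints 9 before 10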

ps hax -o user | sort | uniq -c
zenity --list --width 500 --height 500 --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer
2010-04-28 23:45:35
User: polaco
Functions: awk sort xargs
15

This is a very simple and lightweight way to play DI.FM stations.

For a more complete version of the command, with proper strings in the menu, try the following (it couldn't fit in the command field above):

zenity --list --width 500 --height 500 --title 'DI.FM' --text 'Pick a Radio' --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer

This command line parses the HTML returned from http://di.fm and displays all the radio stations in a nice graphical menu. After a station is chosen, its URL is passed to mplayer so the music can start.

Dependencies:

- X11 with a GTK environment

- zenity: simple app for displaying GTK menus (sudo apt-get install zenity on Ubuntu)

- mplayer: simple audio player (sudo apt-get install mplayer on Ubuntu)
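
A quick way to check for (and, on apt-based systems such as Ubuntu, install) the two userland dependencies might be:

which zenity mplayer || sudo apt-get install zenity mplayer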

find -name `egrep -s '.' * | awk -F":" '{print $1}' | sort -u` -exec stat {} \;
2010-04-26 20:01:44
Functions: awk find sort stat
1

This will run stat on each file in the directory.
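
If the goal is simply to run stat on every regular file in the current directory, a more direct sketch (not an exact equivalent of the filtering above) would be:

find . -maxdepth 1 -type f -exec stat {} +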

find ./ -name *.h -exec egrep -cH "// | /\*" {} \; | awk -F':' '{print $2 ":" $1}' | sort -gr
2010-04-23 19:00:07
User: blocky
Functions: awk egrep find sort
1

This shows you which files are most in need of commenting (one line of output per file).

rpm -q -a --qf '%10{SIZE}\t%{NAME}\n' | sort -k1,1n
printf "\n%25s%10sTOTAL\n" 'FILE TYPE' ' '; for ext in $(find . -iname \*.* | egrep -o '\.[^[:space:].]+$' | egrep -v '\.svn*' | sort -f | uniq -i); do count=$(find . -iname \*$ext | wc -l); printf "%25s%10s%d\n" $ext ' ' $count; done
2010-04-16 21:12:11
User: rkulla
Functions: egrep find printf sort uniq wc
0

I created this command to give me a quick overview of how many of each file type a directory, and all its subdirectories, contains. It works based on file extension, rather than file(1)'s magic output, because that ended up being more accurate and less confusing.

Files that don't have an extension (e.g. README) are generally not important for me to count, but you're free to customize this to fit your needs.
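
A rougher but much shorter sketch of the same idea (case-sensitive, counts by last extension only, and does not exclude .svn entries) might be:

find . -type f -name '*.*' | sed 's/.*\.//' | sort | uniq -c | sort -rn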

grep ^lease /var/lib/dhcp/dhcpd.leases | cut -d ' ' -f 2 | sort -t . -k 1,1n -k 2,2n -k 3,3n -k 4,4n | uniq
for i in emerg alert crit error warn ; do awk '$6 ~ /^\['$i'/ {print substr($0, index($0,$6)) }' error_log | sort | uniq -c | sort -n | tail -1; done
2010-04-15 21:47:18
User: zlemini
Functions: awk sort tail uniq
4

This searches the Apache error_log for each of the 5 most significant Apache error levels; if any are found, the date is cut from the output so the entries can be sorted, and the most common occurrence of each error level is printed.

rpm -qa --qf "%-30{NAME} %-10{SIZE}\n" | sort -n | less
rpm -qa --qf "%-10{SIZE} %-30{NAME}\n" | sort -nr | less
sudo awk '($9 ~ /404/)' /var/log/httpd/www.domain-access_log | awk '{print $2,$9,$7,$11}' | sort | uniq -c
2010-04-09 10:31:50
User: ninjasys
Functions: awk sort sudo uniq
Tags: log error apache
1

This command will return a full list of Error 404 pages in the given access log. The following fields are passed to awk:

Hostname ($2), Error Code ($9), Missing Item ($7), Referrer ($11)

You can then redirect this into a file (>> /path/to/file) and open it with OpenOffice as a CSV.
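
Since a CSV reader expects comma-separated fields, a hedged variation that writes commas instead of spaces (same fields, same placeholder path) would be:

sudo awk '($9 ~ /404/)' /var/log/httpd/www.domain-access_log | awk 'BEGIN{OFS=","} {print $2,$9,$7,$11}' >> /path/to/file.csv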

awk '$9 == 404 {print $7}' access_log | uniq -c | sort -rn | head
2010-04-08 21:40:53
User: zlemini
Functions: awk sort uniq
8

Finds the top ten pages returning an HTTP response code of 404 in an Apache log.
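
Note that uniq -c only collapses adjacent duplicates, so if the same page appears non-contiguously in the log, adding a sort first gives more accurate counts (a small variation on the above):

awk '$9 == 404 {print $7}' access_log | sort | uniq -c | sort -rn | head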

cat /etc/apache2/sites-enabled/* | egrep 'ServerAlias|ServerName' | tr -s ' ' | sed 's/^\s//' | cut -d ' ' -f 2 | sed 's/www.//' | sort | uniq
2010-04-08 15:50:34
User: chronosMark
Functions: cat cut egrep sed sort tr
2

Get a list of all the unique hostnames from the apache configuration files. Handy to see what sites are running on a server. A slightly shorter version.

history | perl -F"\||<\(|;|\`|\\$\(" -alne 'foreach (@F) { print $1 if /\b((?!do)[a-z]+)\b/i }' | sort | uniq -c | sort -nr | head
2010-04-08 13:46:09
User: alperyilmaz
Functions: perl sort uniq
4

Most of the "most used commands" approaches does not consider pipes and other complexities.

This approach considers pipes, process substitution by backticks or $() and multiple commands separated by ;

Perl regular expression breaks up each line using | or < ( or ; or ` or $( and picks the first word (excluding "do" in case of for loops)

note: if you are using lots of perl one-liners, the perl commands will be counted as well in this approach, since semicolon is used as a separator
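
For comparison, the naive approach this improves on (it ignores pipes and substitutions entirely, and assumes HISTTIMEFORMAT is not set so the command name is the second field) is simply:

history | awk '{print $2}' | sort | uniq -c | sort -rn | head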

cat /etc/apache2/sites-enabled/* | egrep 'ServerAlias|ServerName' | tr -s " " | sed 's/^[ ]//g' | uniq | cut -d ' ' -f 2 | sed 's/www.//g' | sort | uniq
2010-04-08 08:51:17
User: chronosMark
Functions: cat cut egrep sed sort tr uniq
0

Get a list of all the unique hostnames from the apache configuration files. Handy to see what sites are running on a server.

find ./ -iname "*.djvu" -execdir perl -e '@s=`djvutxt \"$ARGV[0]\"\|grep -c Berlekamp`; chomp @s; print $s[0]; print " $ARGV[0]\n"' '{}' \;|sort -n
2010-04-07 11:15:26
Functions: find grep perl sort
0

Count the occurrences of the word 'Berlekamp' in the DJVU files in the current directory, printing file names from the one having the fewest to the one having the most occurrences.
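
A sketch of the same idea with the search word factored out into a shell variable (word is a placeholder name; assumes djvutxt from djvulibre is available, and uses sh instead of perl for the per-file count):

word=Berlekamp; find . -iname '*.djvu' -execdir sh -c 'printf "%s %s\n" "$(djvutxt "$1" | grep -c "$2")" "$1"' _ {} "$word" \; | sort -n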

cut -d\ -f 1 ~/.bash_history | sort | uniq -c | sort -rn | head -n 10 | sed 's/.*/ &/g'
du -cks * | sort -rn | while read size fname; do for unit in k M G T P E Z Y; do if [ $size -lt 1024 ]; then echo -e "${size}${unit}\t${fname}"; break; fi; size=$((size/1024)); done; done
du -kd | egrep -v "/.*/" | sort -n
2010-03-30 15:40:35
User: rmbjr60
Functions: du egrep sort
-1

Thanks for the submission! My alternative produces summaries only for directories. The original post additionally lists all files in the current directory, which can clutter up the output. Once the big directory is located, *then* worry about which file(s) are consuming so much space.
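
The -d option of du differs between implementations; with GNU du a comparable directory-only summary (a rough sketch, not byte-for-byte identical output) would be:

du -k --max-depth=1 . | sort -n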

TR=`free|grep Mem:|awk '{print $2}'`;ps axo rss,comm,pid|awk -v tr=$TR '{proc_list[$2]+=$1;} END {for (proc in proc_list) {proc_pct=(proc_list[proc]/tr)*100; printf("%d\t%-16s\t%0.2f%\n",proc_list[proc],proc,proc_pct);}}'|sort -n |tail -n 10
2010-03-27 01:34:50
User: d34dh0r53
Functions: awk grep sort tail
3

Prints the top 10 memory-consuming processes (with children and instances aggregated) sorted by total RSS, and calculates the percentage of total RAM each uses. Please note that since RSS can include shared libraries, it is possible for the percentages to add up to more than the total amount of RAM, but this still gives you a pretty good idea. Also note that this does not work with the mawk version of awk, but it works fine with GNU Awk, which is on most Linux systems. It also does not work on OS X.
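
A stripped-down sketch of the same aggregation without the percentage column (plain awk constructs only, so it should also work with mawk) would be:

ps axo rss,comm | awk '{a[$2]+=$1} END {for (c in a) printf "%d\t%s\n", a[c], c}' | sort -n | tail -n 10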