Commands using sort (719)

  • What was the name of that module we wrote and deleted about 3 months ago? windowing-something?

    git log --all --pretty=format:" " --name-only | sort -u | grep -i window


    2
    git log --all --pretty=format:" " --name-only | sort -u
    christian_oudard · 2010-05-11 16:06:42 0

  • 4
    tail -n2000 /var/www/domains/*/*/logs/access_log | awk '{print $1}' | sort | uniq -c | sort -n | awk '{ if ($1 > 20)print $1,$2}'
    allrightname · 2010-05-10 19:08:37 0
  • Counts TCP states from netstat and displays them in an ordered list.


    1
    netstat -an | awk '/tcp/ {print $6}' | sort | uniq -c
    Kered557 · 2010-05-06 17:04:37 1
  • The same as the other two alternatives, but now less forking! Instead of using '\;' to mark the end of an -exec command in GNU find, you can simply use '+' and it'll run the command only once with all the files as arguments. This has two benefits over the xargs version: it's easier to read, and spaces in the filenames work automatically (no -print0 needed). [Oh, and there's one less fork, if you care about such things. But, then again, one is equal to zero for sufficiently large values of zero.] A sketch contrasting the two forms follows this entry.


    5
    find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) -exec wc -l {} + | sort -n
    hackerb9 · 2010-05-03 00:16:02 0
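
    A minimal sketch of the difference (the '*.c' pattern is a hypothetical example):

    find . -name '*.c' -exec wc -l {} \;   # one wc process per file
    find . -name '*.c' -exec wc -l {} +    # one wc process, all files as arguments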
  • If you use the HISTTIMEFORMAT environment variable (e.g. to timestamp typed commands), $(echo "1 2 $HISTTIMEFORMAT" | wc -w) gives the number of columns per line that contain non-command parts. This should make the command work with any format; see the sketch after this entry.


    0
    history | awk '{a[$'$(echo "1 2 $HISTTIMEFORMAT" | wc -w)']++}END{for(i in a){print a[i] " " i}}' | sort -rn | head
    bandie91 · 2010-05-02 21:48:53 0
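
    A sketch of why this works, assuming a hypothetical two-word timestamp format:

    export HISTTIMEFORMAT='%F %T '        # history lines become: NNN DATE TIME COMMAND
    echo "1 2 $HISTTIMEFORMAT" | wc -w    # prints 4, so the awk above tallies $4, the command word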
  • find -exec is evil since it launches a process for each file. With xargs, you get the "total" line as a bonus. Also, without -n, sort sorts in lexical order (that is, 9 after 10). A whitespace-safe variant is sketched after this entry.


    0
    find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) | xargs wc -l | sort -n
    rbossy · 2010-04-30 12:21:28 5
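
    If filenames may contain spaces, a hedged whitespace-safe variant (GNU find and xargs) keeps the low fork count:

    find . \( -iname '*.[ch]' -o -iname '*.php' -o -iname '*.pl' \) -print0 | xargs -0 wc -l | sort -n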
  • awk is evil!


    21
    ps hax -o user | sort | uniq -c
    buzzy · 2010-04-29 10:43:03 4
  • This is a very simple and lightweight way to play DI.FM stations. It parses the HTML returned from http://www.di.fm and displays all radio stations in a graphical menu; after a radio is chosen, its url is passed to mplayer so the music can start. For a more complete version of the command with proper strings in the menu (it couldn't fit in the command field above), try:

    zenity --list --width 500 --height 500 --title 'DI.FM' --text 'Pick a Radio' --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer

    Dependencies: X11 with a GTK environment; zenity, a simple app for displaying GTK menus (sudo apt-get install zenity on Ubuntu); mplayer, a simple audio player (sudo apt-get install mplayer on Ubuntu).


    15
    zenity --list --width 500 --height 500 --column 'radio' --column 'url' --print-column 2 $(curl -s http://www.di.fm/ | awk -F '"' '/href="http:.*\.pls.*96k/ {print $2}' | sort | awk -F '/|\.' '{print $(NF-1) " " $0}') | xargs mplayer
    polaco · 2010-04-28 23:45:35 1
  • This will run stat on each file in the directory; a simpler sketch follows this entry.


    1
    find -name `egrep -s '.' * | awk -F":" '{print $1}' | sort -u` -exec stat {} \;
    unixmonkey8594 · 2010-04-26 20:01:44 2
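
    If the goal is simply to stat every regular file in the current directory, a simpler sketch (assuming GNU find) avoids the backtick expansion, which breaks as soon as egrep returns more than one filename:

    find . -maxdepth 1 -type f -exec stat {} +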
  • This shows you which files are most in need of commenting (one line of output per file); a quoting fix is sketched after this entry.


    1
    find ./ -name *.h -exec egrep -cH "// | /\*" {} \; | awk -F':' '{print $2 ":" $1}' | sort -gr
    blocky · 2010-04-23 19:00:07 0
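
    Note that the unquoted *.h is expanded by the shell whenever the current directory contains matching files; quoting the pattern makes it reliable:

    find ./ -name '*.h' -exec egrep -cH "// | /\*" {} \; | awk -F':' '{print $2 ":" $1}' | sort -gr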

  • 3
    rpm -q -a --qf '%10{SIZE}\t%{NAME}\n' | sort -k1,1n
    octopus · 2010-04-19 07:44:49 1
  • I created this command to give me a quick overview of how many file types a directory, and all its subdirectories, contains. It works off the file extension rather than file(1)'s magic output, because that ended up being more accurate and less confusing. Files that don't have an extension (README) are generally not important for me to count, but you're free to customize this to fit your needs.


    0
    printf "\n%25s%10sTOTAL\n" 'FILE TYPE' ' '; for ext in $(find . -iname \*.* | egrep -o '\.[^[:space:].]+$' | egrep -v '\.svn*' | sort -f | uniq -i); do count=$(find . -iname \*$ext | wc -l); printf "%25s%10s%d\n" $ext ' ' $count; done
    rkulla · 2010-04-16 21:12:11 0

  • 0
    grep ^lease /var/lib/dhcp/dhcpd.leases | cut -d ' ' -f 2 | sort -t . -k 1,1n -k 2,2n -k 3,3n -k 4,4n | uniq
    niallh · 2010-04-16 09:52:32 0
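
    With GNU sort, -V (version sort) orders dotted quads the same way and -u replaces the trailing uniq; a hedged shorter equivalent:

    grep ^lease /var/lib/dhcp/dhcpd.leases | cut -d ' ' -f 2 | sort -uV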
  • This searches the Apache error_log for each of the 5 most significant Apache error levels; if any are found, the date is cut from the output in order to sort and then print the most common occurrence of each error.


    4
    for i in emerg alert crit error warn ; do awk '$6 ~ /^\['$i'/ {print substr($0, index($0,$6)) }' error_log | sort | uniq -c | sort -n | tail -1; done
    zlemini · 2010-04-15 21:47:18 0

  • 2
    rpm -qa --qf "%-30{NAME} %-10{SIZE}\n" | sort -n | less
    betsubetsu · 2010-04-14 07:30:37 0
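
    With the name in the first column, sort -n treats every line as numerically equal and falls back to lexical order, so this lists packages by name rather than size. To order by size with this field layout, key on the second column:

    rpm -qa --qf "%-30{NAME} %-10{SIZE}\n" | sort -k2,2n | less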
  • It's all said in the title.


    2
    rpm -qa --qf "%-10{SIZE} %-30{NAME}\n" | sort -nr | less
    betsubetsu · 2010-04-14 07:28:41 0
  • This command will return a full list of Error 404 pages in the given access log. The following fields are passed to awk: hostname ($2), error code ($9), missing item ($7), referrer ($11). You can then send the output into a file (>> /path/to/file), which you can open with OpenOffice as a CSV; see the usage sketch after this entry.


    1
    sudo awk '($9 ~ /404/)' /var/log/httpd/www.domain-access_log | awk '{print $2,$9,$7,$11}' | sort | uniq -c
    ninjasys · 2010-04-09 10:31:50 3
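
    A hedged usage example of the redirection mentioned above (the output path is the description's placeholder):

    sudo awk '($9 ~ /404/)' /var/log/httpd/www.domain-access_log | awk '{print $2,$9,$7,$11}' | sort | uniq -c >> /path/to/file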
  • Finds the top ten pages returning an http response code of 404 in an apache log. (But see the caveat sketched after this entry.)


    8
    awk '$9 == 404 {print $7}' access_log | uniq -c | sort -rn | head
    zlemini · 2010-04-08 21:40:53 1
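
    One caveat: uniq -c only merges adjacent duplicates, so without a preceding sort the count for a page can be split across several lines. A hedged corrected variant:

    awk '$9 == 404 {print $7}' access_log | sort | uniq -c | sort -rn | head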
  • Get a list of all the unique hostnames from the apache configuration files. Handy to see what sites are running on a server. A slightly shorter version; a single-awk sketch follows this entry.


    2
    cat /etc/apache2/sites-enabled/* | egrep 'ServerAlias|ServerName' | tr -s ' ' | sed 's/^\s//' | cut -d ' ' -f 2 | sed 's/www.//' | sort | uniq
    chronosMark · 2010-04-08 15:50:34 0
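
    The same extraction can be done in a single awk pass; a hedged sketch assuming the hostname is always the second field on ServerName/ServerAlias lines:

    awk '/ServerName|ServerAlias/ {sub(/^www\./, "", $2); print $2}' /etc/apache2/sites-enabled/* | sort -u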
  • Most of the "most used commands" approaches do not consider pipes and other complexities. This approach handles pipes, process substitution with <(, command substitution with backticks or $(), and multiple commands separated by ;. The Perl regular expression splits each line on |, <(, ;, ` or $(, and picks the first word (excluding "do", in the case of for loops). Note: if you use lots of perl one-liners, the perl commands will be counted as well, since semicolon is used as a separator. The naive version is sketched after this entry for contrast.


    4
    history | perl -F"\||<\(|;|\`|\\$\(" -alne 'foreach (@F) { print $1 if /\b((?!do)[a-z]+)\b/i }' | sort | uniq -c | sort -nr | head
    alperyilmaz · 2010-04-08 13:46:09 1
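
    For contrast, the naive version this improves on only looks at the first word after the history number and misses everything behind a pipe or semicolon:

    history | awk '{print $2}' | sort | uniq -c | sort -rn | head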
  • Get a list of all the unique hostnames from the apache configuration files. Handy to see what sites are running on a server.


    0
    cat /etc/apache2/sites-enabled/* | egrep 'ServerAlias|ServerName' | tr -s " " | sed 's/^[ ]//g' | uniq | cut -d ' ' -f 2 | sed 's/www.//g' | sort | uniq
    chronosMark · 2010-04-08 08:51:17 1
  • Count the occurrences of the word 'Berlekamp' in the DjVu files in the current directory, printing file names from the one with the fewest to the one with the most occurrences.


    0
    find ./ -iname "*.djvu" -execdir perl -e '@s=`djvutxt \"$ARGV[0]\"\|grep -c Berlekamp`; chomp @s; print $s[0]; print " $ARGV[0]\n"' '{}' \;|sort -n
    unixmonkey4437 · 2010-04-07 11:15:26 0

  • 1
    cut -d\ -f 1 ~/.bash_history | sort | uniq -c | sort -rn | head -n 10 | sed 's/.*/ &/g'
    er0k · 2010-04-06 22:48:26 1

  • 8
    du -cks * | sort -rn | while read size fname; do for unit in k M G T P E Z Y; do if [ $size -lt 1024 ]; then echo -e "${size}${unit}\t${fname}"; break; fi; size=$((size/1024)); done; done
    askedrelic · 2010-04-05 17:09:14 0
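
    On GNU coreutils 7.5 or later, sort -h understands human-readable sizes, so a hedged shorter equivalent is:

    du -csh * | sort -rh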
  • Thanks for the submit! My alternative produces summaries only for directories. The original post additionally lists all files in the current directory; sometimes the files just clutter up the output. Once the big directory is located, *then* worry about which file(s) are consuming so much space.


    -1
    du -kd | egrep -v "/.*/" | sort -n
    rmbjr60 · 2010-03-30 15:40:35 0