Commands tagged sort (162)

  • Sort by the kth column, using : as the delimiter


    0
    sort -t: -k 2 names.txt
    ankush108 · 2012-06-26 19:15:30 0
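To see what this does, here is a quick try-out; the filename and its contents are made up, since the post doesn't show the real names.txt:

```shell
# Hypothetical sample data; the real names.txt is not shown in the post.
printf 'carol:30\nalice:12\nbob:25\n' > names.txt

# Sort by the second :-separated field (lexical comparison by default).
sort -t: -k2 names.txt
```

If the column is numeric, append n to the key (e.g. -k2,2n), since the default comparison is lexical.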
  • cut -f1,2 gives the /16 range; cut -f1,2,3 gives the /24 range; cut -f1,2,3,4 gives the full /32 address.


    0
    netstat -tn | grep :80 | awk '{print $5}'| grep -v ':80' | cut -f1 -d: |cut -f1,2,3 -d. | sort | uniq -c| sort -n
    krishnan · 2012-06-26 08:29:37 0
  • Here's a version that works on OS X.


    0
    find . -type f -exec stat -f '%m %N' {} \; | sort -n
    cellularmitosis · 2012-06-20 00:13:52 0
  • Per-country GET report, based on the access log. Easy to adapt to report unique IPs instead.


    -1
    cat /var/log/nginx/access.log | grep -oe '^[0-9.]\+' | perl -ne 'system("geoiplookup $_")' | grep -v found | grep -oe ', [A-Za-z ]\+$' | sort | uniq -c | sort -n
    theist · 2012-05-08 13:28:25 0
  • Search for files and list the 20 largest.
    find . -type f gives us a list of files, recursively, starting from here (.).
    -print0 | xargs -0 du -h separates the names of files with NUL characters, so we're not confused by spaces; xargs then runs du to find their sizes (in human-readable form: 64M, not 64123456).
    | sort -hr uses sort to arrange the list in size order; sort -h knows that 1M is bigger than 9K.
    | head -20 finally selects only the top twenty of the list.


    9
    find . -type f -print0 | xargs -0 du -h | sort -hr | head -20
    flatcap · 2012-03-30 10:21:12 3
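The human-numeric comparison at the heart of this pipeline can be seen in isolation; the sizes below are arbitrary:

```shell
# sort -h understands human-readable size suffixes: 64M > 1M > 512K > 9K,
# even though a plain lexical sort would put 9K last.
printf '9K\n1M\n64M\n512K\n' | sort -hr
```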
  • Take a file and give a list of all the words in it, in order of increasing length. First use tr to map all alphabetic characters to lower case and strip out any punctuation: A-Z become a-z, and ,."()?!;: all become \n (newline). I've ignored - (hyphen) and ' (apostrophe) because they occur inside words. Next use bash to print the length ${#w} and the word. Finally, sort the list numerically (sort -n) and remove any duplicates (sort -u). Note: sort -nu performs strangely on this list: it outputs only one word per length.


    0
    for w in $(tr 'A-Z ,."()?!;:' 'a-z\n' < sample.txt); do echo ${#w} $w; done | sort -u | sort -n
    flatcap · 2012-03-15 14:14:11 2
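The "sort -nu performs strangely" note can be demonstrated directly: with -n, lines whose leading numbers are equal compare as numeric duplicates, so -u keeps only one word per length. The sample data here is made up:

```shell
# The two-step sort keeps both 3-letter words...
printf '3 cat\n3 dog\n1 a\n' | sort -u | sort -n

# ...but -nu treats '3 cat' and '3 dog' as duplicates and drops one.
printf '3 cat\n3 dog\n1 a\n' | sort -nu
```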

  • 0
    du -s $(ls -l | grep '^d' | awk '{print $9}') | sort -nr
    j3ffyang · 2012-03-15 09:04:13 0
  • sort is slow by default. This tells sort to use a buffer equal to half of the available free memory. It will also use multiple processes for the sort, equal to the number of CPUs on your machine (if greater than 1). For me, it is orders of magnitude faster. If you put this in your bash_profile or startup file, it will be set correctly when bash is started.
    sort -S1 --parallel=2 <(echo) &>/dev/null && alias sortfast='sort -S$(($(sed '\''/MemF/!d;s/[^0-9]*//g'\'' /proc/meminfo)/2048)) $([ `nproc` -gt 1 ]&&echo -n --parallel=`nproc`)'
    Alternative:
    echo|sort -S10M --parallel=2 &>/dev/null && alias sortfast="command sort -S$(($(sed '/MemT/!d;s/[^0-9]*//g' /proc/meminfo)/1024-200)) --parallel=$(($(command grep -c ^proc /proc/cpuinfo)*2))"


    3
    alias sortfast='sort -S$(($(sed '\''/MemF/!d;s/[^0-9]*//g'\'' /proc/meminfo)/2048)) $([ `nproc` -gt 1 ]&&echo -n --parallel=`nproc`)'
    AskApache · 2012-02-28 01:34:58 3
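The buffer-size arithmetic, unpacked with a made-up MemFree value (the sed keeps only the digits from the "MemFree: ... kB" line of /proc/meminfo):

```shell
# Hypothetical MemFree value, as reported in kB by /proc/meminfo.
mem_free_kb=8167348

# kB -> MB is /1024; half of that amount is /2048.
half_of_free_mb=$(( mem_free_kb / 2048 ))
echo "$half_of_free_mb"   # 3987
```

Note that without a suffix, GNU sort interprets -S in units of 1024 bytes, so appending M to the computed number (as the alternative's -S10M style does) is the safer spelling.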

  • 1
    sed -e 's/[;|][[:space:]]*/\n/g' .bash_history | cut --delimiter=' ' --fields=1 | sort | uniq --count | sort --numeric-sort --reverse | head --lines=20
    WissenForscher · 2012-02-17 23:34:16 0
  • See who is using a specific port; especially useful when you're on AIX. On Ubuntu, for example, this can easily be seen with the netstat command alone.


    0
    netstat -Aan | grep '\.80' | grep -v 127.0.0.1 | grep EST | awk '{print $6}' | cut -d "." -f1,2,3,4 | sort | uniq
    janvanderwijk · 2012-02-03 13:54:11 0
  • A different approach to the problem: maintain a small sorted list, print the largest as we go, then the top 10 at the end. I often find that the find and sort take a long time, and the large file might appear near the start of the find. By printing as we go, I get better feedback. The sort used in this will be much slower on perls older than 5.8.


    -2
    find . -type f|perl -lne '@x=sort {$b->[0]<=>$a->[0]}[(stat($_))[7],$_],@x;splice(@x,11);print "@{$x[0]}";END{for(@x){print "@$_"}}'
    bazzargh · 2012-01-08 14:43:43 0
  • Tested in bash on AIX & Linux, used for WAS versions 6.0 & up. Sorts by node name. Useful when you have vertically-stacked instances of WAS/Portal. Cuts out all the classpath/optional parameter clutter that makes a simple "ps -ef | grep java" so difficult to sort through.


    0
    ps -ef | grep [j]ava | awk -F ' ' ' { print $1," ",$2,"\t",$(NF-2),"\t",$(NF-1),"\t",$NF } ' | sort -k4
    drockney · 2012-01-05 16:05:48 0
  • Save some CPU, and some PIDs. :)


    5
    awk -F ':' '{print $1 | "sort";}' /etc/passwd
    atoponce · 2011-12-20 12:46:52 0
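The trick here is awk's ability to pipe its own print output into a command: awk opens a single pipe to sort, so no shell pipeline stage is needed. A minimal illustration with inline data (the sample lines are made up):

```shell
# awk opens one pipe to "sort" and funnels every first field through it;
# the sorted output appears when awk exits and the pipe is flushed.
printf 'carol:3\nalice:1\nbob:2\n' | awk -F ':' '{ print $1 | "sort" }'
```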

  • 0
    cut -d: -f1 /etc/passwd | sort
    snaewe · 2011-12-20 11:50:13 0

  • 6
    cut -d: -f1 /etc/passwd | sort
    dan · 2011-12-20 10:46:52 5
  • Seeing that _sort_ is being used anyway, why not just _use_ it. ;)


    -2
    sort --random-sort file
    arld101 · 2011-12-10 20:28:54 1
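One caveat worth knowing: GNU sort --random-sort shuffles by hashing the sort keys with a random seed, so identical lines always end up adjacent (unlike shuf, which permutes freely):

```shell
# The two 'a' lines always stay together; only the group order is random.
printf 'a\nb\na\n' | sort -R
```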
  • The sort command can sort month-wise (by the first three letters of each month); see the sample output for clarification. Is the sorting stable? No. Take note if that matters to you: the sample output suggests that sort performs an unstable sort (see the relative order of the two 'feb' entries).


    2
    sort -M filename
    b_t · 2011-12-10 12:50:30 0
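A quick month-sort demonstration (the data is made up; month matching depends on the locale's month names):

```shell
# -M recognizes three-letter month abbreviations and orders them Jan..Dec.
printf 'Mar\nJan\nDec\nFeb\n' | sort -M
```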
  • In this case I'm just grabbing the next level of subdirectories (and same-level regular files) with the --max-depth=1 flag. Leaving out that flag will give you finer resolution. Note that you have to use the -h switch with both 'du' and 'sort'.


    13
    du -h --max-depth=1 |sort -rh
    jambino · 2011-11-15 20:30:00 6
  • As per eightmillion's comment. Simply economical :)


    1
    du -h | sort -hr
    mooselimb · 2011-11-06 23:15:36 0
  • Enhancement of the 'busy' command originally posted by busybee: fewer characters, no escaping issues, and, most importantly, it excludes small files (opening a 5-line file isn't that persuasive, I think ;) ). This makes an alias for a command named 'busy', which opens a random file in /usr/include at a random line with vim.


    0
    alias busy='rnd_file=$(find /usr/include -type f -size +5k | sort -R | head -n 1) && vim +$((RANDOM%$(wc -l $rnd_file | cut -f1 -d" "))) $rnd_file'
    frntn · 2011-10-16 00:05:59 0
  • If both file1 and file2 are already sorted: comm -13 file1 file2 > file-new


    -2
    comm -13 <(sort file1) <(sort file2) > file-new
    daa · 2011-10-01 18:07:54 0
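A tiny worked example; the file names and contents are assumptions for illustration:

```shell
printf 'a\nb\nc\n' > file1
printf 'b\nc\nd\n' > file2

# -1 suppresses lines unique to file1, -3 suppresses lines common to both:
# what remains is the lines that appear only in file2.
comm -13 file1 file2
```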
  • Find which directories on your system contain a lot of files. Edit: much shorter and better with the -n switch.


    0
    sudo find / -type f | perl -MFile::Basename -ne '$counts{dirname($_)}++; END { foreach $d (sort keys %counts) {printf("%d\t%s\n",$counts{$d},$d);} }'|sort -rn | tee /tmp/sortedfilecount.out | head
    tamouse · 2011-09-14 19:41:19 0
  • PmWiki stores wiki pages as Group.Name. Simply split the directory listing and count the frequency of group occurrences.


    -2
    cd /path/to/pmwiki/wiki.d;/bin/ls -1 | perl -ne 'my ($group,$name)=split(/\./);$counts{$group}++;' -e 'END { foreach $group (sort keys %counts) {printf("%d\t%s\n",$counts{$group},$group);} }'|sort -rn
    tamouse · 2011-09-14 19:33:39 0
  • (separator = $IFS)


    2
    ps aux | sort -nk 6
    totti · 2011-08-16 11:04:45 0
  • Tells you everything you could ever want to know about all files and subdirectories. Great for package creators. Totally secure too. On my Slackware box, this gets set upon login: LS_OPTIONS='-F -b -T 0 --color=auto' and alias ls='/bin/ls $LS_OPTIONS', which works great.


    2
    lsr() { find "${@:-.}" -print0 |sort -z |xargs -0 ls $LS_OPTIONS -dla; }
    h3xx · 2011-08-15 03:10:58 0