Commands using sort (800)


  • 1
    grep -R Subject /var/spool/exim/input/ | sed s/^.*Subject:\ // | sort | uniq -c | sort -n > ~/email_sort.$(date +%m.%d.%y).txt
    unixmonkey27702 · 2011-11-24 14:13:29 3

  • 1
    du --max-depth=1 | sort -nr | awk ' BEGIN { split("KB,MB,GB,TB", Units, ","); } { u = 1; while ($1 >= 1024) { $1 = $1 / 1024; u += 1 } $1 = sprintf("%.1f %s", $1, Units[u]); print $0; } '
    threv · 2011-12-08 17:43:09 4
  • This one-line Perl script displays file sizes in all directories on a server, from smallest to largest (a numfmt-based alternative follows this entry).


    1
    du -k | sort -n | perl -ne 'if ( /^(\d+)\s+(.*$)/){$l=log($1+.1);$m=int($l/log(1024)); printf ("%6.1f\t%s\t%25s %s\n",($1/(2**(10*$m))),(("K","M","G","T","P")[$m]),"*"x (1.5*$l),$2);}' | more
    Q_Element · 2012-02-07 15:49:19 10
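
    On GNU systems a similar human-readable listing can be had without the Perl, assuming coreutils 8.21 or later for numfmt; a minimal sketch:

    du -B1 --max-depth=1 | sort -n | numfmt --to=iec --field=1
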
  • It grabs the PIDs of the top resource users with $(ps -eo pid,pmem,pcpu| sort -k 3 -r|grep -v PID|head -10). The sort -k 3 sorts by the third field, which is CPU; change this to 2 and it will sort by memory instead (that variant is spelled out after this entry). The rest of the command uses diff to display the output of two commands side by side (the -y flag). I chose some good ones for ps. pidstat comes with the sysstat package (sar, mpstat, iostat, pidstat), so if you don't have it, you should install it. I should probably take off the timestamp.


    1
    for i in $(ps -eo pid,pmem,pcpu| sort -k 3 -r|grep -v PID|head -10|awk '{print $1}');do diff -yw <(pidstat -p $i|grep -v Linux) <(ps -o euser,pri,psr,pmem,stat -p $i|tail);done
    bob_the_builder · 2012-02-16 20:54:32 23
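
    The memory-sorted variant the description mentions, with the sort key changed from 3 to 2:

    for i in $(ps -eo pid,pmem,pcpu| sort -k 2 -r|grep -v PID|head -10|awk '{print $1}');do diff -yw <(pidstat -p $i|grep -v Linux) <(ps -o euser,pri,psr,pmem,stat -p $i|tail);done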

  • 1
    sed -e 's/[;|][[:space:]]*/\n/g' .bash_history | cut --delimiter=' ' --fields=1 | sort | uniq --count | sort --numeric-sort --reverse | head --lines=20
    WissenForscher · 2012-02-17 23:34:16 5

  • 1
    find <directory> -type f -printf "%T@\t%p\n"|sort -n|cut -f2|xargs ls -lrt
    rik · 2012-03-02 12:51:06 3
  • This works in combination with http://www.commandlinefu.com/commands/view/10496/identify-exported-sonames-in-a-path as it reports the NEEDED entries present in the files within a given path. You can then compare it with the libraries that are exported to make sure that, when cross-building a firmware image, you're not bringing in dependencies from the build host (one way to do the comparison is sketched after this entry). The short version of it, as can be seen in the same output, is scanelf -RBnq -F "+n#f" $1 | tr ',' '\n' | sort -u


    1
    scanelf --nobanner --recursive --quiet --needed --format "+n#F" $1 | tr ',' '\n' | sort -u
    Flameeyes · 2012-03-29 18:30:45 3
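
    For the comparison step, comm -23 prints the NEEDED entries that have no matching exported soname; a sketch, where /image/root is a hypothetical firmware tree and exported.txt holds the sorted output of the companion command linked above:

    comm -23 <(scanelf -RBnq -F "+n#f" /image/root | tr ',' '\n' | sort -u) <(sort -u exported.txt)
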
  • from my bashrc ;)


    1
    find . -mount -type f -printf "%k %p\n" | sort -rg | cut -d \ -f 2- | xargs -I {} du -sh {} | less
    bashrc · 2012-03-30 07:37:52 3
  • Or tree -ifsF --noreport .|sort -n -k2|grep -v '/$' (this hides the rows that are just directory names).


    1
    tree -ifs --noreport .|sort -n -k2
    knoppix5 · 2012-05-04 09:18:39 6
  • Shows the ten commands with the most open files: lsof lists open files, awk keeps the command name, and uniq -c counts them per command (a per-user variant is sketched after this entry).


    1
    lsof +c 15 | awk '{print $1}' | sort | uniq -c | sort -rn | head
    SEJeff · 2012-05-25 16:31:46 3
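
    The same counting pattern works per user instead of per command, assuming the default lsof column order (COMMAND PID USER ...):

    lsof | awk '{print $3}' | sort | uniq -c | sort -rn | head
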
  • This command gives a human-readable result without messing up the sorting (on newer systems sort -h does this directly; see the note after this entry).


    1
    for i in G M K; do du -hx /var/ | grep [0-9]$i | sort -nr -k 1; done | less
    jlaunay · 2012-06-26 22:57:17 6
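
    If your sort supports -h (GNU coreutils 7.5 or later), the loop collapses to a single pipeline:

    du -hx /var/ | sort -hr | less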

  • 1
    tcpdump -ntr NAME_OF_CAPTURED_FILE.pcap 'tcp[13] = 0x02 and dst port 80' | awk '{print $4}' | tr . ' ' | awk '{print $1"."$2"."$3"."$4}' | sort | uniq -c | awk ' {print $2 "\t" $1 }'
    efuoax · 2012-08-22 21:26:10 7
  • The lastb command presents you with the history of failed login attempts (stored in /var/log/btmp). The file it reads is read/write by root only by default. This can be quite an extensive list, with lots of bots hammering away at your machine. Sometimes it is more important to see the scale of things, or in this case the volume of failed logins tied to each source IP. The awk statement determines if the 3rd field is an IP address and, if so, increments the running count of failed login attempts associated with it. When done, it prints the IP and count. The sort statement sorts numerically (-n) by column 3 (-k 3), so you can see the most aggressive sources of login attempts. Note that the ':' character is the 2nd column, and that the -n and -k can be combined to -nk. A shorter equivalent is sketched after this entry. Please be aware that the btmp file will contain every instance of a failed login unless explicitly rolled over. It should be safe to delete/archive this file after you've processed it.


    1
    sudo lastb | awk '{if ($3 ~ /([[:digit:]]{1,3}\.){3}[[:digit:]]{1,3}/)a[$3] = a[$3]+1} END {for (i in a){print i " : " a[i]}}' | sort -nk 3
    sgowie · 2012-09-11 14:51:10 4
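
    A shorter equivalent, assuming the host is always the 3rd field of lastb's output:

    sudo lastb | awk '{print $3}' | grep -E '^([0-9]{1,3}\.){3}[0-9]{1,3}$' | sort | uniq -c | sort -rn
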
  • On OS X you will have to "sudo -s" your way to happiness, since it will otherwise give a few "Permission denied" errors before finally spitting out the results. On OS X the directory structure starts with the Users directory, and the command recurses from there. Your Lord and master, Mematron


    1
    sudo -s du -sm /Users/* | sort -nr | head -n 10
    mematron · 2012-09-13 10:15:23 4
  • You can simply run "largest" to list the top 10 files/directories in ./, or you can pass two parameters, the first being the directory and the second being the number of files to display (usage examples follow this entry). Best to put this in your bashrc or bash_profile file.


    1
    largest() { dir=${1:-"./"}; count=${2:-"10"}; echo "Getting top $count largest files in $dir"; du -sx "$dir/"* | sort -nk 1 | tail -n $count | cut -f2 | xargs -I file du -shx file; }
    jhyland87 · 2013-01-21 09:45:21 7
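
    Usage, per the description; the defaults are the current directory and a count of 10 (the directory below is just an example):

    largest
    largest /var/log 5
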
  • Enhanced version: fixes the sort to handle human-readable sizes, and keeps only MB or GB entries without being fooled by a G or an M appearing in an entry's name.


    1
    du --max-depth=1 -h * |sort -h -k 1 |egrep '(M|G)\s'
    TerDale · 2013-02-14 08:56:56 6
  • Replace \-dev with whatever you want to search for (a per-package breakdown is sketched after this entry).


    1
    dpkg-query -Wf '${Installed-Size}\t${Package}\n' | grep "\-dev" | sort -n | awk '{ sum+=$1} END {print sum/1024 "MB"}'
    threv · 2013-02-15 20:29:18 4
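
    To rank the individual -dev packages by size instead of summing them, drop the awk and reverse the sort; a sketch:

    dpkg-query -Wf '${Installed-Size}\t${Package}\n' | grep -- "-dev" | sort -rn | head
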
  • Interesting to see which packages are larger than the kernel package. Useful for understanding which RPMs might be candidates to remove if drive space is restricted.


    1
    rpm -qa --queryformat '%{size} %{name}-%{version}-%{release}\n' | sort -k 1,1 -rn | nl | head -16
    mpb · 2013-03-19 21:10:54 6
  • Extracts the unique values of column 6 from a CSV file; change -f6 to pull a different column (a counting variant follows this entry).


    1
    cut -d',' -f6 file.csv | sort | uniq
    richie · 2013-04-10 14:05:32 6
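
    Following the counting idiom used throughout this page, uniq -c also shows how often each value occurs:

    cut -d',' -f6 file.csv | sort | uniq -c | sort -rn
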
  • Makes this usable on OS X with filenames containing spaces. Note: it will still break if filenames contain newlines... possible, but who does that?!


    1
    svn ls -R | egrep -v -e "\/$" | tr '\n' '\0' | xargs -0 svn blame | awk '{print $2}' | sort | uniq -c | sort -nr
    rymo · 2013-04-10 19:37:53 5
  • parallel can be installed on your central node and used to run a command multiple times. In this example, multiple ssh connections are used to run commands; -j is the number of jobs to run at the same time. The result can then be piped to commands that perform the "reduce" stage (sort then uniq in this example). This example assumes "keyless ssh login" has been set up between the central node and all machines in the cluster; a further map-reduce example follows this entry. bashreduce may also do what you want.


    1
    parallel -j 50 ssh {} "ls" ::: host1 host2 hostn | sort | uniq -c
    macoda · 2013-04-12 11:56:41 7
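
    Another map-reduce in the same shape, counting how many hosts run each kernel version (hostnames here are placeholders, and keyless ssh is assumed):

    parallel -j 50 ssh {} "uname -r" ::: host1 host2 hostn | sort | uniq -c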

  • 1
    awk '{print $1}' ~/.bash_history | sort | uniq -c | sort -rn | head -n 10
    nesses · 2013-05-03 16:24:30 6

  • 1
    du -m --max-depth=1 [DIR] | sort -nr
    Paulus · 2013-07-26 07:04:41 12
  • The other commands were good, but they included packages that had been installed and then removed. This command shows only packages that are currently installed, sorts them from smallest to largest, and formats the sizes to be human readable.


    1
    dpkg-query --show --showformat='${Package;-50}\t${Installed-Size}\n' `aptitude --display-format '%p' search '?installed!?automatic'` | sort -k 2 -n | grep -v deinstall | awk '{printf "%.3f MB \t %s\n", $2/(1024), $1}'
    EvilDennisR · 2013-07-26 23:18:20 13
  • Compute the MD5 checksums for the contents of two mirrored directories, then sort and diff the results. If everything matches, nothing is returned. Otherwise, any checksums which do not match, or which exist in one tree but not the other, are returned. As you might imagine, the output is useful only if no errors are found, because only the checksums, not the filenames, are returned. I hope to address this, or that someone else will! (One approach is sketched after this entry.)


    1
    diff <(sort <(md5deep -r /directory/1/) |cut -f1 -d' ') <(sort <(md5deep -r /directory/2/) |cut -f1 -d' ')
    unixmonkey64021 · 2013-08-18 22:13:07 6
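
    One way to keep filenames in the comparison: md5deep's -l option emits relative paths, so running it from inside each tree lets matching files line up; a sketch, assuming your md5deep build supports -l:

    diff <(cd /directory/1 && md5deep -r -l . | sort -k2) <(cd /directory/2 && md5deep -r -l . | sort -k2)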