
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using sort - 637 results
sort -t. -k1,1n -k2,2n -k3,3n -k4,4n
equery s | sed 's/(\|)/ /g' | sort -n -k 9 | gawk '{print $1" "$9/1048576"m"}'
2009-07-30 01:12:10
User: Alanceil
Functions: gawk sed sort
0

On a Gentoo system, this command will tell you which packages you have installed and sort them by how much space they consume. Good for finding out space-hogs when tidying up disk space.
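
To look only at the biggest space-hogs, the same pipeline can be capped with tail (a small variation on the entry above, not part of the original):

equery s | sed 's/(\|)/ /g' | sort -n -k 9 | gawk '{print $1" "$9/1048576"m"}' | tail -n 10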

svn ls -R | egrep -v -e "\/$" | xargs svn blame | awk '{print $2}' | sort | uniq -c | sort -r
2009-07-29 02:10:45
User: askedrelic
Functions: awk egrep ls sort uniq xargs
Tags: svn count
16

I'm working in a group project currently and annoyed at the lack of output by my teammates. Wanting hard metrics of how awesome I am and how awesome they aren't, I wrote this command up.

It will print a full repository listing of all files, remove the directories which confuse blame, run svn blame on each individual file, and tally the resulting line counts. It seems quite slow, depending on your repository location, because blame must hit the server for each individual file. You can remove the -R on the first part to print out the tallies for just the current directory.
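
As mentioned above, dropping the -R prints the tallies for just the current directory:

svn ls | egrep -v -e "\/$" | xargs svn blame | awk '{print $2}' | sort | uniq -c | sort -r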

DD=`cat /etc/my.cnf | sed "s/#.*//g;" | grep datadir | tr '=' ' ' | gawk '{print $2;}'` && ( cd $DD ; find . -mindepth 2 | grep -v db\.opt | sed 's/\.\///g; s/\....$//g; s/\//./;' | sort | uniq | tr '/' '.' | gawk '{print "CHECK TABLE","`"$1"`",";";}' )
2009-07-25 03:42:31
User: atcroft
Functions: cd find gawk grep sed sort tr uniq
-1

This command will generate "CHECK TABLE `db_name.table_name` ;" statements for all tables present in databases on a MySQL server, which can be piped into the mysql command. (Can also be altered to perform OPTIMIZE and REPAIR functions.)

Tested on MySQL 4.x and 5.x systems in a Linux environment under bash.
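
As the description notes, the same pipeline can be altered to optimize instead of check by swapping the final gawk stage (a sketch of that variation, not part of the original):

gawk '{print "OPTIMIZE TABLE","`"$1"`",";";}'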

cat /var/log/secure.log | awk '{print substr($0,0,12)}' | uniq -c | sort -nr | awk '{printf("\n%s ",$0) ; for (i = 0; i<$1 ; i++) {printf("*")};}'
2009-07-24 07:20:06
User: knassery
Functions: awk cat sort uniq
15

Busiest seconds:

cat /var/log/secure.log | awk '{print substr($0,0,15)}' | uniq -c | sort -nr | awk '{printf("\n%s ",$0) ; for (i = 0; i<$1 ; i++) {printf("*")};}'
sort -R file.txt | head -1
for i in `rpm -qva | sort` ; do echo "===== $i =====" ; rpm -qvl $i ; done > /tmp/pkgdetails
2009-07-14 20:34:55
User: tkunz
Functions: echo rpm sort
0

This will create the file /tmp/pkgdetails, which will contain a listing of all the files installed on your RPM-based system (RedHat, Fedora, CentOS, etc). Useful for finding out which package installed which files, should the RPM database become corrupted.
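
One way to use the resulting file is to look up which package a given file belongs to by printing the "=====" header that precedes it (bin/rsync below is just an example path):

awk '/^=====/ {pkg=$2} /bin\/rsync/ {print pkg}' /tmp/pkgdetails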

find . -name '*.html' -print0| xargs -0 -L1 cat |sed "s/[\"\<\>' \t\(\);]/\n/g" |grep "http://" |sort -u
2009-07-14 07:00:15
User: jamespitt
Functions: cat find grep sed sort xargs
4

Just a handy way to get all the unique links from all the HTML files inside a directory. Useful in scripts, etc.
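
If the pages also link via https, a slight variation of the grep stage (an assumption, not in the original) picks up both schemes:

find . -name '*.html' -print0| xargs -0 -L1 cat |sed "s/[\"\<\>' \t\(\);]/\n/g" |grep -E "https?://" |sort -u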

infile=$1; for i in $(cat $infile); do echo $i | tr "," "\n" | sort -n | tr "\n" "," | sed "s/,$//"; echo; done
2009-07-12 21:23:37
User: iframe
Functions: cat echo sed sort tr
Tags: cat bash sort sed tr
0

Save the script as: sort_file

Usage: sort_file < sort_me.csv > out_file.csv

This script was originally posted by Admiral Beotch in LinuxQuestions.org on the Linux-Software forum.

I modified this script to make it more portable.

lsof |awk ' {if ( $0 ~ /home/) print substr($0, index($0,"/home") ) }'|cut -d / -f 1-4|sort|uniq -c|sort -bgr
du -ms * | sort -nk1
function duf { du -sk "$@" | sort -n | while read size fname; do for unit in k M G T P E Z Y; do if [ $size -lt 1024 ]; then echo -e "${size}${unit}\t${fname}"; break; fi; size=$((size/1024)); done; done; }
du -ms * .[^.]*| sort -nk1
2009-07-01 13:38:13
User: ioggstream
Functions: du sort
3

Using MB it's still readable ;) A variation using brace expansion:

$ du -ms {,.[^.]}* | sort -nk1

function duf { du -k $@ | sort -rn | perl -ne '($s,$f)=split(/\t/,$_,2);for(qw(K M G T)){if($s<1024){$x=($s<10?"%.1f":"%3d");printf("$x$_\t%s",$s,$f);last};$s/=1024}' }
svn log | tr -d '\n' | sed -r 's/-{2,}/\n/g' | sed -r 's/ \([^\)]+\)//g' | sed -r 's/^r//' | sed -r "s/[0-9]+ lines?//g" | sort -g
2009-06-29 12:48:30
User: Cowboy
Functions: sed sort tr
3

Note that you also have the --xml option ;)

echo $numbers | sed "s/\( \|$\)/\n/g" | sort -nu | tr "\n" " " | sed -e "s/^ *//" -e "s/ $//"
2009-06-24 15:12:17
User: chickenzilla
Functions: echo sed sort tr
-1

You can replace "sort -nu" with "sort -u" for a sorted word list, or with "sort -R" for randomly-ordered output.

(edit: corrected)
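
For example, the sorted word-list variant described above:

echo $numbers | sed "s/\( \|$\)/\n/g" | sort -u | tr "\n" " " | sed -e "s/^ *//" -e "s/ $//"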

find . -depth -type d -exec du -s {} \; | sort -k1nr
2009-06-23 20:52:35
User: mohan43u
Functions: du find sort
Tags: sort find du
4

A somewhat faster version for seeing the size of our directories. Sizes are in kilobytes. To view the smallest first, change '-k1nr' to '-k1n'.
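
That smallest-first variant spelled out:

find . -depth -type d -exec du -s {} \; | sort -k1n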

gdiff --unified=10000 input.file1 input.file2 | egrep -v "(^\+[a-z]|^\-[a-z])"| sort > outputfile.sorted
2009-06-18 20:35:00
User: slashdot
Functions: egrep sort
-1

This command makes it easier to select only the common items between the two files being compared. If your lines start with characters other than lowercase a-z, adjust the regex appropriately. The unified context has been set to no more than 10000 lines and should be adjusted as needed.

ls -s | sort -nr | more
grep -h -o '<[^/!?][^ >]*' * | sort -u | cut -c2-
2009-06-17 00:22:18
User: thebodzio
Functions: cut grep sort
Tags: sort grep cut
2

This set of commands was very convenient for me when I was preparing some XML files for typesetting a book. I wanted to check what styles I had to prepare but couldn't remember all the tags that I used. This one saved me from error-prone browsing of all my files. It should also be useful if one tries to process XML files with XSL, or when using one's own XML application.

sort -k1.x
2009-06-16 00:04:21
User: leper421
Functions: sort
3

Tells sort to ignore all characters before the Xth position in the first field per line. If you have a list of items one per line and want to ignore the first two characters for sorting purposes, you would type "sort -k1.3". Change the "1" to change the field being sorted. The decimal value is the offset in the specified field to sort by.
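
A concrete use of the example from the description, with list.txt standing in for any file whose lines should be sorted while ignoring their first two characters:

sort -k1.3 list.txt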

( nw=192.168.0 ; h=1; while [ $h -lt 255 ] ; do ( ping -c2 -i 0.2 -W 0.5 -n $nw.$h & ); h=$[ $h + 1 ] ; done ) | awk '/^64 bytes.*/ { gsub( ":","" ); print $4 }' | sort -u
2009-06-07 15:14:46
Functions: awk ping sort
3

What do you do when nmap is not available and you want to see the hosts responding to an ICMP echo request? This one-liner prints the IPv4 address of every host that responds.
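
The nw variable holds the /24 prefix being swept, so scanning a different network only requires changing it (10.0.0 below is an arbitrary example):

( nw=10.0.0 ; h=1; while [ $h -lt 255 ] ; do ( ping -c2 -i 0.2 -W 0.5 -n $nw.$h & ); h=$[ $h + 1 ] ; done ) | awk '/^64 bytes.*/ { gsub( ":","" ); print $4 }' | sort -u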

ls -la | sort -k 5bn
2009-06-07 14:35:17
Functions: ls sort
6

Sort the ls output of all files in the current directory by size, in ascending order

Just the 20 biggest ones:

ls -la | sort -k 5bn | tail -n 20

A variant for the current directory tree with subdirectories and pretty columns is:

find . -type f -print0 | xargs -0 ls -la | sort -k 5bn | column -t

And finding the subdirectories consuming the most space with displayed block size 1k:

du -sk ./* | sort -k 1bn | column -t
svn log -v -r{2009-05-21}:HEAD | awk '/^r[0-9]+ / {user=$3} /yms_web/ {if (user=="george") {print $2}}' | sort | uniq
2009-06-05 14:07:28
User: jemptymethod
Functions: awk sort
Tags: svn awk log
4

Just change the date following the -r flag and/or the user name in the user== conditional statement, and substitute yms_web with the name of your module.
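
Following those instructions, here is a version with the placeholders swapped out (the date, the user alice and the module my_module are all examples):

svn log -v -r{2011-01-01}:HEAD | awk '/^r[0-9]+ / {user=$3} /my_module/ {if (user=="alice") {print $2}}' | sort | uniq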

URL=http://svn.example.org/project; diff -u <(TZ=UTC svn -q log -r1:HEAD $URL | grep \|) <(TZ=UTC svn log -q $URL | grep \| | sort -k3 -t \|)
2009-06-03 14:26:55
User: sunny256
Functions: diff grep sort
Tags: bash svn
2

Lists revisions in a Subversion repository with a timestamp that doesn't follow the revision numbering order. If everything is OK, nothing is displayed.