
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions,…).


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try and put an end to the spamming, new commands require moderation before they will appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands using sort - 593 results
svn stat -u | sort | sed -e "s/^M.*/\o033[31m&\o033[0m/" -e "s/^A.*/\o033[34m&\o033[0m/" -e "s/^D.*/\o033[35m&\o033[0m/"
2010-03-26 15:44:04
Functions: sed sort stat
Tags: bash svn sed
1

Use color escape sequences and sed to colorize the output of svn stat -u.

Colors: http://www.faqs.org/docs/abs/HTML/colorizing.html

svn stat characters: http://svnbook.red-bean.com/en/1.4/svn-book.html#svn.ref.svn.c.status

GNU Extensions for Escapes in Regular Expressions: http://www.gnu.org/software/sed/manual/html_node/Escapes.html
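
The \o033[31m, \o033[34m and \o033[35m sequences are plain ANSI colour escapes (31 red for M, 34 blue for A, 35 magenta for D). A quick way to preview them, assuming a bash-style echo -e:

echo -e "\033[31mM modified\033[0m \033[34mA added\033[0m \033[35mD deleted\033[0m"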

du -hs *|grep M|sort -n
2010-03-25 19:20:24
User: tuxlifan
Functions: du grep sort
3

This is easy to type if you are looking for a few (hundred) "missing" megabytes (and don't mind the occasional K slipping in)...

A variation without false positives and also finding gigabytes (but - depending on your keyboard setup - more painful to type):

du -hs *|grep -P '^(\d|,)+(M|G)'|sort -n

(NOTE: you might want to replace the ',' according to your locale!)

Don't forget that you can modify the globbing as needed (e.g. '.[^\.]* *' to include hidden files and directories with bash; see the sketch below).

At its core this is similar to:

http://www.commandlinefu.com/commands/view/706/show-sorted-list-of-files-with-sizes-more-than-1mb-in-the-current-dir
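
Putting the globbing suggestion above together with the stricter pattern, a rough sketch (assuming bash globbing and GNU grep -P; 2>/dev/null hides du's complaint when no hidden entries match the glob):

du -hs .[^\.]* * 2>/dev/null | grep -P '^(\d|,)+(M|G)' | sort -n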

rpm --querytags | egrep -v HEADERIMMUTABLE | sort | while read tag ; do rpm -q --queryformat "$tag: [%{$tag} ]\n" -p $SomeRPMfile ; done
2010-03-25 05:40:48
Functions: egrep read rpm sort
0

If you want to relocate a package on your own, or you just want to know what those PREIN/UN and POSTIN/UN scripts will do, this will dump out all that detail simply.

You may want to expand the egrep to filter out other verbose tags like CHANGELOGTEXT etc., as your needs require.

It isn't clear, but the formatting around $tag is important: %{$tag} just prints out the first line, while [%{$tag }] iterates through multi-line output, joining the lines with a space (yes, there's a space between the g and } characters). To break each entry onto its own line, use [%{$tag\n}], but the output will be long.

This is aside from rpm2cpio | cpio -ivd to extract the package files.
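
To see the difference the brackets make, compare the two forms on a multi-valued tag such as FILENAMES (the package file name here is a placeholder):

rpm -qp --queryformat "%{FILENAMES}\n" foo-1.0.rpm      # first entry only
rpm -qp --queryformat "[%{FILENAMES} ]\n" foo-1.0.rpm   # all entries, space-separated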

LC_ALL=C sort file | uniq -c | sort -n -k1 -r
find /dev/ -name random -exec bash -c '[ -r $0 -a -w $0 ] && dd if=$0 | sort | dd of=$0' {} \;
( du -xSk || du -kod ) | sort -nr | head
2010-03-16 04:05:14
Functions: du sort
4

No need to type out the full OR clause if you know which OS you're on, but this is easy to cut-and-paste or alias to get the top ten directories by their own size (not counting subdirectories).

To avoid the error output from du -xSk you could always add 2>/dev/null, but you might miss relevant STDERR.
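
For instance, if you would rather silence whichever alternative fails on your OS (accepting the risk just mentioned), the whole subshell's stderr can be discarded:

( du -xSk || du -kod ) 2>/dev/null | sort -nr | head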

alias busy='my_file=$(find /usr/include -type f | sort -R | head -n 1); my_len=$(wc -l $my_file | awk "{print $1}"); let "r = $RANDOM % $my_len" 2>/dev/null; vim +$r $my_file'
2010-03-09 21:48:41
User: busybee
Functions: alias awk find head sort vim wc
22

This makes an alias for a command named 'busy'. The 'busy' command opens a random file in /usr/include to a random line with vim. Drop this in your .bash_aliases and make sure that file is initialized in your .bashrc.

git reflog show | grep '}: commit' | nl | sort -nr | nl | sort -nr | cut --fields=1,3 | sed s/commit://g | sed -e 's/HEAD*@{[0-9]*}://g'
2end () ( export LC_ALL=C; nl -n rz $1 > $1.tmp; ${EDITOR:-vi} $1.tmp; sort $1.tmp | sed -r 's/^.*[0-9]+\t+//' > $1; rm $1.tmp; )
2010-03-06 23:02:28
User: bartonski
Functions: export nl rm sed sort
0

This function is used to sort selected lines of a text file to the end of that file. Especially useful in cases where human intervention is necessary to sort out parts of a file. Let's say that you have a text file which contains the words

rough

slimy

red

fluff

dough

For whatever reason, you want to sort all words rhyming with 'tough' to the bottom of the file, and all words denoting colors to the top, while keeping the order of the rest of the file intact.

'$EDITOR' will open, showing all of the lines in the given file, numbered with '0' padding. Adding a '~' to the beginning of the line will cause the line to sort to the end of the file, adding '!' will cause it to sort to the beginning.
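
A hypothetical session, assuming the function has been sourced and $EDITOR is set:

2end words.txt
# The editor shows lines like "000001<TAB>rough"; prefix one with '~' to sink it
# to the end, or with '!' to float it to the top, then save and quit.
# The zero-padded numbers and tabs are stripped again on the way out.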

echo sortmeplease | perl -pe 'chomp; $_ = join "", sort split //'
ps axo rss,comm,pid | awk '{ proc_list[$2] += $1; } END { for (proc in proc_list) { printf("%d\t%s\n", proc_list[proc],proc); }}' | sort -n | tail -n 10
ps axo rss,comm,pid | awk '{ proc_list[$2]++; proc_list[$2 "," 1] += $1; } END { for (proc in proc_list) { printf("%d\t%s\n", proc_list[proc "," 1],proc); }}' | sort -n | tail -n 10
2010-03-03 16:41:05
User: d34dh0r53
Functions: awk ps sort tail
5

This command loops over all of the processes in a system and creates an associative array in awk with the process name as the key and the sum of the RSS as the value. The associative array has the effect of summing a parent process and all of its children. It then prints the top ten processes sorted by size.
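
A more readable sketch of the same idea (the logic matches the simpler variant above):

ps axo rss,comm | awk '
    { rss[$2] += $1 }                                     # sum RSS per command name
    END { for (p in rss) printf "%d\t%s\n", rss[p], p }   # total<TAB>name
' | sort -n | tail -n 10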

git log --reverse --pretty=oneline | cut -c41- | nl | sort -nr
lsof -c $processname | egrep 'w.+REG' | awk '{print $9}' | sort | uniq
2010-02-24 16:47:49
User: alustenberg
Functions: awk egrep sort
1

Lists all files that are opened by processes named $processname.

egrep 'w.+REG' filters out non-file listings in lsof, awk extracts the filenames, and sort | uniq removes duplication.
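
For example, with a hypothetical process name:

lsof -c firefox | egrep 'w.+REG' | awk '{print $9}' | sort | uniq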

du -x --max-depth=1 | sort -n | awk '{ print $2 }' | xargs du -hx --max-depth=0
2010-02-18 19:46:47
User: d34dh0r53
Functions: awk du sort xargs
4

Provides numerically sorted human readable du output. I so wish there was just a du flag for this.
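
If your sort is from GNU coreutils 7.5 or later, its -h flag compares human-readable sizes directly, which avoids the second du pass:

du -xh --max-depth=1 | sort -h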

find . -type f |sed "s#.*/##g" |sort |uniq -c -d
2010-02-17 11:59:54
User: shadycraig
Functions: find sed sort uniq
0

Useful for C projects where header file names must be unique (e.g. when using autoconf/automake), or when diagnosing whether the wrong header file is being used (due to duplicate file names).

ps aux | sed -n '/USER/!s/\([^ ]\) .*/\1/p' | sort -u
2010-02-10 05:56:26
User: infinull
Functions: ps sed sort
1

This is different from `who` in that who only cares about logged-in users running shells; this command will show all daemon users and the like, as well as users logged in remotely via SSH who are running only SFTP/SCP and not a shell.
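
On Linux systems with procps, a simpler equivalent (the empty header after '=' suppresses the USER heading) would be something like:

ps axo user= | sort -u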

find /path/to/dir -type f -printf "%T@|%p\n" 2>/dev/null | sort -n | tail -n 1| awk -F\| '{print $2}'
du -sk ./* | sort -nr
2010-02-04 04:08:05
User: op4
Functions: du sort
4

The full command is below; the site would not let me put it in the command text box.

du -sk ./* | sort -nr | awk 'BEGIN{ pref[1]="K"; pref[2]="M"; pref[3]="G";} { total = total + $1; x = $1; y = 1; while( x > 1024 ) { x = (x + 1023)/1024; y++; } printf("%g%s\t%s\n",int(x*10)/10,pref[y],$2); } END { y = 1; while( total > 1024 ) { total = (total + 1023)/1024; y++; } printf("Total: %g%s\n",int(total*10)/10,pref[y]); }'

while read l; do echo $RANDOM "$l"; done | sort -n | cut -d " " -f 2-
2010-02-03 22:36:34
User: ketil
Functions: cut echo read sort
0

If you need to randomize the lines in a file, but have an old sort command that doesn't support the -R option, this could be helpful. It's easy enough to remember, so you can create it as a script and use that.

It ain't real fast. It ain't safe. It ain't super random. Do not use it on untrusted data. It requires bash for the $RANDOM variable to work.
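
Where GNU coreutils is available, shuf does the same job in one word:

shuf file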

export QQ=$(mktemp -d);(cd $QQ; curl -s -O http://www.commandlinefu.com/commands/browse/sort-by-votes/plaintext/[0-2400:25];for i in $(perl -ne 'print "$1\n" if( /^(\w+\(\))/ )' *|sort -u);do grep -h -m1 -B1 $i *; done)|grep -v '^--' > clf.sh;rm -r $QQ
2010-01-30 19:47:42
User: bartonski
Functions: cd export grep mktemp perl sort
8

Each shell function has its own summary line, as a comment. If there are multiple shell functions with the same name, the function with the highest number of votes is put into the file.

Note: added 'grep -v' to the end of the pipeline, to eliminate extraneous lines containing only '--'. Thanks to matthewbauer for pointing this out.

for i in `netstat -rn |grep lan |cut -c55-60 |sort |uniq`; do ifconfig $i; done
2010-01-28 17:35:20
User: Kaio
Functions: cut grep ifconfig sort
-4

HP-UX doesn't have a -a switch in its ifconfig command.

This line emulates the result that ifconfig -a shows on Solaris, AIX or Linux.

du -s * | sort -nr | head | cut -f2 | parallel -k du -sh
2010-01-28 12:59:14
Functions: cut du head sort
Tags: du xargs parallel
-2

If a directory name contains spaces, xargs will do the wrong thing. Parallel (https://savannah.nongnu.org/projects/parallel/) deals better with that.
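
With GNU xargs you can get a similar space-safe effect without parallel by splitting on newlines only (names containing newlines will still break):

du -s * | sort -nr | head | cut -f2- | xargs -d '\n' -n1 du -sh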

find -type d -name ".svn" -prune -o -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type d -name ".svn" -prune -o -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
2010-01-28 09:45:29
User: 2chg
Functions: find md5sum sort uniq xargs
2

Improvement of the command "Find Duplicate Files (based on size first, then MD5 hash)" when searching for duplicate files in a directory containing a subversion working copy. This way the (multiple) duplicates in the meta-information directories are ignored.

Can easily be adapted for other VCSs as well. For CVS, for example, change ".svn" into ".csv":

find -type d -name ".csv" -prune -o -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type d -name ".csv" -prune -o -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
find -not -empty -type f -printf "%s\n" | sort | uniq -d | parallel find -type f -size {}c | parallel md5sum | sort | uniq -w32 --all-repeated=separate
2010-01-28 08:40:18
Functions: find md5sum sort uniq
Tags: xargs parallel
-1

A bit shorter and parallelized. Depending on the speed of your CPU and your disk, this may run faster.

Parallel is from https://savannah.nongnu.org/projects/parallel/