Fastest Sort: Sort Faster, Max Speed

alias sortfast='sort -S$(($(sed '\''/MemF/!d;s/[^0-9]*//g'\'' /proc/meminfo)/2048))M $([ `nproc` -gt 1 ]&&echo -n --parallel=`nproc`)'
sort is slow by default. This alias tells sort to use a buffer equal to half of the available free memory (the M suffix makes sort read the computed figure as megabytes; without it, GNU sort assumes KiB). It also runs the sort across multiple processes, equal to the number of CPUs on your machine (if greater than 1). For me, it is orders of magnitude faster. If you put this in your .bash_profile or other startup file, it will be set correctly whenever bash starts.

To define the alias only when your sort supports these options, guard it with a quick feature test:

sort -S1 --parallel=2 <(echo) &>/dev/null && alias sortfast='sort -S$(($(sed '\''/MemF/!d;s/[^0-9]*//g'\'' /proc/meminfo)/2048))M $([ `nproc` -gt 1 ]&&echo -n --parallel=`nproc`)'

Alternative, sized from total memory (minus 200MB of headroom) instead of free memory:

echo|sort -S10M --parallel=2 &>/dev/null && alias sortfast="command sort -S$(($(sed '/MemT/!d;s/[^0-9]*//g' /proc/meminfo)/1024-200))M --parallel=$(($(command grep -c ^proc /proc/cpuinfo)*2))"
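For readability, here is the same logic unrolled into a shell function (a sketch under the same assumptions as the alias: GNU sort and a Linux /proc filesystem; the comments are mine):

# Sketch: the one-liner above, unrolled. Assumes GNU sort and Linux /proc.
sortfast() {
  # MemFree in /proc/meminfo is reported in kB.
  local free_kb=$(sed '/MemFree/!d;s/[^0-9]*//g' /proc/meminfo)
  # Half of free memory: /2048 converts half the kB figure to MB,
  # and the M suffix tells sort -S to read it as megabytes.
  local bufsize="$(( free_kb / 2048 ))M"
  # Only pass --parallel when there is more than one CPU.
  local cpus=$(nproc)
  if [ "$cpus" -gt 1 ]; then
    command sort -S "$bufsize" --parallel="$cpus" "$@"
  else
    command sort -S "$bufsize" "$@"
  fi
}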
Sample Output
89/490MB        4.22 2.03 1.01 2/85 7081
[24309:24308 0:4220] 08:01:28 Mon Feb 27 [root@galileo:pts/2 +1] ~
$ sort -S400M -u -i -f --parallel=4 files.nocombined.sorted | pv > ~/files.nocombined.reallysorted
9.95MB 0:01:39 [ 103kB/s] [     <=>             

What Others Think

Using so much memory is dangerous. Also, using more processes than CPUs suggests you're waiting on disk. It would help if you mentioned your input source medium, TMPDIR medium, and output medium. Note also the recent nproc command, which might be handier than grepping cpuinfo. Note also that this might be a safer memory usage estimate:

free -m | awk '$4 && $7 {print $4+$7-200}'
pixelbeat · 346 weeks and 4 days ago
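To try pixelbeat's estimate in practice, something like this feeds it straight into sort (a sketch: the $4/$7 column positions match the older free output in use here, newer procps lays the columns out differently, and bigfile.txt is just a placeholder):

# Sketch: free + cached memory in MB, minus 200MB of headroom, as sort's buffer.
mem_mb=$(free -m | awk '$4 && $7 {print $4+$7-200}')
sort -S "${mem_mb}M" --parallel="$(nproc)" bigfile.txt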
Thanks for that nproc command, and I like the way your command includes the cache. Also a good mention of the TMPDIR medium and output medium, since these are heavily used. My TMPDIR is either tmpfs (memory) or on a high-performance 6-disk RAID 0 array; both max out at 5GB size-wise. I use a custom TMPDIR so didn't include that, but it really will make a huge difference if set up correctly.

This has 8 subprocesses and 302 calls:

strace -c sh -c "free -m | awk '\$4 && \$7 {print \$4+\$7-200}'"

This has 297 calls and 8 subprocesses:

strace -c sh -c "free -m|sed '1d;s/^.*: *[0-9]* *[0-9]* *//;s/ [ 0-9]*$//;q'"

This has no subprocesses and only 192 calls:

strace -c sh -c 'sed -n "/mF/s/^.*: *\([0-9]\+\) kB$/\1/p" /proc/meminfo'

I will update the above with the changes made per your suggestions.
AskApache · 343 weeks and 6 days ago
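For reference, that last, subprocess-free variant is easy to reuse on its own; a minimal sketch (with the sed address spelled out as /^MemFree/ instead of the shorter /mF/):

# Sketch: extract MemFree (in kB) from /proc/meminfo with a single sed, no pipeline.
free_kb=$(sed -n '/^MemFree/s/^.*: *\([0-9]\+\) kB$/\1/p' /proc/meminfo)
echo "MemFree: ${free_kb} kB"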
OK, there you go. What do you think of that? Any other ideas?
AskApache · 343 weeks and 6 days ago
