
Commands using find (1,049 results)
find . -type f -printf "%h\n" | cut -d/ -f-2 | sort | uniq -c | sort -rn
2009-10-09 23:49:53
User: ivancho
Functions: cut find sort uniq
Tags: file count
Votes: 6

Counts the total (recursive) number of files in the immediate (depth-1) subdirectories, as well as in the current directory, and displays the counts sorted.

Fixed, as per ashawley's comment
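
For example, widening the cut field range groups the counts two directory levels deep instead of one (a variation on the above, not from the original author):

find . -type f -printf "%h\n" | cut -d/ -f-3 | sort | uniq -c | sort -rn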

find . -iname ".project"| xargs -I {} dirname {} | LC_ALL=C xargs -I {} svn info {} | grep "Last Changed Rev\|Path" | sed "s/Last Changed Rev: /;/" | sed "s/Path: //" | sed '$!N;s/\n//'
2009-10-07 16:13:27
User: hurz
Functions: dirname find grep info sed xargs
Votes: 0

Searches for all .project files in the current folder and below, and uses "svn info" to get the last-changed revision of each. The final sed joins every two lines.
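
On Subversion 1.9 or newer, where 'svn info --show-item' exists, roughly the same report can be produced without the sed juggling (a sketch, not the author's version):

find . -iname ".project" | while read f; do d=$(dirname "$f"); echo "$d;$(svn info --show-item last-changed-revision "$d")"; done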

find ~/Music -daystart -mtime -60 -name '*mp3' -printf "%T@\t%p\n" | sort -f -r | head -n 30 | cut -f 2
find . -type f -exec sha1sum {} >> SHA1SUMS \;
2009-10-05 18:33:59
User: gpenguin
Functions: find sha1sum
Votes: 2

All output is placed in the file SHA1SUMS, which you can later check with 'sha1sum --check'. Works on most Linux distros where 'sha1sum' is installed.
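
To verify the tree against the stored checksums later:

sha1sum --check SHA1SUMS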

find /proc -maxdepth 1 -user myuser -type d -mtime +7 -exec basename {} \; | xargs kill -9
sh -c 'S=askapache R=htaccess; find . -mount -type f|xargs -P5 -iFF grep -l -m1 "$S" FF|xargs -P5 -iFF sed -i -e "s%${S}%${R}%g" FF'
Votes: 9

I needed a way to search all files in a web directory that contained a certain string, and replace that string with another string. In the example, I am searching for "askapache" and replacing that string with "htaccess". I wanted this to happen as a cron job, and it was important that this happened as fast as possible while at the same time not hogging the CPU since the machine is a server.

So this script uses the nice command to run the sh shell with the command, which makes the whole thing run at priority 19, meaning it won't hog CPU. The -P5 option to xargs runs 5 separate grep and sed processes simultaneously, making this much faster than a single grep or sed. You may want -P0 (unlimited) if you aren't worried about too many processes or don't have to deal with process killers in the background.

Also, the -m1 option to grep means stop grepping a file after the first match, which also saves time.
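
Based on that description, the cron-ready form presumably wraps the whole pipeline in nice to get priority 19, something like:

nice -n 19 sh -c 'S=askapache R=htaccess; find . -mount -type f|xargs -P5 -iFF grep -l -m1 "$S" FF|xargs -P5 -iFF sed -i -e "s%${S}%${R}%g" FF'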

find . -name "*.txt" -exec sed -i "s/old/new/" {} \;
find [path] [expression] -exec du -ab {} \; | awk '{total+=$0}END{print total}'
find -type f -name "*.avi" -print0 | xargs -0 mplayer -vo dummy -ao dummy -identify 2>/dev/null | perl -nle '/ID_LENGTH=([0-9\.]+)/ && ($t +=$1) && printf "%02d:%02d:%02d\n",$t/3600,$t/60%60,$t%60' | tail -n 1
2009-09-24 15:50:39
User: syssyphus
Functions: find perl printf tail xargs
Votes: 8

Change the *.avi to whatever you want to match; you can remove it altogether if you want to check all files.
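
For instance, to total several container formats at once (the *.mkv pattern here is illustrative):

find -type f \( -name "*.avi" -o -name "*.mkv" \) -print0 | xargs -0 mplayer -vo dummy -ao dummy -identify 2>/dev/null | perl -nle '/ID_LENGTH=([0-9\.]+)/ && ($t +=$1) && printf "%02d:%02d:%02d\n",$t/3600,$t/60%60,$t%60' | tail -n 1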

find . -name "*.txt" | xargs sed -i "s/old/new/"
find . -type f -exec grep -i <pattern> {} \;
find . -type f -print0 | xargs -0 grep -i <pattern>
find . | xargs file | grep ".*: .* text" | sed "s;\(.*\): .* text.*;\1;"
find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[9]<=>$b->[9]} @files; for $i (0..$#o){print scalar localtime($o[$i][9]), "\t$o[$i][-1]\n";}'|tail
2009-09-21 22:11:16
User: drewk
Functions: find perl
Votes: 3

This pipeline will find, sort and display all files based on mtime. This could be done with find | xargs, but that pipeline will not produce correct results if the output of find is larger than the xargs command-line buffer. If the xargs buffer fills, xargs processes the find results in more than one batch, which is not compatible with sorting.

Note the "-print0" on find and "-0" switch for perl. This is the equivalent of using xargs. Don't you love perl?

Note that this pipeline can be easily modified to any data produced by perl's stat operator. eg, you could sort on size, hard links, creation time, etc. Look at stat and just change the '9' to what you want. Changing the '9' to a '7' for example will sort by file size. A '3' sorts by number of links....

Use head and tail at the end of the pipeline to get oldest files or most recent. Use awk or perl -wnla for further processing. Since there is a tab between the two fields, it is very easy to process.
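
As a sketch of that substitution, sorting on stat field 7 (size) and printing it instead of localtime lists the ten largest files:

find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[7]<=>$b->[7]} @files; for $i (0..$#o){print "$o[$i][7]\t$o[$i][-1]\n";}'|tail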

find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
2009-09-21 00:24:14
User: syssyphus
Functions: find md5sum sort uniq xargs
Votes: 55

This dup finder saves time by comparing size first, then md5sum. It doesn't delete anything, just lists the duplicates.
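
A possible variation using SHA-256 instead of MD5 (hex digests are 64 characters wide, hence -w64):

find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 sha256sum | sort | uniq -w64 --all-repeated=separate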

find repMainPath -maxdepth 1 -mindepth 1 -type d | while read dir; do echo processing $dir; sudo svnadmin dump --deltas $dir >dumpPath/`basename $dir`; done
2009-09-15 20:14:51
User: Marco
Functions: dump echo find read sudo
Tags: bash svn
Votes: 2

This command dumps all SVN repositories inside of folder "repMainPath" (not recursively) to the folder "dumpPath", where one dump file will be created for each SVN repository.
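
To restore one of these dumps into a fresh repository later (standard svnadmin usage; newRepoPath and repoName are placeholders):

svnadmin create newRepoPath && svnadmin load newRepoPath < dumpPath/repoName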

find -type f |while read a;do [ "`head -c3 -- "${a}"`" == $'\xef\xbb\xbf' ] && echo "Match: ${a}";done
find /home/dir -mtime +1 -print -exec gzip -9 {} \; -exec mv {}.gz {}_`date +%F`.gz \;
find . \( ! -name . -prune \) \( -type f -o -type l \)
2009-09-12 15:58:56
User: mobidyc
Functions: find
Votes: 1

You must be in the directory to analyse.

Reports all files and links in the current directory, not recursively.

This find command has been tested on HP-UX/Linux/AIX/Solaris.
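
On GNU find the same non-recursive listing can be written with -maxdepth, which is presumably avoided above because it is not portable to the older Unixes listed:

find . -maxdepth 1 \( -type f -o -type l \)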

find . -maxdepth 1 ! -name '.' -execdir du -0 -s {} + | sort -znr | gawk 'BEGIN{ORS=RS="\0";} {sub($1 "\t", ""); print $0;}' | xargs -0 du -hs
2009-09-11 16:07:39
User: ashawley
Functions: du find gawk sort xargs
Votes: 1

A little smaller and faster, and it should handle files with special characters in their names.

diff <(ssh server01 'cd config; find . -type f -exec md5sum {} \;| sort -k 2') <(ssh server02 'cd config;find . -type f -exec md5sum {} \;| sort -k 2')
2009-09-11 15:24:59
User: arcege
Functions: diff find md5sum sort ssh
Votes: 14

This can be much faster than downloading one or both trees to a common server and comparing the files there. Afterwards, only the differing files need to be copied down for deeper comparison if needed.
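
The same process-substitution trick works for the deeper comparison of a single differing file (the path is illustrative):

diff <(ssh server01 'cat config/path/to/file') <(ssh server02 'cat config/path/to/file')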

function sepath { echo $PATH |tr ":" "\n" |sort -u |while read L ; do cd "$L" 2>/dev/null && find . \( ! -name . -prune \) \( -type f -o -type l \) 2>/dev/null |sed "s@^\./@@" |egrep -i "${*}" |sed "s@^@$L/@" ; done ; }
2009-09-11 15:03:22
User: mobidyc
Functions: cd echo egrep find read sed sort tr
Tags: bash ksh PATH
Votes: -1

Searches for the argument in PATH.

Accepts grep expressions.

Without arguments, lists all binaries found in PATH.
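
Example invocations (the pattern is illustrative):

sepath '^gcc'    # binaries in PATH whose names start with gcc
sepath           # no argument: list everything in PATH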

echo -e "${PATH//://\n}" >/tmp/allpath; grep -Fh -f /tmp/allpath /var/lib/dpkg/info/*.list|grep -vxh -f /tmp/allpath >/tmp/installedinpath ; find ${PATH//:/ } |grep -Fxv -f /tmp/installedinpath
2009-09-09 05:33:14
User: kamathln
Functions: echo find grep
Tags: Debian dpkg PATH
Votes: 0

OS: Debian-based (or those that use dpkg)

Equivalent to doing a dpkg -S on each file in $PATH, but much faster.

May report files generated through postinstall scripts and such. For example, it will report /usr/bin/vim, which is not a file installed directly by dpkg but a link generated by alternatives hooks.
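
For comparison, a rough sketch of the slow per-file loop this replaces (prints every file in $PATH that dpkg doesn't know about):

for f in $(find ${PATH//:/ }); do dpkg -S "$f" >/dev/null 2>&1 || echo "$f"; done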

find . -not \( -name .svn -prune \) -type f | xargs zip XXXXX.zip
find . -name \*.c | xargs wc -l | tail -1 | awk '{print $1}'
2009-09-08 08:25:45
User: karpoke
Functions: awk find tail wc xargs
Tags: awk find wc
Votes: 0

This is really fast :)

time find . -name \*.c | xargs wc -l | tail -1 | awk '{print $1}'
204753

real    0m0.191s
user    0m0.068s
sys     0m0.116s