


Terminal - Commands using find - 1,039 results
find * -maxdepth 0 -type d
2013-02-25 21:10:49
User: sonic
Functions: find
0

The advantage of doing it this way is that you can adjust the -maxdepth value to get more recursive results, and it runs on non-GNU systems. It also won't print trailing slashes, which can easily be removed but are slightly annoying.

You could run:

for file in $(find * -maxdepth 0 -type d); do ls -d "$file"; done

and in the ls -d part of the command you can put whatever options you want, to get things like permissions, time stamps, and ownership.
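
For example, substituting ls -ld (an arbitrary choice) prints permissions, ownership, and time stamps for each directory:

for file in $(find * -maxdepth 0 -type d); do ls -ld "$file"; done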

find ./ -name "*.sh" -exec chmod +x {} \;
2013-02-25 17:14:55
User: Renato
Functions: chmod find
0

This command recursively makes all "*.sh" files in a folder executable.

More generally, the same pattern applies chmod recursively to any given kind of file.
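
A variant worth knowing (supported by POSIX find) batches many files into each chmod call, which is faster on large trees:

find ./ -name "*.sh" -exec chmod +x {} +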

find . -type f -size +0 -printf "%-25s%p\n" | sort -n | uniq -D -w 25 | sed 's/^\w* *\(.*\)/md5sum "\1"/' | sh | sort | uniq -w32 --all-repeated=separate
2013-02-23 20:44:20
User: jimetc
Functions: find sed sh sort uniq
0

Avoids the nested 'find' commands but doesn't seem to run any faster than syssyphus's solution.

sudo find / -type f -name config.inc.php -exec vim -p {} +
2013-02-12 11:00:02
User: sinevar
Functions: find sudo vim
2

Opening several files at once in Vim is easy in combination with the find command.
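
If you have a rough idea where the file lives, narrowing the starting path (/var/www here is just an example) avoids crawling the whole filesystem:

sudo find /var/www -type f -name config.inc.php -exec vim -p {} +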

find /path/ -type f -exec grep -l '<string of text>' {} \; | xargs sed -i -e 's%<string of text>%<new text string>%g'
find . -type f -size +100M
find . -name '*.jpg' | awk 'BEGIN{ a=0 }{ printf "mv %s name%01d.jpg\n", $0, a++ }' | bash
2013-02-07 06:12:37
User: doublescythe
Functions: awk find printf
0

This command will take the .jpg files in a directory, rename them, and number them name0.jpg through name(N-1).jpg (start the awk counter at a=1 if you want 1 through N).

Black belt stuff.

Hell of a time saver.
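
Note that the awk pipeline word-splits on whitespace, so it mangles names containing spaces. A minimal whitespace-safe sketch of the same idea, assuming bash:

n=0; for f in ./*.jpg; do mv -- "$f" "name$((n++)).jpg"; done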

find . -type f -name "*.txt" | while read -r THISFILE; do (( $(wc -l < "$THISFILE") < 10 )) && rm -vf "$THISFILE"; done
find . -iname '*jpg' -print0 | xargs -0 exiftool -warning; find . -iname '*jpg' -print0 | xargs -0 jpeginfo -c
2013-01-28 16:44:19
Functions: find xargs
0

This checks JPEG data and metadata; the output should be grepped as needed, perhaps with a -B1 Warning for the first part and a -E "WARNING|ERROR" for the second.
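
Applying those greps might look like this (exact output depends on the exiftool and jpeginfo versions):

find . -iname '*jpg' -print0 | xargs -0 exiftool -warning | grep -B1 Warning

find . -iname '*jpg' -print0 | xargs -0 jpeginfo -c | grep -E "WARNING|ERROR"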

echo '#!/bin/bash' > junk.sh; find . -iname '*.pdf' -type f -printf 'ps2ascii "%p" "%p.txu"; par <"%p.txu" >"%p.txt"; rm "%p.txu"\n' >> junk.sh; chmod 766 junk.sh; ./junk.sh; rm junk.sh
2013-01-27 21:29:08
User: p0g0
Functions: chmod echo find rm
0

Linux users wanting to extract text from PDF files in the current directory and its sub-directories can use this command. It requires "bash", "ps2ascii" and "par", and the PARINIT environment variable sanely set (see man par). WARNING: the file "junk.sh" will be created, run, and destroyed in the current directory, so you _must_ have sufficient rights. Edit the command if you need to avoid using the file name "junk.sh"
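
A sketch of the same idea without the temporary junk.sh, assuming the same ps2ascii/par setup, runs the pipeline once per file via sh -c:

find . -iname '*.pdf' -type f -exec sh -c 'ps2ascii "$1" "$1.txu"; par <"$1.txu" >"$1.txt"; rm "$1.txu"' _ {} \;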

find /var/www/ -name file -exec cp {}{,.bak} \;
2013-01-27 01:03:28
User: joepd
Functions: cp file find
0

Let the shell handle the repetition instead of find :) Brace expansion turns the command above into the equivalent:

find /var/www/ -name file -exec cp {} {}.bak \;
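
The shell performs the brace expansion before find ever sees the arguments; you can verify what it expands to with echo, which prints cp {} {}.bak:

echo cp {}{,.bak}
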
for i in `pfiles pid|grep S_IFREG|awk '{print $5}'|awk -F":" '{print $2}'`; do find / -inum $i |xargs ls -lah; done
2013-01-24 13:57:19
User: giorger
Functions: awk find grep ls xargs
0

Executing pfiles on a process id returns a list of all descriptors used by that process.

We are interested in the S_IFREG entries, since they usually point to regular files.

Each such line contains the inode number of the file, which we use to find the filename.

The only drawback is that, to avoid searching from /, you have to guess where the file might be.

Improvements are more than welcome.

lsof was not available in my case.
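
On Linux, the same information can be read straight from /proc, where each open descriptor appears as a symlink to its file (1234 is a hypothetical pid):

ls -l /proc/1234/fd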

find-duplicates () { find "$@" -not -empty -type f -printf "%s\0" | sort -rnz | uniq -dz | xargs -0 -I{} -n1 find "$@" -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate; }
2013-01-23 23:20:26
User: mpeschke
Functions: find md5sum sort uniq xargs
-1

This is a modified version of the OP's command, wrapped in a bash function.

This version handles newlines and other whitespace correctly, the original has problems with the thankfully rare case of newlines in the file names.

It also allows checking an arbitrary number of directories against each other, which is nice when the directories that you think might have duplicates don't have a convenient common ancestor directory.
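
A hypothetical usage, checking two photo directories against each other for duplicates:

find-duplicates ~/photos /mnt/backup/photos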

find $folder -name "[1-9]*" -type f -print|while read file; do echo $file $(sed -e '/^$/Q;:a;$!N;s/\n //;ta;s/ /_/g;P;D' $file|awk '/^Received:/&&!r{r=$0}/^From:/&&!f{f=$0}r&&f{printf "%s%s",r,f;exit(0)}');done|sort -k 2|uniq -d -f 1
2013-01-21 22:50:51
User: lpb612
Functions: awk echo find read sed sort uniq
1

# find assumes email files start with a number 1-9

# sed joins the lines starting with " " to the previous line

# gawk prints the Received: and From: lines

# sort orders by the second field (received+from)

# uniq prints the duplicated filenames

# a message is considered a duplicate if it was received at the same time as another message, and from the same person.

The command was intended to be run under cron. If run in a terminal, mutt can be used:

mutt -e "push otD~=xq" -f $folder

find . -name \*.swp -type f -delete
2013-01-19 07:38:03
User: ashwinkumark
Functions: find
Tags: file remove
0

Removes all *.swp files underneath the current directory. Replace "*.swp" with your file pattern(s).
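
To preview the matches before deleting anything, swap -delete for -print:

find . -name \*.swp -type f -print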

find . | while read -r line; do test "$(stat -c %u "$line")" -eq 1003 && chown android:android "$line" && echo "$line"; done
perl -MFile::Find=find -MFile::Spec::Functions -Tlwe '$found=1; find { wanted => sub { if (/$ARGV[0]\.pm\z/) { print canonpath $_; $found=0; } }, no_chdir => 1 }, @INC; exit $found;' Collectd/Plugins/Graphite
2013-01-11 11:01:46
User: keymon
Functions: exit find perl
-2

Checks whether the given module is installed somewhere in @INC. It prints the path and exits 0 if found, or 1 otherwise.

Based on script from SharpyWarpy in http://www.linuxquestions.org/questions/linux-general-1/how-to-list-all-installed-perl-modules-216603/
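
If you only need a yes/no answer and know the exact module name, a shorter check is to simply try loading it (File::Spec is just an example):

perl -MFile::Spec -e1 && echo installed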

find . -name "*.pdf" -exec pdftk {} dump_data output \; | grep NumberOfPages | awk '{s+=$2} END {print s}'
find . -maxdepth 1 -type l
find . -type l -exec test ! -e {} \; -delete
2012-12-26 06:27:13
User: seb1245
Functions: find test
Tags: find
2

This command is adapted from http://otomaton.wordpress.com/2012/12/26/find-broken-symbolic-links/

Solutions with

find -L

don't work when the link is a loop: an error message is printed.
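
To review the broken links before removing them, replace -delete with -print:

find . -type l -exec test ! -e {} \; -print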

find -type f | xargs file | grep ".*: .* text" | sed "s;\(.*\): .* text.*;\1;"
find . -type d | while read -r dir; do num=$(ls -l "$dir" | grep '^-' | wc -l); echo "$num $dir"; done | sort -rnk1 | head
diff <(cd dir1 && find . | sort) <(cd dir2 && find . | sort)
find . -name "*.[ch]" -exec grep -i /dev/null "search pharse" {} \;
2012-12-04 20:51:04
User: MikeGoerling
Functions: find grep
Tags: find grep
0

Old SysV systems and SUN machines don't have the -H option. Adding /dev/null to the file list forces grep into multi-file mode, so it reports the file name.
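
On systems with GNU grep, the -H flag achieves the same thing directly:

find . -name "*.[ch]" -exec grep -iH "search phrase" {} +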