
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are separate Twitter accounts for commands that receive at least 3 and at least 10 votes, so that only the best commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).


News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Commands tagged find
Terminal - Commands tagged find - 354 results
find . -exec rename 's/_/\ /g' {} +
2014-05-05 02:47:19
User: KlfJoat
Functions: find rename
1

Everyone wants to take spaces out of filenames. Forget that. I want to put them back in. We've got tools and filesystems that support spaces, and they look better, so I'm going to use them.

Because of how find traverses the tree, I find I need to run this multiple times if it's renaming subdirectories. But it can be re-run without issues.

I got this version of the command from a comment in this underscore-generating command. http://www.commandlinefu.com/commands/view/760/find-recursively-from-current-directory-down-files-and-directories-whose-names-contain-single-or-multiple-whitespaces-and-replace-each-such-occurrence-with-a-single-underscore. All I did was change the regex.
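
If subdirectories are being renamed as well, one pass may suffice by combining -depth (children before parents) with -execdir, so that rename only ever sees the basename. This is just a sketch, assuming the same Perl rename as above:

find . -depth -execdir rename 's/_/ /g' {} +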

find directory -type l -lname string
2014-05-02 14:44:24
User: gumption
Functions: find
Tags: find
1

Finds all symbolic links under the specified directory whose targets match the specified string pattern.

I used this when upgrading from an Apple-supported version of Java 6 (1.6.0_65) to an Oracle-supported version (1.7.0_55) on Mac OS X 10.8.5 to find out which executables were pointing to /System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands (Apple version) vs. /Library/Java/JavaVirtualMachines/jdk1.7.0_55.jdk/Contents/Home/bin (Oracle version). However, it appears the current JDK installation script already takes care of modifying the links.
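
As a concrete (hypothetical) example, listing the links that still point into the Apple JavaVM framework mentioned above might look like this, with /usr/bin only an illustrative search root:

find /usr/bin -type l -lname '*JavaVM.framework*'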

find /some/directory/* -prune -type f -name '*.log'
2014-05-02 00:14:32
User: bigstupid
Functions: find
0

This find syntax is a little easier for me to remember when I have to use -prune with AIX's find. It works with GNU find, too.

Add whatever other find options you need after -prune.
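
For instance, to restrict the match further without descending into subdirectories (the extra -mtime test is only illustrative):

find /some/directory/* -prune -type f -name '*.log' -mtime -7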

for file in $(find . -name '*.mp4'); do ogv=${file%%.mp4}.ogv; if test "$file" -nt "$ogv"; then echo "$file is newer than $ogv"; ffmpeg2theora "$file"; fi; done
find -type f -exec grep -q "regexp" {} \; -delete
2014-04-06 19:06:50
User: gumnos
Functions: find grep
Tags: find grep
3

Deletes files in the current directory or its subdirectories whose contents match "regexp", but handles directories, newlines, spaces, and other funky characters better than the original #13315. It also uses grep's "-q" to be quiet and to quit at the first match, making this much faster. No need for awk either.
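
To preview what would be removed before committing to -delete, the same tests can be finished with -print instead:

find -type f -exec grep -q "regexp" {} \; -print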

grep -Rl "pattern" files_or_dir
2014-04-06 18:18:07
User: N1nsun
Functions: grep
Tags: awk find grep
0

Grep can search files and directories recursively on its own. Adding the -Z option makes grep terminate each file name with a NUL byte, which xargs -0 can read safely even when names contain spaces, so the results can be fed to other commands like rm.
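
Put together, a NUL-safe delete might look like this (the pattern and target here are placeholders):

grep -RlZ "pattern" files_or_dir | xargs -0 rm -f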

find . | xargs grep -l "FOOBAR" | awk '{print "rm -f "$1}' > doit.sh
2014-04-06 15:48:41
User: sergeylukin
Functions: awk find grep xargs
Tags: awk find grep
-2

After this command you can review the doit.sh file before executing it.

If it looks good, execute: `. doit.sh`
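
Note that file names containing spaces break the awk step above. A rough variant that keeps the review-then-run workflow but quotes each name (still a sketch, and still unsafe for names containing double quotes or newlines) could be:

find . -type f -exec grep -l "FOOBAR" {} + | sed 's/.*/rm -f "&"/' > doit.sh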

dmesg | grep -Po 'csum failed ino\S* \d+' | awk '{print $4}' | sort -u | xargs -n 1 find / -inum 2> /dev/null
2014-03-22 12:22:46
User: Sepero
Functions: awk dmesg find grep sort xargs
Tags: find inode btrfs
1

Btrfs reports the inode numbers of files with failed checksums. Use `find` to look up the file names of those inodes. The files may need to be deleted and replaced with backups.
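
To look up a single reported inode by hand (the inode number and mount point below are only examples; -xdev keeps find on the one filesystem, since inode numbers are only unique per filesystem):

find /mnt/data -xdev -inum 257 2> /dev/null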

dmesg | grep -Po 'csum failed ino\S* \d+' | sort | uniq | xargs -n 3 find / -inum 2> /dev/null
2014-03-20 06:27:15
User: Sepero
Functions: dmesg find grep sort uniq xargs
Tags: find inode btrfs
-1

Btrfs reports the inode numbers of files with failed checksums. Use `find` to look up the file names of those inodes.

for i in $(find . -regex '.*\/C.*\.cpp'); do svn mv `perl -e 'my $s=$ARGV[0]; $s=~m/(.*\/)C(.*)/; print "$s $1$2"' "$i"`; done
find . -name '*.properties' -exec /bin/echo {} \; -exec cat {} \; | grep -E 'listen|properties'
find * -regextype posix-extended -regex '.*\.(ext_1|ext_2)' -exec cp {} copy_target_directory \;
lsblk | grep <mountpoint>
find /path/to/somewhere -newermt "Jan 1"
2014-02-02 18:02:07
User: renich
Functions: find
Tags: find
3

This command uses -newerXY to show you the files that have been modified since a specific date. I recommend looking up "-newerXY" in the man page for the specifics.
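
Because -newermt accepts full timestamps, it can also bracket a date range (the dates below are just an example):

find /path/to/somewhere -newermt "2014-01-01" ! -newermt "2014-02-01"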

find . -type f -regex ".*/core.[0-9][0-9][0-9][0-9]$"
2014-01-17 16:44:47
User: H3liUS
Functions: find
0

Finds and lists all core files from the current directory down. You can pipe the output to xargs rm -i to be prompted for the removal if you'd like to double-check first.
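
Be aware that rm -i cannot prompt properly when its standard input is a pipe, so for an interactive double-check something along these lines may work better:

find . -type f -regex ".*/core.[0-9][0-9][0-9][0-9]$" -exec rm -i {} \;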

find . -name "pattern" -type f -exec du -ch {} + | tail -n1
echo $(($(find . -name "pattern" -type f -printf "+%s")))
2014-01-16 03:14:36
User: flatcap
Functions: echo find
3

Use find's internal stat to get each file size, then let the shell's arithmetic expansion add up the numbers.
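
For illustration, with three made-up file sizes find would print "+1024+2048+4096", which the outer arithmetic expansion then reduces to 7168:

echo $((+1024+2048+4096))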

find . -name "pattern" -type f -printf "%s\n" | awk '{total += $1} END {print total}'
2014-01-16 01:16:18
User: pdxdoughnut
Functions: awk find
2

Using find's internal stat to get the file size is about 50 times faster than using -exec stat.

find . -name "pattern" -exec stat -c%s {} \; | awk '{total += $1} END {print total}'
2014-01-15 11:07:09
User: Koobiac
Functions: awk find stat
1

Find files and calculate the total size of the results with stat, summing the values with awk.

find /path/to/directory -not \( -name .snapshot -prune \) -type f -mtime +365
2013-12-11 14:51:53
User: cuberri
Functions: find
1

Useful when you want to cron a daily deletion task in order to keep only files that are less than one year old. The command excludes the .snapshot directory to prevent backup deletion.

One can append -delete to this command to delete the files:

find /path/to/directory -not \( -name .snapshot -prune \) -type f -mtime +365 -delete
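
As a crontab entry for a daily run (the 3 a.m. schedule and the path are only placeholders), that could look like:

0 3 * * * find /path/to/directory -not \( -name .snapshot -prune \) -type f -mtime +365 -delete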
find -type f -name '*.conf' -exec sed -Ei 's/foo/bar/' '{}' \;
2013-11-21 16:07:06
Functions: find sed
0

Note that sed -i is non-standard (although both GNU and current BSD seds support it).

Can also be accomplished with

find . -name "*.txt" | xargs perl -pi -e 's/old/new/g'

as shown here - http://www.commandlinefu.com/commands/view/223/a-find-and-replace-within-text-based-files-to-locate-and-rewrite-text-en-mass.
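
A more portable middle ground, if backup files are acceptable, is to give -i an explicit suffix, which both GNU and BSD sed understand (a sketch only; remove the .bak files afterwards if they are not wanted):

find . -type f -name '*.conf' -exec sed -E -i.bak 's/foo/bar/' '{}' \;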

find ~ -type f -size +500M -exec ls -ls {} \; | sort -n
2013-11-17 13:13:14
User: marcanuy
Functions: find ls sort
Tags: size find
-1

Find all files larger than 500 MB in the home directory and print them ordered by size, with full ls info about each file.
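
If only the size and path are needed, find's -printf avoids running ls at all (a sketch assuming GNU find):

find ~ -type f -size +500M -printf '%s\t%p\n' | sort -n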

find /var/lib/cassandra/data -depth -type d -iwholename "*/snapshots/*" -printf "%Ty-%Tm-%Td %p\n" | sort
find /var/lib/cassandra/data -depth -type d -iwholename "*/snapshots/*" -mtime +30 -print0 | xargs -0 rm -rf
find . -type f -not -empty -printf "%-25s%p\n"|sort -n|uniq -D -w25|cut -b26-|xargs -d"\n" -n1 md5sum|sed "s/ /\x0/"|uniq -D -w32|awk -F"\0" 'BEGIN{l="";}{if(l!=$1||l==""){printf "\n%s\0",$1}printf "\0%s",$2;l=$1}END{printf "\n"}'|sed "/^$/d"
2013-10-22 13:34:19
User: alafrosty
Functions: awk cut find sed sort uniq xargs
0

* Find all file sizes and file names from the current directory down (replace "." with a target directory as needed).

* sort the file sizes in numeric order

* List only the duplicated file sizes

* drop the file sizes so there is simply a list of files (retaining order)

* calculate md5sums on all of the files

* replace the first instance of two spaces (md5sum output) with a \0

* drop the unique md5sums so only duplicate files remain listed

* Use AWK to aggregate identical files on one line.

* Remove the blank line from the beginning (This was done more efficiently by putting another "IF" into the AWK command, but then the whole line exceeded the 255 char limit).

Each output line contains the md5sum and then all of the files that have that identical md5sum. All fields are \0 delimited. All records are \n delimited.