
What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


If you have a new feature suggestion or find a bug, please get in touch via http://commandlinefu.uservoice.com/

Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.


Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the three Twitter streams, as well as feeds for virtually every other subset (users, tags, functions, …).



News

2011-03-12 - Confoo 2011 presentation
Slides are available from the commandlinefu presentation at Confoo 2011: http://presentations.codeinthehole.com/confoo2011/
2011-01-04 - Moderation now required for new commands
To try to put an end to the spamming, new commands now require moderation before they appear on the site.
2010-12-27 - Apologies for not banning the trolls sooner
Have been away from the interwebs over Christmas. Will be more vigilant henceforth.
2010-09-24 - OAuth and pagination problems fixed
Apologies for the delay in getting Twitter's OAuth supported. Annoying pagination gremlin also fixed.

Terminal - Commands tagged find - 354 results
ls -l `find . -maxdepth 1 -type l -print`
find -size +100M
find -L /home/sonic/archive -name '*gz' -type f
2013-10-07 14:32:22
User: sonic
Functions: find
Tags: find
-1

If /home/sonic/archive were a symlink to /backup/sonic/archive, this would follow the link and give you the file listing. By default find will NOT follow symbolic links; it treats them as literal files.

I discovered this when trying to write a script run via cron to delete files with a modification time older than X days. The easiest solution was to use:

/usr/bin/find -L /home/sonic/archive -name '*gz' -type f -mtime +14 -exec rm '{}' \;
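For comparison, the default no-follow behaviour can be requested explicitly with -P (supported by GNU and POSIX find); with -P the symlinked archive directory is treated as a symlink rather than descended into, so no regular files are found beneath it:

find -P /home/sonic/archive -name '*gz' -type f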

find . -name '*pdf*' -print0 | xargs -0 ls -lt | head -20
2013-10-03 21:58:51
User: fuats
Functions: find head ls xargs
1

Sorts files by latest modification time, looking at the current directory and all subdirectories.

for i in `find -L /var/ -wholename \*log\* -type d`; do COUNT=`ls -1U $i | wc -l`; if [ $COUNT -gt 10 ]; then echo $i $COUNT; fi; done
echo '#! /usr/bin/ksh\necho `cat $1 | openssl dgst -sha256` $1' > sslsha256; chmod +x sslsha256; find directory -type f -exec ./sslsha256 \{\} \;
2013-09-18 17:37:50
User: RAKK
Functions: chmod echo find
0

This command is for producing GNU sha256sum-compatible hashes on UNIX systems that don't have sha256sum but do have OpenSSL, such as stock IBM AIX.

1. Saves a wrapper script (run by find in step 3) that does the following:

a. Feeds a file to openssl in SHA256 hash calculation mode

b. Echoes the output followed by the filename

2. Makes the script executable

3. Runs find on a directory, processing only regular files and running the wrapper script on each one to calculate its SHA256 hash

Still pending: figuring out how to verify a sha256sum file in a similar environment.
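A rough sketch of one way verification might be done with openssl alone, assuming the checksum file (here called checksums.sha256, a placeholder name) uses the plain "hash filename" layout and that awk is available:

while read -r sum file; do [ "$sum" = "$(openssl dgst -sha256 < "$file" | awk '{print $NF}')" ] && echo "$file: OK" || echo "$file: FAILED"; done < checksums.sha256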

find . -type f -print0 | xargs -0 stat -c'%Y :%y %12s %n' | sort -nr | cut -d: -f2- | head
2013-08-03 09:53:46
User: HerbCSO
Functions: cut find sort stat xargs
4

Goes through all files under the specified directory, uses `stat` to print out the last modification time of each, sorts numerically in reverse, uses cut to strip the epoch timestamp, and finally uses head to output only the 10 most recently modified files.

Note that on a Mac `stat` won't work like this; you'll need to use either:

find . -type f -print0 | xargs -0 stat -f '%m%t%Sm %12z %N' | sort -nr | cut -f2- | head

or alternatively do a `brew install coreutils` and then replace `stat` with `gstat` in the original command.

find . -maxdepth 1 -type d -exec sh -c "printf '{} ' ; find '{}' -type f -ls | wc -l" \;
2013-07-29 19:46:35
User: HerbCSO
Functions: find sh
2

For each directory under the current one, print the directory name followed by a count of the files it contains (at any depth). Change the -maxdepth to drill down further through directories.
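For example, to include second-level directories in the report as well, something like this should work:

find . -maxdepth 2 -type d -exec sh -c "printf '{} ' ; find '{}' -type f -ls | wc -l" \;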

find ./ -type f -print0 | xargs -0 md5sum
find . -type d | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"
2013-07-16 10:08:34
User: opexxx
Functions: find sed
Tags: sed find
-4

Show the directory tree.

find -printf "%C@ %p\n"|sort
2013-06-19 10:42:49
User: oivvio
Functions: find
Tags: sort find
5

This uses the ability of find (at least the GNU findutils version shipped with most Linux distros) to display change time as part of its output. No xargs needed.
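If you want modification time rather than change time, GNU find's %T@ format directive works the same way:

find -printf "%T@ %p\n"|sort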

find /usr/include/ -name '*.[c|h]pp' -o -name '*.[ch]' -print0 | xargs -0 cat | grep -v "^ *$" | grep -v "^ *//" | grep -v "^ */\*.*\*/" | wc -l
2013-06-17 08:37:37
Functions: cat find grep wc xargs
1

Counts the lines in your source and header files, ignoring blank lines, C++-style comments, and single-line C-style comments.

It will not ignore blank lines containing tabs or multi-line C-style comments.
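A variant that also treats lines containing only tabs or spaces as blank (multi-line C comments are still counted) might look like this:

find /usr/include/ -name '*.[c|h]pp' -o -name '*.[ch]' -print0 | xargs -0 cat | grep -v "^[[:space:]]*$" | grep -v "^[[:space:]]*//" | grep -v "^[[:space:]]*/\*.*\*/" | wc -l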

find . -type f -name filename.exe -exec sed -i "s/oldstring/newstring/g" {} +
find . -type f -a \! -links 1
2013-05-06 20:44:08
User: malathion
Functions: find
Tags: find links
1

libpurple likes to hardlink files repeatedly. To ignore libpurple, use sed: | sed '/\.\/\.purple/d'
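Putting the two together:

find . -type f -a \! -links 1 | sed '/\.\/\.purple/d'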

for ii in $(find /path/to/docroot -type f -name \*.php); do echo $ii; wc -lc $ii | awk '{ nr=$2/($1 + 1); printf("%d\n",nr); }'; done
2013-04-05 19:06:17
Functions: awk echo find wc
0

I have found that base64 encoded webshells and the like contain lots of data but hardly any newlines due to the formatting of their payloads. Checking the "width" will not catch everything, but then again, this is a fuzzy problem that relies on broad generalizations and heuristics that are never going to be perfect.

What I have done is set an arbitrary threshold (200 for example) and compare the values that are produced by this script, only displaying those above the threshold. One webshell I tested this on scored 5000+ so I know it works for at least one piece of malware.
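As an illustration, a variant that prints only the files whose ratio exceeds a threshold of 200 (the threshold and the docroot path are placeholders to adjust):

for ii in $(find /path/to/docroot -type f -name \*.php); do wc -lc "$ii" | awk -v f="$ii" '{ nr=$2/($1 + 1); if (nr > 200) printf("%s %d\n", f, nr); }'; done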

find ./public_html/ -name \*.php -exec grep -HRnDskip "\(passthru\|shell_exec\|system\|phpinfo\|base64_decode\|chmod\|mkdir\|fopen\|fclose\|readfile\) *(" {} \;
2013-04-03 12:42:19
User: lpanebr
Functions: find grep
0

Searched strings:

passthru, shell_exec, system, phpinfo, base64_decode, chmod, mkdir, fopen, fclose, readfile

Since some of these strings may occur in normal text or in legitimate code, you will need to adjust the command or the regex to suit your needs.
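For example, to narrow the search to just a few of the more suspicious functions, you might use something like:

find ./public_html/ -name \*.php -exec grep -HnDskip "\(passthru\|shell_exec\|base64_decode\) *(" {} \;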

print -rl /**/*(.f:o+w:)
2013-04-03 02:53:00
User: khayyam
0

Example of using zsh glob qualifiers ...

"." = plain files

"f:o+w:" = files with access rights matching the spec:

o+w = others have write permission
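For comparison, a roughly equivalent GNU find command (scanning from the root, with permission errors discarded) might be:

find / -type f -perm -o+w 2>/dev/null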

find /var/www/ -type f -print0 | xargs -0 chmod 644
2013-03-28 11:10:30
User: FiloSottile
Functions: chmod find xargs
Tags: find xargs chmod
-1

xargs is a more elegant approach to executing a command on find results than -exec, as -exec is meant as a filtering flag.
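For comparison, recent versions of find can also batch arguments themselves via the "+" terminator to -exec, which behaves much like the xargs form:

find /var/www/ -type f -exec chmod 644 {} +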

find -maxdepth 1 -type f -newermt "00:00" -printf "%f\n" | sort
2013-03-23 12:50:01
User: TetsuyO
Functions: find
Tags: sort find files
-2

Finds files modified today since 00:00, removes ugly dotslash characters in front of every filename, and sorts them.

*EDITED* following the advice from flatcap (thanks!)

find -type f | xargs ls -1tr
count=0;while IFS= read -r -d '' line; do echo "${line#* }"; ((++count==5)) && break; done < <(find . -type f -printf '%s %p\0' | sort -znr)
2013-03-19 17:19:26
User: sharfah
Functions: echo find read sort
Tags: sort find head
-4

This command is more robust because it handles spaces, newlines and control characters in filenames. It uses printf, not ls, to determine file size.

find . -type f -exec ls -s {} \; | sort -n -r | head -5
find /some/path -type f -printf '%f\n' | grep -o '\..\+$' | sort | uniq -c | sort -rn
2013-03-18 14:42:29
User: skkzsh
Functions: find grep sort uniq
2

Gets the longest match of the file extension (e.g. for 'foo.tar.gz' you get '.tar.gz' instead of '.gz').

find /some/path -type f | gawk -F/ '{print $NF}' | gawk -F. '/\./{print $NF}' | sort | uniq -c | sort -rn
2013-03-18 14:40:26
User: skkzsh
Functions: find gawk sort uniq
0

If you have GNU findutils, you can get only the file name with

find /some/path -type f -printf '%f\n'

instead of

find /some/path -type f | gawk -F/ '{print $NF}'