What's this?

commandlinefu.com is the place to record those command-line gems that you return to again and again.

Delete that bloated snippets file you've been using and share your personal repository with the world. That way others can gain from your CLI wisdom and you from theirs too. All commands can be commented on, discussed and voted up or down.


Get involved!

You can sign in using OpenID credentials, or register a traditional username and password.

First-time OpenID users will be automatically assigned a username which can be changed after signing in.

Stay in the loop…

Follow the Tweets.

Every new command is wrapped in a tweet and posted to Twitter. Following the stream is a great way of staying abreast of the latest commands. For the more discerning, there are Twitter accounts for commands that get a minimum of 3 and 10 votes respectively - that way only the great commands get tweeted.

» http://twitter.com/commandlinefu
» http://twitter.com/commandlinefu3
» http://twitter.com/commandlinefu10

Subscribe to the feeds.

Use your favourite RSS aggregator to stay in touch with the latest commands. There are feeds mirroring the 3 Twitter streams as well as for virtually every other subset (users, tags, functions, …).

News

May 19, 2015 - A Look At The New Commandlinefu
I've put together a short writeup on what kind of newness you can expect from the next iteration of clfu. Check it out here.
March 2, 2015 - New Management
I'm Jon, I'll be maintaining and improving clfu. Thanks to David for building such a great resource!

Psst. Open beta.

Wow, didn't really expect you to read this far down. The latest iteration of the site is in open beta. It's a gentle open beta, not in prime time just yet. It's being hosted over at UpGuard and you are more than welcome to give it a shot. A couple of things:

  • The open beta is running a copy of the database that will not carry over to the final version. Don't post anything you don't mind losing.
  • If you wish to use your user account, you will probably need to reset your password.

Your feedback is appreciated via the form on the beta page. Thanks! -Jon & CLFU Team

Terminal - Commands tagged find - 368 results
find . -name "*.txt" | xargs sed -i "s/old/new/"
find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[9]<=>$b->[9]} @files; for $i (0..$#o){print scalar localtime($o[$i][9]), "\t$o[$i][-1]\n";}'|tail
2009-09-21 22:11:16
User: drewk
Functions: find perl
3

This pipeline will find, sort and display all files based on mtime. This could be done with find | xargs, but that pipeline will not produce correct results if the output of find is larger than xargs's command-line buffer. If the xargs buffer fills, xargs processes the find results in more than one batch, which is not compatible with sorting.

Note the "-print0" on find and "-0" switch for perl. This is the equivalent of using xargs. Don't you love perl?

Note that this pipeline can be easily modified to sort on any data produced by perl's stat operator, e.g. you could sort on size, hard links, creation time, etc. Look at stat and just change the '9' to what you want. Changing the '9' to a '7', for example, will sort by file size. A '3' sorts by number of links.

Use head and tail at the end of the pipeline to get oldest files or most recent. Use awk or perl -wnla for further processing. Since there is a tab between the two fields, it is very easy to process.
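
Following the note above, here is a sketch (my variation, not from the original post) that sorts by size instead: replace stat index 9 with 7 in the sort and print the size field rather than localtime:

find $HOME -type f -print0 | perl -0 -wn -e '@f=<>; foreach $file (@f){ (@el)=(stat($file)); push @el, $file; push @files,[ @el ];} @o=sort{$a->[7]<=>$b->[7]} @files; for $i (0..$#o){print $o[$i][7], "\t$o[$i][-1]\n";}'|tail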

mate - `find . -name 'filename'`
find . \( ! -name . -prune \) \( -type f -o -type l \)
2009-09-12 15:58:56
User: mobidyc
Functions: find
1

You must be in the directory you want to analyse.

Reports all files and links in the current directory, not recursively.

This find command has been tested on HP-UX, Linux, AIX and Solaris.

find . -maxdepth 1 ! -name '.' -execdir du -0 -s {} + | sort -znr | gawk 'BEGIN{ORS=RS="\0";} {sub($1 "\t", ""); print $0;}' | xargs -0 du -hs
2009-09-11 16:07:39
User: ashawley
Functions: du find gawk sort xargs
1

A little bit smaller and faster, and it should handle files with special characters in the name.
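
The same pipeline split into stages (my annotation; bash allows comment lines between piped stages, so this runs as-is):

# print "size<TAB>name" for each top-level entry, NUL-terminated
find . -maxdepth 1 ! -name '.' -execdir du -0 -s {} + |
  # sort numerically by size, largest first, NUL-aware
  sort -znr |
  # strip the leading size column, leaving just the names
  gawk 'BEGIN{ORS=RS="\0";} {sub($1 "\t", ""); print $0;}' |
  # run du once more to print human-readable sizes
  xargs -0 du -hs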

diff <(ssh server01 'cd config; find . -type f -exec md5sum {} \;| sort -k 2') <(ssh server02 'cd config;find . -type f -exec md5sum {} \;| sort -k 2')
2009-09-11 15:24:59
User: arcege
Functions: diff find md5sum sort ssh
14

This can be much faster than downloading one or both trees to a common server and comparing the files there. Afterwards, only the differing files can be copied down for deeper comparison if needed.
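
To pull out just the names of the files that differ (my addition; assumes md5sum's usual "hash  path" output and paths without embedded spaces):

diff <(ssh server01 'cd config; find . -type f -exec md5sum {} \;| sort -k 2') <(ssh server02 'cd config; find . -type f -exec md5sum {} \;| sort -k 2') | awk '/^[<>]/ {print $3}' | sort -u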

cat /var/lib/dpkg/info/*.list > /tmp/listin ; ls /proc/*/exe |xargs -l readlink | grep -xvFf /tmp/listin; rm /tmp/listin
2009-09-09 18:09:14
User: kamathln
Functions: cat grep ls readlink rm xargs
Tags: Debian find dpkg
11

This helped me find a botnet that had made it into my system. Of course, this is not a foolproof or guaranteed way to find all of them, or even most of them, but it helped me find this one.
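
The same pipeline split into steps with comments (my annotation):

# 1. Build a list of every file installed by a dpkg package
cat /var/lib/dpkg/info/*.list > /tmp/listin
# 2. Resolve the executable behind each running process, then keep only
#    those that match no whole line (-x) in the package list (-v invert,
#    -F fixed strings, -f read patterns from file)
ls /proc/*/exe | xargs -l readlink | grep -xvFf /tmp/listin
# 3. Clean up
rm /tmp/listin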

zip -r foo.zip DIR -x "*/.svn/*"
find . -not \( -name .svn -prune \) -type f | xargs zip XXXXX.zip
find . -name \*.c | xargs wc -l | tail -1 | awk '{print $1}'
2009-09-08 08:25:45
User: karpoke
Functions: awk find tail wc xargs
Tags: awk find wc
0

This is really fast :)

time find . -name \*.c | xargs wc -l | tail -1 | awk '{print $1}'
204753

real 0m0.191s
user 0m0.068s
sys 0m0.116s

find . -regex '.*\(h\|cpp\)'
2009-09-06 11:33:19
User: Vereb
Functions: find
Tags: bash find
7

This is how you can find header and cpp files at the same time.

find . -name .svn -prune -o -print
2009-09-04 17:41:33
User: arcege
Functions: find
Tags: svn find
6

Put the positive clauses after the '-o' option.

find . -type f -name '*.c' -exec wc -l {} \; | awk '{sum+=$1} END {print sum}'
2009-09-04 15:51:30
User: arcege
Functions: awk find wc
Tags: awk find wc
-1

Have wc work on each file, then add up the totals with awk; this gives a 43% speed increase on RHEL over using "-exec cat|wc -l", and a 67% increase on my Ubuntu laptop (this is with 10MB of data in 767 files).
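
For reference, the slower baseline quoted above would presumably be spelled out like this (my reconstruction):

find . -type f -name '*.c' -exec cat {} + | wc -l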

find . -maxdepth 1 -type d|xargs du -a --max-depth=0|sort -rn|cut -d/ -f2|sed '1d'|while read i;do echo "$(du -h --max-depth=0 "$i")/";done;find . -maxdepth 1 -type f|xargs du -a|sort -rn|cut -d/ -f2|sed '$d'|while read i;do du -h "$i";done
2009-09-03 20:33:21
User: nickwe
Functions: cut du echo find read sed sort xargs
2

Based on MrMerry's one; this just adds some visuals and sorts directories and files.

find . -type f -exec grep -qi 'foo' {} \; -print0 | xargs -0 vim
2009-09-03 17:55:26
User: arcege
Functions: find grep xargs
Tags: vim find grep
-1

Ensures that find does not touch anything other than regular files, and handles non-standard characters in filenames when passing them to xargs.

find . -exec grep foobar /dev/null {} \; | awk -F: '{print $1}' | xargs vi
find /backup/directory -name "FILENAME_*" -mtime +15 -exec rm -vf {} \;
rm -vf /backup/directory/**/FILENAME_*(m+15)
find /backup/directory -name "FILENAME_*" -mtime +15 | xargs rm -vf
rmdir **/*(/^F)
find . -type d -empty -delete
2009-08-22 09:03:14
User: hemanth
Functions: find
Tags: find rmdir
6

You can also use: $ find . -depth -type d -exec rmdir {} \; 2>/dev/null

find / -name "*.pdf" -exec cp -t ~/Documents/PDF {} +
2009-08-18 06:11:35
Functions: cp find
Tags: find cp for
8

I used this to copy all PDFs recursively to a selected dir

IFS=:; find $PATH | grep pattern
2009-08-14 13:38:58
User: camspiers
Functions: find grep
Tags: bash find grep
1

Best to put it in a file somewhere in your path. (I call the file spath)

#!/bin/bash

# split $PATH on colons (IFS=:) and search the resulting directories
IFS=:; find $PATH | grep "$1"

Usage: $ spath php
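
A slightly more defensive variant (my sketch, not from the original post): split PATH into an array so entries containing spaces survive, and quote the pattern:

#!/bin/bash
IFS=':' read -ra dirs <<< "$PATH"   # split PATH on colons into an array
find "${dirs[@]}" | grep -- "$1"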

find . -type f -printf '%20s %p\n' | sort -n | cut -b22- | tr '\n' '\000' | xargs -0 ls -laSr
2009-08-13 13:13:33
User: fsilveira
Functions: cut find ls sort tr xargs
Tags: sort find ls
10

This command will find the biggest files recursively under a certain directory, no matter how many there are. If you try the usual approaches ("find -type f -exec ls -laSr {} +" or "find -type f -print0 | xargs -0 ls -laSr") the sorting won't be correct because of the command-line argument limit.

This command doesn't rely on command-line arguments for sorting, so it displays the sorted list correctly.

mate - `find * -type f -regex 'REGEX_A' | grep -v -E 'REGEX_B'`
2009-08-12 22:24:08
User: irae
Functions: grep
1

This does the following:

1 - Search recursively for files whose names match REGEX_A

2 - From this list exclude files whose names match REGEX_B

3 - Open this as a group in TextMate (in the sidebar)

And now you can use Command+Shift+F to use TextMate's own find and replace on this particular group of files.

For advanced regex in the first expression you can use -regextype posix-egrep like this:

mate - `find * -type f -regextype posix-egrep -regex 'REGEX_A' | grep -v -E 'REGEX_B'`

Warning: this is not meant to open files or folders with spaces or special characters in the filename. If anyone knows a solution to that, tell me so I can fix the line.
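
One possible space-safe alternative (my sketch, not from the original author): let find pass the filenames to mate directly via -exec, which avoids shell word-splitting entirely. Note that GNU find's -regex matches the full path including the leading ./, so REGEX_A may need adjusting:

find . -type f -regextype posix-egrep -regex 'REGEX_A' ! -regex 'REGEX_B' -exec mate {} +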