Find Duplicate Files (based on size first, then MD5 hash)

find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
This dup finder saves time by comparing sizes first and hashing only the candidates. It doesn't delete anything; it just lists the duplicates.
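Stage by stage, the same pipeline reads as follows (a commented re-wrap; GNU findutils and coreutils are assumed, and filenames containing newlines are not handled):

```shell
find -not -empty -type f -printf "%s\n" |   # emit the size of every non-empty file
  sort -rn | uniq -d |                      # keep only sizes that occur more than once
  xargs -I{} -n1 find -type f -size {}c -print0 |  # re-find the files of those sizes
  xargs -0 md5sum |                         # hash only those candidates
  sort |                                    # bring identical hashes together
  uniq -w32 --all-repeated=separate         # print groups matching on the 32-char hash
```

The size pre-filter is what makes it fast: most files are unique in size and never get hashed.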

2009-09-21 00:24:14

Alternatives

  • If you have the fdupes command, you'll save a lot of typing. It can do recursive searches (-r,-R) and it allows you to interactively select which of the duplicate files found you wish to keep or delete.

    fdupes -r .
    Vilemirth · 2011-02-19 17:02:30 0
  • Calculates the md5 sum of each file. sort is required for uniq to work; uniq compares based on only the hash; cut removes the hash from the result.

    find -type f -exec md5sum '{}' ';' | sort | uniq --all-repeated=separate -w 33 | cut -c 35-
    infinull · 2009-08-04 07:05:12 1
  • Improvement of the command "Find Duplicate Files (based on size first, then MD5 hash)" for searching in a directory that contains a Subversion working copy. This way the (multiple) duplicates in the meta-information directories are ignored. It can easily be adapted to other VCSs as well; for CVS, for example, change ".svn" to "CVS".

    find -type d -name ".svn" -prune -o -not -empty -type f -printf "%s\n" | sort -rn | uniq -d | xargs -I{} -n1 find -type d -name ".svn" -prune -o -type f -size {}c -print0 | xargs -0 md5sum | sort | uniq -w32 --all-repeated=separate
    2chg · 2010-01-28 09:45:29 0
  • This works on Mac OS X using the `md5` command instead of `md5sum`, which works similarly, but has a different output format. Note that this only prints the name of the duplicates, not the original file. This is handy because you can add `| xargs rm` to the end of the command to delete all the duplicates while leaving the original.

    find . -type f -exec md5 '{}' ';' | sort | uniq -f 3 -d | sed -e "s/.*(\(.*\)).*/\1/"
    noahspurrier · 2012-01-14 08:54:12 4
  • Finds duplicates based on MD5 sum, comparing only files of the same size. A performance improvement on the original command above: the new version takes around 3 seconds where the old version took around 17 minutes. The bottleneck in the old command was the second find, which searches for the files of each specified size; the new version keeps the file path and size together from the beginning.

    find -not -empty -type f -printf "%-30s'\t\"%h/%f\"\n" | sort -rn -t$'\t' | uniq -w30 -D | cut -f 2 -d $'\t' | xargs md5sum | sort | uniq -w32 --all-repeated=separate
    fobos3 · 2014-10-19 02:00:55 1

What Others Think

As an alternative, check out FSlint in case you don't mind using a GUI for this. It gives you the option of hard-linking the duplicate files and doing other lint-y tasks. Available as the package 'fslint', at least in Debian/Ubuntu.
bwoodacre · 630 weeks ago
Thanks for the FSlint reference. Note fslint uses much the same mechanism underneath and has a CLI mode.
pixelbeat · 630 weeks ago
Awesome, much faster than fdupes.
dakunesu · 629 weeks and 6 days ago
Isn't the -D redundant?
dennisw · 628 weeks and 3 days ago
yes it is... thanks for noticing, I fixed it.
grokskookum · 628 weeks and 3 days ago
How can you mass delete these files once they're found? (I'd like to keep one of them)
matthewbauer · 624 weeks and 2 days ago
You might want to look at fdupes or fslint to help with hard-linking, deleting, etc. My command is really just a quick hack to list them.
grokskookum · 624 weeks and 2 days ago
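For the mass-delete question above, here is a minimal sketch (a hypothetical `delete_dupes` helper; GNU tools assumed, filenames containing newlines not handled) that keeps the first file of each duplicate group and removes the rest. Try it on a copy of your data first.

```shell
# delete_dupes DIR - remove all but the first (in sort order) of each
# group of files under DIR that share an md5 sum.
delete_dupes() (
  cd "$1" || exit 1
  find . -not -empty -type f -print0 \
    | xargs -0 -r md5sum \
    | sort \
    | awk 'seen[substr($0,1,32)]++ { print substr($0,35) }' \
    | tr '\n' '\0' \
    | xargs -0 -r rm --
)
```

The awk filter skips the first occurrence of each hash and prints the paths of the later ones, which are then deleted.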
There is also perfect match; that's especially nice if you are a command-line fan.
zabuch · 611 weeks and 4 days ago
Fantastic, man. This is truly great.
oernii3 · 553 weeks ago
There is also rmlint. Example: rmlint [path] -GYX -v5. It gives you similar results, you can pipe it directly to 'sh', and it's a lot faster, since fingerprints and a few other tricks are additionally used. It has other options as well ;-)
sahib · 551 weeks and 1 day ago
"find -type" doesn?t work on Mac OS X.
ELV1S · 491 weeks and 6 days ago
Can filename comparison be added as a first step to the first solution given? It seems to me that checking filenames first could speed things up: if two files don't have the same filename, in many cases I would not consider them a dupe. Thanks.
johnywhy · 484 weeks ago
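johnywhy's filename-first idea could be sketched like this (a hypothetical `dupes_by_name` helper; GNU find/awk assumed, filenames without tabs or newlines): only files that share a basename with another file are hashed at all.

```shell
# dupes_by_name DIR - md5-compare only files under DIR whose basename
# occurs more than once.
dupes_by_name() (
  cd "$1" || exit 1
  find . -type f -printf '%f\t%p\n' \
    | sort \
    | awk -F'\t' '$1 == name { if (!shown++) print prev; print $2; next }
                  { name = $1; prev = $2; shown = 0 }' \
    | xargs -r -d '\n' md5sum \
    | sort \
    | uniq -w32 --all-repeated=separate
)
```

The awk step emits every path belonging to a repeated basename; everything else is filtered out before hashing.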
The code for findup by Pádraig Brady is very OS (or user-defined system) sensitive and carries no comments that tell you what it is pointing to:
---------
./FindDups: line 62: /Programming/FSlint/supprt/fslver: No such file or directory
./FindDups: line 135: shell_quote: command not found
./FindDups: line 147: /Programming/FSlint/supprt/getfpf: No such file or directory
./FindDups: line 149: check_uniq: command not found
./FindDups: line 164: /Programming/FSlint/supprt/rmlint/merge_hardlinks: No such file or directory
---------
/Programming is my partition for assorted programming I am doing; I use openSUSE 12.1. I would assume (with all that connotes) that uniq could be used rather than check_uniq, and that the ./supprt directory is unique to another distro (why is there so much illogical difference? It eliminates a lot of people who would like to switch from Windows). Either that, or it is one of Pádraig Brady's personal directories, and that does not fly unless they are included. Considering this came from Google Code, you have to first assume it is incomplete, and this is no exception.
JohnLB · 481 weeks and 4 days ago
Hi, I would like to know if there is any way to find duplicates of a given file (not all duplicates on the fs), maybe searching directly by md5. It would be great for me.
tia · 444 weeks and 6 days ago
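For tia's request, a sketch that looks for copies of one given file only (a hypothetical `find_copies` helper; GNU stat and find assumed). It filters by exact size first, then confirms with md5, so most files are never read.

```shell
# find_copies FILE - list files under the current directory with the
# same size and md5 sum as FILE (the reference file itself is included
# if it lies under the search directory).
find_copies() {
  ref=$1
  size=$(stat -c%s -- "$ref") || return 1
  hash=$(md5sum -- "$ref" | cut -c1-32)
  find . -type f -size "${size}c" -print0 \
    | xargs -0 -r md5sum \
    | awk -v h="$hash" 'substr($0,1,32) == h { print substr($0,35) }'
}
```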
Does anyone know a way to find duplicate files between 2 volumes based only on size and name?
d0g · 402 weeks and 1 day ago
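For d0g's question, a sketch that matches on basename and size only, across any two directory trees (a hypothetical `dup_by_name_size` helper; GNU find assumed, tab-free filenames). It lists every group of files sharing both name and size; filtering to cross-volume pairs only would need an extra step.

```shell
# dup_by_name_size DIR1 DIR2 - list files under the two trees that share
# both basename and size (no content comparison).
dup_by_name_size() {
  find "$1" "$2" -type f -printf '%f\t%s\t%p\n' \
    | sort \
    | awk -F'\t' '{ key = $1 FS $2 }
        key == last { if (!shown++) print prev; print; next }
        { last = key; prev = $0; shown = 0 }'
}
```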
It should be noted that MD5 is not a collision-free algorithm, so there's a probability (OK, a very small probability) that the above commands will report files as dupes even when they're not. Use sha1sum if you're paranoid.
befyber · 354 weeks and 6 days ago
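Switching to SHA-1 as befyber suggests only requires changing the hash command and widening the uniq prefix, since a SHA-1 digest is 40 hex characters rather than 32 (GNU tools assumed):

```shell
find -not -empty -type f -printf "%s\n" | sort -rn | uniq -d \
  | xargs -I{} -n1 find -type f -size {}c -print0 \
  | xargs -0 sha1sum | sort | uniq -w40 --all-repeated=separate
```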
Just a thought on the sequence of detection: 1. size; 2. with MAX = say 100 MB or 10 MB, (a) full MD5 for files up to MAX, (b) MD5 of only the first MAX bytes for files > MAX, e.g. MD5 of (dd if=file count=200000); 3. full MD5 or SHA1 for the files matched in 2(b). Would be nice for a media collection. Any takers?
Atanu · 349 weeks and 5 days ago
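Atanu's partial-fingerprint idea can be sketched like this (a hypothetical `partial_fingerprint` helper; MAX is an assumed tuning knob, and any groups that collide here would still need a full-hash pass to confirm they really match):

```shell
# partial_fingerprint DIR - group files under DIR by the md5 of their
# first MAX bytes only, so huge media files are not read in full.
partial_fingerprint() (
  cd "$1" || exit 1
  MAX=$((10 * 1024 * 1024))   # assumed cutoff: 10 MB
  find . -type f -print0 \
    | while IFS= read -r -d '' f; do
        printf '%s  %s\n' "$(head -c "$MAX" "$f" | md5sum | cut -c1-32)" "$f"
      done \
    | sort | uniq -w32 --all-repeated=separate
)
```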
