Finding duplicate files?
Posted by ub3rst4r on Programmers
        Published on 2013-06-25T05:42:24Z
I am going to be developing a program that detects duplicate files, and I was wondering what the best/fastest method would be to do this. In particular, what would be the best hash algorithm to use? For example, I was thinking of having it compute the hash of each file's contents and then group together the files whose hashes match. Also, should there be a limit on the maximum file size, or is there a hash that is suitable for large files?
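The grouping idea described above can be sketched roughly as follows. This is only an illustrative sketch, not a definitive answer: `hash_file` and `find_duplicates` are hypothetical names, and SHA-256 is just one reasonable choice (a faster non-cryptographic hash would also work if matches are re-verified by comparing bytes). Reading the file in chunks means even very large files never need to fit in memory, so no maximum file size is required.

```python
import hashlib
import os
from collections import defaultdict


def hash_file(path, chunk_size=65536):
    """Hash a file's contents incrementally, chunk by chunk,
    so arbitrarily large files can be processed."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()


def find_duplicates(root):
    """Walk the tree under `root`, group files by content hash,
    and return the groups containing two or more files."""
    by_hash = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            by_hash[hash_file(path)].append(path)
    return [paths for paths in by_hash.values() if len(paths) > 1]
```

A common refinement is to group files by size first and only hash files that share a size, since files of different sizes cannot be duplicates.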
© Programmers or respective owner