Character encoding issues when generating MD5 hash cross-platform

Posted by rogueprocess on Stack Overflow. Published on 2010-03-25T21:20:11Z.

This is a general question about character encoding when using MD5 libraries in various languages. My concern is: suppose I generate an MD5 hash using a native Python string object, like this:

message = "hello world"
m = md5()
m.update(message)

Then I take a hex version of that MD5 hash using:

m.hexdigest()

and send the message and the MD5 hash over the network, say in a JMS message or an HTTP request.

Now I get this message in a Java program in the form of a native Java string, along with the checksum. Then I generate an MD5 hash using Java, like this (using the Commons Codec library):

String md5 = org.apache.commons.codec.digest.DigestUtils.md5Hex(s);

My feeling is that this is wrong, because I have not specified a character encoding at either end. So the original hash will be based on the bytes of the Python version of the string, while the Java one will be based on the bytes of the Java version of the string, and these two byte sequences will often not be the same - is that right? So really I need to specify "UTF-8" or whatever at both ends, right?
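
To make the question concrete, this is the kind of change I have in mind on the Java side - just a sketch, assuming Commons Codec is on the classpath and that both ends agree on UTF-8 (the class name Md5Check is only for illustration):

import java.io.UnsupportedEncodingException;
import org.apache.commons.codec.digest.DigestUtils;

public class Md5Check {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String s = "hello world";
        // hash explicitly chosen UTF-8 bytes instead of relying on any platform default
        String md5 = DigestUtils.md5Hex(s.getBytes("UTF-8"));
        System.out.println(md5);
    }
}

On the Python side I assume the equivalent would be m.update(message.encode("utf-8")), so that both ends hash the same UTF-8 byte sequence.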

(I am actually getting an intermittent error in my code where the MD5 checksum comparison fails, and I suspect this is the reason - but because it's intermittent, it's hard to say whether changing this fixes it or not.)

Thank you!
