Doing arithmetic with up to two decimal places in Python?
Posted by user248237 on Stack Overflow on 2010-05-04.
I have two floats in Python that I'd like to subtract:
v1 = float(value1)  # value1 and value2 come from elsewhere, e.g. parsed strings
v2 = float(value2)
diff = v1 - v2
I want "diff" to be computed to two decimal places; that is, I want the subtraction done on the %.2f-rounded values of v1 and v2. How can I do this? I know how to print v1 and v2 to two decimals, but not how to do arithmetic on the rounded values.
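For concreteness, a minimal sketch of the behavior I'm after (assuming the built-in round is acceptable) would round each operand to two decimal places before subtracting:

v1 = round(0.982769777778, 2)   # 0.98
v2 = round(0.985980444444, 2)   # 0.99
diff = v1 - v2                  # -0.01, up to float representation error
print("%.2f\t%.2f\t%.2f" % (v1, v2, diff))

This prints 0.98, 0.99, and -0.01, so the three printed numbers are consistent with each other, though the result is still a binary float rather than an exact decimal.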
The particular issue I am trying to avoid is this. Suppose that:
v1 = 0.982769777778
v2 = 0.985980444444
diff = v1 - v2
and then I print to file the following:
myfile.write("%.2f\t%.2f\t%.2f\n" %(v1, v2, diff))
then I get the output 0.98 0.99 -0.00, suggesting there is no difference between v1 and v2, even though the printed values of v1 and v2 themselves differ by 0.01. How can I get around this?
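A sketch of the kind of fix I'm imagining (assuming the standard-library decimal module is an option; the names d1, d2, and two_places are just for illustration) quantizes both values to two decimal places, so the subtraction is done on exactly the values that get printed:

from decimal import Decimal

# Quantize each value to exactly two decimal places, then subtract,
# so the printed diff is consistent with the printed operands.
two_places = Decimal("0.01")
d1 = Decimal(str(v1)).quantize(two_places)   # Decimal("0.98")
d2 = Decimal(str(v2)).quantize(two_places)   # Decimal("0.99")
diff = d1 - d2                               # Decimal("-0.01"), exact
myfile.write("%s\t%s\t%s\n" % (d1, d2, diff))

Decimal arithmetic is exact at the chosen precision, so the written row reads 0.98, 0.99, -0.01 with no rounding surprises.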
Thanks.