Floating point arithmetic is too reliable.
Posted by mcoolbeth on Stack Overflow, 2010-04-15.
I understand that floating-point arithmetic as performed in modern computer systems is not always consistent with real arithmetic. I am trying to contrive a small C# program to demonstrate this, e.g.:
    using System;

    class Program
    {
        static void Main(string[] args)
        {
            double x = 0, y = 0;
            // Both sums are 40026.5 in exact real arithmetic.
            x += 20013.8;
            x += 20012.7;
            y += 10016.4;
            y += 30010.1;
            Console.WriteLine("Result: " + x + " " + y + " " + (x == y));
            Console.Write("Press any key to continue . . . ");
            Console.ReadKey(true);
        }
    }
However, in this case x and y come out equal: both sums are mathematically 40026.5, and the rounding errors in the two addition sequences happen to coincide, so they land on the same double.
Is it possible to demonstrate the inconsistency of floating-point arithmetic using a program of similar complexity, without using any really crazy numbers? If possible, I would like to avoid mathematically correct values that go more than a few places beyond the decimal point.
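For what it's worth, one standard way to show this with small numbers is to exploit the fact that floating-point addition is not associative: the same three operands, grouped differently, can round to different doubles. A minimal sketch along those lines (the literals 0.1, 0.2, and 0.3 are arbitrary choices; any short decimal fractions that are not exactly representable in binary should work):

    using System;

    class AssociativityDemo
    {
        static void Main()
        {
            // The same three addends, grouped two different ways.
            double a = (0.1 + 0.2) + 0.3;
            double b = 0.1 + (0.2 + 0.3);

            // On an IEEE-754 double implementation this typically prints:
            //   0.6000000000000001 0.6 False
            // ("R" requests a round-trippable string; default formatting
            // on some runtimes may hide the difference.)
            Console.WriteLine(a.ToString("R") + " " + b.ToString("R") + " " + (a == b));
        }
    }

The trick is that each intermediate sum is rounded to the nearest representable double before the next addition, so the order of operations changes which rounding errors accumulate.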