Understanding omission failure in distributed systems
Posted by karthik A on Stack Overflow, 2012-09-01.
Tags: client-server, distributed-computing
The following text makes a claim I can't quite agree with:
Client C sends a request R to server S. The time taken by a communication link to transport R over the link is D. P is the maximum time needed by S to receive, process, and reply to R. If omission failures are assumed, then if no reply to R is received within 2(D+P), C will never receive a reply to R.
Why is the time here 2(D+P)? As I understand it, shouldn't it be 2D + P? In the worst case the request takes D to reach S, S takes P to receive, process, and send the reply, and the reply takes another D to come back.
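Writing that worst case out term by term, using the D and P defined in the quoted text (this is a sketch of the bound as I read it, not the textbook's own derivation):

\[
T_{\max}
  = \underbrace{D}_{\text{request transit}}
  + \underbrace{P}_{\text{receive, process, reply}}
  + \underbrace{D}_{\text{reply transit}}
  = 2D + P
  \;<\; 2(D+P) \quad \text{for } P > 0.
\]

So 2(D+P) exceeds the tight bound 2D + P by an extra P whenever P > 0.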