Finding most efficient transmission size in varying network latency scenarios
Posted by rwmnau on Stack Overflow · Published 2010-03-29T16:12:17Z
I'm building a .NET remoting client/server that will be transmitting thousands of files of varying sizes (anything from a few bytes to hundreds of MB), and I'm curious about a general method for finding the appropriate transmission size. As I see it, there's the following tradeoff:
- Serialize the entire file into a transmission object and transmit it at once, regardless of size. This would be the fastest, but a failure during transmission requires that the whole file be re-transmitted.
- If the file is larger than something small (like 4KB), break it into 4KB chunks and transmit those, re-assembling them on the server (see the sketch after this list). Besides the added complexity, this is slower because of the repeated round-trips and acknowledgements, though a failure of any one chunk wastes little time.
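To make the chunked option concrete, here's a minimal sketch (in Python rather than .NET, since that's what I can write quickly here); `send_chunk` is a hypothetical remote call standing in for whatever the remoting layer exposes, and the server would reassemble by writing each (offset, data) pair at its position:

    import os

    CHUNK_SIZE = 4 * 1024   # the 4KB figure mentioned above
    MAX_RETRIES = 3         # assumed per-chunk retry budget

    def send_file_in_chunks(path, send_chunk):
        """Send a file as fixed-size chunks, so a failure costs only the
        current chunk; send_chunk(path, offset, data) is a hypothetical
        remote call that returns True once the server acknowledges it."""
        size = os.path.getsize(path)
        with open(path, "rb") as f:
            offset = 0
            while offset < size:
                data = f.read(CHUNK_SIZE)
                for _ in range(MAX_RETRIES):
                    if send_chunk(path, offset, data):  # retry just this chunk
                        break
                else:
                    raise IOError("chunk at offset %d kept failing" % offset)
                offset += len(data)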
The ideal transmission method (weighing negotiation latency against failure rate) lies somewhere in between, and I'm wondering how to find the best size for a particular client. Should I have some dynamic tuning step in my transmission that watches the current bytes/second average and raises the transmission size until the speed starts to drop (failures overwhelm the negotiation cost)? Or is there some other method for determining the ideal transmission size?
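The dynamic tuning step I have in mind would look roughly like this sketch: double the candidate size while measured throughput keeps improving, and stop once it falls off. The `measure_throughput` callback is assumed (it would time a few real chunk sends at the given size and return bytes/second); the start and ceiling values are placeholders:

    def tune_chunk_size(measure_throughput,
                        start=4 * 1024,             # begin at the small 4KB size
                        max_size=8 * 1024 * 1024):  # assumed upper bound
        """Probe successively larger chunk sizes, keeping the last size
        whose measured bytes/second improved on its predecessor."""
        best_size = start
        best_rate = measure_throughput(start)
        size = start * 2
        while size <= max_size:
            rate = measure_throughput(size)
            if rate <= best_rate:   # speed dropped: past the sweet spot
                break
            best_size, best_rate = size, rate
            size *= 2
        return best_size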
The application will be multi-threaded, so the number of threads also factors into the calculation. I'm not looking for a formula (though I'll take one if you've got it), just what to consider as I create this process.
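That said, a rough back-of-the-envelope model might help pick a starting size before any tuning. Every parameter here is an assumption to be measured, not something given above: it assumes each chunk costs one round-trip of fixed latency plus payload time at some bandwidth, bytes fail independently, and a failed chunk is retransmitted whole (geometric retries):

    import math

    def expected_transfer_time(file_size, chunk, latency, bandwidth, byte_loss):
        """Expected seconds to send file_size bytes in chunk-byte pieces,
        assuming independent per-byte loss and whole-chunk retransmission."""
        p_fail = 1.0 - (1.0 - byte_loss) ** chunk  # chance a chunk fails
        per_try = latency + chunk / bandwidth      # round-trip + payload time
        retries = 1.0 / (1.0 - p_fail)             # expected geometric retries
        return math.ceil(file_size / chunk) * per_try * retries

    # Example: scan power-of-two sizes for a 100 MB file on one assumed
    # link profile (50 ms RTT, 1 MB/s, one lost byte per million).
    sizes = [2 ** k for k in range(12, 25)]        # 4KB up to 16MB
    best = min(sizes, key=lambda s: expected_transfer_time(
        file_size=100e6, chunk=s, latency=0.05, bandwidth=1e6, byte_loss=1e-6))
    print(best)

With those particular numbers the minimum lands at a few hundred KB: small chunks pay too much per-chunk latency, huge chunks retransmit too much on failure, which is exactly the in-between point described above.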