
Packet losses due to congestion vs. errors


Posted Jun 2, 2015 21:53 UTC (Tue) by vomlehn (guest, #45588)
Parent article: Delay-gradient congestion control

This might help solve a long-standing problem in applications with a high bandwidth-delay product, that is, a high value of the product of the data rate and the transmission time between end nodes. These include high-speed connections to satellites in high or geosynchronous orbits and ultra-high-bandwidth links over continental-scale distances. The satellite case is especially interesting because the error rate is also considerably higher than in most Earth-bound applications. When errors occur, the current algorithm treats the loss as congestion detection and backs off its send rate. That would be the correct response to congestion, but it does nothing to change the error rate; it only reduces throughput. So, if this algorithm can distinguish between errors and congestion, it will be very welcome in the high-speed/long-haul community.




Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds