What about forward-error-correction?
Posted Oct 16, 2014 11:32 UTC (Thu) by Richard_J_Neill (subscriber, #23093)Parent article: A damp discussion of network queuing
1. Wifi that is limited by interference rather than by congestion, i.e.
where a significant fraction of packets gets dropped not because of other traffic, but because of interference/noise or distance from the AP. TCP causes the client to back off, when in fact I think it should be more aggressive and perform forward error correction: i.e. the client should assume that many packets will not make it, and should retransmit everything twice or more within 100ms. (This is especially true for UDP, e.g. for DHCP address allocation over a network with 50% packet loss it's nearly impossible to get the link established.)
2. Ajax requests over HTTPS. The problem here is that each Ajax connection (after Apache has finished the keepalive) requires a complete cycle of re-establishing all the encryption layers, even if the actual payload is tiny. It would be useful to have some way to keep an HTTPS session alive for many minutes.
(The combination of Ajax, https, and slightly non-ideal wifi results in a horrible experience!)
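The "retransmit everything twice or more" idea in point 1 is a repetition code, the simplest form of FEC. A quick simulation sketches what it buys you; this is a hypothetical illustration, assuming packet losses are independent (which real radio interference often is not):

```python
import random

def delivered(p_loss, copies, rng):
    """True if at least one of `copies` independent transmissions
    of the same packet survives a channel with loss rate p_loss."""
    return any(rng.random() >= p_loss for _ in range(copies))

def observed_loss(p_loss, copies, trials=100_000, seed=1):
    """Fraction of packets lost despite sending `copies` duplicates."""
    rng = random.Random(seed)
    lost = sum(not delivered(p_loss, copies, rng) for _ in range(trials))
    return lost / trials

# With a 50% per-transmission loss rate, one copy loses about half the
# packets, two copies about a quarter, three about an eighth
# (expected loss = p_loss ** copies).
```

Note the cost: every duplicate also doubles the airtime the client consumes, which matters on a shared medium.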
Posted Oct 16, 2014 11:58 UTC (Thu)
by JGR (subscriber, #93631)
Much of the interference/noise is traffic for other APs or traffic for other 2.4GHz protocols. Sending even more data makes the noise problem worse.
If you're getting 50% packet loss, then you'd be better off fixing that rather than trying to work around it with client-side fudges (i.e. change radio channel, move/add APs, change/move antennas, etc.).
As for 2, as I understand it, HTTP/2 solves this.
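Until HTTP/2 is deployed end-to-end, some of the handshake cost can already be reduced with standard Apache settings: hold connections open longer, and cache TLS sessions so that even a fresh TCP connection can use an abbreviated (resumed) handshake. The values below are illustrative, not tuned recommendations:

```apache
# Keep idle connections open longer so periodic Ajax requests reuse
# the existing TCP/TLS connection instead of re-handshaking.
KeepAlive On
KeepAliveTimeout 300
MaxKeepAliveRequests 1000

# Cache TLS sessions so a new connection can resume an earlier
# session with an abbreviated handshake rather than a full one.
SSLSessionCache shmcb:/var/run/apache2/ssl_scache(512000)
SSLSessionCacheTimeout 600
```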
Posted Oct 17, 2014 15:13 UTC (Fri)
by grs (guest, #99211)
What about forward-error-correction?
By the time a packet is lost at the IP layer, one or more of its fragments has already failed to arrive despite a number of link-layer retransmissions.
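This point can be quantified with a back-of-the-envelope sketch: if each link-layer transmission fails independently with probability p and the driver retries up to n times, the loss only becomes visible to IP when all attempts fail. The retry count and the independence of failures are assumptions here; real 802.11 retry limits vary by driver and frame type.

```python
def residual_loss(frame_loss, max_retries):
    """Probability that a frame is lost after the original transmission
    plus max_retries link-layer retransmissions all fail, assuming
    independent failures with per-attempt loss rate frame_loss."""
    return frame_loss ** (1 + max_retries)

# Even a brutal 50% per-attempt loss rate shrinks to about 3% as seen
# by IP after 4 link-layer retries: 0.5 ** 5 == 0.03125
```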