Note, though: "the network should be as simple as it needs to be, but no simpler" - to paraphrase someone.
While complexity should generally be pushed to the end-stations and kept out of the network as much as possible, that does not mean *all* complexity should be kept out of the network. There are certain basic functions that *should* be kept in the network, along with whatever complexity is required to support those functions.
E.g. the original designers of TCP/IP believed it was very important for the network to abstract away differences in MTU. They argued that otherwise it would be hard for end-stations to reliably work around MTU differences, which in turn would make it hard for the network to take advantage of larger MTUs. It would also block end-stations from using techniques such as encapsulation, extra-addressing, etc., to solve problems such as mobility. For this reason they made TCP/IP support fragmentation at the network level.
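To make that concrete, here is a minimal sketch (purely illustrative, not anyone's actual implementation) of the arithmetic an IPv4 router does when a datagram is bigger than the next link's MTU: the payload is cut into pieces that fit, every piece except the last is a multiple of 8 bytes because the Fragment Offset field counts in 8-byte units, and the More Fragments flag is set on all but the last piece. The function name and the fixed 20-byte header are assumptions made for the example.

    def fragment_sizes(payload_len, mtu, header_len=20):
        # Largest payload per fragment, rounded down to a multiple of 8 bytes,
        # since the IPv4 Fragment Offset field is expressed in 8-byte units.
        max_payload = (mtu - header_len) // 8 * 8
        fragments = []
        offset = 0
        while offset < payload_len:
            chunk = min(max_payload, payload_len - offset)
            fragments.append({
                "offset_units": offset // 8,                      # Fragment Offset field
                "payload_bytes": chunk,
                "more_fragments": offset + chunk < payload_len,   # MF flag
            })
            offset += chunk
        return fragments

    # A 4000-byte payload crossing a 1500-byte-MTU link becomes 1480 + 1480 + 1040.
    for frag in fragment_sizes(4000, 1500):
        print(frag)

The end-stations never see any of this; the receiver simply reassembles whatever fragments arrive.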
Sadly, later implementors and stewards of IP concluded differently. They moved to deprecate fragmentation in IPv4, and removed in-network fragmentation from IPv6 entirely. Instead, it was left to the end-stations to discover the correct MTU for a path. Unfortunately, however, the original designers of IP have been proven correct. End-station MTU discovery has been proven to be fragile, and often does not work, due to stupid intermediate nodes blocking the required signalling (which is out of band: the ICMP "Fragmentation Needed" / "Packet Too Big" messages). At the best of times it probably adds more latency than network-level fragmentation.
As a result, the Internet is crippled with a 1500-byte MTU. :( The success of IPv6 would likely set this in stone.
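For what it's worth, here is a rough, Linux-specific sketch of what that end-station discovery looks like from a socket (the destination address is just a placeholder, and the numeric option values are the ones from <linux/in.h> in case the Python socket module doesn't export them): with IP_PMTUDISC_DO the kernel sets the DF bit on outgoing packets and relies on the ICMP "Fragmentation Needed" replies to learn the path MTU, which can then be read back with IP_MTU. If something on the path filters that ICMP, the cached value never drops and the oversized packets are silently blackholed -- exactly the fragility described above.

    import socket
    import time

    # Linux values from <linux/in.h>; newer Pythons may expose them directly.
    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
    IP_PMTUDISC_DO  = getattr(socket, "IP_PMTUDISC_DO", 2)
    IP_MTU          = getattr(socket, "IP_MTU", 14)

    DEST = ("192.0.2.1", 9)   # placeholder destination (TEST-NET-1, discard port)

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Set DF on everything this socket sends and let the kernel do PMTU discovery.
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.connect(DEST)

    for _ in range(3):
        try:
            # 1400 bytes of payload fits a 1500-byte first hop but not, say, a
            # 1280-byte tunnel further along. If that narrower hop's ICMP
            # "Fragmentation Needed" gets back to us, the kernel caches the lower
            # path MTU and later sends of this size fail with EMSGSIZE.
            s.send(b"\x00" * 1400)
        except OSError as exc:
            print("send rejected locally:", exc)
        time.sleep(1)

    # The path MTU the kernel currently believes applies to this destination.
    # With the ICMP filtered somewhere, it never drops below the link MTU.
    print("cached path MTU:", s.getsockopt(socket.IPPROTO_IP, IP_MTU))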
Posted Apr 1, 2013 15:46 UTC (Mon) by intgr (subscriber, #39733)
> End-station MTU discovery has been proven to be fragile, and often does not work, due to stupid intermediate nodes blocking the required signalling
Sounds like PMTU discovery has the same enemies: "smart" middleboxes (NAT) and enterprise admins (firewalls).
While I agree that protocol designers should have seen it coming, I still think the larger part of the blame lies with device vendors who don't implement existing protocols to the specification and don't release updates even when those problems are pointed out to them -- basically holding the whole industry hostage because the cost is externalized.
Which brings me to a crazy idea -- perhaps we need an "Acid test of consumer routers", following the model of acidtests.org, which pressured several browser vendors to improve their standards conformance because every savvy user could easily see how broken their choice was. Has anyone attempted something like this already?