Should distributors disable IPv4-mapped IPv6?
Posted Jun 7, 2016 13:27 UTC (Tue) by farnz (subscriber, #17727)
In reply to: Should distributors disable IPv4-mapped IPv6? by paulj
Parent article: Should distributors disable IPv4-mapped IPv6?
Because this was an attempt to use IPv6 12 years ago, in an extension-based fashion, and it failed because middle boxes already knew too much about what a "legitimate" UDP/IPv4 or TCP/IPv4 packet looked like, and assumed that anything which didn't match their definition of "legitimate" had to be dropped. An extension approach suffers the same problem: how do you design the extension such that it meets every middle box's definition of legitimate?
Posted Jun 7, 2016 13:55 UTC (Tue)
by paulj (subscriber, #341)
[Link] (54 responses)
That said, you have far, far more chance of getting a packet through such things if it begins with long-standing headers than with brand-new ones.
Posted Jun 7, 2016 15:03 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (53 responses)
Between 1998 and 2004, I had no IP connectivity to my home that lacked such middle-boxes - either mobile or fixed line. That's about the beginning of IPv6 time.
Any plan you're making to replace IPv4 has to account for that - given that you've got such middle-boxes, a new packet format is better, as it is guaranteed to consistently fail to pass the middle-box; in contrast, 6to4 was frustrating, as it sometimes worked, and sometimes didn't, on one of the networks I used. I eventually found out why - the network had disparate configuration between middle-boxes, and if I hit the "pass unknown protocol" middle-box, it worked; if I hit the "block unknown protocol" middle-box, it didn't. The network operator's reaction was "well, yeah, who cares - it's not a supported protocol".
Posted Jun 7, 2016 15:17 UTC (Tue)
by paulj (subscriber, #341)
[Link] (52 responses)
Before IPv6 existed, even if some boxes might do data-inspection on frames with IPv4 headers and block some stuff based on data past the v4 header, the fact remains that the probability of a packet with a non-IPv4-header (e.g. IPv6) getting forwarded by any given IPv4 router was 0, while the probability of an IPv4 packet with either an IP option or some non-TCP/UDP protocol getting forwarded was >0. P>0 works better than P=0.
Even today, literally *decades* on, the probability that any given IP router will usefully forward an IP packet is much higher if IP.version == 4, than if IP.version == 6. IPv6 tables are smaller than IPv4 tables, and not growing as fast (i.e. the v6 public Internet is smaller and less actively connected to than the v4 one still). But, success!
Posted Jun 7, 2016 15:54 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (51 responses)
Apparently random packet loss is far harder to diagnose and fix than 100% failure. When I can literally be in the middle of diagnosing the fault, and a change in middle-box makes it go away, it's very hard to find out where the blockage is. When there's 100% failure at a given node, the fault is easy to find and fix.
So yes, I'd prefer 100% failure to 80% success at random on any given link - 100% failure is a clear fault, 80% success causes failure that I can't diagnose.
Posted Jun 7, 2016 16:21 UTC (Tue)
by paulj (subscriber, #341)
[Link] (50 responses)
Changes in path that put your packet through a broken box - one that deliberately fails to forward packets that the spec for its forwarding plane says it should - will bite you regardless. You're boned any which way if you have such boxes in the path to networks you care about (i.e. near ones you connect from or to). I fail to see how that impacts the discussion.
A v4 packet carrying a tunnel protocol could be killed off as easily as any other IPv4 packet carrying some other protocol (and as you say, you had that problem).
If you're trying to argue that the existence of such evil, b0rken middle-boxes means it's somehow a better idea to require /all/ hosts to implement both a new protocol _and_ a second Internet topology along with it, I'd be curious how. Personally, I don't think stupid, b0rken evil middle-boxes will be any less of a problem if/when IPv6 becomes dominant.
I'll tell you this though, I'm fairly damn sure I can get an IPv4 packet with extensions to a lot more hosts out there than I could an IPv6 packet (because, if need be, I can disguise the extensions, e.g. behind an HTTP header, or even an HTTPS header in the worst case).
Posted Jun 7, 2016 16:27 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (49 responses)
No, it wouldn't bite you if you just did TCP and UDP over IPv4, as the network expected. It only bit you when you expected to do more than TCP and UDP (6to4, IPsec without NAT-T, etc). And the issue is that the middle-boxes come in and out - as long as you're doing "standard" ICMP, TCP and UDP, it's fine; as soon as you do something more complex, it's problematic. My standard diagnostic tools use ICMP, TCP and UDP, so spotting an issue is hard.
Given the existence of evil middle-boxes, I would prefer a new protocol that's guaranteed not to pass, to a protocol that has an 80% success rate - the former is clear and diagnosable, the latter is unreliable.
And, IME, more networks now support IPv6 than pass all IPv4 packets cleanly - NAT boxes break IPv4 with extensions, for a start.
Posted Jun 7, 2016 16:33 UTC (Tue)
by paulj (subscriber, #341)
[Link]
Posted Jun 7, 2016 16:45 UTC (Tue)
by paulj (subscriber, #341)
[Link] (47 responses)
Posted Jun 7, 2016 17:34 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
These aren't exactly carefully chosen networks - they're random hotels and the like. I'm able to notice that they've got v6 connectivity because my VPN to work tells me that they're IPv6 enabled.
Posted Jun 7, 2016 18:21 UTC (Tue)
by jem (subscriber, #24231)
[Link] (37 responses)
There's no need to wait. We are way beyond that point already. Google's "Worldwide IPv6 adoption" figure just hit 12% last weekend. 27% of users in the United States use IPv6.
The IPv6 traffic volume is big too, since all the big sites like Facebook, Google, Netflix and Yahoo are reachable via IPv6, and dual-stack hosts typically prefer connecting over IPv6. In fact, I am posting this over an IPv6 connection from my home to LWN's IPv6 address. I never asked for IPv6 connectivity; my ISP just turned it on without asking.
I suggest you do some research before jumping to conclusions about the slow growth of the IPv6 network. It has had a slow start, but it is really happening now. A tipping point will come when the scarcity of v4 addresses, combined with the ubiquity of v6 clients, starts making it economically viable to simply ignore v4-only clients and provide services only over IPv6. We're not there yet, but the day will come.
Posted Jun 8, 2016 8:37 UTC (Wed)
by paulj (subscriber, #341)
[Link] (36 responses)
I'm 98% sure that IPv6 will, one day, beat out IPv4. However, I'm not confident it will happen within the next decade. And yes, there'll be an inflection point somewhere - that's how population phase changes and network effects inherently work. The issue is the span of that S-curve across time.
I am also *baffled* that anyone could seriously - from today's perspective and the benefit of the hindsight it provides - try to argue that the chosen IPng transition strategy was the correct one. It's there with Python 3 and Perl 6 on the shelf labelled "Example case studies" in the "Second System Syndrome" section of the Brooks library.
Posted Jun 8, 2016 8:46 UTC (Wed)
by paulj (subscriber, #341)
[Link]
Posted Jun 8, 2016 9:04 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (34 responses)
And a lot of the transition time was caused by network hardware not seriously supporting IPv6 until around 2010 or so. A huge company that I know (though I don't work there) recently started its IPv6 migration exactly because its internal hardware is soon due for replacement and will finally be IPv6-compatible. Please note that IPv64 would still have been hampered by the very same hardware issues.
Posted Jun 8, 2016 9:16 UTC (Wed)
by paulj (subscriber, #341)
[Link] (33 responses)
Look, no doubt it will eventually replace v4, but arguing that IPv6 is an example of a successful transition is *nuts*. Even today, when v4 is actually _exhausted_ completely in terms of new RIR allocations (other than AFRINIC), the IPv6 Internet is still very much smaller in terms of connectivity than the v4 one. That is just staggering. It would have been called lunacy if it had been predicted 10-15 years ago.
An extension IPng would not have been as hampered by hardware - the whole point of accepting the slight ugliness of an extension would have been that it was inherently capable of crossing sections of the network that were not upgraded and only v4-capable. And it would have routed more efficiently than any transition strategy that brought in a completely disjoint, separate address space.
Posted Jun 8, 2016 17:18 UTC (Wed)
by farnz (subscriber, #17727)
[Link]
Arguing that it's successful may be nuts, but so's arguing that a specification change could have made it much better.
The underlying issue is that, by 2000, a significant fraction of networks were actively hostile to any traffic that did not fit their predefined patterns for TCP/IP and UDP/IP. This meant that changing over needed careful coordination - regardless of what the actual packets look like.
And, in terms of other big transitions in the communications sphere, I don't think it's going that slowly; take IP Multimedia Subsystem (IMS), for example, which is a 2003 specification for replacing PSTN and UMTS telephony with IP-based telephony, and is a prerequisite for carrying voice calls and SMS over LTE networks; we are only just now beginning to see limited deployment of IMS, and then only within networks (being transcoded back to SS7 for interoperation when needed, or calls just dropped instead of bothering with interop), and only because FiOS and LTE don't support the old circuit-switched tech required for voice. For IPv6 to be as slow, it'd have to be at the stage where people are only deploying IPv6 internally (no interworking), and then only because they have no other choices.
Posted Jun 8, 2016 18:18 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (30 responses)
Fundamentally, even with an "extension" of IPv4 there would be hosts that had the extension and hosts that did not, which is disjoint addressing in either case. A host without the extension couldn't use extended addresses to talk to a host that has them; this isn't an area where there are degrees of difference.
Posted Jun 8, 2016 20:54 UTC (Wed)
by paulj (subscriber, #341)
[Link] (29 responses)
In the former case, it means 'new' hosts can still easily send packets toward other 'new' hosts, even across 'old', un-upgraded sections of network, if they know the address. Existing mapping/rendezvous services (e.g. DNS) do not need anything fancy, other than to be able to support the new format of address. Of course, 'old' to 'new' and vice versa still require ALGs to communicate, but 'new'-to-'new' trivially works with the former approach and doesn't with the latter.
6to4 is a poor cousin of that former approach. However, it is crippled in a world where it isn't an intrinsic part of the spec (so it has to work like a tunnel with dedicated decap/VTEP points and packets must /always/ go to them; rather than working like routing and the first 'new' host that has a viable route can do the right thing), and where a significant part of the 'new' address space has no relation to the 'old' (meaning routing between the unrelated 'new' space, and any 'extended' new space similarly needs tunnels and is inefficient).
Posted Jun 8, 2016 22:40 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (7 responses)
Posted Jun 8, 2016 23:26 UTC (Wed)
by nybble41 (subscriber, #55106)
[Link]
Not quite. There does need to be a gateway with a public IPv4 address on each side, but it doesn't need to be the host itself. This is similar to the situation with 6to4: you can have any number of IPv6 hosts with 6to4 addresses behind a 6to4 gateway with a single IPv4 address, and they can all communicate with other hosts similarly located behind other 6to4 gateways. Packets are routed IPv6 to the local gateway, then IPv4 to the remote gateway, and finally IPv6 again to the destination. (Naturally, if you have an IPv6 route to the destination's 6to4 address then you can avoid the gateways entirely.)
The problem which would be alleviated by having *only* 6to4 ("extended" IPv4) addresses would be communication between a host with a 6to4 address and one with a native IPv6 address (and no 6to4). This situation requires a relay to translate between encapsulated 6to4 packets and the IPv6 Internet, which was always a weak point—the other being routers that arbitrarily drop 6to4 packets just because they aren't TCP or UDP, which could have been prevented, albeit with some overhead, by encapsulating 6to4 traffic in UDP instead of giving it a new IP protocol number. As a rule any traffic which required the services of a 6to4 anycast relay would not be routed efficiently, even assuming the packet wasn't filtered and the relay wasn't overloaded. The best you could hope for is that the packets reach a relay quickly, in both directions, since the correct routing can't be determined until after the packet has been translated.
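To make the mechanics concrete, here is a minimal sketch (in Python, using only the standard ipaddress module) of the address embedding described above: a 6to4 gateway's /48 is derived from its public IPv4 address under 2002::/16 (RFC 3056), and any 6to4-aware router can recover the remote IPv4 tunnel endpoint from the destination address alone, so no relay is needed for 6to4-to-6to4 traffic. The example addresses are taken from the documentation ranges, not from this discussion.

```python
# Sketch: deriving a 6to4 prefix from an IPv4 address, and recovering the
# remote IPv4 tunnel endpoint from a 6to4 destination (RFC 3056, 2002::/16).
import ipaddress

SIX_TO_FOUR = ipaddress.IPv6Network("2002::/16")

def ipv4_to_6to4_prefix(ipv4):
    """Embed a public IPv4 address into its 2002:xxxx:xxxx::/48 prefix."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

def embedded_ipv4_gateway(ipv6):
    """If the destination is a 6to4 address, return the embedded IPv4
    endpoint so the packet can be tunnelled directly over IPv4; otherwise
    return None (reaching native IPv6 space would need a relay)."""
    addr = ipaddress.IPv6Address(ipv6)
    if addr in SIX_TO_FOUR:
        return ipaddress.IPv4Address((int(addr) >> 80) & 0xFFFFFFFF)
    return None

print(ipv4_to_6to4_prefix("192.0.2.1"))            # 2002:c000:201::/48
print(embedded_ipv4_gateway("2002:c000:201::1"))   # 192.0.2.1 - route over IPv4
print(embedded_ipv4_gateway("2001:db8::1"))        # None - native, relay needed
```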
Posted Jun 10, 2016 9:21 UTC (Fri)
by paulj (subscriber, #341)
[Link] (5 responses)
Posted Jun 10, 2016 9:48 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (4 responses)
Let's use 6to4 addresses only, for now, just to make it clear, and use the "dotted quad" notation anywhere there's an IPv4 address, not just at the end.
If I have IPv4 192.0.2.0 and IPvN 2002:192.0.2.0::/64, and you have IPvN 2002:192.0.2.0:ffff::/64, what obliges me, as the user of 192.0.2.0, to route your IPvN packets and not just drop them on the floor as "not for me"? Indeed, what prevents me from claiming the entirety of 2002:192.0.2.0::/48 as "mine"?
Posted Jun 10, 2016 10:29 UTC (Fri)
by paulj (subscriber, #341)
[Link] (3 responses)
Once the legacy space is out, further assignments must of course be from a prefix that is constant in the legacy space. It would be the assigning authority that determines that.
As to what stops you advertising other people's space - or larger prefixes spanning many assigned spaces - well, nothing really stops you technically in BGP as used today. However, there are socio-political-commercial checks. E.g., what stops you advertising 2001::/16 to today's public Inter6net?
Posted Jun 10, 2016 10:33 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (2 responses)
What stops me not advertising 2002:192.0.2.0::/48 at all in the IPvN space, and just advertising 192.0.2.0 in the IPv4 space, thus allowing me to hijack any suballocations? In IPv6, it's simple - if I control 192.0.2.0, I control the entirety of 2002:192.0.2.0::/48 anyway, and thus hijacking it isn't an issue.
And, from the description you're giving of post-runout allocations, we'd effectively sacrifice 32 bits of address as "dead" - especially since people are likely to optimize their IPvN routing to go down fast paths if those bits are the static "no matching v4" prefix, and to just route over IPv4 otherwise, forcing people who want to switch off IPv4 routing to continue to take part in the v4 network indefinitely, or lose reachability.
Posted Jun 10, 2016 10:49 UTC (Fri)
by paulj (subscriber, #341)
[Link] (1 responses)
Well, what stops you advertising /any/ prefix X in IPv4 today that you don't have a right to advertise? What you're asking is exactly equivalent to "What stops me advertising 64/8?" or "what stops me advertising 184/8?" It's an interesting discussion, but not specific to transition mechanisms for extending IP address bits.
As for sacrificing dead bits, why do you think they have to be /32? There's no reason we couldn't have used foresight in the 90s to reserve a /8 in the v4 space for the extended space. Where I wrote "further assignments must of course be from a prefix that is constant in the legacy space" I didn't intend that to mean that prefix would have to be the full width of the legacy space.
Posted Jun 10, 2016 10:59 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
Because I fully expect router vendors to do the same sort of shit as they do today, and do anything to win benchmarks. If you can be 0.01% faster by special-casing IPvN to the "extended" prefix, and using IPv4 routing for the remainder of the IPv4 network, that's what you'll do, and you'll blame other people when it breaks, right up until you're proven to be at fault.
Thus, I pay the pain of IPv4 routing for much, much longer than I need to - I may have access to far better IPvN connectivity (e.g. he.net were doing some incredible - Cogent-beating - deals on IPv6-only transit at one point), but I'm stuck with IPv4 indefinitely.
Posted Jun 8, 2016 23:00 UTC (Wed)
by nybble41 (subscriber, #55106)
[Link] (11 responses)
And in the end all you get out of this scheme is the ability to avoid upgrading a few core routers to handle the new protocol natively—you still have to update all the endpoints and application software, and place dual-stack routers at the border of each IPv6 "enclave". In other words, most of the actual hard work that's held up IPv6 adoption would remain unchanged.
In terms of actually getting everyone to move to using the new protocol natively, without tunneling via IPv4, I think IPv6 has been rather successful. More so than if significant compromises had been made to retain compatibility with IPv4, at any rate.
> Existing mapping/rendezvous services (e.g. DNS) do not need anything fancy, other than to be able to support the new format of address.
This is exactly what was done for IPv6. Support for new addresses takes the form of AAAA records. "A" records with longer addresses would not have been any different from a technical point of view, and reuse of the name would just have created confusion given the necessarily incompatible binary format.
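As an illustration of how little the application side had to change, here is a minimal sketch using Python's standard getaddrinfo(): the resolver returns AAAA and A results side by side, and a dual-stack caller simply tries them in order (typically IPv6 first, per the default address-selection policy). The hostname is only an example.

```python
# Sketch: to an application, the "new address format" is just another DNS
# record type; getaddrinfo() hands back AAAA (IPv6) and A (IPv4) answers
# together and the caller works through them in order.
import socket

def resolve_all(host, port=443):
    for family, _stype, _proto, _canon, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        record = "AAAA" if family == socket.AF_INET6 else "A"
        print(f"{record:>4}  {sockaddr[0]}")

resolve_all("lwn.net")   # on a dual-stack host the IPv6 answers are
                         # normally sorted ahead of the IPv4 ones
```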
Posted Jun 10, 2016 9:42 UTC (Fri)
by paulj (subscriber, #341)
[Link] (10 responses)
AAAA is exactly what I was referring to as the straightforward approach. Note that things like 6to4 require /more/ than just DNS in order for IPv6 hosts generally to be able to communicate with each other. Because of the two different types of address space, 6to4 requires an additional special service to allow one class to reach the other. Because one class of the 'new' address space (the non-6to4 space) has no connection at all to the 'old', there is no function from labels in that non-6to4 space to a routing label in the 'old' space. And because of that you need a special route and special mapping routers, and commercial interests in providing these things don't align (a provider interested in IPv6 will deploy the 'non-6to4' space and may not bother with relays for customers). So the whole thing becomes a lot less straightforward than simply extending existing address databases, using the address found, and having *any* router later be guaranteed to be able to carry out a trivial f(IPng-ID) -> (IPv4 routing label) function based solely on the packet header if need be, re-using existing routing tables to forward...
But hey.
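For what it's worth, here is a purely hypothetical sketch of the f(IPng-ID) -> (IPv4 routing label) function described above, assuming an extension scheme in which the legacy label is simply the top 32 bits of a longer address. The 128-bit width, the /8 reserved for post-runout assignments, and the example values are all invented for illustration; nothing like this was ever standardised.

```python
# Hypothetical sketch: in an "extended v4" scheme where the legacy routing
# label is the first 32 bits of the longer address, any un-upgraded router
# can forward on that label alone, re-using its existing IPv4 tables.
import ipaddress

# Imaginary prefix reserved for assignments made after v4 runout; such
# addresses share a constant legacy label and need extension-aware routing.
POST_RUNOUT = ipaddress.IPv4Network("240.0.0.0/8")

def legacy_routing_label(ipng_id):
    """The IPv4 label a legacy-only router would forward on."""
    return ipaddress.IPv4Address(ipng_id >> 96)   # top 32 of 128 bits

def needs_extension_routing(ipng_id):
    return legacy_routing_label(ipng_id) in POST_RUNOUT

# A pre-runout assignment: forwardable by any IPv4 router toward 192.0.2.1,
# with the low 96 bits only examined by extension-aware hosts and routers.
addr = (int(ipaddress.IPv4Address("192.0.2.1")) << 96) | 0x1234
print(legacy_routing_label(addr), needs_extension_routing(addr))  # 192.0.2.1 False
```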
Posted Jun 10, 2016 9:49 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (9 responses)
You don't need the special service if you already have IPv4 of your own - it's a trivial configuration on a dual-stack router to route 6to4 over IPv4, and to route native over IPv6. The special service (the anycast relay) existed to allow people without legacy addresses (and thus legacy routes) to route to people using 6to4.
Posted Jun 10, 2016 10:42 UTC (Fri)
by paulj (subscriber, #341)
[Link] (8 responses)
Can routing between 'new extended from old' and 'new disjoint' be efficient if you have fully native networks between them? Sure! Routing between them over 'old' networks sucks though, especially while the 'old' network is still relatively large.
Posted Jun 10, 2016 10:47 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (7 responses)
The relays are completely avoidable. If I have native IPv6 and native IPv4, I can add an additional 6to4 address to my service, and then I'm using (as per RFC 3484) 6to4 to people who are only doing 6to4 thus far, and native IPv6 when I'm communicating with people on native IPv6.
Even if I don't want to go that far, if I, as a dual-stack user, put in a 6to4 special route at the v6v4 router, I only pass through a relay when a 6to4 user tries to contact my IPv6 address not knowing what my IPv4 address is. Once I've responded once, they've got a cached route via the 6to4 direct mechanism that they can use.
And again, if this is so important to deployment, why didn't it happen? Why hasn't any significant IPv6 service advertised both native and 6to4 addresses, so that I can use routing over IPv4 if I have no native IPv6?
Posted Jun 10, 2016 10:53 UTC (Fri)
by paulj (subscriber, #341)
[Link] (6 responses)
I can fully understand why no one wants to build stuff on 6to4.
Posted Jun 10, 2016 10:59 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (4 responses)
But I'm not talking about building stuff on 6to4; I'm talking about doing the bare minimum to give 6to4 users an efficient way to route over IPv4 to you, instead of over native IPv6.
Obviously, in the current deployment, native is better if you can get it - but if routing over v4 is so important, why is nobody doing anything to make it even possible for people to use 6to4?
I suspect the answer is that if you have native IPv4, but no native IPvN, there's always a penalty (no matter how slight) for using IPvN instead of IPv4 - so the only people who care about IPvN are the people who can't get decent native IPv4 connectivity. In turn, these are the people who cannot make IPvN over IPv4 routing work reliably - they have no access to the IPv4 routers. And so, you end up in a situation where no-one can be bothered to deploy, because there's no reason to for as long as IPv4 works for you (while there *is* a cost to IPvN connectivity, in terms of the increased threat surface if nothing else, thus encouraging people to block IPvN).
Posted Jun 10, 2016 12:21 UTC (Fri)
by paulj (subscriber, #341)
[Link] (3 responses)
That read to me like an argument rooted in the "completely logically-new Internet as primary transition strategy, later 6to4" world. And:
"but if routing over v4 is so important, why is nobody doing anything to make it even possible for people to use 6to4?"
You're having a completely different argument to the points I made. I don't disagree at all that 6to4 sucks in the world as it developed. Indeed, that's fundamental to my argument! 6to4 sucks *BECAUSE* of the chosen primary transition strategy, which meant there was a significant disjoint v6 space, meaning you could never get good routing in v6 generally using the extension approach. Which led to much unhappiness, even with 6to4.
Nevertheless, many people *did* do it - they got "disjoint space" v6 connectivity via tunnel brokers, even before 6to4 (intrinsically going to suck). It was the 'cool' thing to do amongst the circle of networky geeks I knew in the early 00s anyway - as part of getting ready for the imminent IPv6 utopia (which I was a devoted believer in back then).
So, if you want to seriously discuss this, stop referring to arguments based on how things work when there's a large chunk of disjoint 'new' space. Cause, no argument there - extension then sucks. Let me quote one of my earliest comments in this article again, and highlight it:
"No, *6to4* is not an 'extension' approach. It's *the intrinsically inefficient and problematic* "bridge-the-gap" transition mechanism you're left with *once you have created a second Internet address space that is completely divorced from the existing* and (still, to this day, /decades/ later) dominant address space."
I am *fully* aware of how 6to4 sucks and why it was unattractive.
And again, if you don't believe extensions are possible, note that the Internet - *en masse* - *CHOSE THE EXTENSION APPROACH*. Faced with a choice between the 'clean' way of IPv6 and the horrid, backward way of IPv4 PNAT, the Internet went with PNAT. Except large networks have exhausted even the extra bits that port space gives (which is effectively less than 16 due to TCP timeouts and other practical considerations). So now that IPv4 really is completely out, and we use so many apps that even the crappy extension approach of "CG"-PNAT stops working for large providers, now we're seeing some non-trivial IPv6 at last.
Success? Really? :)
Posted Jun 10, 2016 12:35 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (2 responses)
And my argument, which you are consistently and dishonestly misrepresenting, is that the problems with 6to4 that I faced between three hosts under my control are emblematic of why no extension approach that requires support at more than one point in the network can succeed. If I can't reliably operate any extension method between three hosts that I'm willing to make arbitrary changes to in order to make it work (short of treating IPv4 as a wire, and running something like L2TP over it), what makes you think that it'd work better if that was the only option?
PNAT is not an extension approach - it's a way for a network to unilaterally expand, whether or not anyone else in the ecosystem wishes to co-operate with them; there is no way for you to tell whether 81.187.250.192 is a NAT router, or a host, from the information I'm giving you here.
While 6to4 is not an extension approach, many of the problems people face with it are exactly the same problems you would face with an extension approach; if 6to4 had succeeded, while native IPv6 was still taking ages to roll out, then I would be less skeptical of your claims.
And, FWIW, I believe that faced with the choice of IPvN, or NAT44, people would still choose NAT44 - everything that works in pure IPv4 can be made to work in a NAT44 world, while there are immediately hosts that are inaccessible if you try to grow with IPvN. As long as NAT44 enables you to grow in the IPv4 world, ignoring end to end comms, you have zero incentive to do anything to go to IPvN - after all, if I'm on IPvN only, but LWN.net is still on IPv4 only, there's no way for me to communicate with LWN, other than NAT44, and if I have to have NAT44, why would I do the extra work to have IPvN too?
And yes, success, really. IPv6 rollout is taking about the same amount of time as IPv4 rollout did, and yet IPv4 gave people entire new capabilities that did not exist before they moved from pre-IP protocols to IPv4. We've got another 5 to 10 years before IPv6 is slower than IPv4; and they're both rolling out faster than the replacement of in-band signalling with out-of-band signalling in the PSTN.
Posted Jun 10, 2016 13:41 UTC (Fri)
by paulj (subscriber, #341)
[Link] (1 responses)
"While 6to4 is not an extension approach, many of the problems people face with it are exactly the same problems you would face with an extension approach; if 6to4 had succeeded, while native IPv6 was still taking ages to roll out, then I would be less skeptical of your claims."
My issue is that some of the problems you've listed are caused by the pre-existence of the disjoint v6 address space. Which is exactly congruent with my point. The existence of that disjoint space creates routing problems for extension approaches. That's a mathematical fact grounded in the theory of routing in graphs.
Had we gone with an extended space first, that would have allowed the new protocol to gain a foothold much more easily, because it could have re-used the existing routing state of IPv4. The 'new' packets would have been routable across IPv4 networks just as PNATed packets and 6in4 packets between 6to4-space v6 hosts are. With that foothold, it would have been much easier for applications to use IPv6. We needn't have had OS vendors de-prefer v6 addresses to v4 ones in source-address selection (SAS) algorithms, because if v4 was working then 'new' should have worked too - routing-wise at least. Native links and native routing might have been easier to justify building earlier, as there'd have been more applications using v6. The natively routed 'new' network would perhaps have been bigger by the time of exhaustion, and so any relays needed post-exhaustion would have been closer to the networks actually dependent on them.
I'll grant you middle-boxes and the difficulty of getting new protocols across the modern Internet however:
1. This problem was a lot less severe in the mid-90s than ten years later. Had an extension approach via an IPv4 protocol been baked into RFCs in 1995, it might have fared a lot better than an IP preamble which is inconsistent past the first version field and completely unforwardable by 'old' hardware, and even by 'new' hardware that hasn't been configured to be part of the logically new Inter6net.
2. Even if 1 was an issue, you can still just use an existing IP protocol, e.g. UDP: put a normal-looking UDP header before your extension. E.g., you can dual-specify the UDP source port as the flow-label for the extension header beyond it (so you don't need another flow-label), as a number of other IPv4 encapsulation protocols do, e.g. VxLAN. (A sketch of what that could look like follows below.)
3. Even if 2 is an issue, because UDP is blocked, you can still use TCP. And there are TCP extensions that allow TCP to be used in a datagram mode, precisely because of this issue.
Is that aesthetically pleasing? No. It might have been more pragmatic, though.
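A purely hypothetical sketch of point 2, under invented assumptions: the destination port, the 96-bit extension suffixes, and the payload layout are made up for illustration; the only real-world reference is the VxLAN-style trick of letting the UDP source port carry per-flow entropy.

```python
# Hypothetical sketch: carry the "extended address" bits behind an ordinary
# UDP header, so middle-boxes see plain UDP/IPv4; the legacy 32-bit labels
# stay in the outer IPv4 header and old routers forward as usual. Like
# VxLAN, the UDP source port doubles as per-flow entropy for ECMP hashing.
import zlib

EXT_DST_PORT = 9999          # invented well-known port for the extension
EXT_SUFFIX_LEN = 12          # invented: 96 extra address bits per endpoint

def flow_entropy_port(src_suffix, dst_suffix):
    """Hash the extended endpoints into the ephemeral source-port range,
    so the 5-tuple stays stable per flow but differs between flows."""
    return 49152 + (zlib.crc32(src_suffix + dst_suffix) % 16384)

def build_udp_payload(src_suffix, dst_suffix, inner):
    """Return (src_port, dst_port, payload): the extension data rides at
    the front of an otherwise normal UDP payload."""
    assert len(src_suffix) == len(dst_suffix) == EXT_SUFFIX_LEN
    return (flow_entropy_port(src_suffix, dst_suffix),
            EXT_DST_PORT,
            src_suffix + dst_suffix + inner)

sport, dport, payload = build_udp_payload(bytes(11) + b"\x01",
                                          bytes(11) + b"\x02",
                                          b"hello")
print(sport, dport, len(payload))   # a port in 49152-65535, 9999, 29
```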
Posted Jun 10, 2016 13:58 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
PNAT is a local-only approach - my use or not of PNAT is transparent to the Internet at large, as long as I control the PNAT I sit behind. With 6 billion people, and only 4 billion IPv4 addresses, this is not an approach that scales out, as not everyone can control their own PNAT (hence CGNAT).
I have consistently cited two problems with 6to4:
An extension header does not address either of these problems; instead it addresses a problem that I don't think we faced (that of convincing networks that don't filter traffic to pass IPvN traffic - you could already run 6to4 or 6in4 tunnels over those networks, and 6to4 tunnels even had autodiscovery of endpoints). I think the deeper problem is that IPvN, no matter how you do it, is a solution to a non-problem for the majority of networks, and (definitionally) creates new potential problems (even if all it does is permit traffic through that an IPv4 only IDS or firewall did not properly inspect).
I don't see how making IPv4 space a strict prefix of IPvN space solves either problem; arguably, the extension method makes it worse, by making it more probable that people have actual security incidents traceable to not correctly blocking IPvN when they need to, thus triggering early overblocking.
Posted Jun 10, 2016 17:51 UTC (Fri)
by nybble41 (subscriber, #55106)
[Link]
Let's say I agree with you on this point. The reason for this is not that the designers of IPv6 were stupid or short-sighted, it's that they didn't want tunneling over UDP/IPv4 to become entrenched as the new de facto standard, which is the logical end result of the address extension approach. End-to-end connectivity was not the only problem they were trying to solve. That's why 6to4 tunnels were positioned as a temporary transition mechanism and not a long-term solution. Extending IPv4 addresses in a way compatible with IPv4-only core routers would have made something like 6to4 an essentially permanent fixture, which would lead to suboptimal routing and stand in the way of various other efficiency improvements.
Basing the IPv6 address space on the massively fragmented IPv4 space would have been a major mistake in the long term, even if it initially resulted in faster adoption. The only practical way to fix the fragmentation issue was to start over from scratch.
Posted Jun 8, 2016 23:39 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (8 responses)
That does describe how 6to4 works: when talking between two subnets which have 6to4 enabled on their gateway routers, the traffic is routed directly over IPv4, because you know the IPv4 endpoint - it's encoded in the IPv6 address. You only need the well-known relay for communication with endpoints outside of the 6to4 address range. There is really no way to avoid that, which should be clear, unless you forever tied new addresses to the old, which would then still be a limited resource that would run out right about now, not solving the larger problem.
You could make a case that earlier adoption of something more like Teredo might have had a better technical chance of working; 6to4 was killed in large part by uncooperative middle boxes. But in either case someone has to run translation endpoints to talk to endpoints outside of the compatibility address range; no one wanted to do that for 6to4, and MS paid out of their own pocket to make Teredo work. I'm just not sure in this universe how one could have plausibly done a significantly better job.
Posted Jun 10, 2016 10:19 UTC (Fri)
by paulj (subscriber, #341)
[Link] (7 responses)
At some point the portion of the routing label that is recognised by the old space must become fixed/invariant, and the portion that changes between assignments of labels is purely in the extended space. That's no different to now having run out of v4 space, and new allocations from RIRs now generally only being possible from v6 space (well, except AfriNIC I think, and some special case reserves). Politics and power-plays (money, etc.) around trading prefixes valid in the old space would still exist, to the extent the old space was more valuable than the new due to transition issues.
However, the 'new' Internet could have had a much more natural roll-out and deployment. It wouldn't have had to boot-strap a whole new Inter6net, it would have just naturally come into being through the deployment of software updates - without much administrative action (inc. cross-organisational administrative action that might require contracts to be signed) being needed, if any at all. The 'new' space would have re-used the routing state of the existing 'old' Internet - the 'new' would not have been blocked on that routing state having to be recreated, and dependent on admin action. The 'new' could have re-used the 'old' routing state where it existed, while the building out of the pure 'new' routing links and state took place.
Relays would not have been necessary at all until the old space ran out, if even then. Given this would have provided a much more automatic transition, able to re-use existing assignment and routing state, it is plausible that pretty much everyone would have been up and running with the 'new' well before the 'old' ran out. Perhaps the build-out of the 'new' routing state would have been pretty much complete and this would be a non-issue - all the routing would have been fully native anyway. Even if pockets of 'old' networks continued to exist past the 'old' space running out of addresses, the operation of the relays would have been much more naturally aligned with those needing them. The 'old' that needed to be bridged across would have been smaller, and the relays required would consequently have been much closer to those needing them, which would naturally limit the routing inefficiency and also make it commercially much more likely that the tunnel provider would be your provider (or your provider's provider) - making the commercial incentives more natural.
Stupid middle-boxes are an orthogonal problem. They'll be a problem in IPv6 as much as IPv4. I don't think the exact protocol matters so much as the proportion of packets. There is safety in numbers. If enough packets of a certain type are important, they'll have to be allowed through. Also, the problem of stupid middle-boxes was less bad in the mid to late 90s than a decade later. There was perhaps still a window of opportunity in the mid-90s to get something out, while the ratio of cluelessly built or cluelessly administered boxes was still comparatively low. However, of course, it wasn't realised how much worse that problem would get.
Further, if cluelessly built or administered middle-boxes are an issue for packets with common headers and extension data, they are surely an even /bigger/ problem for packets with uncommon, completely new headers. ;)
If you argue that the build-out of the pure 'new' routing state would have been no faster in this model, and hence that it'd have run into the same problems, well, that's fair enough. I think the automatic re-use of existing state an extension method would have allowed, could have allowed such an IPng to have gained traction much faster. But that is indeed opinion, and we will never know.
Posted Jun 10, 2016 10:24 UTC (Fri)
by paulj (subscriber, #341)
[Link]
Except it doesn't, due to stupid middle-boxes. As a consequence, the Internet MTU is effectively frozen forever at 1500 - even if pretty much everyone supports >1500. We can't really make general use of the powerful technique of encapsulation as a result (at least, not over the general Internet). That 1500 MTU becomes ever more restrictive as Internet bandwidth increases.
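The arithmetic behind that complaint is simple; a small sketch, using the standard minimum header sizes, of how each layer of encapsulation eats into a 1500-byte path MTU that cannot grow:

```python
# Sketch: with the path MTU pinned at 1500 bytes, every encapsulation layer
# comes straight out of the payload budget (minimum header sizes shown).
LINK_MTU = 1500

overheads = {
    "plain IPv4":                      20,
    "plain IPv6":                      40,
    "6in4 (IPv6 in IPv4, proto 41)":   20 + 40,
    "IPv6 in UDP in IPv4":             20 + 8 + 40,
    "IPv6 in GRE in IPv4":             20 + 4 + 40,
}

for name, hdr in overheads.items():
    print(f"{name:32} leaves {LINK_MTU - hdr} bytes of payload")
```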
Posted Jun 10, 2016 10:39 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (4 responses)
I actually think that the "IPv4++" approach would have been slower to roll out, not faster. If "routes over IPv4" was a priority for anyone deploying IPv6, we'd see a lot of 6to4 rollout in the wild; RFC 3484 defines address selection policy such that I can advertise 2001:db8::1 and 2002:192.0.2.1::1 in DNS, and have people whose IPv6 support is native communicate over native IPv6, and people who use 6to4 route over the IPv4 network, not depending on intermediate relays.
Empirically, virtually nobody has published 6to4 addresses in DNS along with their native IPv6 addresses, and yet every significant IPv6 stack out there supports RFC 3484 address selection, and has done so for at least the last decade (I don't know what behaviour the pre-Vista Windows stack had). If being able to carry IPv6 traffic over IPv4 routes to people who are stuck routing from behind an IPv4 only AS was operationally beneficial, why are we putting roadblocks in the way of doing that?
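A simplified sketch of the RFC 3484 behaviour being relied on here: the default policy table gives 6to4 destinations (2002::/16) a different label from native IPv6, and a source address whose label matches the destination's is preferred, so a host that publishes both a native and a 6to4 address gets 6to4-to-6to4 traffic carried over IPv4 routes and native-to-native over IPv6, with no relay involved. Only the matching-label rule is shown; the real policy table and rule list (RFC 3484, later RFC 6724) are longer.

```python
# Simplified sketch of RFC 3484 source selection: prefer a source address
# whose policy-table label matches the destination's label, so 6to4 talks
# to 6to4 (routed over IPv4) and native IPv6 talks to native IPv6.
import ipaddress

POLICY = [                                        # (prefix, label) subset,
    (ipaddress.IPv6Network("2002::/16"), 2),      # checked most-specific first
    (ipaddress.IPv6Network("::/0"), 1),
]

def label(addr):
    a = ipaddress.IPv6Address(addr)
    for prefix, lab in POLICY:
        if a in prefix:
            return lab

def pick_source(sources, destination):
    """Return the first source whose label matches the destination's."""
    want = label(destination)
    matching = [s for s in sources if label(s) == want]
    return (matching or sources)[0]

my_addrs = ["2001:db8::1", "2002:c000:201::1"]       # native + 6to4 (192.0.2.1)
print(pick_source(my_addrs, "2002:cb00:7101::5"))    # -> 2002:c000:201::1
print(pick_source(my_addrs, "2001:db8:1234::9"))     # -> 2001:db8::1
```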
Plus, I don't think the IPv6 transition is going at all slowly - I can't think of any large scale, multiple network renumbering exercise that completed quickly in the absence of compulsion; the transition to IPv4 was only quick because NSF said "on this date, the backbone will only carry IPv4" at a time where their backbone was the only choice for long distance routing. Same applies to phone number renumbering - it takes multiple decades to get it to happen (see also UK phone numbers - which are more akin to DNS labels - where the routing is still controlled by moving bits of paper from one operator to another, because 1970s telco kit can't handle automatic routing).
Posted Jun 10, 2016 10:59 UTC (Fri)
by paulj (subscriber, #341)
[Link] (3 responses)
Posted Jun 10, 2016 11:03 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (2 responses)
OK, so what is your point? You're claiming that I'm missing it, but you're not telling me what it is, and when I try to reason based on what happened, you go into "magical sky fairy world", and claim that things would of course have been better.
From where I'm sitting, the only extensions to IPv4 that have succeeded since 1995 are ones that are local-only (like NAT). As soon as you try to push over the general Internet (DCCP, SCTP, IPSec etc), you face unreliable delivery problems due to network admins "knowing" what legit IPv4 traffic looks like. Thus, I think that in a world where the only way to do IPv6 is to do IPv4 plus extension, people without IPv4 would be treated even worse than people with only IPv6 are today - because for everyone else, it's business as usual in IPv4, and we've done the tickybox exercise to show that IPvN could work in theory, but only works in practice as long as you have native IPv4 too.
Posted Jun 10, 2016 14:06 UTC (Fri)
by paulj (subscriber, #341)
[Link] (1 responses)
Had the _initial_ transition strategy - designed and agreed on in the early 90s - been a "re-use the existing connectivity" one, and the disjoint address space avoided (at least till closer to the exhaustion of the old), then that /might/ have allowed a faster rollout, and we might nearly all have had working, efficiently-routed IPng more than a decade ago. Can't say for sure of course, but it couldn't have been worse.
Would such an approach have been the most aesthetically pleasing? No. Would such an approach have come with packet header overheads? Yes. Might stupid middle-boxes have caused problems for some at times? Yes. But there would have been ways around those (with additional packet overheads), and stupid middle-boxes will continue to cause problems for some at times regardless :(.
Also, NAT is *not* local-only. Many hosts connect to lots of sites far away on the Internet through NAT - not local at all. And even NATed hosts can often exchange packets directly with other NATed hosts, using 3rd parties to set up the initial mapping state.
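A minimal sketch of that last trick (UDP hole punching), assuming a hypothetical rendezvous helper at a documentation address; real systems typically use STUN (RFC 5389) to discover the public mapping, and the exchange of the peers' public ip:port pairs happens out of band via the helper.

```python
# Sketch of NATed hosts reaching each other directly: a third party tells
# each peer the public ip:port its NAT mapped it to, then both sides send
# to the other's public mapping so both NATs create the required state.
import socket

RENDEZVOUS = ("203.0.113.10", 9000)   # hypothetical helper (documentation IP)

def punch_hole(peer_public):
    """peer_public: the (ip, port) the *other* peer's NAT exposed, learned
    out of band via the rendezvous helper."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 0))

    # Step 1: contact the helper so our own NAT allocates a public mapping
    # and the helper can report it to the peer.
    sock.sendto(b"register", RENDEZVOUS)

    # Step 2: fire a few packets at the peer's public mapping; the first
    # may be dropped by their NAT, but they open our NAT for the replies.
    for _ in range(3):
        sock.sendto(b"hello", peer_public)
    return sock
```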
Posted Jun 10, 2016 14:13 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
Then you're not addressing the points I'm making at all about why any transition was doomed to failure - fundamentally, there's nothing about the transition state that makes it worth people's while taking any pain from IPvN (no matter how minimal) until they cannot get IPv4. Multiply that by the fact that IPvN on its own is not helpful until everyone you wish to communicate with has IPvN, and you get exactly the outcome we see - no-one cares until IANA runs out.
And it absolutely could be much worse than it is - other transitions in network land (e.g. the move to SS7) have taken even longer than the move to IPv6; if IPv6 is a failure, please point to another, faster, global network transition.
You're also misunderstanding what I mean by local-only; NAT is local only in the sense that if I wish to use it, I do not need you to take any action to continue communicating with me. If I want to use IPvN, I need you to understand IPvN, regardless of whether IPvN is an extension atop IPv4 (like MPTCP or SCTP), or whether it's a disjoint network (like IPv6). In other words, I can transition to NAT without any of my peers needing to know or care; the same is definitionally false of a larger address space.
Posted Jun 10, 2016 11:02 UTC (Fri)
by paulj (subscriber, #341)
[Link]
Posted Jun 8, 2016 18:33 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Jun 7, 2016 19:11 UTC (Tue)
by mjg59 (subscriber, #23239)
[Link] (7 responses)
Posted Jun 8, 2016 8:42 UTC (Wed)
by paulj (subscriber, #341)
[Link] (6 responses)
Posted Jun 8, 2016 12:55 UTC (Wed)
by pizza (subscriber, #46)
[Link] (1 responses)
(That said, I'm not using it because they still don't offer static IPv6 allocations. So I remain on Hurricane Electric's tunnel so my servers remain reachable)
Posted Jun 8, 2016 13:14 UTC (Wed)
by paulj (subscriber, #341)
[Link]
That's me proven completely wrong about the success of IPv6's chosen IPng transition strategy of "completely logically rewire the whole Internet", obviously.
Posted Jun 8, 2016 18:20 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (2 responses)
That's pretty much turning this into a "No True Scotsman" argument, if your definition of "geeky" is "supports IPv6" then it is a distinction which doesn't add any value.
Posted Jun 10, 2016 12:02 UTC (Fri)
by paulj (subscriber, #341)
[Link] (1 responses)
It's not unreasonable to look at how such biases may affect the conclusions drawn. E.g., if IPv6 traffic has increased significantly, is that because of a general increase in desire for v6, or because of a small number of large networks adopting v6 for reasons intrinsic to their large size (which may not transfer to many other networks)? If v6 networks are cleaner in some way than v4-only networks, is that because v6 early-adopters are different in some way, or a more general trend? (Reading carefully, that isn't quite what farnz was saying; but it was an implication I had in my mind when replying.)
Considering possible biases is not the same as "No True Scotsman".
Posted Jun 10, 2016 12:27 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
The thing is that I'm looking at this from the PoV of a user, wondering what IP version is actually used when I VPN back to work. And I'm seeing more and more that, when I go to a random place with WiFi, or borrow a MiFi type dongle from IT (who give me whichever network's standard kit has best coverage in the area I'm visiting, not whichever kit is best at IPv6), instead of getting an IPv4-only network, I get IPv6 connectivity. I'm explicitly excluding work's network (we've deployed IPv6 already), and networks like my home network, where I know that the network admin is a geek who'll make IPv6 work one way or another.
It's only a couple of years ago that I could reasonably expect that I'd only ever get IPv4 unless I went to special efforts to find an IPv6 network; now, though, I'm seeing IPv6 appear in all sorts of places that I wouldn't expect - heck, even my parents in law (Sky Broadband) have IPv6 at home, and they're sufficiently unbothered about Internet service that they're using the cheapest tier of service that Sky will sell them.
If it were just Comcast, my employer, and geeks like me who had IPv6, I might accept that argument; but it's not. Sky's network is currently small enough to fit within their IPv4 allocations from RIPE, and to allow them to use RFC 1918 space for management (heck, RFC 1918 space is enough for any UK network). Thus, I see IPv6 as starting its serious rollout.
And it took IPv4 a long time to become the dominant computer networking protocol. In terms of the IPv4 rollout, we've reached 1999 - so if IPv6 (which brings no new capabilities over IPv4, unlike IPv4 over the protocols it replaced like Compuserve's proprietary protocol) really is starting to roll out, we're on a par with IPv4.
Posted Jun 8, 2016 19:00 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link]
Should distributors disable IPv4-mapped IPv6?
Only if they BOTH have valid IPv4s. At which point you're back to double-stack model.
Should distributors disable IPv4-mapped IPv6?
> Only if they BOTH have valid IPv4s.
Should distributors disable IPv4-mapped IPv6?
Which isn't a real problem for anybody but the biggest players. Getting IP ranges is way too easy because of a huge secondary market.