
LCA: Vint Cerf on re-engineering the Internet

By Jonathan Corbet
January 25, 2011
Vint Cerf is widely credited as one of the creators of the Internet. So, when he stood up at linux.conf.au in Brisbane to say that the net is currently in need of some "serious evolution," the attendees were more than prepared to listen. According to Vint, it is not too late to create a better Internet, despite the fact that we have missed a number of opportunities to update the net's infrastructure. Quite a few problems have been discovered over the years, but the solutions are within our reach.

His talk started back in 1969, when he was hacking the SIGMA 7 to make it talk to the ARPAnet's first interface message processor (IMP). The net has grown a little since then; current numbers suggest that there are around 768 million connected machines - and that doesn't count the vast numbers of systems with transient connections or which are hiding behind corporate firewalls. Nearly 2 billion users have access to the net. But, Vint said, that just means that the majority of the world's population is still waiting to connect to the net.

From the beginning, the net was designed around the open architecture ideas laid down by Bob Kahn. Military requirements were at the top of the list then, so the designers of the net created a system of independent networks connected via routers with no global control. Crucially, the designers had no particular application in mind, so there are relatively few assumptions built into the net's protocols. IP packets have no understanding of what they carry; they are just hauling loads of bits around. Also important was the lack of any country-based addressing scheme. That just would not make sense in the military environment, Vint said, where it can be very difficult to get an address space allocation from a country which one is currently attacking.

The openness of the Internet was important: open source, open access, and open standards. But Vint was also convinced from an early date that commercialization of the Internet had to happen. There was no way that governments were going to pay for Internet access for all their citizens, so a commercial ecosystem had to be established to build that infrastructure.

The architecture of the network has seen some recent changes. At the top of the list is IPv6. Vint was, he said, embarrassed to be the one who decided, in 1977, that 32 bits would be more than enough for the net's addressing system. Those 32 bits are set to run out any day now, so, Vint said, if you're not doing IPv6, you should be. We're seeing the slow adoption of non-Latin domain names and the DNSSEC protocol. And, of course, there is the increasing prominence of mobile devices on the net.
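The scale problem behind that 1977 decision is easy to quantify. A quick back-of-the-envelope sketch, using the host count quoted in the talk (the ~6.9 billion world-population figure for 2011 is my own rough assumption, not from the talk):

```python
# Back-of-the-envelope numbers behind IPv4 exhaustion.
ipv4_addresses = 2 ** 32            # the 1977 choice: 32-bit addresses
ipv6_addresses = 2 ** 128           # the IPv6 replacement
world_population = 6.9e9            # rough 2011 figure (my assumption)

print(f"IPv4 space: {ipv4_addresses:,} addresses")
# Less than one address per person, before counting multi-device users:
print(f"IPv4 addresses per person: {ipv4_addresses / world_population:.2f}")
print(f"IPv6 space is {ipv6_addresses // ipv4_addresses:,} times larger")
```

The point is not just that 2^32 is small, but that it is smaller than the number of people who might eventually want at least one address each.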

One of the biggest problems which has emerged from the current Internet is security. He was "most disturbed" that many of the problems are not technical, they are a matter of suboptimal user behavior - bad passwords, for example. He'd like to see the widespread use of two-factor authentication on the net; Google is doing that internally now, and may try to support it more widely for use with Google services. The worst problems, he said, come from "dumb mistakes" like configuration errors.

So where are security problems coming from? Weak operating systems are clearly a part of the problem; Vint hoped that open-source systems would help to fix that. The biggest problem at the moment, though, is browsers. Once upon a time, browsers were simple rendering engines which posed little threat; now, though, they contain interpreters and run programs from the net. The browser, he said, has too much privilege in the system; we need a better framework in which to securely run web-based applications. Botnets are a problem, but they are really just a symptom of easily-penetrated systems. We all need to work on the search for better solutions.

Another big issue is privacy. User choices are a part of the problem here; people put information into public places without realizing that it could come back to haunt them later. Weak protection of information by third parties is also to blame, though. But, again, technology isn't the problem; it's more of a policy issue within businesses. Companies like Google and others have come into possession of a great deal of privacy-sensitive information; they need to protect it accordingly.

Beyond that, there's the increasing prevalence of "invasive devices," including cameras, devices with location sensors, and more. It is going to be increasingly difficult to protect our privacy in the future; he expressed worries that it may simply not be possible.

There was some talk about clouds. Cloud computing, he said, has a lot of appeal. But each cloud is currently isolated; we need to work on how clouds can talk to each other. Just as the Internet was created through the connection of independent networks, perhaps we need an "intercloud" (your editor's term - he did not use it) to facilitate collaboration between clouds.

Vint had a long list of other research problems which have not been solved; there was not time to talk about them all. But, he says, we have "unfinished work" to deal with. This work can be done on the existing network - we do not need to dump it and start over.

So what is this unfinished work? "Security at all levels" was at the top of the list; if we can't solve the security problem, it's hard to see that the net will be sustainable in the long run. We currently have no equivalent to the Erlang distribution to describe usage at the edges of the network, making provisioning and scaling difficult. The quality of service (and network neutrality) debate, he said, will be going on for a very long time. We need better distributed algorithms to take advantage of mixed cloud environments.
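The Erlang reference is to classic telephony traffic engineering, where the Erlang B formula predicts blocking probability from offered load and circuit count - exactly the kind of provisioning model the edge of the Internet lacks. A minimal sketch of that formula (my illustration, not something presented in the talk), using the standard numerically stable recurrence:

```python
def erlang_b(erlangs: float, servers: int) -> float:
    """Erlang B blocking probability B(E, m), computed via the stable
    recurrence: B(E, 0) = 1;  B(E, k) = E*B(E, k-1) / (k + E*B(E, k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = erlangs * b / (k + erlangs * b)
    return b

# With 10 erlangs of offered load, adding circuits drives blocking down:
print(f"10 erlangs, 10 circuits: {erlang_b(10, 10):.3f}")  # roughly 0.21
print(f"10 erlangs, 20 circuits: {erlang_b(10, 20):.4f}")  # near zero
```

Telephone companies could size exchanges from tables of this formula; the complaint is that packet traffic at the network edge has no comparably predictive model.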

There were, he said, some architectural mistakes made which are now making things harder. When the division was made between the TCP and IP layers, it was decided that TCP would use the same addressing scheme as IP. That was seen as a clever design at the time; it eliminated the need to implement another layer of addressing at the TCP level. But it was a mistake, because it binds higher-level communications to whatever IP address was in use when the connection was initiated. There is no way to move a device to a new address without breaking all of those connections. In the designers' defense, he noted, the machines at the time, being approximately room-sized, were not particularly mobile. But he wishes they had seen mobile computing coming.
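The binding Vint describes is visible in how a TCP connection is identified: by the (source IP, source port, destination IP, destination port) 4-tuple. A minimal sketch of why a changed address breaks the connection (illustrative addresses drawn from the documentation ranges):

```python
from typing import NamedTuple

class TcpConnection(NamedTuple):
    """A TCP connection is identified by its 4-tuple; the IP addresses
    are baked directly into the connection's identity."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

before = TcpConnection("203.0.113.5", 51000, "198.51.100.7", 80)

# Move the client to a new network, i.e. give it a new IP address...
after = before._replace(src_ip="192.0.2.99")

# ...and, as far as TCP is concerned, this is a *different* connection;
# the old one is simply broken:
print(before == after)   # False
```

Had TCP carried its own endpoint identifiers above IP, the lower half of the tuple could have been rebound without destroying the connection - which is the fix discussed next.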

Higher-level addressing could still be fixed by separating the address used by TCP from that used by IP. Phone numbers, he said, once were tied to a specific location; now they are a high-level identifier which can be rebound as a phone moves. The same could be done for network-attached devices. Of course, there are problems to be solved - for example, it must be possible to rebind a TCP address to a new IP address in a way which does not expose users to session hijacking. This sort of high-level binding would also solve the multi-homing and multipath problems; it would be possible to route a single connection transparently through multiple ISPs.

Vint would also like to see us making better use of the net's broadcast capabilities. Broadcast makes sense for real-time video, but it could be applied in any situation where multiple users are interested in the same content - for software updates, for example. He described the use of satellites to "rain packets" to receivers; it is, he said, something which could be done today.

Authentication remains an open issue; we need better standards and some sort of internationally-recognized indicators of identity. Internet governance was on the list; he cited the debate over network censorship in Australia as an example. That sort of approach, he said, is "not very effective." He said there may be times when we (for some value of "we") decide that certain things should not be found on the net; in such situations, it is best to simply remove such materials when they are found. There is no hope in any attempt to stop the posting of undesirable material in the first place. Governance, he said, will only become more important in the future; we need to find a way to run the net which preserves its fundamental openness and freedom.

Performance: That just gets harder as the net gets bigger; it can be incredibly difficult to figure out where things are going wrong. He said that he would like a button marked "WTF" on his devices; that button could be pressed when the net isn't working to obtain a diagnosis of what the problem is. But, to do that, we need better ways of identifying performance problems on the net.

Addressing: what, he asked, should be addressable on the Internet? Currently we assign addresses to machines, but, perhaps, we should assign addresses to digital objects as well? A spreadsheet could have its own address, perhaps. One could argue that a URL is such an address, but URLs are dependent on the domain name system and can break at any time. Important objects should have locators which can last over the long term.
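One family of schemes for such long-term locators derives an object's name from its content rather than from whichever server currently hosts it. This is my illustration of the idea (in the spirit of content addressing; the `obj:` prefix and helper are hypothetical), not a proposal from the talk:

```python
import hashlib

def object_id(content: bytes) -> str:
    """A location-independent identifier: the name is derived from the
    object's bytes, not from the host or domain currently serving it."""
    return "obj:sha256:" + hashlib.sha256(content).hexdigest()

spreadsheet = b"quarterly figures, revision 3"
name = object_id(spreadsheet)

# The identifier is stable no matter where the object is stored -
# recomputing it after the object moves yields the same name:
print(object_id(spreadsheet) == name)   # True

# ...while any change to the content yields a different name:
print(object_id(spreadsheet + b"!") == name)   # False
```

A name like this survives DNS changes and server moves, though it pushes the real problem - resolving the name to a current location - into some other long-lived service.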

Along those lines, we need to think about the long-term future of complex digital objects which can only be rendered with computers. If the software which can interpret such an object goes away, the objects themselves essentially evaporate. He asked: will Windows 3000 be able to interpret a 1997 Powerpoint file? We should be thinking about how these files will remain readable over the course of thousands of years. Open source can help in this regard, but proprietary applications matter too. He suggested that there should be some way to "absorb" the intellectual property of companies which fail, making it available so that files created by that company's software remain readable. Again, Linux and open source have helped to avoid that problem, but they are not a complete solution. We need to think harder about how we will preserve our "digital stuff"; he is not sure what the solution will look like.

Wandering into more fun stuff, Vint talked a bit about the next generation of devices; a network-attached surfboard featured prominently. He talked some about the sensor network in his house, including the all-important temperature sensor which sends him a message if the temperature in his wine cellar exceeds a threshold. But he'd like more information; he knows about temperature events, or whether somebody entered the room, but there's no information about what may have happened in the cellar. So maybe it is time to put RFIDs on the bottles themselves. But that won't help him to know if a specific bottle has gotten too warm; maybe it's time to put sensors into the corks to track the state of the wine. Then he could unerringly pick out a ruined bottle whenever he had to give a bottle of wine to somebody who is unable to tell the difference.

The talk concluded with some discussion of the interplanetary network. There was some amusing talk of alien porn and oversized ovipositors, but the real problem is one of arranging for network communications within the solar system. The speed of light is too slow, meaning that the one-way latency to Mars is, at a minimum, about three minutes (and usually quite a bit more). Planetary rotation can interrupt communications to specific nodes; rotation, he says, is a problem we have not yet figured out how to solve. So we need to build tolerance of delay and disruption deep into our protocols. Some thoughts on this topic have been set down in RFC 4838, but there is more to be done.
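The three-minute figure follows directly from light-speed delay; a quick check, with approximate Earth-Mars distances:

```python
SPEED_OF_LIGHT_KM_S = 299_792          # km/s in vacuum
MARS_MIN_DISTANCE_KM = 54_600_000      # approximate closest approach
MARS_MAX_DISTANCE_KM = 401_000_000     # approximate maximum separation

min_latency_s = MARS_MIN_DISTANCE_KM / SPEED_OF_LIGHT_KM_S
max_latency_s = MARS_MAX_DISTANCE_KM / SPEED_OF_LIGHT_KM_S

# One-way latency: roughly 3 minutes at best, over 20 at worst -
# far outside anything TCP's timers were designed for.
print(f"one-way latency to Mars: {min_latency_s / 60:.1f} to "
      f"{max_latency_s / 60:.1f} minutes")
```

No amount of engineering improves on this floor, which is why the delay-tolerant protocols of RFC 4838 store and forward bundles rather than expecting end-to-end round trips.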

We should also, Vint said, build network nodes into every device we send out into space. Even after a device ceases to perform its primary function, it can serve as a relay for communications. Over time, we could deploy a fair amount of network infrastructure in space with little added cost. That is a future he does not expect to see in its full form, but he would be content to see its beginning.

There was a question from the audience about bufferbloat. It is, Vint said, a "huge problem" that can only be resolved by getting device manufacturers to fix their products. Ted Ts'o pointed out that LCA attendees had been advised (via a leaflet in the conference goodie bag) to increase buffering on their systems as a way of getting better network performance in Australia; Vint responded that much harm is done by people who are trying to help.


LCA: Vint Cerf on re-engineering the Internet

Posted Jan 25, 2011 5:53 UTC (Tue) by gdt (subscriber, #6284) [Link] (18 responses)

I stand by the recommendations in the leaflet I wrote.

A 16MB buffer size is appropriate for GbE users and the 0.2s of round-trip delay from the undersea network which attaches Australia to the west coast of the USA. As the leaflet explains, Australia is one of the few countries in the world where users face such a high RTT to their popular Internet resources and can afford such high bandwidth too. Given that odd situation, it isn't surprising that operating systems need some tuning.

When discussing buffer bloat you need to distinguish hosts from routers - Jim Gettys' complaint was about excessive buffers in routers. The host needs a TCP buffer with a maximum size of the bandwidth-delay product in order to be able to fill the pipe. The routers along that pipe need nowhere near that, but rather buffering appropriate for the bandwidth and delay of the next hop, and their buffer scheduling appears to be just as important to TCP throughput as the depth of the buffer. It's fair to say that the academic understanding of router buffering is much less clear than that of host buffering, and this makes definite recommendations of router buffer sizes difficult, which is one reason why router buffering was not mentioned at all in the leaflet.
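The bandwidth-delay product argument is simple arithmetic; a sketch checking the leaflet's figures (the 16 MiB buffer and 0.2 s RTT are from the comment above; the rest is my arithmetic):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: the bytes that must be in flight
    (and hence buffered at the sender) to keep the pipe full."""
    return bandwidth_bps * rtt_s / 8

# What a 16 MiB socket buffer can sustain at the quoted 0.2 s RTT:
buffer_bytes = 16 * 2 ** 20
sustainable_bps = buffer_bytes * 8 / 0.2
print(f"16 MiB buffer at 0.2 s RTT sustains "
      f"{sustainable_bps / 1e6:.0f} Mbit/s")          # ~671 Mbit/s

# Conversely, filling a full GbE pipe at 0.2 s RTT needs roughly:
print(f"GbE at 0.2 s RTT needs "
      f"{bdp_bytes(1e9, 0.2) / 2 ** 20:.0f} MiB")      # ~24 MiB
```

This buffering lives in the TCP sender, where the data must be held for retransmit anyway; the later comments turn on the fact that the same sizing logic does not transfer to queues in the transit path.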

On the plus side, I get to add "dissed by Vint Cerf" to my CV :-)

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 25, 2011 8:55 UTC (Tue) by ebiederm (subscriber, #35028) [Link] (11 responses)

The distinction needs to be between TCP socket buffers and buffers in the transit path. In particular, NIC queues on the hosts can cause exactly the same issues as large buffers in routers.

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 25, 2011 10:40 UTC (Tue) by gdt (subscriber, #6284) [Link] (8 responses)

Yep, exactly. Re-reading my posting I wish I'd spent more time making clearer the distinction between (1) TCP buffers in end systems and (2) IP buffers on the egress interfaces of routers. The leaflet was about (1), Vint obviously thought from the context of the question that the leaflet was about (2).

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 26, 2011 1:25 UTC (Wed) by mtaht (subscriber, #11087) [Link] (6 responses)

Not having seen the leaflet I don't know what, specifically, to address.

One core problem of bufferbloat is that devices and device drivers are currently doing too much *uncontrolled* buffering. The TCP/IP stack itself is fine. However:

1) Once a packet lands on the txqueue, it can be shaped, but rarely is.

2) Far worse, in many cases, especially in wireless routers, once a packet gets off the txqueue it lands in the driver's queue, which can be quite large, and can incur huge delays in further IP traffic.

Once the device driver's buffers are filled, no amount of traffic shaping at a higher level will help. Death will not release you.

Here's a specific example:

Linux's default txqueuelen is 1000. Even after you cut that to something reasonable, it then hits the device driver's buffers. The driver I'm specifically hacking on (the ath9k, patches available here: https://github.com/dtaht/Cruft/raw/master/bloat/558-ath9k...
) defaults to 507 buffers (with some limiting factors as to the size of the 10 queues applied), for which it will retry to send up to 13 times.

Assume your uplink is 3Mbit/sec, what's your maximum latency?
Assume your uplink is 128Kbit/sec, what's your maximum latency?

I'm not going to show the math here, it's too depressing. If you have an ath9k, try the above patch. There's one for the IWL going around. Many Ethernet devices support ethtool...
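For the curious, the arithmetic being alluded to works out roughly as follows - a sketch assuming full-size 1500-byte frames, every buffer occupied, and ignoring the up-to-13 retries (which only make it worse):

```python
def queue_latency_s(buffers: int, uplink_bps: float,
                    frame_bytes: int = 1500) -> float:
    """Worst-case drain time of a full queue: every queued frame must
    be serialized onto the link before a new packet gets out."""
    return buffers * frame_bytes * 8 / uplink_bps

DRIVER_BUFFERS = 507   # ath9k default, as cited above
TXQUEUELEN = 1000      # Linux default, as cited above
total = DRIVER_BUFFERS + TXQUEUELEN

# Seconds of latency a single bulk transfer can add in front of
# everything else (DNS, VoIP, ACKs...):
print(f"3 Mbit/s uplink:   {queue_latency_s(total, 3e6):.1f} s")
print(f"128 kbit/s uplink: {queue_latency_s(total, 128e3):.0f} s")
```

At 3 Mbit/s that is on the order of six seconds, and at 128 kbit/s well over two minutes - which is presumably why the poster found the math too depressing to show.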

The difference in overall network responsiveness with the crude ath9k patch above is amazing, and I've still got a long way to go with it.

This paper: http://www.cs.clemson.edu/~jmarty/papers/PID1154937.pdf
strongly suggests that a maximum of 32 uncontrolled IP buffers be used in the general case (with 1 being an ideal). It also raises some other strong points.

There's plenty of experimental data out there now too, and experiments you can easily perform on your own devices.

http://gettys.wordpress.com/2010/12/02/home-router-puzzle...

Every device driver I have looked at defaults to uncontrolled buffers far in excess of that figure, usually at around 256 buffers, even before you count txqueuelen.

The key to coping with bufferbloat is to reduce *uncontrolled* buffering, starting with the device driver(s) and working up through various means of traffic shaping and providing adequate feedback to TCP/IP streams (packet drop/ECN, etc.) to keep the servo mechanism(s) working.

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 26, 2011 3:02 UTC (Wed) by jthill (subscriber, #56558) [Link] (1 responses)

But isn't the advice for a gigabit uplink, not 3Mb? If that's right, I make the uplink latency for a full ath9k queue well under 10ms - they should be able to run flat out. That, or I borked the math.

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 26, 2011 4:11 UTC (Wed) by mtaht (subscriber, #11087) [Link]

In the case of a wireless card in your laptop or a wireless link you can be running at speeds ranging from 300Mbit down to 1Mbit/sec, or less.

In the case of a home gateway, my comcast's business class service is running at about 3Mbit/sec on the uplink.

Huge dark (unmanaged) buffers in the device affect latency really badly - not just for TCP/IP, but for stuff that would ordinarily jump to the head of the queue - UDP, DNS, VoIP, gaming, NTP... and in some cases they are so big as to break TCP/IP almost entirely.

We've been sizing device buffers as if it were all on GigE backbone networks. Nor have we been using reasonable AQM. I urge you to try the experiments mentioned earlier.

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 28, 2011 6:45 UTC (Fri) by The_Barbarian (guest, #48152) [Link] (1 responses)

Why, I happen to have an ath9k. I'll give this a whirl at some point. Thanks!

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 29, 2011 5:12 UTC (Sat) by mtaht (subscriber, #11087) [Link]

If you are doing OpenWrt development on the ath9k I have builds with that patch for the wndr5700 and Ubiquiti. I have never been more happy to see packet loss in my life.

LCA: Vint Cerf on re-engineering the Internet

Posted Feb 11, 2011 16:43 UTC (Fri) by mcgrof (subscriber, #25917) [Link] (1 responses)

Looking forward to the final upstream patch and respective commit log entry :)

LCA: Vint Cerf on re-engineering the Internet

Posted Feb 11, 2011 17:08 UTC (Fri) by mcgrof (subscriber, #25917) [Link]

On second thought, the bandwidth of an 802.11 link will change dynamically depending on the topology of the 802.11 environment - if you're an AP, on the STAs connected and their own 802.11 counterparts. So I wonder if the internal buffers are best adjusted under the influence of rate control, which will have a better idea of the average bandwidth to peers through one 802.11 interface.

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 26, 2011 21:15 UTC (Wed) by mcoleman (guest, #70990) [Link]

If you're Vint Cerf, you might be thinking that most conference attendees bring their own routers with them (perhaps several). ;-)

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 25, 2011 11:25 UTC (Tue) by marcH (subscriber, #57642) [Link] (1 responses)

Yes, the science of TCP buffers is *unrelated* to other buffers below it. Simply because TCP is in charge of everything: reliability, end to end flow control, and network congestion avoidance, while the rest below is in charge of none of it.

Confusing these two very different buffering roles (TCP versus below TCP) is a huge mistake.

On the other hand, I wonder why a leaflet was needed at all. TCP auto-tuning is supposed to have fixed this problem already?

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 25, 2011 11:32 UTC (Tue) by marcH (subscriber, #57642) [Link]

> Yes, the science of TCP buffers is *unrelated* to other buffers below it.

I take some of that back. In theory, they are not related. In practice, you can have nasty interactions between the two. In any case, they are totally different beasts.

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 25, 2011 13:35 UTC (Tue) by gmaxwell (guest, #30048) [Link]

> The routers along that pipe need nowhere near that, rather buffering appropriate for the bandwidth and delay of the next hop,

Woah woah there. We're not doing hop by hop congestion control. There is no explicit back-pressure. It's TCP end to end. The routers need enough buffers to "fill the pipe" too, and it's ultimately the TCP sender on the ends that the buffers in the network need to satisfy.

In the degenerate case of a single flow across the whole network, each of the routers absolutely does need the full delay-bandwidth product of buffering that you point out for the host in order to keep the pipes full. This is old knowledge, established by rigorous mathematical analysis, simulation, and real-world experiments. This is the classic paper on the subject.

More recent research has established that under certain assumptions the amount of router buffering can be greatly reduced: If there are a great many flows, no super-large flows that completely dominate the link on their own, and the RTTs seen by the flows are well distributed then the buffer requirements can be reduced by an amount proportional to the square root of the number of flows. More information can be found here.
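The reduction being described (the "small buffers" result) is easy to sketch; the link speed, RTT, and flow count below are my example numbers, not from the comment:

```python
import math

def router_buffer_bytes(link_bps: float, rtt_s: float, flows: int) -> float:
    """Classic rule: buffer = RTT * C for a single flow.  With n
    desynchronized flows, the requirement falls to RTT * C / sqrt(n)."""
    return link_bps * rtt_s / 8 / math.sqrt(flows)

# A 10 Gbit/s link with a 250 ms average RTT:
one_flow = router_buffer_bytes(10e9, 0.25, 1)
many_flows = router_buffer_bytes(10e9, 0.25, 10_000)

print(f"1 flow:      {one_flow / 2 ** 20:.0f} MiB")    # roughly 300 MiB
print(f"10^4 flows:  {many_flows / 2 ** 20:.1f} MiB")  # ~100x smaller
```

The factor-of-100 drop for ten thousand flows is why core routers can get away with small buffers while a home gateway carrying a handful of flows cannot rely on the same statistics.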

In terms of router buffer bloat, I think it's more a combination of issues: buffers far in excess of the delay/bandwidth product from manufacturers building for the worst case (e.g. aussies with ten-gigabit flows), and service providers (and their most demanding customers) being far more concerned about packet loss than jitter/delay for best-effort traffic. For high-value jitter-sensitive traffic the equipment can always be configured to handle it differently (e.g. anything with buffers big enough to cause problems can do differentiated queuing with a strict high-priority queue), but that doesn't help joe-sixpack on DSL at home.

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 25, 2011 16:38 UTC (Tue) by daniel (guest, #3181) [Link]

"I stand by the recommendations in the leaflet I wrote."

Good for you. I found Cerf's comment about "those trying to help" horribly rude, but not out of character. I also found the talk to be largely empty of technical content.

Congestion-management subtleties

Posted Jan 25, 2011 19:49 UTC (Tue) by jthill (subscriber, #56558) [Link] (2 responses)

I think I've got what might be a helpful contribution here. I spent many years, long ago, fixing and building high-performance networking code in address spaces handling thousands of connections, so I hope it turns out to be worth attention. It might also be something everybody knows about already, but I get the suspicion from the discussion I'm seeing that maybe it isn't a known and discarded idea. Your reasoning shouldn't have been so easily missed if it were.

So let me start with a slightly artificial example to get all the elements in play: on such a router with high-volume TCP endpoints of its own, the TCP buffers need to be kept separate from the routed-packet buffers because the TCP buffers are necessary only for TCP retransmit and shouldn't be allowed to clog the queue for telnet or whatnot.

No need to burden QoS for this: separate and shrink the routed-packet pool, and arrange to have the routed pool ask for another packet from the TCP pool shortly before it needs it. That will do the trick automatically if I have it right. It occurs to me, since endpoint TCP has much more info available than any router, it should be able to do that-much-better prioritizing anyway.

To keep fairness with non-local sources, local TCP gets some simple proportion of the packets in the pool relative to the packets from other sources. Packets going to local TCP never enter the routed pool at all: it's a matter of swapping a full routed-packet buffer for an empty receive-window buffer. TCP can offer ACK packets in return right there.

So, to the payload: even though doing this for local TCP achieves the purpose in that scenario, I don't see any intrinsic reason to do this only for TCP, or only locally.

When the opportunity and need coincide, why not do this kind of coordinated buffer management across links?

This isn't source-quench. The basic idea needs extension to handle more general cases, but start small.

Pick a leaf router, where one link reaches the vast majority of the net and virtually everything reaching it is going to use that link. Use the idle local bandwidth to make the backpressure explicit.

To put it in a way that might horrify some, why not have a congested leaf router convert its downstream links to half-duplex for the duration of the congestion? It's easy: "Ok. Go 'way now, I'm busy". "Gimme a packet". "Ok, send what you like".

Those need acks to avoid throttling in error, but again those are sent on links that should be idle anyway. This is one-hop link management.

Plainly, when life starts to get interesting (i.e. when more than one of the router's links is likely to get congested), the poll should explicitly list congested routing entries. An overbroad (or ignored) choke list would slow some things down unnecessarily, but if the choke is honored at all (and the router sanely reserves one or two packets for each link no matter what) the congestion gets pushed directly to its source.

When you get to nodes where combinations of inbound links are saturating combinations of outbound links I think this starts running out of steam, but as I understand it those aren't the nodes where we're seeing this problem in the first place.

So, thanks for reading, if it's a good idea I don't feel all possessive about it, and either way I'd appreciate feedback.

Congestion-management subtleties

Posted Jan 25, 2011 23:10 UTC (Tue) by ajb (subscriber, #9694) [Link] (1 responses)

Sounds vaguely like backward congestion notification, which is now in data-center grade ethernet: www.ieee802.org/3/ar/public/0505/bergamasco_1_0505.pdf

Congestion-management subtleties

Posted Jan 26, 2011 2:05 UTC (Wed) by jthill (subscriber, #56558) [Link]

Yeah, that's the idea, only IP-aware, not so unselective. As I said, I think this scheme starts running out of steam as you get towards the core. Cisco's saying theirs starts there, where the routers are already too busy to think. If more than a few simple address ranges were included in this scheme's backpressure notifications I think it'd start getting ugly. For e.g. intranet border routers it occurs to me greenlight ranges (send me what you want for these guys, you hang on to traffic for anyplace else) would be simpler.

Fwiw, it seems to me from reading his links that gmaxwell has it right about the seeming contradiction between the results Gettys and Villamizar/Song get - if I recall prices then, the idea of grossly overprovisioning buffers would have seemed insane in 1994. Plus the market was more technical, so there'd be little reason for the earlier study to examine it.

Some things I like about this notion (I am, of course, completely objective on the subject) are that

  • Unlike Cisco's BCN, the sender can still forward e.g. network control packets (in addition to packets destined for outbound uncongested links, because it knows what those are).
  • Like Cisco's scheme it's incremental. If the congestion is local only, i.e. if the aggregate buffering in the route back to the source is sufficient, the sending TCPs never see it at all—and when they do hear of it, they hear via backpressure from their local router:
    • The pipe is never unnecessarily drained
    • they know why they're not getting ACKs if the jam lasts, they don't have to retransmit
    • and they can prioritize what to send when polled using every bit of local state
There's more, they're all even more obvious than these.

Video available

Posted Jan 27, 2011 18:31 UTC (Thu) by dowdle (subscriber, #659) [Link]

Here's a direct link to an ogv video of the presentation:

http://a9.video2.blip.tv/9350007685272/Linuxconfau-Keynot...

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 25, 2011 7:53 UTC (Tue) by rilder (guest, #59804) [Link] (1 responses)

Are the videos of these talks available anywhere ? I could find the streaming link but not the videos archived anywhere.

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 26, 2011 6:55 UTC (Wed) by BenHutchings (subscriber, #37955) [Link]

They're being uploaded gradually, starting with this very talk, to http://linuxconfau.blip.tv/

Erlang-style distribution

Posted Jan 25, 2011 8:03 UTC (Tue) by sustrik (guest, #62161) [Link] (1 responses)

"We currently have no equivalent to the Erlang distribution to describe usage at the edges of the network, making provisioning and scaling difficult."

We've been working on that for past couple of years. The idea is explained here:

http://www.250bpm.com/hits

ZeroMQ project can be thought of as a proof of concept.

Linux kernel implementation is underway.

I'm going to propose the creation of a dedicated IETF working group shortly.

0mqftw

Posted Jan 25, 2011 12:09 UTC (Tue) by wingo (guest, #26929) [Link]

That looks really cool Martin, thanks for the link. And good luck!

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 25, 2011 11:11 UTC (Tue) by ajb (subscriber, #9694) [Link] (3 responses)

Another take on separating TCP from IP addresses is the 'Name Based Sockets' idea:
http://www.ietf.org/proceedings/79/nbs.html

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 25, 2011 15:30 UTC (Tue) by mstone (subscriber, #58824) [Link]

Van Jacobson is presently heading up another effort in this direction over at http://www.ccnx.org/. Jim Gettys recommended it to our attention here: http://lwn.net/Articles/390925/

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 26, 2011 6:12 UTC (Wed) by dwmw2 (subscriber, #2063) [Link]

With working Mobile IP designed into IPv6 from the beginning, fixing the 'triangle routing' problem, why would we need it?

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 26, 2011 6:48 UTC (Wed) by jmm82 (guest, #59425) [Link]

SCTP has been around for years with support for multihoming, but I personally do not see TCP being replaced any time soon. Look at the IPv6 situation: the only reason it is coming into use is necessity.

What about IPv6 right here on earth?

Posted Jan 25, 2011 16:26 UTC (Tue) by daniel (guest, #3181) [Link] (76 responses)

My question is, why is Vint Cerf drifting off to Mars while the IPv6 effort right here is failing so miserably? This comes across as Nero fiddling while Rome burns.

Second question is, if IPv6 was so horribly misconceived, I'm not sure I would want those involved to become involved again in designing something even more technically demanding.

What about IPv6 right here on earth?

Posted Jan 25, 2011 17:31 UTC (Tue) by tialaramex (subscriber, #21167) [Link] (42 responses)

I'm not sure what about this talk makes you think IPv6 was "so horribly misconceived" ?

The mainstream commercial ISPs are going to make this far more painful than it had to be. For them it's a game of chicken. Swerve too soon (ie deploy IPv6 before exhaustion) and you've spent millions of dollars to prevent a problem that your customers won't know they had. So they all have the accelerator pedal flat against the floor and their eyes half-closed. Central government (funded by us, the taxpayers) gets to clean up the mess when none of them swerve.

But we couldn't have avoided that. The protocol makes no difference. The transition is going to be very painful because it makes good commercial sense that way, not because of the relatively minor technical issues.

My own ISP, having received its formal notification that the RIR expects address exhaustion to occur in around 12 months, responded by telling its customers that there's nothing to worry about, it has no immediate plans to do anything, and there are plenty of addresses. Sure, some subscribers might sue them in a year's time when the facts are more concrete, but that's a problem for the directors _next year_; the directors _now_ are focused on telling subscribers everything is OK.

What about IPv6 right here on earth?

Posted Jan 25, 2011 21:39 UTC (Tue) by bojan (subscriber, #14302) [Link] (38 responses)

> I'm not sure what about this talk makes you think IPv6 was "so horribly misconceived" ?

My guess is that it would probably have to do with IPv6 setup being required in parallel to IPv4, instead of making IPv6 just work with IPv4. You know, something along the lines of what DJB was talking about some time ago: http://cr.yp.to/djbdns/ipv6mess.html.

What about IPv6 right here on earth?

Posted Jan 26, 2011 4:48 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (37 responses)

DJB's solution is brilliant.

It requires only one thing: a function which correctly maps any of 2^128 integers to 2^32 integers, and back again. This is a brilliant function which would not only have made the IPv6 transition smoother (specifically, it would have made it completely unnecessary, since we would continue using IPv4 and just throw in this map) - it also allows infinite compression: you can take any arbitrary 16-byte value, map it to a 4-byte value, and the person at the far end just uses DJB's magic function to get back the original 16 bytes.

OK, now back in the real world, where this function doesn't exist:

In IP the peers address each other mutually. Peer A's ability to grok Peer B's address is useless if Peer B does not also grok Peer A's address. So an arbitrary IPv4 peer (of which there are already billions deployed) can only communicate with a hypothetical IPvDJB peer if that peer's address is a standard 32-bit IPv4 address. But we already have a protocol which limits us to 32-bit addresses, called IPv4, and that's what we were trying to fix with IPv6. It is no accident that DJB's example uses MX, which doesn't have peers but only a client-server relationship in which the server is oblivious to how the client addressed it.
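The counting argument can be seen at toy scale; this illustrative sketch (an 8-bit to 4-bit map standing in for the 2^128 to 2^32 one, any resemblance to a real address plan is purely pedagogical) counts the inevitable collisions:

```python
from collections import Counter

# Toy-scale version of the "magic function" objection: map an 8-bit
# space (256 values) down to a 4-bit space (16 values). The pigeonhole
# principle guarantees collisions, so no such map can be inverted
# losslessly - the same argument rules out a 2^128 -> 2^32 map.
def shrink(x: int) -> int:
    return x % 16  # any function chosen here collides just the same

counts = Counter(shrink(x) for x in range(256))
print(max(counts.values()))  # 16: at least ceil(256/16) inputs share one output
```

Whatever `shrink` is replaced with, at most 16 of the 256 inputs can round-trip; the rest are unrecoverable.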

What about IPv6 right here on earth?

Posted Jan 26, 2011 7:35 UTC (Wed) by bojan (subscriber, #14302) [Link]

I don't think you actually read what DJB wrote.

What about IPv6 right here on earth?

Posted Jan 26, 2011 8:51 UTC (Wed) by cmccabe (guest, #60281) [Link] (35 responses)

I think DJB is arguing that there should have been a transition period where people moved to IPv6, but still continued to use 32-bit addresses. This could have been handled by setting aside a special part of the IPv6 address space for addresses that mapped directly on to IPv4 equivalents.

Then, over time, operating system vendors, Linux distributions, and network equipment manufacturers could have moved to IPv6 "painlessly." IPv4 would have been phased out-- you literally would have been unable to buy IPv4 gear or download IPv4 software-- just like you can't buy IPX or Token Ring gear any more.

Then, when the big crunch came, as it is coming now, we could all look around at our modern IPv6-only gear and have a chuckle at the expense of those old Windows 95 users who still had IPv4 equipment. Finally we would flip the switch, allowing everyone to use all 128 bits of IPv6.

Instead of doing this, the IPv6 designers decided that they would create a completely separate network namespace. If you choose to support IPv6 on your network, the burden of it falls on you. It's measured in terms of time spent, equipment bought, and so on. You won't see any benefit to supporting IPv6, however, until a tipping point is reached where "enough" of the internet uses IPv6 (for some definition of "enough".) At that point, everyone will benefit, probably including the people who dragged their feet during the conversion. So the rational, profit-maximizing strategy is to ignore IPv6.

In fact, address scarcity often helps the incumbent telecoms companies. Having control of a scarce resource, like IPv4 address ranges, is usually considered a good thing.

The working group should not have made ignoring IPv6 an option. They should have pushed for it to become the only choice for newer systems. The only way to do this would have been to have a backwards compatibility mode.

It just shows how badly even intelligent people may misunderstand a simple problem

Posted Jan 26, 2011 10:58 UTC (Wed) by khim (subscriber, #9252) [Link] (31 responses)

I think DJB is arguing that there should have been a transition period where people moved to IPv6, but still continued to use 32-bit addresses.

What will it change?

This could have been handled by setting aside a special part of the IPv6 address space for addresses that mapped directly on to IPv4 equivalents.

This is done already.
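The mapping referred to here is the IPv4-mapped range ::ffff:0:0/96 defined in RFC 4291; a minimal illustrative sketch using Python's stdlib ipaddress module:

```python
import ipaddress

# RFC 4291 reserves ::ffff:0:0/96 for IPv4-mapped IPv6 addresses:
# the low 32 bits carry a literal IPv4 address.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")

print(mapped.ipv4_mapped)  # 192.0.2.1
print(mapped.exploded)     # 0000:0000:0000:0000:0000:ffff:c000:0201
print(mapped in ipaddress.ip_network("::ffff:0:0/96"))  # True
```

The embedding exists at the address level; as the surrounding thread argues, it does nothing for hosts whose own stack and routers speak only IPv4.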

Then, over time, operating system vendors, Linux distributions, and network equipment manufacturers could have moved to IPv6 "painlessly."

Operating system vendors? Yes. Linux distributions? Of course. Network equipment manufacturers? No way in hell. You see, IPv4 requires 4 bytes in the routing table, IPv6 requires 16. If a given piece of silicon is used to do lots of things and handles IP traffic as well, you can easily add IPv6 support. But if the equipment's primary reason for existence is TCP/IP, then the switch from IPv4 to IPv6 (without the magic function) is expensive. This is why we have a situation where hardware all around supports IPv6 with one exception: ISPs don't support it and most have no plans to support it. An intermediate step with 32-bit addresses will just increase the mess. Instead of a clear IPv4 -> IPv6 step it'll introduce two steps: IPv6/32-bit and IPv6. Industry will enthusiastically embrace the first step (because of its PR value) and will delay the second step as long as humanly possible.
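The 4-byte versus 16-byte point can be put in rough numbers; this back-of-envelope sketch assumes a ballpark full-table route count (an assumption for scale, not a figure from the comment):

```python
# Illustrative lookup-key storage cost for a hardware forwarding table.
# A longest-prefix-match key is 32 bits for IPv4 and 128 bits for IPv6,
# so raw key storage per route quadruples.
V4_KEY_BITS, V6_KEY_BITS = 32, 128
ROUTES = 350_000  # assumed full BGP table size, for scale only

v4_mib = ROUTES * V4_KEY_BITS / 8 / 2**20
v6_mib = ROUTES * V6_KEY_BITS / 8 / 2**20
print(f"IPv4 keys: {v4_mib:.2f} MiB, IPv6 keys: {v6_mib:.2f} MiB")
```

A 4x wider key is trivial in DRAM but expensive in TCAM or ASIC pipelines, which is the cost the comment is pointing at.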

The working group should not have made ignoring IPv6 an option. They should have pushed for it to become the only choice for newer systems.

The working group does not build the hardware, so it cannot push anything this way.

It just shows how badly even intelligent people may misunderstand a simple problem

Posted Jan 26, 2011 11:14 UTC (Wed) by bojan (subscriber, #14302) [Link] (27 responses)

From what I can tell, DJB wrote his piece in 2002. That's over 8 years ago. How much IP-sensitive network equipment is around from then, or even before that, when IPv6 was designed? Probably little.

It is a simple fact that right now we cannot reach any IPv6 hosts from pure-IPv4 hosts that have been around for a while. All the software and networking equipment is way newer than that and it just doesn't work. That is an interoperability failure, whether you want to admit it or not.

A brief catch up

Posted Jan 26, 2011 12:47 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (7 responses)

Yes, DJB wrote this in 2002. Yes, in 2011 most ISPs still haven't bought IPv6 capable equipment (and those that have never configured it to enable IPv6). This isn't an interoperability failure, it's classic market failure. If IPvDJB actually did anything, and thus cost money, the ISPs wouldn't have bought that either.

The ISPs chose not to buy IPv6-capable equipment. Cisco and others have been selling IPv6-capable gear for about a decade, and the hardware routers (which need custom hardware or at least FPGAs to process a different unit address size) have been available for more than five years. ISPs haven't been saying "Do you make an IPv6 capable version of the thing we bought last time?" they've been saying "Is IPv4-only cheaper? We are putting off all non-essential purchases to drive down costs.". The ISPs chose not to deploy IPv6 capable end user devices to their customers because it keeps the acquisition cost low. The ISPs chose not to do any configuration work, because that means manpower and technical manpower is expensive. Plus it might mean engineering outages, and customers hate those.

Left to their own devices the water companies in my country don't repair water mains, the train operators never buy new trains, the nuclear power industry take increasingly unsafe shortcuts. So we have to regulate them in order to force them to internalise long term costs. We don't regulate Internet service provision to this extent, many believe we shouldn't have to. Perhaps not, but this time it's going to cost us.

Hardware people, like Cisco, and software people, like Microsoft or Linus, came through on this, not always as fast as we'd have liked, but in time at least for the inevitable crunch. That left only one key stakeholder, the ISPs, and they're twiddling their thumbs. But by all means blame the engineers; nobody had any doubt from the outset who'd get the blame. If we don't do the impossible day after day it's because we just weren't trying.

The "simple fact" that several of you seem hung up on is a mathematical truth. It's not avoidable by any amount of shenanigans on our part. Repeating that someone, somehow, should have found a way around this is nothing but active denial.

A brief catch up

Posted Jan 26, 2011 13:40 UTC (Wed) by bojan (subscriber, #14302) [Link] (6 responses)

Right, the plan that produced something that doesn't work was unavoidable and people that pointed that out 8 years ago are in denial.

Have you tried running DOS programs on Windows? Many of them worked just fine.

Small mixup...

Posted Jan 26, 2011 13:47 UTC (Wed) by khim (subscriber, #9252) [Link] (5 responses)

Have you tried running DOS programs on Windows? Many of them worked just fine.

Well, sure. It's called 6to4 and it works really well. But this was not Bernstein's idea. His idea was to run IPv6 over an IPv4 network - essentially run Windows programs on DOS. Can you say that "worked just fine"? If yes then you come from some other universe than me... Even Windows 3.11 stopped using DOS for HDD access, and when Windows 95 switched to its so-called "DOS compatibility mode" it became practically unusable.
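For reference, 6to4 (RFC 3056) derives a /48 IPv6 prefix deterministically from a public IPv4 address under 2002::/16; a minimal sketch:

```python
import ipaddress

def sixto4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Embed an IPv4 address in the 2002::/16 6to4 prefix (RFC 3056)."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    # 16-bit 0x2002 prefix, then the 32 IPv4 bits, then 80 zero bits.
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

Because the prefix is computable from the IPv4 address alone, 6to4 tunnels IPv6 over the existing IPv4 routing infrastructure with no per-site configuration.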

Small mixup...

Posted Jan 26, 2011 14:02 UTC (Wed) by bojan (subscriber, #14302) [Link] (3 responses)

Actually, his idea was to extend IPv4 to become IPv6, so that v4 is included in it. This is what happens when you run a DOS program inside any NT-based Windows. You are not really running on DOS; it just looks like it to the program.

And given that all the software would have understood 16 byte addresses by now, the transition would just happen.

Practical example: my home network would have been on IPv6, although all my addresses would still be written in 4-byte form in my config files. My effort in this: zero. ISP effort in this: close to zero.

Small mixup...

Posted Jan 26, 2011 14:44 UTC (Wed) by khim (subscriber, #9252) [Link] (2 responses)

Practical example: my home network would have been on IPv6, although all my addresses would still be written in 4-byte form in my config files. My effort in this: zero. ISP effort in this: close to zero.

Wow, great achievement. How would that be different from the situation today? If the ISP supports either SLAAC or DHCPv6 then your home network will be on IPv6 - the only change needed is new hardware on the ISP's side. It's hard to find a contemporary OS without SLAAC (and most support DHCPv6, too), you know.

The problems with IPv6 deployment are not related to the problem described in DJB's article. It's simple economics. ISP economics, to be exact. The other problems were fixed long ago. Maybe not in a way DJB likes, but they are fixed.

Small mixup...

Posted Jan 26, 2011 15:13 UTC (Wed) by bojan (subscriber, #14302) [Link] (1 responses)

Aha. And they will buy my IPv6 router? Configure my AAAA records? Set up my IPv6 firewall? Reconfigure all my services? Ditto at my employer, which has all this multiplied by a factor of at least 1000?

Please. This plan is an utter disaster. If it was any good, nobody would have to touch a thing.

You know, kinda like this: when I send texts from my mobile, I couldn't care less whether 3G or GSM is used. A phone I purchased almost 4 years ago could do both automagically.

Small mixup...

Posted Jan 27, 2011 12:30 UTC (Thu) by cesarb (subscriber, #6266) [Link]

> when I send texts from my mobile, I couldn't care less whether 3G or GSM is used. A phone I purchased almost 4 years ago could do both automagically.

Yes, the same way you can use IPv4 on top of either wired ethernet (802.3) or wireless (802.11). A laptop purchased more than 4 years ago could do both automagically.

What we are talking about is more like changing the phone number format or the SMS packet format, not switching between technologies lower in the stack (3G/GSM and 802.3/802.11 are all link-level), so your analogy is a red herring. Changing the link (which affects only that link) is a local decision, and thus much easier than changing a higher-level protocol (which affects everyone).

Small mixup...

Posted Jan 27, 2011 1:36 UTC (Thu) by daniel (guest, #3181) [Link]

His idea was to run IPv6 over an IPv4 network - essentially run Windows programs on DOS. Can you say that "worked just fine"?

Why yes it did: it was called "Windows 95", then "Windows 98", and it finally began breaking down with Windows Millennium. However, it bought Microsoft enough time to migrate their application software to an operating system worthy of the name. Yes, Windows on DOS worked out very well indeed, unlike IPv6.

It just shows how badly even intelligent people may misunderstand a simple problem

Posted Jan 26, 2011 13:36 UTC (Wed) by khim (subscriber, #9252) [Link] (18 responses)

From what I can tell, DJB wrote his piece in 2002. That's over 8 years ago.

Yup.

How much IP-sensitive network equipment is around from then, or even before that, when IPv6 was designed? Probably little.

Wrong question. The right question: how much IP-sensitive network equipment without IPv6 support is still produced today? Probably a lot. People are delaying IPv6 as much as possible - and without the "magic function" you cannot change anything there. This is a well-known phenomenon.

Bernstein explains how to solve a non-problem: how to marry endpoints when the routing infrastructure does not need an upgrade (as was the case with MX records: you needed changes on Internet endpoints, but it didn't affect the routers in the middle). The IPv4-to-IPv6 transition does not need to solve such a problem, because it does not exist: most endpoints have IPv6 support (and have had it for many years) already. The real problem (the need to upgrade the routers "in the middle") is silently ignored in the article.

It is a simple fact that right now we cannot reach any IPv6 hosts from pure-IPv4 hosts that have been around for a while.

This was never in the plan - and rightfully so.

All the software and networking equipment is way newer than that and it just doesn't work.

Sure - but this is because IPv4 addresses are still available! They are expensive, true, but they are available!

That is an interoperability failure, whether you want to admit it or not.

No, this is an application of the "Internet only just works" principle. IPv6 will be deployed when it's needed - and not before. IP, CIDR and other network-wide specifications were introduced under extreme pressure, when other options were exhausted - because any other way just does not make sense from an economics POV. I suspect that LTE, which will introduce hundreds of millions of IPv6-only clients, will provide the required push - but then again, maybe not (proxies will mitigate the problem for a while). But Bernstein's plan is just stupid: the fact that you need a new kind of IP number supplied by your ISP never was a problem (and still is not a problem). The fact that the ISP must replace its expensive hardware was and is a problem - and the article ignores that problem completely.

It just shows how badly even intelligent people may misunderstand a simple problem

Posted Jan 26, 2011 13:52 UTC (Wed) by bojan (subscriber, #14302) [Link] (17 responses)

Sorry, but you completely misinterpreted what DJB wrote. He was talking about upgrading everything on the net with software that understands both types of addresses. This includes routers. This was 8 years ago - plenty of time.

The reason this wasn't done is that IPv6 in its proposed form was useless and not interoperable with IPv4; ergo nobody wanted to spend time configuring something that had no application.

The whole thing should have happened transparently, so that current IPv4 site didn't have to change a single thing to work with IPv6 addresses. If network manufacturers received that message, there would be no question which equipment to buy. It would be one and the same. And you would not have IPv6/v4 stack combos on OSes - just IPv6 that included IPv4. That's the point that you missed.

Who will pay - this is the question...

Posted Jan 26, 2011 14:20 UTC (Wed) by khim (subscriber, #9252) [Link] (16 responses)

Sorry, but you completely misinterpreted what DJB wrote. He was talking about upgrading everything on the net with software that understands both types of addresses. This includes routers. This was 8 years ago - plenty of time.

To throw good money after bad? All the pieces where it was cheap and easy to add IPv6 support were upgraded in these 8 years; only the routers are the hold-out - and they are the real problem.

The reason this wasn't done is that IPv6 in its form (as proposed) was useless and not interoperable with IPv4, ergo nobody wanted to spend time configuring something that had no application.

No, the reason it was not done is that it's more expensive to install IPv6 on a router and it gives you no benefits. DJB does not address this issue at all. In his plan ISPs will magically decide to be altruists and install more expensive yet useless hardware for the sake of the future. This is not how ISPs operate if they want to survive.

The whole thing should have happened transparently, so that current IPv4 site didn't have to change a single thing to work with IPv6 addresses.

Have you read the article? This is definitely not what I'm seeing:

(2) I control the operating system and the applications. I am ready and willing to make various changes to the code.

(3) However, I refuse to provide any information to those programs beyond what they already have (such as my IPv4 addresses), and I refuse to do any work outside changing the programs themselves. I'm not going to ask my ISP for an IPv6 address, for example, and I'm not going to touch my DNS data.

This asinine dilemma does not change anything WRT the real problem.

If network manufacturers received that message, there would be no question which equipment to buy. It would be one and the same.

How come? How exactly do you propose to make an IPv6 router as cheap as an IPv4 router? Remember: IPv4 routers are highly optimized pieces of ASIC which are tuned for a particular bit layout of packets (if you use optional flags in IPv4 they slow down by a factor of 10x-100x, such packets are dropped early, etc). This is the critical question - and both you and DJB keep ignoring it.

And you would not have IPv6/v4 stack combos on OSes - just IPv6 that included IPv4.

There are very few OSes without IPv6. The problem lies with networking hardware. You know: FPGA, ASICs - things which are expensive and hard to change.

That's the point that you missed.

I've not missed it: as I've said, it's irrelevant. The problem which this approach was supposed to fix either does not exist (we can change the OS and everything else, but cannot ask for a new IPv6 address - WTF? why not?) or is impossible (we want to participate in the IPv4 network using only an IPv6 address: how?). The real problem is not discussed at all: DJB presumes that it's easy to change hardware/software on the ISP side and hard on the client side, while in reality it's the other way around.

Who will pay - this is the question...

Posted Jan 26, 2011 14:36 UTC (Wed) by bojan (subscriber, #14302) [Link] (15 responses)

So, IPv6 is designed to embed IPv4, standards are written, all software manufacturers start implementing IPv6 that includes v4 (i.e. understands 16-byte addresses as well), but network equipment manufacturers (according to you) do not implement this at all because they cannot redesign their ASICs to do that in almost 10 years. And on top of that, ISPs do not buy a single new router in that time.

Ten years ago all Cisco routers were routinely accessed via telnet. These days folks mostly use ssh. Things change when the right signals are given.

The IPv6 transition is being handled roughly like the Y2K bug. At the last minute people are scrambling to cobble together workarounds. At least the old programmers had a good excuse - space was at a premium.

Who will pay - this is the question...

Posted Jan 26, 2011 14:59 UTC (Wed) by khim (subscriber, #9252) [Link] (14 responses)

So, IPv6 is designed to embed IPv4, standards are written, all software manufacturers start implementing IPv6 that includes v4 (i.e. understands 16-byte addresses as well), but network equipment manufacturers (according to you) do not implement this at all because they cannot redesign their ASICs to do that in almost 10 years.

Yeah, let's go with a strawman. Of course they support both IPv6 and IPv4. For example the Nexus 7000 M-Series (found in one minute using Google): up to 60 million packets per second (Mpps) of IPv4 unicast forwarding traffic and up to 30 Mpps of IPv6 unicast forwarding traffic. The price difference between IPv4 and IPv6 forwarding is exactly a factor of two.

And on top of that, ISPs do not buy a single new router in that time.

Sure they do, but they disable IPv6: this gives a 2x price saving. They are not stupid: why spend $200,000 when you can spend $100,000 and give the same features to end users?

The IPv6 transition is being handled roughly like the Y2K bug. At the last minute people are scrambling to cobble together workarounds. At least the old programmers had a good excuse - space was at a premium.

Space is still at a premium, and the prices for network equipment show it...

Who will pay - this is the question...

Posted Jan 26, 2011 15:23 UTC (Wed) by bojan (subscriber, #14302) [Link] (11 responses)

The only reason they can disable (or not pick) IPv6 is because they are two different protocols. Which is exactly the problem with this transition plan. They don't need IPv6, because they are not expecting anyone to use it. Because it's useless. Because all hosts are IPv4.

If transition was handled differently, all hosts with current 32 bit addresses (which would also work as 128 bit ones) would also be IPv6 hosts, so having IPv6 on routers would actually be (surprise!) useful.

Who will pay - this is the question...

Posted Jan 26, 2011 17:15 UTC (Wed) by khim (subscriber, #9252) [Link] (10 responses)

If transition was handled differently, all hosts with current 32 bit addresses (which would also work as 128 bit ones) would also be IPv6 hosts, so having IPv6 on routers would actually be (surprise!) useful.

Useful for what? In this plan (the same as with the current one) you can still safely disable the useless extension on the networking router, save half of the money, and users will not notice. What incentive will there be for the ISP to support this extension? If Cisco decides that it's a good idea to support IPv6 unconditionally then it'll just lose to Juniper (or some other firm) which will implement a "turbo mode" with 32-bit addresses only...

Who will pay - this is the question...

Posted Jan 26, 2011 19:18 UTC (Wed) by cmccabe (guest, #60281) [Link] (5 responses)

Routers have to go through qualification tests-- sometimes very difficult ones, in third party labs. I'm sure there are a lot of features Cisco would like to drop from IPv4 and IPv6, but guess what: they can't. Not if they want to call their equipment compatible.

You are focusing on the wrong end of the problem completely. Big equipment vendors love new standards, especially if they're complicated and difficult to implement. It creates churn, which means more purchase orders, which means more money.

The problem is that IPv6, as designed, is as useless as a screen door on a submarine until the magic moment arrives-- the IPv6 rapture, if you will. And ISPs are focused on Q4.

Who will pay - this is the question...

Posted Jan 26, 2011 21:51 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (4 responses)

So, your approach is to hope that people will pay extra for something they don't want, because you said so.

Why do you need DJB's jabbering for that? You can just demand they pay extra for IPv6. Call it "Super Internet Plus" if you like. Go lobby your representative. See how much that helps.

The idea that because something is "part of" IP it therefore works on all the network gear people are buying? Just more proof you're completely out of touch with the real world. It is completely _normal_ to have stuff that doesn't work. Most of the time there's a config switch, the default is conservative, and everything else is a lucky dip. All the features that deviate from the most basic processing of unicast TCP traffic are in the lucky dip category. You know Intel shipped a whole stepping of the i386 that couldn't run protected-mode Windows? That's the whole point of the 386, and it didn't work. Vendors have shipped whole families of products where they know there are features that just don't work. But "Oops, yes that is a bug, we'll let you know if we can fix it in firmware" is a lot better than "No, we can't deliver that feature, we won't bid". Nobody is going to sue - network admins in big corporations (especially big tight-fisted corporations) are used to being disappointed.

Who will pay - this is the question...

Posted Jan 26, 2011 22:28 UTC (Wed) by bojan (subscriber, #14302) [Link] (3 responses)

> So, your approach is to hope that people will pay extra for something they don't want, because you said so.

No. For something they want.

I cannot request access to a cool new IPv6 site right now. I'm on IPv4. So, my ISP cannot possibly see any requests for IPv6 traffic from me. Ergo, IPv6 is useless to them (and me).

Had my stacks been upgraded so that my current IPv4 addressing just worked with IPv6, I would occasionally get an AAAA response to my DNS queries (that is also stupid - it should have been a simple A record - but that's a different issue altogether), which would have a genuine IPv6 address in it. Then I would not be able to access this site, because my ISP failed to buy IPv6-capable equipment, although my network was already IPv6-ready without me touching a thing in my config (I'm already on the net).

So, I have a choice:

1. Drop this stupid ISP and get one that does IPv6.
2. Tell them they are stupid and ask them to upgrade.

In both cases IPv6 wins by default.

Right now, the onus of IPv6 upgrade is on each and every customer. Each and every customer already connected to the only net we have. For no good purpose whatsoever.
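The A/AAAA lookup behaviour being argued about can be observed from any host via the standard resolver API; an illustrative sketch using Python's stdlib (the host name is just an example chosen because it resolves without network access):

```python
import socket

# Ask the resolver for every address family at once; a dual-stack host
# gets AF_INET (A record) results and, where published, AF_INET6 (AAAA)
# results from the very same call - the application need not care which.
def lookup(host: str):
    infos = socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)
    return sorted({(family.name, sockaddr[0]) for family, *_, sockaddr in infos})

for family, addr in lookup("localhost"):
    print(family, addr)
```

Whether the IPv6 results are actually reachable then depends entirely on the path to the ISP, which is the point of contention in the thread.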

You've missed one more possibility.

Posted Jan 26, 2011 23:45 UTC (Wed) by khim (subscriber, #9252) [Link] (2 responses)

So, I have a choice:

1. Drop this stupid ISP and get one that does IPv6.
2. Tell them they are stupid and ask them to upgrade.

Sorry, but you forgot one more choice:

3. Forget about the crazy site which by someone's folly has only an IPv6 address.

Because 99% of users choose option number 3 (DJB plan or no DJB plan), there is no need to think about these silly AAAA-only sites. ISPs know this full well.

Right now, the onus of IPv6 upgrade is on each and every customer.

And this is not a problem at all: either you have an IPv6-capable OS like Windows 7 (where you only need to connect to the IPv6 internet to use IPv6) or you have something like a PS3 or Xbox 360 where IPv6 does not work because the developers just decided to ignore it. In both cases the DJB plan is not needed at all. Sure, if you have a large organization you'll need to do something, but the "IPv6 works by default" approach will not help there at all: a lot of such organizations (most of them?) disable direct access to the internet and ask users to use a proxy with authorization - and all that must be changed for IPv6 anyway.

You've missed one more possibility.

Posted Jan 27, 2011 3:13 UTC (Thu) by bojan (subscriber, #14302) [Link] (1 responses)

> 3. Forget about the crazy site which by someone's folly has only an IPv6 address.

Folly? You mean the address exhaustion. Yeah, let's do that. That'll work as well as the current plan. Which is to say not work at all.

> either you have IPv6-capable OS

I have an IPv6 capable OS. I have a relatively new DSL router with very young software on it. I have a valid net address. I have my DNS configured. I have my firewall configured. I've been connected to the net for a few years now using the same address. And yet, I cannot ping ipv6.google.com. That is what common sense people call "interoperability failure." I'm sure you'll have some funny explanation for this, full of acronyms like ASIC, FPGA etc. :-)

No, I meant simple fact...

Posted Jan 27, 2011 8:58 UTC (Thu) by khim (subscriber, #9252) [Link]

> 3. Forget about the crazy site which by someone's folly has only an IPv6 address.

Folly? You mean the address exhaustion. Yeah, let's do that.

No, I meant another folly: someone decides to implement an IPv6-only resource in an IPv4 world. This is real stupidity and you can safely ignore such people - their resource will be dead very soon anyway. Of course, if it's some kind of underground resource and you absolutely positively need to visit it... you'll find a way. Just like people without access to the Internet had ways to download files from it (yes, I mean ftpmail and other similar technologies).

That'll work as well as the current plan. Which is to say not work at all.

It works perfectly. Just like any disruptive technology it starts from the places where IPv4 just does not fit and goes from there. The only problem: IPv4 addresses are still not scarce enough, so there are few such niches.

I have an IPv6 capable OS. I have a relatively new DSL router with very young software on it. I have a valid net address. I have my DNS configured. I have my firewall configured. I've been connected to the net for a few years now using the same address.

But you don't have an ISP which supports IPv6 - and that is the problem. Everything else is irrelevant, and if you don't have such a provider, IPv6 will not work. DJB plan or no DJB plan.

And yet, I cannot ping ipv6.google.com. That is what common sense people call "interoperability failure." I'm sure you'll have some funny explanation for this, full of acronyms like ASIC, FPGA etc. :-)

And you'll insist that somehow it can be solved by the insane DJB plan. You were asked a dozen times: how exactly does this plan materialize ISPs with IPv6 routers? You refused to answer. This means one thing and one thing only: you don't know. And if you don't know how this critical part would be worked out under the DJB plan, then what evidence do you have that it might work?

Who will pay - this is the question...

Posted Jan 26, 2011 22:09 UTC (Wed) by bojan (subscriber, #14302) [Link] (3 responses)

> Useful for what?

Routing of packets from hosts.

Bottom line: lots of people have to do unnecessary work for no benefit at all. They are already on the net. Why do they have to connect again? Yeah, it's that simple.

Who will pay - this is the question...

Posted Jan 27, 2011 0:01 UTC (Thu) by khim (subscriber, #9252) [Link] (2 responses)

> Useful for what?

Routing of packets from hosts.

You mean: the DJB plan will magically induce ISPs to throw money at this for no apparent reason? Hard to believe...

Bottom line: lots of people have to do unnecessary work for no benefit at all. They are already on the net. Why do they have to connect again? Yeah, it's that simple.

Bottom line: there were more CompuServe users 20 years ago than Internet users back then. They are all gone today. 20 years down the road, IPv4 users will be similarly extinct. The switch will happen in the same fashion: people will take the "poor alternative" because they cannot afford the "good one", and eventually the "good alternative" will be useless. Prices for IPv4 will start rising in the coming years, so there is no need to worry: people will upgrade. They are not stupid. But before that happens, the price of an IPv4 address must become high enough to make those 2x hardware prices cheap by comparison. It hasn't happened yet: a "white IP" sells for $2-$10 per month today. That's not nearly high enough to induce change. But times are changing.

Who will pay - this is the question...

Posted Jan 27, 2011 2:35 UTC (Thu) by bojan (subscriber, #14302) [Link] (1 responses)

> 20 years down the road IPv4 user will be similarly extinct.

That I can believe. The transition plan to it is still shit.

Well, sure.

Posted Jan 27, 2011 9:05 UTC (Thu) by khim (subscriber, #9252) [Link]

> 20 years down the road IPv4 user will be similarly extinct.

That I can believe. The transition plan to it is still shit.

Well, sure. I mean: people spent lots of resources trying to invent some "perfect" transition plan (DJB's way is only one alternative - and it's just as stupid as all the other ones), but in the end there are only two ways:
1. The market way: the IPv4-based Internet becomes more and more painful and eventually IPv6 wins because it's just better.
2. The government-mandated way: IPv6 is mandated in some large regions of the world and then everyone else follows.
Looks like we'll go the market way. Well... maybe not the best way, but it'll work.

But for the market way to work, IPv4 needs to be significantly more painful than it is now. Prices for a "white IP" should be around $100/month at least, not the $2-$10 they are now. I'm not sure we'll reach that stage any time soon. More likely some ecosystem will adopt IPv6 first and the snowball will roll from there. Will it be LTE or something else? We'll see.

Who will pay - this is the question...

Posted Jan 27, 2011 7:26 UTC (Thu) by jem (subscriber, #24231) [Link] (1 responses)

"Price difference between IPv4 and IPv6 is exactly two times."

I don't buy your logic. If IPv6 traffic right now is <1% of the total and probably won't grow very rapidly any time soon, why does it matter that this year's router model is only half as fast at forwarding IPv6 packets?

Because today's model is the best available

Posted Jan 27, 2011 9:09 UTC (Thu) by khim (subscriber, #9252) [Link]

If you enable IPv6 you reduce not only the number of packets you can process; you reduce the number of routes you can hold, etc. And the <1% figure is a red herring: if IPv6 traffic is <1% and you expect it to stay at that level for a long time, then why bother (do you really expect significant revenue from that <1% of traffic?); but if you expect the proportion to grow, then IPv6 support means real money. The best alternative is to prepare contingency plans and wait until they are needed - and that is exactly what most ISPs are doing.

It just shows how badly even intelligent people may misunderstand the simple problem

Posted Jan 26, 2011 12:55 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (1 responses)

"This is done already"

More than that, it's been done several times for people who wanted different things.

At first they say "I just want IPv4 addresses to have an equivalent in IPv6" so that's there, albeit largely useless except for some types of software.

Then it's "Oh, I should automatically get IPv6 addresses if I have IPv4". Did that: it's in 6to4. Every IPv4-capable node has a /48 in 2002::/16 to sub-allocate as it wishes, the IPv4-addressed node acts as router for the subnet, and an anycast address optionally provides a tunnelled route to the entire IPv6 space.
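The 6to4 derivation is mechanical enough to sketch. A minimal illustration with Python's standard ipaddress module, assuming a documentation-range IPv4 address: the 2002::/16 prefix is followed by the 32 bits of the IPv4 address, leaving 16 bits of subnet ID and 64 bits of host.

```python
import ipaddress

def six_to_four_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Derive the 6to4 /48 prefix (RFC 3056) for a public IPv4 address."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    # 16 bits of 0x2002, then the 32-bit IPv4 address, then 80 zero bits
    prefix = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix, 48))

print(six_to_four_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

The reverse direction is why it works without registration: any 6to4 router can recover the IPv4 tunnel endpoint from bits 16-47 of the destination address.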

But now we get what people really want, what they really, really want. They want to be able to parlay an IPv6 address (of which they'll have billions) into an IPv4 address so they can continue using the IPv4 Internet after the crunch. This is mathematically impossible, and so, even more than for something which merely violates a law of physics, wanting it really badly won't make it possible. But that won't stop them blaming the engineers who "failed" to do it for their woes.

It just shows how badly even intelligent people may misunderstand the simple problem

Posted Jan 26, 2011 14:17 UTC (Wed) by bojan (subscriber, #14302) [Link]

What "people" are saying is "I have a perfectly good setup, why do I need to change it?" Answer is: because IPv6 plan sucks.

Yeah, I know it will all work itself out in a few years. But pretending that it wasn't screwed up doesn't make it go away.

It just shows how badly even intelligent people may misunderstand the simple problem

Posted Jan 28, 2011 6:54 UTC (Fri) by butlerm (subscriber, #13312) [Link]

"This could have been handled by setting aside a special part of the IPv6 address space for addresses that mapped directly on to IPv4 equivalents."

This is done already.

Not on a routable basis it isn't. If the mapping were routable, the entire IPv4 network numbering plan could be routed through IPv6-only routers. But since the designers did not want to route the large number of prefixes from the existing numbering plan, we instead have a new numbering plan that requires the network configuration of the last couple of decades to be dumped and redone from scratch. A brave new world that no one wants to inhabit.

There were proposals on the table sixteen years ago that would have avoided this problem and made the transition process more or less transparent. Look up TP/IX sometime.

What about IPv6 right here on earth?

Posted Jan 26, 2011 23:16 UTC (Wed) by job (guest, #670) [Link] (2 responses)

Your premise is wrong. IPv6 is much cheaper to switch in hardware because of its fixed header size. Less complexity leads to more performance at a lower price.

Also since the routing logic is simplified I'd even expect routing tables to drop in size, at least initially, despite the addresses being that much larger. I may be wrong on that, but see above.

What about IPv6 right here on earth?

Posted Jan 27, 2011 11:44 UTC (Thu) by jthill (subscriber, #56558) [Link] (1 responses)

But it's either route IPv6 on hardware built to optimize 32-bit-address performance, or buy an IPv6-optimized twin for every router they have. It looks like the former hurts so badly it makes no difference: the switch to IPv6 will make them double their router investment either way.

What about IPv6 right here on earth?

Posted Jan 27, 2011 16:16 UTC (Thu) by job (guest, #670) [Link]

Of course you're working against the market here. Any change will meet economic resistance, since the perceived market starts out very small. But this is a truism, and would be the case whether the change is IPvDJB or IPv6.

My point is that IPv6 is easier and cheaper to route than IPv4, not the other way around.

What about IPv6 right here on earth?

Posted Jan 27, 2011 9:29 UTC (Thu) by Seegras (guest, #20463) [Link]

> The mainstream commercial ISPs are going to make this
> far more painful than it had to be.

It's not the ISPs. It's the vendors of so-called "routers" - those ADSL and cable modems. They just can't do IPv6. Most backbones are IPv6, and have been for years. If your modem/router at home could do IPv6, you would have had IPv6 for some time now. Conversely, the content providers (which could implement IPv6 easily) don't see any use for it, because the users don't have IPv6.

What about IPv6 right here on earth?

Posted Jan 30, 2011 15:23 UTC (Sun) by jeleinweber (subscriber, #8326) [Link] (1 responses)

If you ask John Curran, CEO of ARIN and current cheerleader for the IPv6 transition, he'll tell you that the lack of a good v4-to-v6 transition strategy was indeed "horribly misconceived", and he said so back in 1994 when the IPng working group first blessed the three-way merged "Simple Internet Protocol Plus" as IPv6. The working group was probably misled by its memories of the 1981-83 transition of the ARPANET from NCP to TCP/IP v4. Also by the correct theory that a dual transition (v4 to v4+, followed by v4+ to v6) would be twice as expensive and hard to sell as a single transition to v6.

In that earlier transition we were dealing with a much smaller internet (<300 hosts), only 3 protocols (TELNET, FTP, SMTP), a single backbone (BBN), only research organizations as customers, and a mandated flag day when NCP was turned off and you went off-net if you hadn't converted yet. The current transition from v4 to v6 lacks all of those characteristics, which is part of what has slowed it, but it does share the multiyear timeline and the immature new protocol stacks we saw last time. I think there will be a flag day eventually too, around 2020, but that will be after the transition has already happened (2009-2015?) and after IP traffic on the internet is 99% v6 (2017?).

"The transition is going to be very painful because it makes good commercial sense that way ..."

Yes. But large numbers of users are about to find that native IPv6 is better, faster, and cheaper than hard-to-get IPv4 behind multiple layers of expensive NAT444 appliances, so the economics are finally going to flip. In the US, bad experiences by future 4G smartphone dual-stack-lite (native v6, tunneled + carrier NAT44) customers with v4-only web sites are going to pressure content providers to dual-stack, which is going to pressure ISPs to dual-stack too. This is why the likes of Google, Netflix, Facebook, CNN, and soon Yahoo are already dual-stacked.

What about IPv6 right here on earth?

Posted Jan 31, 2011 2:20 UTC (Mon) by dlang (guest, #313) [Link]

The IPv6 addresses may be easier to get, but since they are pretty close to useless, why would anyone accept one instead of an address that looks to the rest of the Internet like IPv4 (whether via NAT444, NAT464, or plain NAT44 as ISPs use today)? NAT64 + DNS64 are a possibility, but they are bigger unknowns than the other NAT options, and a large number of consumer devices will not work over IPv6, so those customers can't use NAT64.

What about IPv6 right here on earth?

Posted Jan 25, 2011 18:30 UTC (Tue) by cmorgan (guest, #71980) [Link] (32 responses)

IPv6 isn't widely implemented because of the difficulties in the transition and overall laziness on the part of ISPs, network equipment manufacturers, etc. IPv6 is also complete. While that doesn't preclude evolving IPv6 or any future protocol, it isn't as if Vint is holding up some crucial part of IPv6 that is keeping everyone from using it.

What about IPv6 right here on earth?

Posted Jan 25, 2011 20:24 UTC (Tue) by marcH (subscriber, #57642) [Link] (31 responses)

> IPv6 isn't widely implemented...

... in the US.

What about IPv6 right here on earth?

Posted Jan 25, 2011 22:14 UTC (Tue) by daniel (guest, #3181) [Link] (1 responses)

>> IPv6 isn't widely implemented...
>
>... in the US.

...or anywhere else.

What about IPv6 right here on earth?

Posted Jan 26, 2011 9:10 UTC (Wed) by marcH (subscriber, #57642) [Link]

I have had IPv6 at home for a couple of years now, thank you very much.

What about IPv6 right here on earth?

Posted Jan 25, 2011 22:18 UTC (Tue) by drag (guest, #31333) [Link] (28 responses)

That's right.

Although just this weekend I got a /48 network assigned to me for free... as well as a /64 network. For my personal use.

This was for FREE. No cost. Signing up for an ISP-sized block of routable addresses was easier than signing up for most internet forums. All these addresses are routable on the internet - meaning any computer on the IPv6 internet can reach any of those addresses. I could provide addresses for a good-sized country with what I was given for nothing.

How much does your company pay for IPv4 addresses? I know I can get extra ones from my ISP for about 10 dollars a month or something like that.

I now have 1,208,925,819,614,629,174,706,176 internet addresses to play around with.

Or something else similarly insane. Over 65 thousand subnets? I still have a hard time grasping the numbers.
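The numbers check out, and the arithmetic is short enough to verify: a /48 leaves 80 host bits, of which the low 64 are conventionally the interface identifier, leaving 16 bits of subnet ID.

```python
# A /48 assignment: 128 - 48 = 80 free bits.
addresses = 2 ** (128 - 48)   # total addresses in the /48
subnets   = 2 ** (64 - 48)    # number of /64 subnets it contains

print(f"{addresses:,}")  # 1,208,925,819,614,629,174,706,176
print(f"{subnets:,}")    # 65,536
```

So "over 65 thousand subnets", each of which is itself astronomically larger than the whole IPv4 Internet.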

And talk about security... have you ever wanted to remove the ability for anybody to portscan you? There is now a mode of IPv6 addressing where you literally pick a random address for yourself every 5 minutes or so - or even for every new connection, if you like. And it works fine... you can have dozens of addresses assigned to a single network connection and just drop them as TCP sessions end. Fantastic stuff.
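The mode being described sounds like RFC 4941 privacy extensions. A toy sketch of the core idea - a fresh random interface identifier inside the /64 - might look like the following (the real algorithm also manages address lifetimes, deprecation, and reserved identifier ranges, none of which is shown here):

```python
import ipaddress
import secrets

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Pick a random interface identifier inside a /64, in the spirit
    of RFC 4941 privacy extensions (illustration only)."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "privacy addresses live in a /64"
    iid = secrets.randbits(64)  # fresh random low 64 bits
    return ipaddress.IPv6Address(int(net.network_address) | iid)

addr = temporary_address("2001:db8:1:2::/64")
print(addr)  # a different address on every call
```

With 2^64 identifiers per subnet, an attacker scanning one address per microsecond would need hundreds of thousands of years to sweep a single /64.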

Want to be able to control what is allowed on your network? There is a mode of addressing where machines pick new addresses for themselves based on a cryptographic scheme. Unless a machine can guess the next IP address in the sequence, it won't be able to use your network.

Now compare that to the IPv4 world, where they are now moving to carrier-grade NAT, called NAT444 - meaning your IPv4 traffic is NATed to IPv4 addresses that are themselves NATed to yet more IPv4 addresses. NAT routers behind NAT routers behind NAT routers!

You DO NOT have to put up with that crap!

http://tunnelbroker.net/main.php
http://www.sixxs.net/pops/occaid/

If you're in the USA, don't be an internet luddite! Join the IPv6 revolution today!

:-D

What about IPv6 right here on earth?

Posted Jan 26, 2011 1:21 UTC (Wed) by bojan (subscriber, #14302) [Link] (26 responses)

Quick question: can a person with an IPv4 address only reach your IPv6 network without any reconfiguration on their end? Just curious...

What about IPv6 right here on earth?

Posted Jan 26, 2011 3:42 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (25 responses)

Depends how you define "without any reconfiguration".

If you mean that they literally do not configure anything whatsoever, including autoconfiguration if that's provided by the OS, then: no, obviously. Without a configuration change you're stuck with IPv4, and if we could have figured out a way to make hundreds of billions of nodes addressable using IPv4's 32-bit addressing, we would have used that hole in number theory to solve world hunger or make our own pocket universe or something - not wasted time on trivia like the Internet.

If you meant reconfiguring the local node (possibly automatically) but not any intermediary nodes in the network, the answer is yes, with a caveat. You will have to tunnel, and so you may (will) have a less direct route than you would natively, plus the overhead of the tunnel. Several autoconfigurable (i.e. you say "I want to enable this" and then it works) tunnel protocols exist, with varying parameters for how much infrastructure someone else has to have built somewhere (an anycast address? a router? lots of routers?), what sort of performance you can expect, and how robust it will be against badly designed NAT, firewalls, etc.

This isn't how things can really work for most people, though. A world of tunnels and ad-hoc connectivity is not a future millions of people can live in; it's a temporary local solution. ISPs will have to deploy native IPv6. The problem is that they should have done this (at least) five years ago, and instead they'll probably start just after it's too late.

What about IPv6 right here on earth?

Posted Jan 26, 2011 7:49 UTC (Wed) by bojan (subscriber, #14302) [Link] (24 responses)

> Depends how you define "without any reconfiguration".

Like this. I have my DNS configured now. I have my DHCP configured now. I have my static hosts configured now. I have my firewalls configured now. I have my services configured now. Without touching any of that (i.e. configuration), can I talk to IPv6 hosts out there? I'm guessing the answer to this question is probably no.

What DJB pointed out is that deployment of new software on all computers in existence is surprisingly easy (in fact, it happens all the time with updates). Deployment of new configuration is hard (each admin has to figure out their new addresses, firewalls, DNS, tunnels, etc.). Ergo, IPv6 should have embedded IPv4 in itself. And because it didn't, we have the current interoperability disaster, where hosts with IPv4 addresses cannot talk natively to hosts with IPv6 addresses.

Sure, eventually all this will be overcome and we'll all end up being on IPv6, but the plan was not very good.
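For what it's worth, an embedding of IPv4 into IPv6 does exist at the notation level - the IPv4-mapped addresses of RFC 4291 (::ffff:a.b.c.d) - but they are used inside dual-stack socket APIs, not routed on the wire, which is exactly the complaint above. Python's ipaddress module shows the mapping:

```python
import ipaddress

# Every IPv4 address has a well-known IPv6 form: ::ffff:a.b.c.d
v4 = ipaddress.IPv4Address("192.0.2.1")
mapped = ipaddress.IPv6Address(f"::ffff:{v4}")

print(mapped)              # ::ffff:c000:201
print(mapped.ipv4_mapped)  # 192.0.2.1 (round-trips back to IPv4)
```

The mapping lets a dual-stack server accept IPv4 clients on an IPv6 socket; it does nothing for an IPv6-only host trying to reach the IPv4 Internet, because no router forwards ::ffff:0:0/96 traffic.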

What about IPv6 right here on earth?

Posted Jan 26, 2011 8:10 UTC (Wed) by dlang (guest, #313) [Link] (15 responses)

and in case you are still missing it, the issue isn't about having a machine with an address in the first 32 bits talking to a machine with an address that's larger than 32 bits.

The issue is having a transition, while everyone still has addresses that fit in 32 bits, that would allow some machines to be on IPv4 and some on IPv6, all talking together; only as machines started to use addresses that didn't fit into 32 bits would there be any incompatibility.

If this had been done, most of the systems out there would be running IPv6 now and talking to IPv4. But since IPv6 was made completely incompatible with IPv4, running IPv6 became a significant effort rather than a transparent upgrade. As a result, almost nobody is running IPv6 except as a showpiece or hobby.

What about IPv6 right here on earth?

Posted Jan 26, 2011 8:48 UTC (Wed) by bojan (subscriber, #14302) [Link]

Case in point: my home network. I have to acquire an IPv6-capable DSL router. This will give me a connection to the IPv6 world. I also have to figure out new firewall rules, new DNS config, and new services configuration, so I can be found from the outside by IPv6 hosts.

I already have all this for IPv4, so doing this duplicate work is really idiotic. If DJB's suggestion had been followed, my various OSes would already have been upgraded to understand 16-byte addresses and it would just work.

PS. And I am actually interested in making a transition. Many large companies don't see anything being done related to IPv6 for the next few years at least. Simply too much hassle for no immediate benefit.

What about IPv6 right here on earth?

Posted Jan 26, 2011 16:10 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (13 responses)

OK, this has been dissected lots of times. I hope I get all these right (I am rusty).

1. IPvDlangA is deployed on some boxes. This hypothetical protocol is just 100% compatible with IPv4 so these boxes can talk directly to IPv4 boxes.

IPvDlangA is junk - it doesn't get us anywhere we weren't already with IPv4, so it doesn't matter whether it's deployed or not.

2. IPvDlangB is deployed on some boxes. This is a very sophisticated hypothetical protocol. It uses some magic IPv4 flag bits to detect other IPvDlangB boxes and when talking to them it embeds enhanced 128-bit addresses, with the top 96 bits cleared. To transit over the IPv4 network the packets must of course all have valid IPv4 headers too.

Everybody who gets IPvDlangB sees slower performance AND less bandwidth because IPvDlangB is wasting a lot of both. So every smart individual or organisation disables or avoids IPvDlangB. No uptake.

3. IPvDlangC does protocol conversion. It probes next hop routers (good luck with that) and if they also do IPvDlangC it speaks IPvDlangC otherwise it converts every packet to IPv4 and strips 96 bits of address. When receiving packets from such routers, it adds 96 bits of zeroes, and converts to IPvDlangC.

IPvDlangC is very resource-intensive, far more than mere IP routing. It would be tremendously expensive to deploy in core systems. This expenditure will never be authorised. In terms of development effort, code footprint, attack surface etc. it's much the same as the dual-stack solution today except without any of the benefits.

Do you have more options? We can look at those too (I can't promise to be swift, I have lots of real work to do)

What about IPv6 right here on earth?

Posted Jan 26, 2011 20:45 UTC (Wed) by dlang (guest, #313) [Link] (12 responses)

In a world where people tunnel everything over HTTP and use things like PPPoE, the overhead of a few more bytes in the header of each packet would not bring things to a screaming halt.

you could have left the existing 32 bits of addressing where they are in the packet header and set an option flag to enable additional addressing bytes later in the header.

If you did something like this, the 32-bit address would route you to the ISP that handles the 'full' address, and that ISP could have a router that implemented NAT by just grabbing the bottom 24 bits of the expanded address space and using the 10/8 network with those addresses internally (or a different class A could have been allocated for this purpose).

This would have allowed the address space to be transparently extended from 32 bits to 56 bits without any existing routers needing to change at all (including home routers). The client endpoints would need new software to send the extended headers, and the ISPs that ran out of addresses would need new routers to move the bits from one point in the headers to the other (which is a _very_ cheap thing to do - much cheaper than current NAT, where the router needs to maintain a table of translations, time them out, etc.).

Systems with the new 'extended' IP addresses would have trouble initiating connections outside their ISP to endpoints that did not include the additional headers in their response packets, but all that would take is 'if you see this flag and extra info in the headers of a packet that comes to you, include them in the response', and that's a pretty simple thing for the OS vendors to do.
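To make the sketch concrete, here is one hypothetical wire layout for such a scheme - the option number, field sizes, and helper names are all invented for illustration, not taken from any specification. The routable 32-bit address stays where legacy routers expect it, and an IPv4-style option carries 24 extra bits, giving the 56-bit space described above:

```python
import struct

EXT_ADDR_OPTION = 0x9E  # made-up option number, purely illustrative

def pack_extended(dst32: int, extra24: int) -> bytes:
    """32-bit routable address followed by option(type, length, 3 extra
    address bytes) - 32 + 24 = 56 address bits total."""
    opt = struct.pack("!BB3s", EXT_ADDR_OPTION, 5, extra24.to_bytes(3, "big"))
    return struct.pack("!I", dst32) + opt

def unpack_extended(buf: bytes) -> tuple[int, int]:
    """Recover (routable 32 bits, extra 24 bits) from the wire form."""
    dst32 = struct.unpack_from("!I", buf)[0]
    _, _, extra = struct.unpack_from("!BB3s", buf, 4)
    return dst32, int.from_bytes(extra, "big")

pkt = pack_extended(0xC0000201, 0x0A1B2C)   # 192.0.2.1 + 24 extra bits
assert unpack_extended(pkt) == (0xC0000201, 0x0A1B2C)
```

A legacy router that ignores unknown options would forward on the first 32 bits unchanged, which is the whole point of the proposal; whether middleboxes would actually pass such options through is, of course, a separate question.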

I'm not worried about network vendors having to do this for the network gear, as it would not have to be coordinated across ISPs.

What about IPv6 right here on earth?

Posted Jan 27, 2011 12:21 UTC (Thu) by tialaramex (subscriber, #21167) [Link] (11 responses)

OK, IPvDlangD. Not very well specified, but good enough to dissect.

First of all, you've moved the legacy 32 bits from a suffix, in one corner of the address space, to a prefix. So you don't actually fix the exhaustion problem: now there are far more addresses, but they're still all allocated to someone already. So your whole plan just wastes everybody's time. Ouch.

And your "pretty simple thing for the OS vendors to do" needs to be done in every IP stack deployed, which realistically means it needs to be in IP itself, or in one of the very early tweaks. By the 1990s it's just too late to make a difference. (IPng working groups were created in 1992 or so)

FWIW Solutions that require time travel (e.g. to go change the IPv4 standard before it was widely deployed) are cheating. Sorry, try again?

What about IPv6 right here on earth?

Posted Jan 27, 2011 18:55 UTC (Thu) by dlang (guest, #313) [Link] (8 responses)

In 1990 there would have been time to do this; in 2011 there isn't. Part of this discussion is the complaint that the 'migration plan' was horrible, to which people are responding by claiming that it's not horrible, it's the only possible way to do things. As a result, people like me are pointing out things that could have been done.

And the IP spec has been changed over time to include new features. There are reserved feature bits in the header that could be used to indicate the presence of these changes.

This is just plain going to be a horrible mess. There are a few possible outcomes:

1. we have a long period of using NAT and then move to IPv6

2. we have a long period of using NAT and then move to something else (probably at least as painful in the meantime as the move to IPv6)

3. extensive use of NAT becomes permanent

4. the internet as we know it collapses (extremely unlikely in my opinion)

What about IPv6 right here on earth?

Posted Jan 27, 2011 22:49 UTC (Thu) by tialaramex (subscriber, #21167) [Link] (7 responses)

You're entitled, in this context, to explain what could have been done during IPng (which began in 1992). Just not to go back and fix IPv4 so that it makes your job easier. Otherwise "obviously the addresses should have been wider from the outset" is the tedious answer to everything (except maybe "would IPv4 have still taken off, with such baggage?"). Any time estimates should be real world, so for example, having begun planning in 1992 you will be too late to make an impact on the project Microsoft are jokingly calling "Windows 93" and which will eventually ship as Windows 95.

Surprisingly little has changed in IP, except in cases where two peers can identify that the other implements some newer feature. Most of what gets done in the stacks is optimisation, taking advantage of features that exist in the specification already. And sometimes even that fails - just because the specification says you can do something, does not mean it always actually works in the real world. Life is full of disappointments.

Yes, there will be a mess. That is not a new observation. We have no useful prior experience to judge exactly how big the mess will be. Some of the things we've been doing to try to give ourselves more time will, as with NAT, contribute to the mess.

FWIW Options 1 and 3 are the only ones that look reasonably likely. Option 3 is really bad, but only in the same way that widespread private ownership of motor vehicles was really bad. It didn't bring about the end of civilisation or anything, and it made our culture what it is, for better or worse.

Option 2 basically won't happen NOW (even in the unlikely event you can come up with a better alternative in our "What if?" scenario) because it would be too late to the party. Achieving even IPv6's tiny penetration will take an alternative decades.

Option 4 won't happen because people like captioned images of kittens. An expensive, inefficient network that continues to deliver cat pictures is still enough to keep the lights on at the ISPs. I'm being flippant, but millions of people are now used to having this service and paying for it. Address exhaustion doesn't make that demand go away. Of course "as we know it" could be construed to include Option 3 in some ways. A network without end-to-end is not the Internet we thought we knew and loved.

What about IPv6 right here on earth?

Posted Jan 29, 2011 7:48 UTC (Sat) by butlerm (subscriber, #13312) [Link] (6 responses)

I think you are forgetting option 5: everyone gradually comes to the realization that abandoning the IPv4 numbering plan and network configuration on a global basis is never going to happen in this century, so the IETF goes back to the drawing board and designs an IPv4 upgrade that is compatible with those addresses. Something like TP/IX, which was proposed by Robert Ullmann in 1993, nearly a decade before DJB made the same argument. See RFC 1475.

Then NAT reigns supreme for several years while this IPv4-plus protocol gradually disseminates through software and hardware upgrades, and ten or so years later the extended IPv4-plus addresses actually become routable across the public Internet.

What about IPv6 right here on earth?

Posted Jan 29, 2011 15:08 UTC (Sat) by cesarb (subscriber, #6266) [Link] (5 responses)

Isn't that the same as option 2 (a period of NAT followed by a move to something which isn't IPv6)?

What about IPv6 right here on earth?

Posted Jan 29, 2011 16:53 UTC (Sat) by butlerm (subscriber, #13312) [Link] (4 responses)

I guess so. It is just that so few people would know that this "move" was actually occurring, that it wouldn't be much of a move at all. More like a silent transition.

I like a plan where the a.b.c.d address format is changed to an administratively compatible format in which each of the four components is a 16-bit number, represented in decimal in text form.

In binary form the IPv4 address C0 A0 20 01 would become 00C0 00A0 0020 0001. All existing address prefixes would be preserved, though in expanded form when represented in binary. In text format the address would be 192.160.32.1 in both cases. Then one day, about ten years later, addresses like 300.278.22.1 would become publicly routable. Or even addresses like 6700.45320.658.33781.

A straightforward expansion would allow a variable number of components, from 4 up to 8 or so. That way everyone with an IPv4-style 4-component address could add publicly addressable subnetworks without getting a new allocation. The network core would generally route only on the first four components (64 bits) for economic reasons, but hardware routers with 128-bit prefix capability would eventually become common, starting at large edge networks. Trailing zero bits would be implied.

Existing configurations would be preserved, although alternative netmask indicators would eventually have to be added, because a current "/24" would in actuality be a 48-bit prefix, and if you wanted to specify netmasks that end on any of the inserted bits a different syntax would be needed - "//48" style, perhaps.

One of the other advantages of a variable-length addressing scheme like this is that most addresses would be only 64 bits, not 128, significantly reducing the overhead for small packets like those used in VoIP, especially on lower-bandwidth connections. Who really wants their MAC address broadcast all over the Internet anyway? For servers, it is practically useless. For individuals it is a privacy nightmare.
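The widening described above is simple enough to sketch - this is purely hypothetical code for a proposal, not any deployed protocol, and the helper names are invented:

```python
def widen(ipv4: str) -> bytes:
    """Expand each 8-bit component of an IPv4 address to 16 bits:
    '192.160.32.1' -> bytes 00C0 00A0 0020 0001."""
    return b"".join(int(part).to_bytes(2, "big") for part in ipv4.split("."))

def as_text(widened: bytes) -> str:
    """Text form stays dotted decimal, but components may exceed 255."""
    parts = [int.from_bytes(widened[i:i + 2], "big")
             for i in range(0, len(widened), 2)]
    return ".".join(str(p) for p in parts)

w = widen("192.160.32.1")
assert w.hex() == "00c000a000200001"      # binary form is expanded...
assert as_text(w) == "192.160.32.1"       # ...text form is unchanged

# Later, components above 255 become legal:
print(as_text(bytes.fromhex("1a2cb108029283f5")))  # 6700.45320.658.33781
```

The appeal is that every existing dotted-quad config file parses identically before and after the switch; the binary wire format is the only thing that changes.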

TP/IX and similar plans

Posted Jan 30, 2011 15:15 UTC (Sun) by tialaramex (subscriber, #21167) [Link] (3 responses)

You can't make things compatible by sheer force of will. TP/IX isn't really compatible with IPv4, the RFC leaves you in no doubt whatsoever about that if you ignore the repeated but unfounded assertions to the contrary. The "compatibility" advertised for TP/IX is the same compatibility delivered by dual-stack, in fact it _is_ dual-stack by another name. Everyone must speak both protocols if they want to participate on the Internet.

It's a huge sprawling mess, far more invasive than the eventual IPv6. I can only say that I'm indebted to whoever persuaded its author that his entirely new routing protocol and algorithms deserved their own "informational" RFC alongside this one, so that people asked to read about one needn't waste their time reading the other.

The economics (remember, that's why IPv6 has a fraction of a percent of penetration, rather than say 80%) are if anything worse. First you must get enough people to deploy "version 7". Your ten year estimate seems optimistic when you realise that although this costs a lot of money it delivers no immediate benefit whatsoever. Far more so than deploying IPv6 today, deploying "version 7" would have been a leap in the dark, trusting that some day we'd get the wider addresses working and it would be for the best. But then they must upgrade _again_ to have wider addresses. In fact they might have to do so repeatedly.

Ullmann's plan reserves 75% of the addresses, rationalising that they can be used if the initial proposal turns out to be a bad idea. A similar strategy was chosen in IPv6. But Ullmann reserves the wrong bits. An experienced engineer would know that if you leave the top few bits empty, some clown will either use them for a purpose you didn't intend, meaning they can never be put into production, or will confuse signed and unsigned terms and drop the top bit, again rendering it useless in practice. IPv6 was careful to use those top few bits for something obligatory, so implementers would notice and correct such bugs. [For another example, look at the way x86-64 handles virtual addresses.]

There are also numerous technical errors in the specification. For example, like some earlier LWN comments, it assumes that DNS records are just arbitrary text strings, so that changing to a system where 'A' answers are sometimes too long for IPv4 is fine and everything will just work. Leaving aside the naivety of imagining that no-one would inadvertently rely on something that's been true for as long as the system has existed, the simple technical answer is: no, you can't do that; DNS unambiguously defines A records as exactly 32 bits. Hence the existence of 'AAAA' records.
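The fixed 32-bit format is easy to see in code; a small sketch using only Python's standard library:

```python
import socket
import struct

# An A record's rdata is exactly four bytes: one 32-bit address.
rdata = socket.inet_aton("192.0.2.1")
assert len(rdata) == 4

# A resolver decoding it reads precisely 32 bits, nothing more:
(addr,) = struct.unpack("!I", rdata)
assert addr == (192 << 24) | (0 << 16) | (2 << 8) | 1

# There is no room to smuggle extra address bytes into an 'A'
# answer; a longer address needs a new record type (hence 'AAAA').
```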

TP/IX and similar plans

Posted Feb 5, 2011 16:46 UTC (Sat) by butlerm (subscriber, #13312) [Link] (1 responses)

First, no one would have to dual-stack. Same stack, larger address space. Routers would do wire-format conversion where necessary. Second, technical errors in the TP/IX RFC have no bearing on the merit of the principle described. The IETF was just incredibly short sighted on that point, preferring a start-from-scratch design to one where the transition would have been practically seamless.

In short, the counterargument here has to be not against the weaknesses of a seventeen-year-old network proposal per se, but rather against the entire idea of preserving the existing address space on a long-term basis without renumbering.

TP/IX and similar plans

Posted Feb 9, 2011 14:21 UTC (Wed) by tialaramex (subscriber, #21167) [Link]

"Same stack, larger address space"

Reality isn't interested in proof by assertion; if I call a frog a bird, that will not make it fly. TP/IX is a completely new stack. "Just" widening the address field (which is not what TP/IX does) is like "just" adding an extra floor to the middle of every house in the country.

"no one would have to dual stack"

"routers would do wire format conversion"

Do you even read what you're writing? In order to "do wire format conversion" the routers not only need to have both stacks, but they have to be able to convert from one to the other, a major additional expense. Worse, for TP/IX (and any alternative I can imagine) this is stateful. So the plan becomes "instead of buying an expensive IPv4 and IPv6 router, buy an even more expensive IPv4 and TP/IX router-converter" and you've made things worse, not better.

"technical errors in the TP/IX RFC have no bearing on the merit of the principle described. The IETF was just incredibly short sighted"

Of course, how stupid of me. It doesn't need to actually work; someone who needs things to work is being "short sighted". I can't address problems in a non-existent alternative, and neither can the IETF.

Fortunately, thanks to such "short sighted" people, we have a plan: not a painless or easy plan, but one that will work. Now it remains to be seen whether everyone will implement it, and how long that will take.

TP/IX and similar plans

Posted Feb 9, 2011 17:21 UTC (Wed) by daniel (guest, #3181) [Link]

<quote>DNS unambiguously defines A records as exactly 32-bits. Hence the existence of 'AAAA' records</quote>

For the proposal at hand, which as I understand it is to extend the IPv4 address space by 16 bits on the right, an 'AA' record will do and only needs to be implemented at the edges of the classic IPv4 space. In other words, at points under complete control of participants in the experiment.

An 'AA' record would sensibly be defined with the classic IPv4 space in the middle four bytes, the IPv4++ bytes on the right, and the two remaining bytes on the left reserved for future expansion.
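To make that layout concrete, here's a sketch of packing such a hypothetical 'AA' rdata in Python. The record type, field order, and helper name all come from the comment above; none of this exists in real DNS.

```python
import socket
import struct

def pack_aa(ipv4: str, ext: int, reserved: int = 0) -> bytes:
    """Pack the hypothetical 8-byte 'AA' rdata sketched above:
    2 reserved bytes | 4 classic IPv4 bytes | 2 extension bytes."""
    return (struct.pack("!H", reserved)
            + socket.inet_aton(ipv4)
            + struct.pack("!H", ext))

rdata = pack_aa("192.0.2.1", ext=7)
assert len(rdata) == 8

# A legacy edge device could recover the classic IPv4 address by
# slicing out the middle four bytes:
assert socket.inet_ntoa(rdata[2:6]) == "192.0.2.1"
```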

What about IPv6 right here on earth?

Posted Feb 8, 2011 18:45 UTC (Tue) by daniel (guest, #3181) [Link] (1 responses)

<quote>you've moved the legacy 32-bits from a suffix, in one corner of the address space, to a prefix. So you don't actually fix the exhaustion problem, because now there are far more addresses, but they're still all allocated to someone already. So your whole plan just wastes everybody's time. Ouch.</quote>

Wait, could you back up and step through that again for me? I fail to see why more addresses is not a step in the right direction, even if they are "already allocated to someone". Reallocate, maybe? Share maybe?

I'm labelling your argument "bifurcation" for the time being.

What about IPv6 right here on earth?

Posted Feb 8, 2011 23:56 UTC (Tue) by dlang (guest, #313) [Link]

by making the address space larger I allow all the entities that have IP addresses to have a lot more.

yes, my plan favors the established ISPs that have IP addresses, as each IP address they currently have becomes a class A network (or larger). but in practice you have to get the established ISPs' agreement anyway before you can use any IP addresses (they have to agree to peer with you and accept your advertisement), so I don't see this as a major problem

What about IPv6 right here on earth?

Posted Jan 26, 2011 12:24 UTC (Wed) by drag (guest, #31333) [Link] (7 responses)

> Without touching any of that (i.e. configuration), can I talk to IPv6 hosts out there? I'm guessing the answer to this question is probably no.

Yeah. Nobody in IPv4 can talk to you unless you have an IPv4 address. In my case I will still use the IPv4 NAT firewall for that.

But if you're running commercial services then you can use an IPv4-compatible IPv6 address.

Most commercial routers, home consumer kit aside, have supported IPv6 for a long time now. Probably most stuff sold to businesses since 2005 or so has IPv6 support. Most corporations are going to have IPv6-aware devices on their network right now. My Android phone supports IPv6 access points.

It's going to be a security issue for a lot of networks... if I use IPv6 I can bypass most restrictions in most networks. Just because you're not using IPv6 on your network doesn't mean that somebody else isn't. :P

I'd say that most corporate networks deployed in 2006 or so are probably IPv6-ready now... or nearly so. They just have to turn it on.

There are enough reasons to use IPv6 besides just the bigger IP space that people are going to look back and say "WTF didn't we do this earlier?"

What about IPv6 right here on earth?

Posted Jan 26, 2011 14:23 UTC (Wed) by bojan (subscriber, #14302) [Link] (6 responses)

I think your post illustrates what is wrong with this transition. If the transition had been planned properly, your answer would have been "yes", or even "why do you ask?"

What about IPv6 right here on earth?

Posted Jan 26, 2011 15:20 UTC (Wed) by drag (guest, #31333) [Link] (5 responses)

How would it be possible for IPv4 systems to be able address a system that they were never designed to address?

What about IPv6 right here on earth?

Posted Jan 26, 2011 16:11 UTC (Wed) by bojan (subscriber, #14302) [Link] (4 responses)

You should really read DJB's text. Software would be upgraded over the years so that it could (this is how we got dual 4/6 stacks anyway). However, the existing addresses and setups would stay the same.

In other words, systems would be upgraded to IPv6 in place, with no additional configuration required. Networks, dns, firewalls, services, routers etc. would keep working as usual.

We would have had almost 10 years for all this. More than enough. Too late now.

What about IPv6 right here on earth?

Posted Jan 26, 2011 19:27 UTC (Wed) by johill (subscriber, #25196) [Link] (3 responses)

I don't think it would have happened that way -- the vendors would still have bought the cheaper router that is aware only of 4-byte address matching (in silicon).

You're assuming that it's all software, and that upgrading your stack to a hypothetical IPv6 that is fully backward compatible with IPv4 would essentially have been free. Neither of those is true. Vendors would simply have offered to sell new, improved revisions of their existing, "legacy", IPv4-only devices -- without adding the more expensive silicon that is aware of the new, longer address matching.

What about IPv6 right here on earth?

Posted Jan 26, 2011 22:18 UTC (Wed) by bojan (subscriber, #14302) [Link] (2 responses)

> I don't think it would have happened that way -- the vendors would still have bought the cheaper router that is aware only of 4-byte address matching (in silicon).

And that's fine. If they don't want to route new, IPv6, hosts. Ever.

However, people connected to their network would automatically start asking for this routing. Because they would already be on IPv6. Without doing anything. Any new host they wished to access that had a real IPv6 address (i.e. not legacy IPv4) would be out of their reach. This would create a lot of complaints (hey, my friend can see this cool new site and I can't), which would either get rid of idiotic ISPs or force them to upgrade. Without the need to tell customers that they need to do something special to see IPv6 hosts.

What about IPv6 right here on earth?

Posted Jan 27, 2011 12:01 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

They would "start asking for this routing"? Like they're asking for IPv6, right? Even better, why should I care that Joe Bloggs on the other side of the world wants me to spend millions of dollars upgrading my network? He's not _my_ customer.

What about IPv6 right here on earth?

Posted Feb 3, 2011 14:43 UTC (Thu) by farnz (subscriber, #17727) [Link]

We already have the situation you're discussing - I have today's IPv6 and IPv4, and can get to any cool sites that exist on IPv6 only.

Problem: there are no cool sites that are available on IPv6 and not IPv4. The reason? If I'm available on IPv4, close to 100% of my target market can get to me; if I'm not, only a tiny fraction of a percentage point can. The economics are simply not there; in your hypothetical "IPv4++" world, the idiotic ISP would respond with "you need to do this very complex thing (at least as complex as deploying IPv6 is in today's world) to get access - it's the site's fault for using IPv4++". Net result? Everyone continues to use plain IPv4, ignoring the extended addresses possible in IPv4++, because you haven't solved the chicken-and-egg problem.

Note also that thanks to buggy systems, it's not safe to dual-stack your hosts by default. There are machines out there which think they have working IPv6 routing, but don't - about 0.1% of Google users last time I looked for the figures. So, real world experiments tell us that even co-existence of two protocols doesn't work properly; this leads to a thought experiment. How exactly does IPv4++ handle the case of two IPv4++ hosts with an IPv4 only segment in the middle, such that you can successfully talk IPv4 but not IPv4++?

Any answer that assumes that IPv4++ can transit the IPv4 segment has failed already - IPv6 can transit over IPv4 segments, yet we still see brokenness. Any answer that implies that an IPv4 only host cannot distinguish IPv4++ traffic from traditional IPv4 traffic has failed already - the most common form of brokenness in the IPv6 world is IPv6 in IPv4 tunnelling, where the IPv4 network deliberately blocks protocol 41, and there would have to be some similar indication that this is IPv4++ traffic.
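The broken-routing figure above is why clients eventually grew fallback logic ("Happy Eyeballs"). A simplified sequential sketch in Python; real implementations race the address families in parallel rather than trying them one at a time:

```python
import socket

def connect_with_fallback(host, port, timeout=2.0):
    """Try the host's addresses in turn, preferring IPv6, so that a
    client whose IPv6 routing is broken still reaches the server.
    (Simplified: real "Happy Eyeballs" clients race the families.)"""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    infos.sort(key=lambda info: info[0] != socket.AF_INET6)  # IPv6 first
    last_err = None
    for family, stype, proto, _, sockaddr in infos:
        try:
            s = socket.socket(family, stype, proto)
        except OSError as e:       # family not supported at all
            last_err = e
            continue
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)    # a broken route fails or times out here...
            return s
        except OSError as e:
            s.close()
            last_err = e           # ...and we fall back to the next family
    raise last_err if last_err else OSError("no addresses for %r" % host)
```

Even this naive version papers over the ~0.1% breakage mentioned above, at the cost of a connection-timeout delay for every affected user.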

What about IPv6 right here on earth?

Posted Jan 26, 2011 3:47 UTC (Wed) by jthill (subscriber, #56558) [Link]

No, no, that big number is 2^80, so the entire IPv6 space is that much *times* your allocated /48 range. Let's see.... Mass of the Earth: 5.9724e27g. IPv6 net: 2^128. per address: 1.755e-11g, 17.55 picograms.

Jeez, they only gave you enough addresses to cover about 5kg of you. Stingy bastards.
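The grams-per-address figure is easy to check:

```python
# Quick check of the numbers in the comment above.
mass_earth_g = 5.9724e27        # mass of the Earth, in grams
n_ipv6 = 2 ** 128               # total IPv6 addresses
g_per_addr = mass_earth_g / n_ipv6
assert abs(g_per_addr * 1e12 - 17.55) < 0.01   # ~17.55 picograms each
```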

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 25, 2011 19:09 UTC (Tue) by rccrv (guest, #72268) [Link]

For those interested in solutions to the problem of a single address space being shared by the network and transport layers, the experimental Host Identity Protocol tries to solve it.

LCA: Vint Cerf on re-engineering the Internet

Posted Jan 26, 2011 7:36 UTC (Wed) by branden (guest, #7029) [Link]

"His talk started back in 1969..."

Dude's got STAMINA. Puts Fidel Castro to SHAME.

And I'd hate to see the honorarium check for a 42-year speech.

"Suboptimal user behavior"?

Posted Feb 6, 2011 0:55 UTC (Sun) by clemenstimpler (guest, #71914) [Link]

"He was 'most disturbed' that many of the problems are not technical, they are a matter of suboptimal user behavior - bad passwords, for example."

I'm a bit disturbed by that attitude: if a system requires "optimal user behavior", that may be an indication of bad design. Maybe we should put a sticker on PCs: "This wasn't made for you: use at your own risk!"


Copyright © 2011, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds