Routing

Posted Nov 19, 2009 5:14 UTC (Thu) by smurf (subscriber, #17840)
In reply to: Sender vs. Recipient latency by iabervon
Parent article: Reducing HTTP latency with SPDY

The whole thing will have to fall back to TCP when some firewall (NOT router!) is stupid. But supporting SCTP has to start somewhere, and that necessarily involves pressure from users. (Witness the ECN problem.)

I don't think multiplexing over TCP is a good interim solution. If people start to implement that, SCTP will never take off.
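
(For the curious, a minimal sketch of what such a fallback could look like on the client side, assuming a Linux host with kernel SCTP support and Python's stock socket module; the peer is a hypothetical placeholder and the actual SPDY/HTTP handling is omitted.)

    import socket

    def connect_with_fallback(host, port, timeout=5.0):
        # Try a one-to-one SCTP association first; fall back to plain TCP
        # if anything goes wrong (no kernel SCTP support, or a middlebox
        # silently eating the packets).
        try:
            sk = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                               socket.IPPROTO_SCTP)
            sk.settimeout(timeout)
            sk.connect((host, port))
            return sk, "sctp"
        except (OSError, AttributeError):
            sk = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sk.settimeout(timeout)
            sk.connect((host, port))
            return sk, "tcp"

    # conn, proto = connect_with_fallback("example.org", 80)  # hypothetical peer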

The question is, do you want a technically-sound solution (which probably involves IPv6: no NAT there), or a hack which will ultimately delay implementing that solution?

I suppose Google is all about hacks, at least in this area. Witness Android. :-P



But supporting SCTP has to start somewhere? Why?

Posted Nov 19, 2009 5:55 UTC (Thu) by khim (subscriber, #9252) [Link]

The whole thing will have to fall back to TCP when some firewall (NOT router!) is stupid.

For most people out there, the router is that thing connected to the cable modem, even if it's technically not a router but a firewall with NAT. If SCTP needs an update for that piece of plastic, it's DOA and not worth talking about.

The question is, do you want a technically-sound solution (which probably involves IPv6: no NAT there), or a hack which will ultimately delay implementing that solution?

Wrong question. The real question is: do you want a solution or handwaving? If the "technically-sound solution" is inevitable (i.e. the other proposed solutions either don't work at all or are just as invasive), it has a chance. If there is some other solution which is worse but works with the existing infrastructure... then the "technically-sound solution" can be written off immediately.

I suppose Google is all about hacks, at least in this area. Witness Android. :-P

Yup. Witness a system which works and is selling in the millions (albeit single-digit millions at this point) and compare it to "technically-sound solutions" which are scrapped and gone...

Google is about realistic solutions, not pipe dreams. IPv6 is acceptable even if it has this stupid fascination with the "technically-sound solution" approach, because there are things IPv4 just can't do. But SCTP... I'm not sure it'll ever be used, but I'm realistically sure it won't be used in the next 10 years.

SCTP is the heir apparent

Posted Nov 20, 2009 23:18 UTC (Fri) by perennialmind (subscriber, #45817) [Link]

When it comes to the open internet, I agree that it'll be a long, long time before SCTP could become broadly feasible, but that's because you're talking about upgrading a massive network. New protocols are not born on the internet, not anymore. New network protocols breed in the crevices of the LAN, and SCTP has a bright future there. Some of the newer protocols like SIP, iSCSI, and NFSv4 will happily sit atop SCTP. If you're going to set out to fix the same problems that SCTP tackles, you should at the very least define a mapping, as those protocols do. We don't need to keep the cruft forever, but it has to be a gradual upgrade. Encapsulate as needed: SCTP has a reasonable UDP layering. Because "internet access" translates to TCP port 80 for so many, you may have to define something like SPDY, but in that case shouldn't it simply be the TCP variant, on a level with SCTP? Even if it does take ten years, twenty years, won't you want to be able to drop the inefficient backwards compatibility at some point?

Comcast is upgrading their gear to IPv6 because /they/ need it. With the multi-homing support in SCTP, you should be able to sell it to Verizon, AT&T, Sprint, etc as being genuinely useful to /them/. They have the unique position of both owning the (huge) proprietary networks all the way to the edge and actually making substantial use of those same networks, so they have both the ability and the business interest to adopt SCTP that random servers and clients do not. Just because SCTP isn't ready to supplant TCP for the web doesn't diminish its usefulness right now.
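
(To make the multi-homing point a little more concrete, here is a rough sketch assuming a Linux host with kernel SCTP support; the port number is an arbitrary placeholder. Binding a one-to-one SCTP socket to the wildcard address makes the kernel advertise all of the host's addresses during association setup, which is what lets traffic fail over to another interface without breaking the connection.)

    import socket

    # One-to-one ("TCP-style") SCTP listener.  The wildcard bind means the
    # INIT/INIT-ACK handshake carries every local address, so a peer of a
    # multi-homed box can switch paths if one link dies.  Binding to only a
    # subset of addresses would need sctp_bindx(), which the stock socket
    # module does not wrap.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", 9999))               # 9999 is just an example port
    srv.listen(5)

    conn, peer = srv.accept()          # one SCTP association, possibly multi-homed
    conn.sendall(b"hello over SCTP\n")
    conn.close()
    srv.close()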

SCTP is the heir apparent

Posted Nov 22, 2009 0:56 UTC (Sun) by khim (subscriber, #9252) [Link]

Even if it does take ten years, twenty years, won't you want to be able to drop the inefficient backwards compatibility at some point?

Is it really so inefficient? Is it really impossible to make things more efficient while retaining compatibility? Witness the fate of Algol, which decided to "drop inefficient backwards compatibility at some point", and compare it with Fortran, which kept it around for decades. The same story played out with RISC and x86, and there are countless other examples. Compatibility is very important: it can only be dropped if there is no compatible way forward.

Comcast is upgrading their gear to IPv6 because /they/ need it.

Wrong emphasis. /They/ is irrelevant; /need/ is the imperative word.

With the multi-homing support in SCTP, you should be able to sell it to Verizon, AT&T, Sprint, etc as being genuinely useful to /them/.

You can try to do this, but it's almost too late. They are losing their networks and becoming just "another ISP" (albeit a big one). AOL already went this way; Verizon, AT&T and Sprint will follow. Sure, they'll try to delay it as much as possible, and maybe even survive long enough for SCTP to become a whole article in the history books rather than just a footnote, but ultimately it won't make much of a difference.

But supporting SCTP has to start somewhere? Why?

Posted Nov 25, 2009 14:27 UTC (Wed) by marcH (subscriber, #57642) [Link]

> For most people out there, the router is that thing connected to the cable modem, even if it's technically not a router but a firewall with NAT.

<pedantic>Every NAT or firewall is technically some kind of router</pedantic>

More generally speaking,

I do not think any core network device blocks SCTP (or anything else it does not recognize). So if two parties want to use SCTP, they can, just by reconfiguring their edge devices. Except for NATs, but NATs will disappear with the very near exhaustion of IPv4 addresses and the pressure of P2P applications.

You do not need the whole planet to be able to use a new protocol or service for it to get some traction. Zillions of people are forbidden from using Facebook (and others...) at work. Does that doom Facebook?

But supporting SCTP has to start somewhere? Why?

Posted Nov 25, 2009 15:36 UTC (Wed) by foom (subscriber, #14868) [Link]

> Except for NATs, but NATs will disappear with the very near exhaustion of IPv4 addresses
> and the pressure of P2P applications.

No they won't. Did you see the PR disaster Apple had when the AirPort Express supported IPv6 without NAT? Everyone suddenly went "OMG, my internal network is all exposed to the internet now, giant security hole!!!". And of course, they were right: that is unexpected behavior in today's world. So, no doubt about it, NAT will live on even with IPv6. (When I say NAT there, I really mean connection-tracking-based filtering: tacking the address translation on, or leaving it off, is trivial; it's the connection tracking which would cause problems with SCTP.)
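
(For readers who have not looked inside such a box: stripped of the address rewriting, connection-tracking-based filtering boils down to a table of flows that were initiated from the inside. A toy model, not tied to any real firewall implementation:)

    # Toy connection-tracking filter *without* NAT: outbound packets create
    # state, inbound packets are only accepted if they match a flow that was
    # started from the inside.  Real conntrack also handles timeouts, TCP
    # state machines, ICMP errors, protocol helpers, and so on.
    flows = set()   # {(inside_ip, inside_port, outside_ip, outside_port, proto)}

    def outbound(src, sport, dst, dport, proto):
        flows.add((src, sport, dst, dport, proto))
        return "ACCEPT"

    def inbound(src, sport, dst, dport, proto):
        # Reply direction: accepted only if the inside host spoke first.
        if (dst, dport, src, sport, proto) in flows:
            return "ACCEPT"
        return "DROP"   # unsolicited inbound traffic never reaches the host

    # An inside host opens a connection, the reply gets back in, but a
    # stranger's connection attempt is dropped.
    outbound("192.168.1.10", 40000, "203.0.113.5", 80, "tcp")
    assert inbound("203.0.113.5", 80, "192.168.1.10", 40000, "tcp") == "ACCEPT"
    assert inbound("198.51.100.7", 12345, "192.168.1.10", 22, "tcp") == "DROP"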

But supporting SCTP has to start somewhere? Why?

Posted Nov 25, 2009 18:57 UTC (Wed) by smurf (subscriber, #17840) [Link]

>> When I say NAT there, I really mean connection-tracking-based filtering

So why do you call it NAT, if no address is actually translated?

But supporting SCTP has to start somewhere? Why?

Posted Nov 25, 2009 23:55 UTC (Wed) by foom (subscriber, #14868) [Link]

Because when people say "XXX is broken because of NAT", they actually mean "XXX is broken because of stateful connection tracking and filtering".

They just say "NAT" because stateful connection tracking and filtering is an integral part of NAT, and NAT is the most common case. Of course it's possible to do the connection tracking without the address rewriting, but the important thing to note is that it is not any less complex, and it causes no fewer problems.

It still prevents you from having an end-to-end internet.

You'd still want protocol-specific parsing in order to find "related" connections which should be allowed through (e.g. with FTP). You'd still need a protocol like UPnP or NAT-PMP in order to advise the firewall to open a hole for things like BitTorrent. There's almost no advantage at that point versus actually having a NAT.
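
(A small illustration of why that protocol-specific parsing is unavoidable, using FTP as the classic example: the PORT command carries the data-connection endpoint inside the payload, so the filter has to parse it to know which "related" connection to expect. The sample command below is made up.)

    import re

    # FTP "PORT h1,h2,h3,h4,p1,p2" encodes the client's data-connection
    # address and port in the command text.  A conntrack helper parses this
    # to whitelist the server's incoming data connection; a pure 5-tuple
    # filter has no way to know that connection is "related".
    PORT_RE = re.compile(r"PORT (\d+),(\d+),(\d+),(\d+),(\d+),(\d+)")

    def expected_data_endpoint(ftp_command):
        m = PORT_RE.match(ftp_command)
        if not m:
            return None
        h1, h2, h3, h4, p1, p2 = map(int, m.groups())
        return "%d.%d.%d.%d" % (h1, h2, h3, h4), p1 * 256 + p2

    print(expected_data_endpoint("PORT 192,168,1,10,156,64"))
    # -> ('192.168.1.10', 40000)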

But supporting SCTP has to start somewhere? Why?

Posted Nov 26, 2009 7:57 UTC (Thu) by smurf (subscriber, #17840) [Link]

>> There's almost no advantage at that point versus actually having a NAT.

Sure there is.

You avoid starving the router of TCP (or SCTP) ports. You avoid having to mangle TCP packets because they happen to contain addresses. You avoid IP address based "one-connection-per-client" limits on servers.

In short, you can use simpler servers and routers, which translates to fewer bugs and less power-hungry CPUs.

But supporting SCTP has to start somewhere? Why?

Posted Nov 25, 2009 23:45 UTC (Wed) by marcH (subscriber, #57642) [Link]

You are talking about default settings. I am talking about what is to become possible. Both are interesting, but quite different.

But supporting SCTP has to start somewhere? Why?

Posted Nov 26, 2009 10:04 UTC (Thu) by marcH (subscriber, #57642) [Link]

> Except for NATs, but NATs will disappear...

Sorry, I actually meant:

So if two parties want to use SCTP, they can, just by reconfiguring their edge devices. Except when they have only one old public IPv4 address to share. But quite soon many people will have ZERO IPv4 addresses to share, which will ironically solve the only major deployment problem of SCTP.

