
Should the IETF ship or skip HTTP 2.0?

By Nathan Willis
May 29, 2014

As the Internet Engineering Task Force (IETF) moves closer to finalizing the HTTP 2.0 standard (a.k.a. HTTP/2), there is a counter-call for the as-yet-unreleased standard to be dropped. Proponents of that move contend that effort should be put into a follow-up that fixes several problems that (it is argued) are intrinsic to HTTP 2.0. An Internet standard never seeing the light of day is nothing new, of course, but it is still an understandably difficult decision to make for those people who have put in considerable time and effort.

HTTP/2 has been in development since 2007. As we have noted before, the revised protocol adds techniques like request multiplexing and header compression to how HTTP is sent over the wire, but it intentionally does not change the semantics for requests, response codes, and so on. The goal is to optimize HTTP traffic flow, for example by reducing latency. The initial draft of the new specification was derived from Google's SPDY, which began as an in-house experiment.

Today, most browser vendors and quite a few high-traffic web services (including Google) support the yet-to-be-finalized HTTP/2. That plus, perhaps, general fatigue with the length of the standardization process, has led some people to ask that the new revision be declared done and given the official IETF seal of approval. On May 24, HTTP Working Group Chair Mark Nottingham proposed marking the latest draft as an Implementation Draft at the HTTP Working Group's upcoming June meeting, then issuing a Last Call (LC). If there is no major objection raised after the LC, HTTP/2 would make its way out of the working group and toward standardization by the Internet Engineering Steering Group (IESG), which makes the final call.

But not everyone in the HTTP Working Group is satisfied with the state of the HTTP/2 draft, and some of the criticisms run deep. In reply to Nottingham, Greg Wilkins said that "I do not see a draft that is anywhere near to being ready for LC [Last Call]." He enumerated what he sees as four major problems:

  • The state machine described for processing multiplexed HTTP streams does not match the states that the rest of the specification describes for HTTP/2 streams.
  • There are unsolved problems with the HPACK header compression algorithm, including inefficiencies and risks that incorrect implementations will leak information.
  • The protocol allows data to be included in HTTP headers; since headers are not subject to flow control, segmentation, or size limits, malicious parties could exploit this to unfairly monopolize a connection (a toy illustration of this asymmetry follows the list).
  • There is no clear layering between HTTP/2 frames, requests, and streams.
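
To make the third point concrete, here is a toy Python model of the concern. It is only an illustrative sketch of the asymmetry, not the HTTP/2 framing layer itself; the class and the frame handling are invented for this example, and only the 65,535-byte default window value comes from the draft.

    INITIAL_WINDOW = 65535  # HTTP/2's default flow-control window, in bytes

    class ToyConnection:
        def __init__(self):
            self.window = INITIAL_WINDOW

        def send_data(self, payload):
            """DATA frames are flow controlled: they consume window credit."""
            if len(payload) > self.window:
                return False          # sender must wait for a WINDOW_UPDATE
            self.window -= len(payload)
            return True

        def send_headers(self, header_block):
            """HEADERS/CONTINUATION frames are not flow controlled, so bulk
            data smuggled into header values bypasses the window entirely."""
            return True               # accepted regardless of size

    conn = ToyConnection()
    print(conn.send_data(b"x" * 70000))     # False: blocked by flow control
    print(conn.send_headers(b"x" * 70000))  # True: sails through unchecked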

Several other members of the group concurred with Wilkins's concerns. Nottingham replied that there was still time to fix problems in the specification, but said that the pressure from implementers to establish and adhere to a schedule was important to consider, too:

We’re clearly not going to make everyone satisfied with this specification; the best we can do is make everyone more-or-less equally dissatisfied. Right now, I’m hearing dissatisfaction from you and others about spec complexity at the same time I’m hearing dissatisfaction from others about schedule slips...

As to the specific complaints, Nottingham acknowledged that some of the pieces may not be ideal, but remain the best that the participants have been able to create. HPACK, for instance, "is more complex than we’d like, in that there isn’t an off-the-shelf algorithm that we can use (as was the case with gzip)." Yet, after repeated discussions, the group has always decided to stick with it. Moreover, there have already been discussions about HTTP/3, and "while there was a ~15 year gap between HTTP/1.1 and HTTP/2, it’s very likely that the next revision will come sooner."
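
For readers wondering what makes HPACK different from simply running gzip over the header block, the sketch below illustrates the table-indexing idea at its core. It is a deliberately simplified toy: the real draft defines a fixed static table, a size-bounded dynamic table, and Huffman coding, none of which are reproduced here, and the table contents are invented for the example.

    # Toy illustration of HPACK-style header indexing; not the draft's actual
    # static table, dynamic-table eviction, or bit-level encoding.
    STATIC_TABLE = [(":method", "GET"), (":scheme", "https"), (":status", "200")]

    def encode(headers, dynamic_table):
        out = []
        for pair in headers:
            table = STATIC_TABLE + dynamic_table
            if pair in table:
                out.append(("index", table.index(pair) + 1))  # one small integer
            else:
                out.append(("literal", pair))   # sent in full the first time...
                dynamic_table.append(pair)      # ...and indexable afterward
        return out

    dyn = []
    print(encode([(":method", "GET"), ("user-agent", "example/1.0")], dyn))
    print(encode([(":method", "GET"), ("user-agent", "example/1.0")], dyn))
    # On the second request, both headers collapse to table indices.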

On May 26, Poul-Henning Kamp posted a rather pointed response (titled "Please admit defeat") to Nottingham's email, specifically the prospects for a sequel to HTTP/2. If the working group already knows that HTTP/2 will require a follow-up in HTTP/3 to fix important problems, he said, then the group should simply drop HTTP/2 and develop its successor.

The WG took the prototype SPDY was, before even completing its previous assignment, and wasted a lot of time and effort trying to goldplate over the warts and mistakes in it.

And rather than "ohh, we get HTTP/2.0 almost for free", we found out that there are numerous hard problems that SPDY doesn't even get close to solving, and that we will need to make some simplifications in the evolved HTTP concept if we ever want to solve them.

Now even the WG chair [publicly] admits that the result is a qualified fiasco and that we will have to replace it with something better "sooner".

Kamp argued that pushing out HTTP/2 would waste the time of numerous implementers, as well as introduce code churn that may carry unforeseen security risks. Unsurprisingly, Nottingham did not concur with that assessment. In addition to suggesting that Kamp's wording overstated matters (taking issue, for instance, with the "fiasco" sentence quoted above), Nottingham replied that HTTP implementers feel that the protocol draft is close to being ready to ship, despite any shortcomings. At this stage, he said, technical proposals are what are required.

Nottingham also pointed out that one of Kamp's objections was that HTTP/2 leaves unfixed some bad semantics that have been around since HTTP/1.1. Many people might agree, Nottingham said, but changing the semantics of HTTP is specifically out of scope for the HTTP/2 effort, since it would break compatibility with existing browsers and web servers.

There may be quite a few things about HTTP that still need fixing after HTTP/2, but that is one of the reasons Nottingham cited for wrapping up the HTTP/2 standardization process: once it is completed, the community can move on. There has been a fifteen-year (and counting) gap between HTTP/1.1 and HTTP/2; the longer the gap, the harder it becomes to avoid breaking compatibility with existing implementations, if for no other reason than that there are simply more browsers and sites.

At this point, it is still possible that HTTP/2 will undergo more revision before it makes it to the final stage of standardization. Several other working group members had concerns about HPACK, and it has been proposed that the compression algorithm be made a negotiable parameter, so that future revisions could drop in an improvement. What does seem clear, however, is that HTTP/2 is moving forward even if not everyone is satisfied with it.

Lack of universal agreement, of course, is not an uncommon problem with standards. As Nottingham noted, the browser and web-server vendors are more-or-less ready to see HTTP/2 reach official approval—which would seem to place HTTP/2 well in line with the IETF's longstanding mantra of "rough consensus and running code." There may indeed be problems that are not discovered until implementation is widespread; perhaps the best option for dealing with them will be to start work on solutions well before another fifteen years have elapsed.

[Thanks to Paul Wise and James Andrewartha for bringing this story to our attention.]



Should the IETF ship or skip HTTP 2.0?

Posted May 30, 2014 5:25 UTC (Fri) by Asebe8zu (subscriber, #24600) [Link] (1 responses)

..."the best we can do is make everyone more-or-less equally dissatisfied. Right now, I’m hearing dissatisfaction from you and others about spec complexity at the same time I’m hearing dissatisfaction from others about schedule slips..."

This sounds like the average manager comparing functional qualities with planning stress.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 1, 2014 22:10 UTC (Sun) by marcH (subscriber, #57642) [Link]

> This sounds like the average manager comparing functional qualities with planning stress.

Indeed - a standardization body is the last place where I expected to hear this. A standard is not a product and products never need to wait for the final, correct version of a standard.

Just print something and call it HTTP 1.99-beta if that makes some people happier.

Should the IETF ship or skip HTTP 2.0?

Posted May 30, 2014 7:06 UTC (Fri) by gren (subscriber, #3954) [Link] (13 responses)

Thanks. I'm glad to see some analysis of what led up to the "please admit defeat" message and the issues it was complaining about.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 3, 2014 9:58 UTC (Tue) by nim-nim (subscriber, #34454) [Link] (12 responses)

Well, the analysis is rather short and does not really explain why people are not satisfied with http/2.

As noted by phk, http/2 is a rather unbalanced protocol that shows its Google roots, and in the name of expediency the IETF refused to fix a lot of its problems:

1. it gained approval by some privacy groups by enshrining TLS, but without real analysis of http privacy implications. As a result it only secures Google/facebook… data mining

2. it is a "no new features" protocol, except that it includes server push (which completely changes http security)

3. on the other hand the IETF refused to open the cookie issue despite it being trivial to solve (don't save anything client side, provide a session id). The IETF argued a cookie-less protocol would see no adoption despite contrary evidence (the same people claimed the EU's requirement to tell users about cookies could not be implemented)

Should the IETF ship or skip HTTP 2.0?

Posted Jun 3, 2014 10:07 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (9 responses)

How is a session ID not a cookie? It'll have all the problems of cookies if you want it to have equivalent functionality.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 3, 2014 18:00 UTC (Tue) by nim-nim (subscriber, #34454) [Link] (8 responses)

A session ID pushes data persistence server side and can be severely scoped by the browser, instead of the way cookies make mass tracking dirt cheap (save anything you want in the cookie, allow everything to read it, no data costs, no need to synchronise servers, your target is doing all the work for you).

What changed in the past years is not the ability to spy on people, but that it's so cheap you can even set it up just in case you need it later. Making it a little harder would go a long way to limit opportunistic abuses.
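
To make the idea concrete, here is a minimal sketch of what "push data persistence server side" could look like: the browser holds only an opaque, short-lived identifier, and all state lives on the server. The names, the in-memory store, and the one-day lifetime are assumptions for illustration, not anything fleshed out in the working-group archives.

    import secrets
    import time

    SESSION_TTL = 24 * 60 * 60      # e.g. scoped to one day, per the suggestion above
    _sessions = {}                  # server-side state: session id -> (expiry, data)

    def new_session():
        sid = secrets.token_urlsafe(32)   # opaque: the id itself carries no data
        _sessions[sid] = (time.time() + SESSION_TTL, {})
        return sid

    def session_data(sid):
        entry = _sessions.get(sid)
        if entry is None or entry[0] < time.time():
            _sessions.pop(sid, None)      # expired or unknown: nothing to mine
            return None
        return entry[1]

    sid = new_session()
    session_data(sid)["user"] = "alice"   # state stays on the server
    print(len(sid), "characters is all the client ever stores")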

Should the IETF ship or skip HTTP 2.0?

Posted Jun 3, 2014 18:04 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (7 responses)

Most cookies are used to store session IDs. For example, the Evil Google Cookie only stores a longish session ID.

Next, currently cookies are easily scoped - by their domain. How do you propose to scope session IDs?

Should the IETF ship or skip HTTP 2.0?

Posted Jun 4, 2014 6:59 UTC (Wed) by nim-nim (subscriber, #34454) [Link] (3 responses)

Scoping by fqdn and browser session, or fqdn + 1 day/week max.

That would be sufficient to limit abuses.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 4, 2014 20:58 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

Won't help at all. For example, if I place an image from http://google.com/someanalytics on my page and you have a session ID for google.com domain then you'd still be tracked.

And of course, I personally _want_ lots of my sessions to last more than 1 day or week.

And lastly, nobody stops you from deleting cookies every day or restricting them in any way.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 5, 2014 11:54 UTC (Thu) by mathstuf (subscriber, #69389) [Link] (1 responses)

> And lastly, nobody stops you from deleting cookies every day or restricting them in any way.

No, because if one gets intercepted, that cookie is good for years.

> For example, if I place an image from http://google.com/someanalytics on my page and you have a session ID for google.com domain then you'd still be tracked.

Use RequestPolicy and don't let J. Random Website force your browser to communicate with any other site. Saves bandwidth too.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 5, 2014 14:25 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> No, because if one gets intercepted, that cookie is good for years.
So? If someone intercepts your session ID they'd still be able to access your data for the duration of the session.

> Use RequestPolicy and don't let J. Random Website force your browser to communicate with any other site. Saves bandwidth too.
You are free to do that with cookies.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 6, 2014 21:15 UTC (Fri) by job (guest, #670) [Link] (2 responses)

Dear god no, don't put "easily" in the same sentence as cookie scoping. Have you actually looked at the ghastly ad-hoc spaghetti that governs that? It involves hard-coding pretty much all the TLDs, and that's just the beginning of it.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 6, 2014 21:20 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

Whut?

Cookie scoping is easy: http://tools.ietf.org/html/rfc6265#section-4

Should the IETF ship or skip HTTP 2.0?

Posted Jun 6, 2014 22:20 UTC (Fri) by nybble41 (subscriber, #55106) [Link]

Oh, sure, "easy". Until you read sections 5.1.2 regarding canonical host names, and 5.3.5 (which I think "job" was referring to) regarding the ever-varying list of "public suffixes" requiring special consideration--without which any random example.com could register a cookie for "com." and have it scoped over nearly all commercial websites.
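
As a rough illustration of why that matters, here is the "domain-match" rule from RFC 6265 section 5.1.3 sketched in Python. The suffix test on its own would happily accept Domain=com, which is exactly why the separately maintained public-suffix list has to be consulted before a cookie's domain attribute is honored.

    def domain_match(request_host, cookie_domain):
        """Rough sketch of RFC 6265 section 5.1.3; real user agents also
        canonicalize the host and reject public suffixes up front."""
        request_host = request_host.lower().rstrip(".")
        cookie_domain = cookie_domain.lower().lstrip(".").rstrip(".")
        if request_host == cookie_domain:
            return True
        # "Ends with the domain, and the preceding character is a dot."
        return request_host.endswith("." + cookie_domain)

    print(domain_match("www.example.com", "example.com"))  # True, as intended
    print(domain_match("www.example.com", "com"))          # also True, which is why
                                                            # Domain=com must be
                                                            # rejected by other means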

Should the IETF ship or skip HTTP 2.0?

Posted Jun 3, 2014 17:05 UTC (Tue) by intgr (subscriber, #39733) [Link] (1 responses)

> IETF refused to open the cookie issue despite it being trivial to solve (don't save anything client side, provide a session id).

Doesn't sound trivial to me. Is there a more detailed proposal for this?

Should the IETF ship or skip HTTP 2.0?

Posted Jun 3, 2014 18:04 UTC (Tue) by nim-nim (subscriber, #34454) [Link]

All the proposals have been shot down in the work group before getting the chance to be fleshed out. You can find them in the archives, with the constant refusal of the group head to open this subject.

Should the IETF ship or skip HTTP 2.0?

Posted May 30, 2014 7:54 UTC (Fri) by jezuch (subscriber, #52988) [Link]

So... are the advantages of HTTP/2 worth it? Would we really be worse off if we dropped it? Yes, there are some nice things in there, but I don't think I've seen anything that would clearly say "yes" to these questions...

Please admit insecurity

Posted May 30, 2014 16:56 UTC (Fri) by Max.Hyre (subscriber, #1054) [Link] (1 responses)

Why should we get another specification with security holes built in?

“Rough consensus and running code” got the Internet underway, but the world’s changed a lot since then. Private information transfer is under attack from groups as diverse as the NSA/GCHQ, the RIAA/MPAA, and ISPs hungry for more profit, and every shortcoming, misunderstanding, or misimplementation offers another attack vector for them.

Each of Greg Wilkins's four problems introduces another opportunity for such misfeatures (or even seems to guarantee them in the cases of HPACK and headers with data). The IETF now asserts that pervasive monitoring is an attack.

Heaven knows I understand the pain of having worked for six years without a release, but how can they approve a standard that makes things worse?

Please admit insecurity

Posted May 30, 2014 18:59 UTC (Fri) by ballombe (subscriber, #9523) [Link]

I offer this link to a post of Poul-Henning Kamp in the same thread
<http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJ...>

Should the IETF ship or skip HTTP 2.0?

Posted Jun 4, 2014 17:14 UTC (Wed) by josh (subscriber, #17465) [Link] (5 responses)

HTTP 2 seems to have suffered from the second system effect. Picking up SPDY and declaring it HTTP 1.2 would have helped greatly; instead, this effort seems to have tried to solve every problem with HTTP simultaneously rather than establishing an incremental standard that solves a common subset of clear problems.

Inventing a new compression algorithm for HTTP headers (HPACK), rather than using an off-the-shelf compression algorithm (like zlib), seems like a good example of that. HPACK has the stated rationale of avoiding attacks like CRIME, but rather than add controls to existing compression algorithms to avoid attacks (such as not compressing sensitive headers, or otherwise trading off compression for hardening), it invents a new compression format. That compression format addresses some of *today's* problems (though apparently not even all of those), but when (not if) the next such problem appears, we'll just get stuck with HPACK plus modifications and mitigations rather than zlib plus modifications and mitigations.
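
As a concrete (and purely hypothetical) version of the "don't compress sensitive headers" control josh mentions, the sketch below keeps zlib for the bulk of the header block but emits sensitive headers uncompressed, so their contents cannot influence the compressed length in the way CRIME exploits. The SENSITIVE set and the toy framing are assumptions for illustration; nothing like this was adopted by the working group.

    import zlib

    SENSITIVE = {"cookie", "authorization", "set-cookie"}

    def encode_headers(headers):
        compressible, literal = [], []
        for name, value in headers.items():
            line = "%s: %s\r\n" % (name.lower(), value)
            (literal if name.lower() in SENSITIVE else compressible).append(line)

        compressed = zlib.compress("".join(compressible).encode())
        plain = "".join(literal).encode()
        # Toy framing: a 4-byte length prefix in front of each section.
        return (len(compressed).to_bytes(4, "big") + compressed +
                len(plain).to_bytes(4, "big") + plain)

    blob = encode_headers({
        "accept": "text/html",
        "user-agent": "example/1.0",
        "cookie": "session=do-not-compress-me",   # kept out of the deflate stream
    })
    print(len(blob), "bytes on the wire")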

Should the IETF ship or skip HTTP 2.0?

Posted Jun 4, 2014 17:51 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

They actually tried it with ZLIB with a prepopulated dictionary. Still doesn't work, because compression leaks data.

And if you think about it, pretty much ALL headers are security-sensitive.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 4, 2014 19:20 UTC (Wed) by josh (subscriber, #17465) [Link] (3 responses)

> They actually tried it with ZLIB with a prepopulated dictionary. Still doesn't work, because compression leaks data.

That's fixable, as evidenced by HPACK. An unmodified zlib leaks information; a version of zlib modified to support tradeoffs between security and compression need not leak information.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 4, 2014 19:50 UTC (Wed) by raven667 (subscriber, #5198) [Link] (2 responses)

But if you modify it, is it still zlib anymore? You might have a custom encoding that can use stock zlib to decode and remain compatible but if you change the decoding then you still have effectively built a custom system and will have the maintenance costs of a custom system, not the costs of using a common library.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 4, 2014 20:05 UTC (Wed) by josh (subscriber, #17465) [Link] (1 responses)

You definitely don't want to change the encoding; it should remain deflate-compatible. Just provide ways to control zlib compression to meet the security requirements.

Should the IETF ship or skip HTTP 2.0?

Posted Jun 4, 2014 20:46 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

If you CAN offer a way to do it, then feel free to offer your ideas to the IETF WG. They haven't found a way to do header compression securely with zlib.

Getting rid of bad design

Posted Jun 5, 2014 10:42 UTC (Thu) by Wol (subscriber, #4433) [Link]

Reading tfa, I think there is a good reason for shipping it.

But before you do

1) Add the requirement that an http/2 compliant browser/server may not *initiate* the use of deprecated features, and

2) All those naff features? Deprecate them!

Not quite sure how you get round deprecated features with no alternative; maybe mark them for future deprecation and introduce 2.1, 2.2, etc. as replacements get invented.

But that way, you can easily start designing http/3, knowing that all those features will be disappearing.

Cheers,
Wol

Should the IETF ship or skip HTTP 2.0?

Posted Jun 6, 2014 21:40 UTC (Fri) by job (guest, #670) [Link]

I always found it slightly worrying that out of all the things that could be improved with HTTP, they chose this. As if we didn't have enough protocols overlaid on HTTP, now we want to run TCP on top of it as well!

(By the way, I'm far from convinced that stream protocols along the lines of SCTP aren't a better way to achieve stream multiplexing. Sure, there would be compatibility problems, but the endpoint could choose to use it only when available and let the problems sort themselves out over the next decade. It's not as if there aren't firewall issues with SPDY.)

The most glaring omission with HTTP must be session management. This has been bolted on with cookies, but that does not work very well in practice. It makes it very difficult to know when you can serve cached documents, since cookies can carry all sorts of meanings. Their security semantics are all over the place and they can leak a thousand ways -- not to mention the gouge-your-eyes-out rules on which domains get to use them. Nobody uses HTTP authentication for public web sites simply because there is no login session management. That is why we can't have nice things such as SRP. Instead we send passwords back and forth over the wire. In 2014.

So there's plenty of work to do that could improve security and reliability in obvious ways. But instead we get ... multiplexing and compression? That may shave off a few bytes here and there? That's close to useless. Most sites could do an order of magnitude better with a single run of pngcrush. Even Google, who supposedly runs a tight ship, could shave off thousands of bytes on every home page request if they structured their markup a bit tighter. But they don't. Because it doesn't matter.

