Should the IETF ship or skip HTTP 2.0?
As the Internet Engineering Task Force (IETF) moves closer to finalizing the HTTP 2.0 standard (a.k.a. HTTP/2), there is a counter-call to drop the as-yet-unreleased standard entirely. Proponents of that move contend that the effort would be better spent on a follow-up that fixes several problems which, it is argued, are intrinsic to HTTP 2.0. An Internet standard never seeing the light of day is nothing new, of course, but abandoning one is still an understandably difficult decision for those who have put considerable time and effort into it.
HTTP/2 has been in development since 2012. As we have noted before, the revised protocol adds techniques like request multiplexing and header compression to how HTTP is sent over the wire, but it intentionally does not change the semantics for requests, response codes, and so on. The goal is to optimize how HTTP traffic flows, for example by reducing latency. The initial draft of the new specification was derived from Google's SPDY, which began as an in-house experiment.
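As a toy illustration of what multiplexing buys (this is not the real wire format; the frame sizes, stream numbers, and round-robin scheduling below are invented for the example), several responses can be split into small frames and interleaved on a single connection, so that one large response no longer holds up the smaller ones queued behind it:

    # Toy sketch only: split each response into frames, then interleave
    # them round-robin onto one connection.
    def frames(stream_id, payload, chunk=8):
        return [(stream_id, payload[i:i + chunk])
                for i in range(0, len(payload), chunk)]

    def multiplex(responses):
        queues = [frames(sid, body) for sid, body in responses.items()]
        wire = []
        while any(queues):                  # until every stream is drained
            for q in queues:
                if q:
                    wire.append(q.pop(0))   # one frame per stream per pass
        return wire

    # Three concurrent responses of different sizes on streams 1, 3, and 5.
    for stream_id, chunk in multiplex({1: b"A" * 24, 3: b"B" * 8, 5: b"C" * 16}):
        print(stream_id, chunk)

With plain HTTP/1.1, by contrast, the 24-byte response would have to finish before either of the others could start on the same connection.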
Today, most browser vendors and quite a few high-traffic web services (including Google's) support the yet-to-be-finalized HTTP/2. That, plus perhaps some general fatigue with the length of the standardization process, has led some people to ask that the new revision be declared done and given the official IETF seal of approval. On May 24, HTTP Working Group Chair Mark Nottingham proposed marking the latest draft as an Implementation Draft at the working group's upcoming June meeting, then issuing a Last Call (LC). If no major objections are raised during the LC, HTTP/2 would make its way out of the working group and toward standardization by the Internet Engineering Steering Group (IESG), which makes the final call.
But not everyone in the HTTP Working Group is satisfied with the state of the HTTP/2 draft, and some of the criticisms run deep. In reply to Nottingham, Greg Wilkins said that "I do not see a draft that is anywhere near to being ready for LC [Last Call]." He enumerated what he sees as four major problems:
- The state machine described for processing multiplexed HTTP streams does not match the states that the rest of the specification describes for HTTP/2 streams.
- There are unsolved problems with the HPACK header compression algorithm, including inefficiencies and risks that incorrect implementations will leak information.
- The protocol allows data to be included in HTTP headers; since headers are not subject to flow control, segmentation, or size limits, malicious parties could exploit this to unfairly monopolize a connection (see the sketch after this list).
- There is no clear layering between HTTP/2 frames, requests, and streams.
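To make the data-in-headers complaint concrete, here is a toy model (entirely my own; the numbers and frame types are illustrative, not taken from the draft) in which DATA frames are paced by the receiver's flow-control window while HEADERS frames are not, so a peer that smuggles its payload into header fields can push bytes without limit:

    WINDOW = 16 * 1024                     # receiver-advertised window

    def try_send(kind, size, window):
        if kind == "DATA":
            if size > window:
                return window, False       # sender must wait for a window update
            return window - size, True     # DATA consumes window credit
        return window, True                # HEADERS: no window accounting at all

    window, ok = try_send("DATA", 64 * 1024, WINDOW)
    print("well-behaved DATA sender blocked:", not ok)

    pushed = 0
    for _ in range(1000):                  # abusive peer hides payload in headers
        window, ok = try_send("HEADERS", 64 * 1024, window)
        pushed += 64 * 1024
    print("bytes pushed past the window via headers:", pushed)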
Several other members of the group concurred with Wilkins's concerns. Nottingham replied that there was still time to fix problems in the specification, but said that the pressure from implementers to establish and stick to a schedule was important to consider, too.
As to the specific complaints, Nottingham acknowledged that some of the pieces may not be ideal, but remain the best that the participants have been able to create. HPACK, for instance, "is more complex than we’d like, in that there isn’t an off-the-shelf algorithm that we can use (as was the case with gzip)." Yet, after repeated discussions, the group has always decided to stick with it. Moreover, there have already been discussions about HTTP/3, and "while there was a ~15 year gap between HTTP/1.1 and HTTP/2, it’s very likely that the next revision will come sooner."
On May 26, Poul-Henning Kamp posted a rather pointed response (titled "Please admit defeat") to Nottingham's email, aimed specifically at the prospect of a sequel to HTTP/2. If the working group already knows that HTTP/2 will require a follow-up in HTTP/3 to fix important problems, he said, then the group should simply drop HTTP/2 and develop its successor.
And rather than "ohh, we get HTTP/2.0 almost for free", we found out that there are numerous hard problems that SPDY doesn't even get close to solving, and that we will need to make some simplifications in the evolved HTTP concept if we ever want to solve them.
Now even the WG chair [publicly] admits that the result is a qualified fiasco and that we will have to replace it with something better "sooner".
Kamp argued that pushing out HTTP/2 would waste the time of numerous implementers, as well as introduce code churn that may carry unforeseen security risks. Unsurprisingly, Nottingham did not concur with that assessment. In addition to suggesting that Kamp's wording overstated matters (taking issue, for instance, with the "fiasco" sentence quoted above), Nottingham replied that HTTP implementers feel that the protocol draft is close to being ready to ship, despite any shortcomings. What is needed at this stage, he said, are concrete technical proposals.
Nottingham also pointed out that one of Kamp's objections was that HTTP/2 leaves unfixed some bad semantics that have been around since HTTP/1.1. Many people might agree, Nottingham said, but changing the semantics of HTTP is specifically out of scope for the HTTP/2 effort, since it would break compatibility with existing browsers and web servers.
There may be quite a few things about HTTP that still need fixing after HTTP/2, but that is one of the reasons Nottingham cited for wrapping up the HTTP/2 standardization process: once it is completed, the community can move on. There has been a fifteen-year (and counting) gap between HTTP/1.1 and HTTP/2; the longer that gap grows, the harder it becomes to avoid breaking compatibility with existing implementations, if for no other reason than that there are simply more browsers and sites.
At this point, it is still possible that HTTP/2 will undergo more revision before it makes it to the final stage of standardization. Several other working group members had concerns about HPACK, and it has been proposed that the compression algorithm be made a negotiable parameter, so that future revisions could drop in an improvement. What does seem clear, however, is that HTTP/2 is moving forward even if not everyone is satisfied with it.
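What such negotiation would look like is not settled; purely as a hypothetical sketch (the scheme names and the idea of advertising them in a settings exchange are invented here, not taken from any draft), each side could list the header-compression schemes it supports and the connection would use the first mutually supported one:

    CLIENT_OFFERS = ["some-future-scheme", "hpack"]   # client preference order
    SERVER_SUPPORTS = {"hpack"}

    def negotiate(offers, supported):
        for scheme in offers:
            if scheme in supported:
                return scheme
        return "none"                      # fall back to uncompressed headers

    print(negotiate(CLIENT_OFFERS, SERVER_SUPPORTS))   # -> "hpack"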
Lack of universal agreement, of course, is not an uncommon problem with standards. As Nottingham noted, the browser and web-server vendors are more-or-less ready to see HTTP/2 reach official approval—which would seem to place HTTP/2 well in line with the IETF's longstanding mantra of "rough consensus and running code." There may indeed be problems that are not discovered until implementation is widespread; perhaps the best option for dealing with them will be to start work on solutions well before another fifteen years have elapsed.
[Thanks to Paul Wise and James Andrewartha for bringing this story to our attention.]
Posted May 30, 2014 5:25 UTC (Fri)
by Asebe8zu (subscriber, #24600)
[Link] (1 responses)
This sounds like the average manager comparing functional qualities with planning stress.
Posted Jun 1, 2014 22:10 UTC (Sun)
by marcH (subscriber, #57642)
[Link]
Indeed - a standardization body is the last place where I expected to hear this. A standard is not a product and products never need to wait for the final, correct version of a standard.
Just print something and call it HTTP 1.99-beta if that makes some people happier.
Thanks. I'm glad to see some analysis of what led up to the "please admit defeat" message and the issues it was complaining about.
Posted May 30, 2014 7:06 UTC (Fri)
by gren (subscriber, #3954)
[Link] (13 responses)
Posted Jun 3, 2014 9:58 UTC (Tue)
by nim-nim (subscriber, #34454)
[Link] (12 responses)
As noted by phk, http/2 is a rather unbalanced protocol that shows its Google roots, and in the name of expediency the IETF refused to fix a lot of its problems:
1. it gained approval by some privacy groups by enshrining TLS, but without real analysis of http privacy implications. As a result it only secures Google/facebook… data mining
2. it is a "no new features" protocol, except that it includes server push (which completely changes http security)
3. on the other hand the IETF refused to open the cookie issue despite it being trivial to solve (don't save anything client side, provide a session id). The IETF argued a cookie-less protocol would see no adoption despite contrary evidence (the same people claimed the EU's requirement to tell users about cookies could not be implemented)
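For what it's worth, the "provide a session id" idea can be sketched in a few lines (my illustration, not an actual IETF proposal): the client holds nothing but an opaque identifier, and every piece of per-user state stays in a server-side table keyed by it.

    import secrets

    SESSIONS = {}                          # all per-user state lives server-side

    def new_session():
        sid = secrets.token_urlsafe(32)    # unguessable, opaque identifier
        SESSIONS[sid] = {}
        return sid                         # the only thing the client ever holds

    def lookup(sid):
        return SESSIONS.get(sid)           # unknown or expired IDs simply miss

    sid = new_session()
    SESSIONS[sid]["user"] = "alice"        # never serialized to the client
    print(lookup(sid))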
Posted Jun 3, 2014 10:07 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (9 responses)
Posted Jun 3, 2014 18:00 UTC (Tue)
by nim-nim (subscriber, #34454)
[Link] (8 responses)
What changed in the past years is not the ability to spy on people, but that it's so cheap you can even set it up just in case you need it later. Making it a little harder would go a long way toward limiting opportunistic abuses.
Posted Jun 3, 2014 18:04 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (7 responses)
Next, currently cookies are easily scoped - by their domain. How do you propose to scope session IDs?
Posted Jun 4, 2014 6:59 UTC (Wed)
by nim-nim (subscriber, #34454)
[Link] (3 responses)
That would be sufficient to limit abuses.
Posted Jun 4, 2014 20:58 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
And of course, I personally _want_ lots of my sessions to last more than 1 day or week.
And lastly, nobody stops you from deleting cookies every day or restricting them in any way.
Posted Jun 5, 2014 11:54 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
No, because if one gets intercepted, that cookie is good for years.
> For example, if I place an image from http://google.com/someanalytics on my page and you have a session ID for google.com domain then you'd still be tracked.
Use RequestPolicy and don't let J. Random Website force your browser to communicate with any other site. Saves bandwidth too.
Posted Jun 5, 2014 14:25 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
So? If someone intercepts your session ID they'd still be able to access your data for the duration of the session.
> Use RequestPolicy and don't let J. Random Website force your browser to communicate with any other site. Saves bandwidth too.
You are free to do that with cookies.
Posted Jun 6, 2014 21:15 UTC (Fri)
by job (guest, #670)
[Link] (2 responses)
Posted Jun 6, 2014 21:20 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Cookie scoping is easy: http://tools.ietf.org/html/rfc6265#section-4
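For reference, these are the scoping attributes that RFC 6265 section 4 defines, shown here through Python's http.cookies with example values of my own choosing:

    from http.cookies import SimpleCookie

    c = SimpleCookie()
    c["sid"] = "opaque-value"
    c["sid"]["domain"] = "example.com"     # example.com and its subdomains
    c["sid"]["path"] = "/app"              # only for requests under /app
    c["sid"]["secure"] = True              # only sent over TLS
    c["sid"]["httponly"] = True            # hidden from page scripts
    c["sid"]["max-age"] = 86400            # expires after one day

    print(c["sid"].OutputString())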
Posted Jun 6, 2014 22:20 UTC (Fri)
by nybble41 (subscriber, #55106)
[Link]
Posted Jun 3, 2014 17:05 UTC (Tue)
by intgr (subscriber, #39733)
[Link] (1 responses)
Doesn't sound trivial to me. Is there a more detailed proposal for this?
Posted Jun 3, 2014 18:04 UTC (Tue)
by nim-nim (subscriber, #34454)
[Link]
Posted May 30, 2014 7:54 UTC (Fri)
by jezuch (subscriber, #52988)
[Link]
Why should we get another specification with security holes built in?
Posted May 30, 2014 16:56 UTC (Fri)
by Max.Hyre (subscriber, #1054)
[Link] (1 responses)
“Rough consensus and running code” got the Internet underway, but the world’s changed a lot since then. Private information transfer is under attack from groups as diverse as the NSA/GCHQ, the RIAA/MPAA, and ISPs hungry for more profit, and every shortcoming, misunderstanding, or misimplementation offers another attack vector for them.
Each of Greg Wilkins's four problems introduces another opportunity for such misfeatures (or even seems to guarantee them in the cases of HPACK and headers with data). The IETF now asserts that pervasive monitoring is an attack.
Heaven knows I understand the pain of having worked for six years without a release, but how can they approve a standard that makes things worse?
Posted May 30, 2014 18:59 UTC (Fri)
by ballombe (subscriber, #9523)
[Link]
<http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJ...>
Posted Jun 4, 2014 17:14 UTC (Wed)
by josh (subscriber, #17465)
[Link] (5 responses)
Inventing a new compression algorithm for HTTP headers (HPACK), rather than using an off-the-shelf compression algorithm (like zlib) seems like a good example of that. HPACK has the stated rationale of avoiding attacks like CRIME, but rather than add controls to existing compression algorithms to avoid attacks (such as not compressing sensitive headers, or otherwise trading off compression for hardening), it invents a new compression format. That compression format addresses some of *today's* problems (though apparently not even all of those), but when (not if) the next such problem appears, we'll just get stuck with HPACK plus modifications and mitigations rather than zlib plus modifications and mitigations.
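A minimal sketch of the kind of control described above (my own illustration, assuming a made-up encode_headers() helper, not a concrete working-group proposal): keep zlib, but never feed attacker-influenceable secrets into the shared compression context, sending those headers as uncompressed literals instead.

    import zlib

    SENSITIVE = {"cookie", "set-cookie", "authorization"}

    def encode_headers(headers):
        # Secrets stay out of the compressor entirely.
        literals = [(k, v) for k, v in headers if k.lower() in SENSITIVE]
        rest = "".join(f"{k}: {v}\r\n" for k, v in headers
                       if k.lower() not in SENSITIVE)
        return literals, zlib.compress(rest.encode())

    headers = [
        ("host", "example.org"),
        ("accept-encoding", "gzip"),
        ("cookie", "sid=secret-value"),    # never enters the compressor
    ]
    literals, blob = encode_headers(headers)
    print(literals)
    print(len(blob), "compressed bytes for the remaining headers")

How much compression that gives up is exactly the security-versus-compression tradeoff the replies below go on to discuss.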
Posted Jun 4, 2014 17:51 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
And if you think about it, pretty much ALL headers are security-sensitive.
Posted Jun 4, 2014 19:20 UTC (Wed)
by josh (subscriber, #17465)
[Link] (3 responses)
That's fixable, as evidenced by HPACK. An unmodified zlib leaks information; a version of zlib modified to support tradeoffs between security and compression need not leak information.
Posted Jun 4, 2014 19:50 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Posted Jun 4, 2014 20:05 UTC (Wed)
by josh (subscriber, #17465)
[Link] (1 responses)
Posted Jun 4, 2014 20:46 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Jun 5, 2014 10:42 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
But before you do
1) Add the requirement that an http/2 compliant browser/server may not *initiate* the use of deprecated features, and
2) All those naff features? Deprecate them!
Not quite sure how you get round deprecated features with no alternative, maybe mark them for future deprecation, and introduce 2.1, 2.2 etc as replacements get invented.
But that way, you can easily start designing http/3, knowing that all those features will be disappearing.
Cheers,
Wol
Posted Jun 6, 2014 21:40 UTC (Fri)
by job (guest, #670)
[Link]
(By the way, I'm far from convinced that stream protocols along the lines of SCTP aren't a better way to achieve stream multiplexing. Sure, there would be compatibility problems, but the endpoint could choose to use it only when available and let the problems sort themselves out over the next decade. It's not as if there aren't firewall issues with SPDY.)
The most glaring omission in HTTP must be session management. This has been bolted on with cookies, but that does not work very well in practice. It makes it very difficult to know when you can serve cached documents, since cookies can carry all sorts of meanings. Their security semantics are all over the place and they can leak a thousand ways -- not to mention the gouge-your-eyes-out rules on which domains get to use them. Nobody uses HTTP authentication for public web sites simply because there is no login session management. That is why we can't have nice things such as SRP. Instead we send passwords back and forth over the wire. In 2014.
So there's plenty of work to do that could improve security and reliability in obvious ways. But instead we get ... multiplexing and compression? That may shave off a few bytes here and there? That's close to useless. Most sites could do an order of magnitude better with a single run of pngcrush. Even Google, who supposedly runs a tight ship, could shave off thousands of bytes on every home page request if they structured their markup a bit tighter. But they don't. Because it doesn't matter.