
Reducing HTTP latency with SPDY

Posted Nov 18, 2009 19:01 UTC (Wed) by elanthis (guest, #6227)
In reply to: Reducing HTTP latency with SPDY by kjp
Parent article: Reducing HTTP latency with SPDY

Lost packets aren't a problem for regular HTTP multiplexing because browsers simply open
multiple connections to the server to download all the page resources. If one image stalls
because of lost packets, it has no effect on the other images, CSS files, and so on being
downloaded simultaneously. If the downloads are pipelined over a single connection, then any
stall affects all resources, not just one.

I suppose a question to answer is how often a single connection out of many between two
endpoints stalls while the other connections do not. Most times I see excessive packet loss,
it affects the entire link, not just a single socket. That's not a particularly big
sample size, though. :)
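The head-of-line-blocking effect described above can be illustrated with a toy timing model (purely illustrative; the resource durations and the 3-second stall are assumed numbers, not measurements from SPDY or any browser):

```python
# Toy model: compare per-resource completion times when one transfer stalls,
# for N parallel connections vs. one pipelined connection.

def completion_times(durations, stalled_index, stall, pipelined):
    """Return per-resource completion times.

    durations: base download time of each resource
    stalled_index: which resource hits a packet-loss stall
    stall: extra delay added by the stall
    pipelined: True = all resources share one connection, served in order
    """
    times = []
    clock = 0.0
    for i, d in enumerate(durations):
        d = d + (stall if i == stalled_index else 0.0)
        if pipelined:
            clock += d          # each response waits for the previous one
            times.append(clock)
        else:
            times.append(d)     # parallel connections finish independently
    return times

parallel = completion_times([1, 1, 1, 1], stalled_index=0, stall=3, pipelined=False)
piped = completion_times([1, 1, 1, 1], stalled_index=0, stall=3, pipelined=True)
# With parallel connections only resource 0 is delayed; with a single
# pipelined connection the stall pushes back every later resource too.
```

In the parallel case only the stalled resource takes 4 time units while the rest finish at 1; in the pipelined case the stall cascades so every subsequent resource finishes 3 units later than it otherwise would.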


Reducing HTTP latency with SPDY

Posted Nov 18, 2009 19:20 UTC (Wed) by knobunc (subscriber, #4678) [Link]

I think the question is how SPDY fares any better than pipelining under packet loss, given that both are built on TCP, and it is TCP that handles the retransmits when packets are lost.

All I could find in the article is:
* SPDY sends ~40% fewer packets than HTTP, which means fewer packets affected by loss.
* SPDY uses fewer TCP connections, which means fewer chances to lose a SYN packet. In many TCP implementations, that loss is disproportionately expensive (up to a 3 s delay).
* SPDY's more efficient use of TCP usually triggers TCP's fast retransmit instead of using retransmit timers.

But that is a comparison of plain (non-pipelined) HTTP to SPDY.
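The first bullet's "fewer packets means fewer affected by loss" claim can be sanity-checked with simple arithmetic (a back-of-the-envelope sketch; the 1% loss rate and the 100-packet page load are assumed numbers, with 60 packets standing in for the article's ~40% reduction):

```python
# If each packet is lost independently with probability p, a page load
# touching n packets hits at least one loss with probability 1 - (1-p)**n.

def p_any_loss(n_packets, p_loss):
    return 1 - (1 - p_loss) ** n_packets

p = 0.01                # assumed 1% per-packet loss rate
http_packets = 100      # hypothetical plain-HTTP page load
spdy_packets = 60       # ~40% fewer packets, per the article's claim

print(round(p_any_loss(http_packets, p), 3))   # ~0.634
print(round(p_any_loss(spdy_packets, p), 3))   # ~0.453
```

Under these assumed numbers, cutting the packet count by 40% drops the chance of a loss-affected page load from roughly 63% to roughly 45%; it doesn't address what happens *after* a loss, which is the pipelining question above.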

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 11:36 UTC (Thu) by v13 (subscriber, #42355) [Link]

And HTTP/1.1 always uses pipelining, so it is a comparison of HTTP/1.0 with SPDY.

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 13:22 UTC (Thu) by knobunc (subscriber, #4678) [Link]

Sadly, pipelining is off in most browsers.

Summarized from
* IE8 - no
* Firefox 3 - yes, but disabled by default
* Camino - same as FF3
* Konq 2.0 - yes, but disabled by default
* Opera - yes AND enabled by default
* Chrome - not believed to support it, certainly not enabled

Reducing HTTP latency with SPDY

Posted Nov 19, 2009 14:26 UTC (Thu) by v13 (subscriber, #42355) [Link]

The Konqueror line is obsolete, since Konqueror 2.0 is from KDE 2.0. I just
tested 4.3.2 and it uses pipelining.

Firefox, OTOH, doesn't (I just tested it). The bad thing about Firefox is that
it opens multiple connections but keeps each connection alive after the data
have been transmitted (!). What a waste of resources!

However, the support is there, and all HTTP/1.1 servers support it. AFAIK,
only Akamai's servers don't support keepalives (they support HTTP/1.0 only).

Reducing HTTP latency with SPDY

Posted Nov 22, 2009 16:38 UTC (Sun) by ibukanov (subscriber, #3942) [Link]

> * Opera - yes AND enabled by default

Yet according to Opera engineers, that was not easy. Even after many years of having it enabled by default, they still have to tweak their blacklist database, adding new entries to disable pipelining for particular sites. Had they known the pain in advance, they might not have implemented it at all.

Reducing HTTP latency with SPDY

Posted Nov 27, 2009 0:21 UTC (Fri) by efexis (guest, #26355) [Link]

It doesn't; HTTP can only pipeline when the size of the incoming response is known beforehand (e.g., via a Content-Length: header at the beginning of the response). Without knowing how big a response is going to be, the client can't tell where one response ends and the next begins, so the server has to close the TCP connection to signal that it's done.
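The delimiting problem can be sketched with a toy splitter (illustrative only, not a real HTTP parser: it assumes well-formed responses that all carry Content-Length, and the function and variable names are mine):

```python
# Sketch: a pipelining client receives back-to-back responses on one
# connection and uses Content-Length to find where each body ends.

def split_responses(stream: bytes):
    """Return (headers, body) pairs from concatenated HTTP responses."""
    out = []
    while stream:
        head, _, rest = stream.partition(b"\r\n\r\n")
        length = 0
        for line in head.split(b"\r\n"):
            if line.lower().startswith(b"content-length:"):
                length = int(line.split(b":", 1)[1])
        out.append((head, rest[:length]))
        stream = rest[length:]      # the next response starts right here
    return out

wire = (b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
        b"HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nbye")
responses = split_responses(wire)
```

Drop the Content-Length header from either response and the splitter has no way to find the boundary, which is exactly why a server without it must close the connection to mark the end.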

Reducing HTTP latency with SPDY

Posted Nov 27, 2009 1:15 UTC (Fri) by mp (subscriber, #5615) [Link]

Not necessarily. There is also the chunked encoding.

Reducing HTTP latency with SPDY

Posted Nov 28, 2009 18:41 UTC (Sat) by efexis (guest, #26355) [Link]

Chunked transfer also sends the size first; that is how the receiver knows when the chunk it has been receiving ends and the header for the next one begins. The main difference is just that the message size becomes independent of the document size. But while chunked encoding may be defined in the HTTP/1.1 spec, it's not mandated, and end-to-end support is required for it. There is a wide range of proxy servers out there (personal, corporate, and ISP-level; transparent and explicit) to cause problems, not to mention personal firewall/anti-virus software that perhaps can't finish its job until it has the whole document, so participating in chunked transfers isn't going to be high on those developers' lists of priorities.
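The size-prefixed framing described above can be sketched as a tiny decoder (an illustrative toy under simplifying assumptions: well-formed input, no trailer headers, and the function name is mine):

```python
# Sketch: chunked transfer encoding prefixes each chunk with its size in
# hex, so the receiver knows where the body ends without a Content-Length.

def decode_chunked(stream: bytes) -> bytes:
    body = b""
    while True:
        size_line, _, stream = stream.partition(b"\r\n")
        size = int(size_line, 16)        # chunk size is hexadecimal
        if size == 0:                    # zero-size chunk ends the message
            return body
        body += stream[:size]
        stream = stream[size + 2:]       # skip chunk data plus trailing CRLF

wire = b"5\r\nhello\r\n6\r\n world\r\n0\r\n\r\n"
```

The terminating zero-size chunk is what lets the connection stay open for the next pipelined response, which is the property that makes chunked encoding matter here.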

None of these are particularly massive hurdles, but that's still the state of things even in HTTP/1.1 land, so the potential for improvement is very real; and since this matters to Google's business, there could actually be some pressure behind it.

Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds