The future of NGINX
For the core NGINX Open Source software, we continue to add new features and functionality and to support more operating system platforms. Two critical capabilities for the security and scalability of web applications and traffic, HTTP/3 and QUIC, are coming in the next version we ship.
Posted Aug 24, 2022 14:08 UTC (Wed)
by anarcat (subscriber, #66354)
[Link] (39 responses)
This reminds me of GitLab saying how they listen to the community, and that they're open to migrating more features from the proprietary to the free version, but obviously everyone wants everything to be free, so effectively it just frustrates people because *their* own specific requirement is not free. I'm not sure there's a way out of that mess in open core...
Posted Aug 24, 2022 14:51 UTC (Wed)
by flussence (guest, #85566)
[Link] (38 responses)
Posted Aug 24, 2022 15:01 UTC (Wed)
by anarcat (subscriber, #66354)
[Link]
Posted Aug 24, 2022 15:24 UTC (Wed)
by tau (subscriber, #79651)
[Link] (36 responses)
Posted Aug 24, 2022 15:46 UTC (Wed)
by HenrikH (subscriber, #31152)
[Link] (29 responses)
Posted Aug 25, 2022 5:12 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link] (28 responses)
Parallelism won't make them go any faster after all.
Posted Aug 25, 2022 6:50 UTC (Thu)
by bagder (guest, #38414)
[Link] (25 responses)
First, browsers don't use a single connection for HTTP/1.1; they typically use 6 TCP connections per host name (and big sites "invent" several new aliased host names for the site, so a browser might well use up to 48 connections).
Each TCP connection needs to be set up with a TLS handshake and has a slow-start period; barely any HTTP/1.1 connection manages to get up to full speed before it is closed, because the browser cannot maintain that many connections as you browse around the web. Increased bandwidth thus does not make the experience faster. Old data from Firefox showed the median number of HTTP requests per connection to be... **1**.
With multiplexed streams over a single connection, the server can saturate the bandwidth much better and faster, reach high speed and the browser can thus render the page sooner.
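A toy slow-start model illustrates the point (the 10-segment initial window follows RFC 6928; the "warm" window of 256 segments is an arbitrary illustration of a long-lived connection that has already ramped up — not a measurement):

```python
# Toy model of TCP slow start (no loss, no receive-window limit):
# the congestion window starts small and doubles each round trip.
def rtts_to_transfer(size_bytes, mss=1460, initial_cwnd=10):
    """Round trips needed to deliver size_bytes, counting one RTT
    per congestion-window's worth of data sent."""
    cwnd = initial_cwnd          # segments in flight per RTT
    sent = 0
    rtts = 0
    while sent < size_bytes:
        sent += cwnd * mss
        cwnd *= 2                # exponential growth during slow start
        rtts += 1
    return rtts

# A fresh connection also pays extra round trips for TCP + TLS setup
# before any of this; a warm multiplexed connection pays them once.
cold_transfer = rtts_to_transfer(100_000)                    # fresh connection
warm_transfer = rtts_to_transfer(100_000, initial_cwnd=256)  # already ramped up
```

Under this model a 100 KB resource needs several round trips of data on a cold connection but fits in a single window on a warm one, which is why a browser that keeps closing and reopening connections rarely reaches full speed.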
Posted Aug 25, 2022 7:07 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link] (23 responses)
I do it regularly when downloading big files with wget. I kill it and start it again. Otherwise the speed keeps dropping and dropping.
Having a single long lived connection might not make things faster for people with scummy providers (so most people).
Posted Aug 25, 2022 8:13 UTC (Thu)
by bagder (guest, #38414)
[Link] (17 responses)
A single long-lived connection generally makes things much faster than frequently creating and killing connections.
Your wget/aria2c experiments are for downloading a single (huge) resource. A task that browsers have been and still are surprisingly bad at. For a single huge download, HTTP/2 and HTTP/3 don't make a lot of improvements.
(I work on curl, I worked on Firefox, this is my backyard)
Posted Aug 25, 2022 8:30 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link] (14 responses)
Now Google has the power to force providers to do things, but most other web server owners do not have this leverage and might be disadvantaged by switching.
Posted Aug 25, 2022 10:31 UTC (Thu)
by gspr (subscriber, #91542)
[Link] (8 responses)
Posted Aug 25, 2022 10:54 UTC (Thu)
by hummassa (subscriber, #307)
[Link]
With the current adoption of Chromium-based browsers, that would not be an unreasonable statement.
Posted Aug 25, 2022 15:25 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link] (6 responses)
It's that easy for them.
Posted Aug 26, 2022 6:24 UTC (Fri)
by jamesh (guest, #1159)
[Link] (5 responses)
It's not an HTTP/1.1 vs HTTP/2 issue.
Posted Aug 26, 2022 7:31 UTC (Fri)
by wtarreau (subscriber, #51152)
[Link]
Posted Aug 26, 2022 14:46 UTC (Fri)
by LtWorf (subscriber, #124958)
[Link] (3 responses)
I'm saying that if Google wants to push an HTTP/4, they are in a position to do it and force everyone else to implement it.
Posted Aug 26, 2022 18:59 UTC (Fri)
by wtarreau (subscriber, #51152)
[Link] (2 responses)
Posted Aug 26, 2022 20:51 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
I've seen enough evil from airport and hotel "access points" that I really don't want them to muck with even `ytmnd.com` content. Forcing everything to at least *support* `https://` is a great boon IMO (forcing it via 301 redirects might be a bit much in some situations, even if I do so for my own website).
Posted Aug 27, 2022 8:38 UTC (Sat)
by LtWorf (subscriber, #124958)
[Link]
Posted Aug 25, 2022 11:39 UTC (Thu)
by bagder (guest, #38414)
[Link] (3 responses)
Posted Aug 25, 2022 15:24 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link] (2 responses)
What you believe will not change the fact that it does happen. If your provider doesn't… good for you. Not everyone is so lucky.
Since it does happen, a single connection will be slower overall. Yes, handshakes take time… we all know that. Still, redoing a handshake is faster than a connection that could go at 1MiB/s but is going at 100KiB/s because the provider said so.
I understand HTTP/2 saves resources on servers, but for the people connecting to those servers it will not necessarily be faster or better.
Again. Your connection is very good. Congratulations. Everyone else's connection isn't.
Posted Aug 25, 2022 15:29 UTC (Thu)
by anarcat (subscriber, #66354)
[Link]
but maybe this has gone on for long enough? or at least give you two a little pause to reconsider each other's arguments and maybe make peace with the fact that you're in disagreement? :)
Posted Aug 26, 2022 7:29 UTC (Fri)
by wtarreau (subscriber, #51152)
[Link]
Posted Aug 26, 2022 7:25 UTC (Fri)
by wtarreau (subscriber, #51152)
[Link]
Huh? You apparently weren't there in 2015 when it was time to send proposals for HTTP/2, nor when SPDY was used as a starting point and heavily modified. No, HTTP/2 is not Google, it's the HTTPbis WG at the IETF, driven by a few dozen knowledgeable client, server, and proxy implementers. Discussions were sometimes quite heated, but that resulted in an overall good-enough design for the time spent on it (yes, time was key, because SPDY was going to become the next standard). HTTP/3 and QUIC also come from this group and a new one (quicwg, sharing a large part of the participants) and took a long time to mature. It's not Google's anymore either (in Google's original version, now called gQUIC, there was no separation between the HTTP and transport layers, for example).
So please don't refrain from using protocols just out of fear that they could give someone else more power. Use them if they bring you or your users some value.
Posted Aug 25, 2022 15:23 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (1 responses)
In addition, the last time I looked at how browsers do things (so your experience may well be more recent), you have some ordering in your set of things to download. You know, for example, that the thing referenced by a <style> tag may trigger more downloads that you'll need to get your final layout correct (@import in CSS), so you want to prioritise downloading those, whereas the thing inside an <img> tag that has width and height attributes can be laid out as a placeholder, and filled in for final render when the image downloads. However, you don't want to delay downloading images etc until you've completely parsed the CSS and HTML, because if there isn't anything else to download, delaying the download is just going to delay final render.
For an ideal user experience, you use one connection to just drain the pending resource list, and the remaining connections to download resources that might trigger further downloads. That way, if none of the stylesheets, JavaScript etc requires more downloads to allow you to do final layout, you've already started downloading images etc as fast as possible on one connection while if you do need more downloads to do final layout, you can do them using the other 5 connections in parallel to images.
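That policy could be sketched roughly like this (a toy planner; the resource types, the 6-connection limit, and the function name are illustrative assumptions, not any browser's actual scheduler):

```python
# Sketch of the scheduling policy described above: connection 0 is
# dedicated to draining "leaf" resources (images etc.), while the
# remaining connections are reserved for resources that may trigger
# further downloads (stylesheets, scripts with @import and the like).
MAX_CONNECTIONS = 6
DISCOVERY_TYPES = {"css", "js"}   # resource types that can reference more URLs

def assign_connections(resources):
    """Map each (url, resource_type) pair to a connection index:
    0 for leaf resources, 1..5 round-robined over discovery resources."""
    plan = {}
    discovery_next = 1
    for url, rtype in resources:
        if rtype in DISCOVERY_TYPES:
            plan[url] = discovery_next
            # cycle through connections 1..MAX_CONNECTIONS-1
            discovery_next = 1 + (discovery_next % (MAX_CONNECTIONS - 1))
        else:
            plan[url] = 0
    return plan
```

This way images start flowing immediately on their own connection, while anything that might expand the download set gets the remaining parallelism.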
Posted Aug 31, 2022 11:43 UTC (Wed)
by nim-nim (subscriber, #34454)
[Link]
HTTP/3 is great because the browser does not have to guess the ideal download order to render a page; one element cannot stop the rest of the page from loading.
Posted Aug 25, 2022 11:49 UTC (Thu)
by tialaramex (subscriber, #21167)
[Link] (4 responses)
Citation definitely needed. To attempt such a thing, the provider needs to keep a vast list of quads (representing source and destination IP plus port) for TCP connections and then... somehow "throttle" the ones that were in the list for longer. Just doing nothing is both cheaper and easier yet produces better results. I understand that in the US market maybe "Worse but also more expensive" is a good product decision, but in most places that's going to cause customers to leave for a provider that isn't spending their money to make the service worse.
If your evidence is "I have this wget job and sometimes killing and restarting it helps" I would suggest the problem is unlikely to be your provider arbitrarily "throttling" your connection and more likely the problem is at one end or the other of that connection.
Posted Aug 25, 2022 16:39 UTC (Thu)
by LtWorf (subscriber, #124958)
[Link] (2 responses)
Oh I never said "sometimes" it helps. I can reproduce it.
Posted Aug 26, 2022 11:34 UTC (Fri)
by eduperez (guest, #11232)
[Link] (1 responses)
Posted Aug 27, 2022 8:37 UTC (Sat)
by LtWorf (subscriber, #124958)
[Link]
Posted Aug 27, 2022 1:04 UTC (Sat)
by WolfWings (subscriber, #56790)
[Link]
Track only the X most recent connections and prioritize them over other connections, so that 'recent' connections are always perky, like speed tests and the like. It also limits the computation resources needed for the tracking.
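One way to read that scheme, as a sketch (the table size and the flow-key shape are assumptions for illustration, not anything a real provider is known to run):

```python
from collections import OrderedDict

class RecentFlowTable:
    """Remember only the N most recently opened flows; those get
    priority, so new connections (speed tests, page loads) stay perky
    while long-lived bulk transfers eventually age out of the table.
    The bounded table also caps the tracking cost."""
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.flows = OrderedDict()       # flow key -> True, insertion-ordered

    def on_open(self, flow):
        """Record a newly opened connection, evicting the stalest one."""
        self.flows[flow] = True
        self.flows.move_to_end(flow)     # a re-opened flow counts as new
        if len(self.flows) > self.capacity:
            self.flows.popitem(last=False)

    def is_prioritized(self, flow):
        return flow in self.flows
```

Note this would also explain the kill-and-restart observation above: restarting a download makes it a "recent" flow again.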
Posted Aug 25, 2022 20:48 UTC (Thu)
by mtaht (subscriber, #11087)
[Link]
Posted Aug 26, 2022 7:50 UTC (Fri)
by barryascott (subscriber, #80640)
[Link] (1 responses)
The time to download X files in series is X * 4 * 100ms: each file costs roughly four round trips (TCP setup, TLS handshake, request, response) at a 100ms ping time.
Given a web site that needs 30 files loaded, in series it will take 12s.
That is why parallelism is so important. It's the ping time that dominates, not the bandwidth, for a large number of small files.
And going from TCP to UDP helps with head-of-line blocking, which was the lesson learnt from HTTP/2.
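The arithmetic above, spelled out (the constants are the illustrative ones from the comment, not measurements):

```python
# Latency-dominated page load: each small file costs ~4 round trips
# (TCP setup, TLS handshake, request, response) and bandwidth is
# assumed not to be the bottleneck.
RTT = 0.100          # seconds of ping time (assumed example)
RTTS_PER_FILE = 4
FILES = 30

serial = FILES * RTTS_PER_FILE * RTT            # one file at a time: ~12 s
parallel6 = (FILES / 6) * RTTS_PER_FILE * RTT   # 6 connections, ideal split: ~2 s
```

With all 30 requests multiplexed on a single already-open connection, the whole batch costs on the order of one extra round trip, which is the HTTP/2+ argument in a nutshell.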
Posted Aug 26, 2022 19:05 UTC (Fri)
by wtarreau (subscriber, #51152)
[Link]
Actually it was anticipated even before HTTP/2 was released, but by then there was no short-term solution in sight and all that was done was to agree on a set of generally acceptable default settings (64k stream window and moderate number of default stream count). My opinion is that the choice to enforce a single connection per browser was *the* mistake as it really emphasized HoL. I'm pretty sure that even just a second connection started during network pauses could have considerably helped.
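For scale, that 64k default stream window directly caps per-stream throughput, since at most one window of unacknowledged data can be in flight per round trip (a back-of-the-envelope sketch; the 100ms RTT is an assumed example):

```python
# Flow-control bound: a sender may have at most one window of
# unacknowledged data in flight, so per-stream throughput is capped
# at window / RTT until WINDOW_UPDATE frames arrive.
WINDOW = 64 * 1024        # bytes: HTTP/2 default initial stream window
RTT = 0.100               # seconds (assumed example)

max_stream_throughput = WINDOW / RTT   # bytes/second, ~640 KiB/s per stream
```

So with the default settings, a single stream on a 100ms path cannot exceed roughly 640 KiB/s no matter how fat the pipe is, which is why the WG spent so long arguing over default window sizes.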
Posted Aug 24, 2022 16:01 UTC (Wed)
by anarcat (subscriber, #66354)
[Link] (5 responses)
Posted Aug 24, 2022 16:14 UTC (Wed)
by davidstrauss (guest, #85867)
[Link] (1 responses)
https://en.wikipedia.org/wiki/Head-of-line_blocking#In_HTTP
So, even skepticism of HTTP/2 doesn't mean that all versions since HTTP/1.1 are pointless.
Posted Aug 26, 2022 19:11 UTC (Fri)
by wtarreau (subscriber, #51152)
[Link]
Posted Aug 24, 2022 19:00 UTC (Wed)
by flussence (guest, #85566)
[Link] (2 responses)
Posted Aug 26, 2022 21:49 UTC (Fri)
by barryascott (subscriber, #80640)
[Link] (1 responses)
Isn’t request smuggling where you corrupt headers in the hope that the receiver will be tricked into pulling the wrong value out?
Posted Aug 27, 2022 16:51 UTC (Sat)
by flussence (guest, #85566)
[Link]
Ostensibly, yes, everyone should just write bug-free application server code and there'd be no problem, but remember this came at the beginning of Web 2.0™ — there was a perfect storm of fresh naive developers, a total lack of concurrency-safe languages, and overhyped page performance benchmarks like YSlow and PageSpeed. I think the obsession with gamifying micro-optimisation like that may have caused more collective long-term damage to the web than any single thing one can say about PHP or JavaScript.
Some systems got defensive by throwing `Connection: close` on everything, but that leads to the worst possible performance (serialised and TCP slow-start per URL).
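To make the header-confusion variant concrete: in classic "CL.TE" smuggling a request carries *both* `Content-Length` and `Transfer-Encoding`. Per the HTTP/1.1 spec, `Transfer-Encoding` must win, but a front-end/back-end pair that honour different headers disagree about where the request ends. A contrived illustration (the raw bytes and the two toy parsers are made up for the example):

```python
# A request carrying both framing headers. A spec-compliant hop uses
# Transfer-Encoding; a buggy hop uses Content-Length, so the two
# disagree about where the body ends and what the "next request" is.
raw = (b"POST / HTTP/1.1\r\n"
       b"Host: example.com\r\n"
       b"Content-Length: 6\r\n"
       b"Transfer-Encoding: chunked\r\n"
       b"\r\n"
       b"0\r\n"
       b"\r\n"
       b"GET /admin HTTP/1.1\r\n\r\n")

head, _, rest = raw.partition(b"\r\n\r\n")

def body_by_content_length(data, n=6):
    return data[:n]                      # a CL-honouring hop stops after 6 bytes

def body_by_chunked(data):
    # a TE-honouring hop sees an empty chunked body: just "0\r\n\r\n"
    end = data.index(b"0\r\n\r\n") + len(b"0\r\n\r\n")
    return data[:end]

leftover_cl = rest[len(body_by_content_length(rest)):]
leftover_te = rest[len(body_by_chunked(rest)):]
```

The TE-honouring hop sees a complete smuggled `GET /admin` request left in the buffer, while the CL-honouring hop sees a stray prefix that poisons whatever request comes next on the shared connection. HTTP/2's binary framing removes this ambiguity, which is one of its less-advertised benefits.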
