
The future of NGINX

This blog post on the NGINX corporate site describes the plans for this web server project in the coming year.

For the core NGINX Open Source software, we continue to add new features and functionality and to support more operating-system platforms. Two capabilities critical for the security and scalability of web applications and traffic, HTTP/3 and QUIC, are coming in the next version we ship.


The future of NGINX

Posted Aug 24, 2022 14:08 UTC (Wed) by anarcat (subscriber, #66354) [Link] (39 responses)

I'm not sure what's going on with this blog post. I was assuming they would make an announcement like "we were opencore, but we saw the light and will just do free software now", but that doesn't seem like it. Hacker News also has a discussion about this post, and people are similarly skeptical about the marketing spiel. Like what's the *actual* news here? They have a slack channel? They switched from Mercurial to GitHub? I guess that's... nice?

This reminds me of GitLab saying how they listen to the community, and that they're open to migrating more features from the proprietary to the free version, but obviously everyone wants everything to be free, so effectively it just frustrates people because *their* own specific requirement is not free. I'm not sure there's a way out of that mess in opencore...

The future of NGINX

Posted Aug 24, 2022 14:51 UTC (Wed) by flussence (guest, #85566) [Link] (38 responses)

QUIC support in a mainstream web server seems newsworthy enough, and they've also beaten Apache to it this time.

The future of NGINX

Posted Aug 24, 2022 15:01 UTC (Wed) by anarcat (subscriber, #66354) [Link]

yeah, but that's "in the next version they ship", it's not yet there!

The future of NGINX

Posted Aug 24, 2022 15:24 UTC (Wed) by tau (subscriber, #79651) [Link] (36 responses)

I make a point of turning off the Google-brained HTTP/2 protocol and its later incarnations on the servers that I personally operate. Encouraging the widespread adoption of this protocol is great if you are a high-volume low-latency ad trading exchange like Google, or if you like chasing new and shiny things. I am neither and I don't see the need for its pointless complexity.

The future of NGINX

Posted Aug 24, 2022 15:46 UTC (Wed) by HenrikH (subscriber, #31152) [Link] (29 responses)

HTTP/2 is a major win over HTTP/1.1 if your site requires downloading more files than just index.html. Letting the browser download files in parallel over the same connection also lessens the load on your server. Disabling it just to spite Google is just...

The future of NGINX

Posted Aug 25, 2022 5:12 UTC (Thu) by LtWorf (subscriber, #124958) [Link] (28 responses)

Why does it matter that they are downloaded in parallel rather than sequentially over the same connection like in HTTP 1.1?

Parallelism won't make them go any faster after all.

The future of NGINX

Posted Aug 25, 2022 6:50 UTC (Thu) by bagder (guest, #38414) [Link] (25 responses)

Yes it will.

First, browsers don't use a single connection for HTTP/1.1; they typically use 6 TCP connections per host name (and big sites "invent" several new aliased host names for the site, so a browser might well use up to 48 connections).

Each TCP connection needs to be set up with a TLS handshake and has a slow-start period; barely any HTTP/1.1 connection manages to get up to full speed before it is closed, because the browser cannot maintain that many connections as you browse around the web. Increased bandwidth thus does not make the experience faster. Old data from Firefox showed the median number of HTTP requests per connection to be... **1**.

With multiplexed streams over a single connection, the server can saturate the bandwidth much better and faster, reach high speed and the browser can thus render the page sooner.
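A rough back-of-envelope model makes the argument above concrete. This is an illustrative sketch with assumed numbers (RTT, handshake cost, initial congestion window), not a measurement; it deliberately ignores keep-alive congestion-window reuse, header compression, and many other real-world effects.

```python
# Toy model: many short HTTP/1.1 connections vs. one multiplexed
# HTTP/2 connection. All constants are illustrative assumptions.

RTT = 0.1        # seconds per round trip (assumed)
TLS_RTTS = 2     # round trips for TCP + TLS handshakes (assumed)
INIT_CWND = 10   # initial congestion window, in segments (common default)
MSS = 1460       # bytes per TCP segment

def slow_start_rtts(total_bytes: int, cwnd: int = INIT_CWND) -> int:
    """Round trips needed to deliver total_bytes while the congestion
    window doubles each RTT (TCP slow start, loss-free)."""
    rtts, sent = 0, 0
    while sent < total_bytes:
        sent += cwnd * MSS
        cwnd *= 2
        rtts += 1
    return rtts

def h1_time(n_resources: int, size: int, conns: int = 6) -> float:
    """Six parallel connections; each handles its share of requests
    sequentially, and (pessimistically) each request restarts slow start."""
    per_conn = -(-n_resources // conns)            # ceiling division
    per_request = (1 + slow_start_rtts(size)) * RTT  # 1 RTT request + transfer
    return TLS_RTTS * RTT + per_conn * per_request

def h2_time(n_resources: int, size: int) -> float:
    """One multiplexed connection: one handshake, and slow start is
    climbed once for the aggregate payload."""
    total = n_resources * size
    return TLS_RTTS * RTT + (1 + slow_start_rtts(total)) * RTT
```

Under these assumptions, a page of 30 resources of 50 KB each loads noticeably faster over the single multiplexed connection, because the handshake and the climb out of slow start are paid once instead of per connection.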

The future of NGINX

Posted Aug 25, 2022 7:07 UTC (Thu) by LtWorf (subscriber, #124958) [Link] (23 responses)

Yes, the connection establishment is unfortunate, but the reason for multiple connections is that internet providers normally throttle single long-standing connections. So closing a connection and opening a new one will be faster. It's why tools like aria2c exist.

I do it regularly when downloading big files with wget. I kill it and start it again. Otherwise the speed keeps dropping and dropping.

Having a single long lived connection might not make things faster for people with scummy providers (so most people).

The future of NGINX

Posted Aug 25, 2022 8:13 UTC (Thu) by bagder (guest, #38414) [Link] (17 responses)

No. The reason browsers do many parallel connections for HTTP/1 is to enable downloading of resources in parallel. Go to a page with 200 images and see if they load sequentially or in parallel, even with HTTP/1. (Also of course CSS, javascript, fonts etc and whatever else a browser needs to render the site.)

A single long-lived connection generally makes things much faster than frequently creating and killing connections.

Your wget/aria2c experiments are for downloading a single (huge) resource. A task that browsers have been and still are surprisingly bad at. For a single huge download, HTTP/2 and HTTP/3 don't make a lot of improvements.

(I work on curl, I worked on Firefox, this is my backyard)

The future of NGINX

Posted Aug 25, 2022 8:30 UTC (Thu) by LtWorf (subscriber, #124958) [Link] (14 responses)

I know why browsers do multiple short lived connections. But the result is that providers tune for that. So if you make a webserver use a single connection, it will probably be slower to load overall, because of what the providers do.

Now, Google has the power to force providers to do things, but most other web server owners do not have this leverage and might be disadvantaged by switching.

The future of NGINX

Posted Aug 25, 2022 10:31 UTC (Thu) by gspr (subscriber, #91542) [Link] (8 responses)

Are you implying that Google has the power to make changes to HTTP/2 on a whim?

The future of NGINX

Posted Aug 25, 2022 10:54 UTC (Thu) by hummassa (subscriber, #307) [Link]

> Are you implying that Google has the power to make changes to HTTP/2 on a whim?

With the current adoption of Chromium-based browsers, that would not be an unreasonable statement.

The future of NGINX

Posted Aug 25, 2022 15:25 UTC (Thu) by LtWorf (subscriber, #124958) [Link] (6 responses)

Basically yes. They can just move YouTube to this hypothetical new protocol available in Chrome and throttle the old protocol. Other browsers can implement it or stay slow.

It's that easy for them.

The future of NGINX

Posted Aug 26, 2022 6:24 UTC (Fri) by jamesh (guest, #1159) [Link] (5 responses)

Are you specifically complaining about youtube-dl being throttled? That seems more about the arms race between Google trying to ensure only authorized clients access video streams, and youtube-dl developers trying to reverse those protections (which currently involves interpreting some of the JavaScript on the page to determine the correct URLs).

It's not an HTTP 1.1 vs HTTP 2 issue.

The future of NGINX

Posted Aug 26, 2022 7:31 UTC (Fri) by wtarreau (subscriber, #51152) [Link]

Often it's even simpler. Most video sites try to optimize their bandwidth use, since clients have massive downlinks. If your video player downloads the video faster than you can watch it, and you stop in the middle, they have sent too many bytes over the wire; that bandwidth has a cost, and it even causes congestion on certain links at certain hours. So it's more efficient to throttle every stream to roughly the stream's bitrate, but that implicitly also throttles downloading tools. Now, is that really an issue?

The future of NGINX

Posted Aug 26, 2022 14:46 UTC (Fri) by LtWorf (subscriber, #124958) [Link] (3 responses)

No it's not at all what I'm saying.

I'm saying if google wants to push http 4 they are in a position to do it and force everyone else to implement it.

The future of NGINX

Posted Aug 26, 2022 18:59 UTC (Fri) by wtarreau (subscriber, #51152) [Link] (2 responses)

They already forced everyone to use SSL even where it makes no sense at all... When you're the entry point of the internet for the masses, you can do whatever you want. If they decide to stop indexing the sites that use a green background, they can do it and these ones will stop doing business pretty quickly.

The future of NGINX

Posted Aug 26, 2022 20:51 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

> They already forced everyone to use SSL even where it makes no sense at all...

I've seen enough evil from airport and hotel "access points" that I really don't want them to muck with even `ytmnd.com` content. Forcing everything to at least *support* `https://` is a great boon IMO (forcing it via 301 redirects might be a bit much in some situations, even if I do so for my own website).

The future of NGINX

Posted Aug 27, 2022 8:38 UTC (Sat) by LtWorf (subscriber, #124958) [Link]

They forced me to get 2FA on pypi! :D They can certainly force things.

The future of NGINX

Posted Aug 25, 2022 11:39 UTC (Thu) by bagder (guest, #38414) [Link] (3 responses)

Clearly you know HTTP more and better than the HTTPbis working group in the IETF and all the work done there and elsewhere that has driven the HTTP development the last decade or so. My hat is off.

The future of NGINX

Posted Aug 25, 2022 15:24 UTC (Thu) by LtWorf (subscriber, #124958) [Link] (2 responses)

You can choose not to believe me that providers do throttle long-standing connections.

What you believe will not change the fact that it does happen. If your provider doesn't… good for you. Not everyone is so lucky.

Since it does happen, a single connection will be slower overall. Yes, the handshake takes time; we all know that. Still, redoing a handshake is faster than a connection that could go at 1 MiB/s but is going at 100 KiB/s because the provider said so.

I understand HTTP/2 saves resources on servers, but for people connecting to those servers it will not necessarily be faster or better.

Again: your connection is very good. Congratulations. Everyone else's connection isn't.

short pause?

Posted Aug 25, 2022 15:29 UTC (Thu) by anarcat (subscriber, #66354) [Link]

it seems this discussion is diverging significantly from the original article and from my original comment which, granted, probably wasn't great in the first place.

but maybe this has gone for long enough? or at least give you two a little pause to reconsider each other's arguments and maybe make peace with the fact you're in disagreement? :)

The future of NGINX

Posted Aug 26, 2022 7:29 UTC (Fri) by wtarreau (subscriber, #51152) [Link]

What you're describing sounds much more like congested links with an inefficient congestion control algorithm on your side. You should play with them to see which one performs better. It is also possible that your provider is doing transparent proxying and got their implementation wrong, missing some window updates and progressively getting their windows smaller and smaller. Otherwise, there's no particular point in tracking long-lived connections, that comes with an extra processing cost, and brings no benefit.

The future of NGINX

Posted Aug 26, 2022 7:25 UTC (Fri) by wtarreau (subscriber, #51152) [Link]

> Now google has the power to force providers to do things, but most other wevserver owners do not have this leverage and might be disadvantaged from switching.

Huh? You apparently weren't there in 2015 when it was time to send proposals for HTTP/2, nor when SPDY was used as a starting point and heavily modified. No, HTTP/2 is not Google; it's the HTTPbis WG at the IETF, animated by a few tens of knowledgeable client, server, and proxy implementers. Discussions were sometimes quite heated, but that resulted in an overall good-enough design for the time spent on it (yes, time was key, because otherwise SPDY was going to become the next standard). HTTP/3 and QUIC also come from this group and a new one (quicwg, sharing a large part of the participants) and took a long time to mature. It's not Google's anymore either (in Google's original version, now called gQUIC, there was no separation between the HTTP and transport layers, for example).

So please don't refrain from using protocols just out of fear that they could give someone else more power. Use them if they bring you or your users some value.

The future of NGINX

Posted Aug 25, 2022 15:23 UTC (Thu) by farnz (subscriber, #17727) [Link] (1 responses)

In addition, the last time I looked at how browsers do things (so your experience may well be more recent), you have some ordering in your set of things to download. You know, for example, that the thing referenced by a <style> tag may trigger more downloads that you'll need to get your final layout correct (@import in CSS), so you want to prioritise downloading those, whereas the thing inside an <img> tag that has width and height attributes can be laid out as a placeholder, and filled in for final render when the image downloads. However, you don't want to delay downloading images etc until you've completely parsed the CSS and HTML, because if there isn't anything else to download, delaying the download is just going to delay final render.

For an ideal user experience, you use one connection to just drain the pending resource list, and the remaining connections to download resources that might trigger further downloads. That way, if none of the stylesheets, JavaScript etc requires more downloads to allow you to do final layout, you've already started downloading images etc as fast as possible on one connection while if you do need more downloads to do final layout, you can do them using the other 5 connections in parallel to images.
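The prioritisation idea above can be sketched as a simple priority queue. This is a hypothetical illustration, not how any particular browser is implemented: resource kinds that can trigger further downloads (CSS, scripts) are assumed to rank ahead of leaf resources such as images, which can be laid out as placeholders.

```python
import heapq

# Hypothetical priority ranks: lower value = fetched earlier.
# Stylesheets and scripts may reference further resources, so they
# come first; images can be placeheld until final render.
PRIORITY = {"css": 0, "js": 1, "font": 2, "img": 3}

def download_order(resources):
    """resources: list of (name, kind) tuples; returns fetch order.
    The insertion index breaks ties, preserving document order."""
    heap = [(PRIORITY[kind], i, name)
            for i, (name, kind) in enumerate(resources)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]
```

For example, `download_order([("a.png", "img"), ("s.css", "css"), ("app.js", "js")])` fetches the stylesheet and script before the image, even though the image appeared first in the document.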

The future of NGINX

Posted Aug 31, 2022 11:43 UTC (Wed) by nim-nim (subscriber, #34454) [Link]

Page rendering has become so complex nowadays I’d be (pleasantly) surprised if all the layers involved browser-side managed to coordinate well enough to make some form of smart download scheduling possible (especially since resources are spread over servers and clouds with different link characteristics and their own opaque balancing and rate management policies).

HTTP/3 is great because the browser does not have to guess the ideal downloading order to render a page; one element cannot stop the rest of the page from loading.

The future of NGINX

Posted Aug 25, 2022 11:49 UTC (Thu) by tialaramex (subscriber, #21167) [Link] (4 responses)

> normally internet providers throttle single long standing connections

Citation definitely needed. To attempt such a thing, the provider needs to keep a vast list of quads (representing source and destination IP plus port) for TCP connections and then... somehow "throttle" the ones that were in the list for longer. Just doing nothing is both cheaper and easier yet produces better results. I understand that in the US market maybe "Worse but also more expensive" is a good product decision, but in most places that's going to cause customers to leave for a provider that isn't spending their money to make the service worse.

If your evidence is "I have this wget job and sometimes killing and restarting it helps" I would suggest the problem is unlikely to be your provider arbitrarily "throttling" your connection and more likely the problem is at one end or the other of that connection.

The future of NGINX

Posted Aug 25, 2022 16:39 UTC (Thu) by LtWorf (subscriber, #124958) [Link] (2 responses)

> sometimes

Oh I never said "sometimes" it helps. I can reproduce it.

The future of NGINX

Posted Aug 26, 2022 11:34 UTC (Fri) by eduperez (guest, #11232) [Link] (1 responses)

Perhaps you could share a bit more info about your experiments, so others can repeat them, and see what happens.

The future of NGINX

Posted Aug 27, 2022 8:37 UTC (Sat) by LtWorf (subscriber, #124958) [Link]

You're aware that, this being provider-dependent, it would mean travelling the world and buying a landline subscription wherever I tell you, right?

The future of NGINX

Posted Aug 27, 2022 1:04 UTC (Sat) by WolfWings (subscriber, #56790) [Link]

Reverse the problem and it becomes far less resource-intensive.

Track only the X most recent connections, and prioritize them over other connections so that 'recent' connections are always perky, like speed tests and the like. That also limits the computational resources needed for the tracking.
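The tracking scheme suggested above can be sketched with a bounded LRU table. This is a hypothetical illustration of the data structure only (names and capacity are invented), not any provider's actual implementation:

```python
from collections import OrderedDict

class RecentFlows:
    """Keep only the N most recently seen flows; anything in the table
    is 'recent' and would be prioritised, everything else is eligible
    for de-prioritisation. Bounded size bounds the tracking cost."""

    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self.flows = OrderedDict()  # quad -> marker, oldest first

    def seen(self, quad):
        """Record a packet for a (src_ip, src_port, dst_ip, dst_port) quad."""
        self.flows.pop(quad, None)   # move to most-recent position
        self.flows[quad] = True
        if len(self.flows) > self.capacity:
            self.flows.popitem(last=False)  # evict the oldest flow

    def is_recent(self, quad) -> bool:
        return quad in self.flows
```

With a small capacity, a long-lived flow that keeps sending stays "recent", while a flow that goes idle ages out and loses its priority.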

Mike Belshe's paper on rtt

Posted Aug 25, 2022 20:48 UTC (Thu) by mtaht (subscriber, #11087) [Link]

The future of NGINX

Posted Aug 26, 2022 7:50 UTC (Fri) by barryascott (subscriber, #80640) [Link] (1 responses)

Consider that you need to download a set of files, you have infinite bandwidth but a one-way latency of 100ms, and each file takes only two round trips (400ms).

The time to download X files in series is then X * 4 * 100ms.

Given a web site that needs 30 files loaded in series, that is 12s.

That is why parallelism is so important: for a large number of small files, it's the latency that dominates, not the bandwidth.

And the move from TCP to UDP helps with the head-of-line blocking that was the lesson learnt from HTTP/2.

The future of NGINX

Posted Aug 26, 2022 19:05 UTC (Fri) by wtarreau (subscriber, #51152) [Link]

> head-of-line blocking that was the lesson learnt from HTTP/2.

Actually it was anticipated even before HTTP/2 was released, but by then there was no short-term solution in sight and all that was done was to agree on a set of generally acceptable default settings (64k stream window and moderate number of default stream count). My opinion is that the choice to enforce a single connection per browser was *the* mistake as it really emphasized HoL. I'm pretty sure that even just a second connection started during network pauses could have considerably helped.

The future of NGINX

Posted Aug 24, 2022 16:01 UTC (Wed) by anarcat (subscriber, #66354) [Link] (5 responses)

I am not sure that debating the merits of HTTP/2 and its successors is on topic here, but I must say I have used HTTP/2 on my home server, which is on a fairly crappy uplink (both in terms of bandwidth and latency), and it worked wonders. So I do think *that* is an improvement.

The future of NGINX

Posted Aug 24, 2022 16:14 UTC (Wed) by davidstrauss (guest, #85867) [Link] (1 responses)

HTTP/3 may also provide more substantial improvements to less-than-ideal connections through the switch to UDP, which addresses head-of-line blocking issues in TCP:

https://en.wikipedia.org/wiki/Head-of-line_blocking#In_HTTP

So, even skepticism of HTTP/2 doesn't mean that all versions since HTTP/1.1 are pointless.

The future of NGINX

Posted Aug 26, 2022 19:11 UTC (Fri) by wtarreau (subscriber, #51152) [Link]

One point where H2 shines is the ability to close a stream without closing the connection. In H1 aborting a download (e.g. clicking Stop or pressing Esc while a site loads) needed to abort the connection. In H2 it's just an RST_STREAM frame that's sent. Also in H1 when you get a redirect during a POST the server has either to drain the whole body or to close the connection (or the client can decide to stop it). In H2, either side may simply send an RST_STREAM and proceed with a new request. These ones are nice improvements. Other points are painful such as the 9-byte header size that forces slow memcpy() everywhere or the need to delimit frames to 16kB, substantially annihilating the ability to splice().
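The 9-byte frame header mentioned above is fixed by RFC 7540 §4.1: a 24-bit payload length, an 8-bit type, 8 bits of flags, one reserved bit, and a 31-bit stream identifier. A minimal parser, as an illustration (the type-name table is abbreviated):

```python
# Abbreviated subset of the HTTP/2 frame types (RFC 7540 §6).
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x3: "RST_STREAM", 0x7: "GOAWAY"}

def parse_frame_header(buf: bytes):
    """Parse the 9-byte HTTP/2 frame header:
    length(24) | type(8) | flags(8) | R(1) + stream id(31)."""
    if len(buf) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(buf[0:3], "big")
    ftype, flags = buf[3], buf[4]
    stream_id = int.from_bytes(buf[5:9], "big") & 0x7FFFFFFF  # drop R bit
    return length, FRAME_TYPES.get(ftype, hex(ftype)), flags, stream_id

# Build the header of an RST_STREAM frame for stream 1
# (its payload is a 4-byte error code, hence length 4):
hdr = (4).to_bytes(3, "big") + bytes([0x3, 0x0]) + (1).to_bytes(4, "big")
```

Parsing `hdr` yields `(4, "RST_STREAM", 0, 1)`: cancelling one stream costs a 13-byte frame while the connection, and every other stream on it, stays up, which is exactly the improvement over HTTP/1's connection abort.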

The future of NGINX

Posted Aug 24, 2022 19:00 UTC (Wed) by flussence (guest, #85566) [Link] (2 responses)

That's my motivation too. We aren't all privileged with the decadence of an entire 100/100Mbit datacentre ethernet connection for a trendy static blog, and the last 20 or so times I checked, HTTP/1.1 pipelining (which, if we're going to fly into thought-terminating histrionics about who produces the standards, has Gates-Microsoft's sticky fingerprints all over it ;-) was only good for increasing the risk of request smuggling attacks.

The future of NGINX

Posted Aug 26, 2022 21:49 UTC (Fri) by barryascott (subscriber, #80640) [Link] (1 responses)

I am not seeing how pipelining is making request smuggling worse.

Isn’t request smuggling where you corrupt headers in the hope that the receiver will be tricked into pulling the wrong value out?

The future of NGINX

Posted Aug 27, 2022 16:51 UTC (Sat) by flussence (guest, #85566) [Link]

Pipelining added the possibility of a class of fencepost errors that previously couldn't exist (assuming the OS's socket API worked) because there's no longer a 1:1 alignment between headers and the stream/connection framing.

Ostensibly, yes, everyone should just write bug-free application server code and there'd be no problem, but remember this came at the beginning of Web 2.0™ — there was a perfect storm of fresh naive developers, a total lack of concurrency-safe languages, and overhyped page performance benchmarks like YSlow and PageSpeed. I think the obsession with gamifying micro-optimisation like that may have caused more collective long-term damage to the web than any single thing one can say about PHP or JavaScript.

Some systems got defensive by throwing `Connection: close` on everything, but that leads to the worst possible performance (serialised and TCP slow-start per URL).


Copyright © 2022, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds