
Introducing the Qt WebEngine

At the Digia blog, Lars Knoll announces that Qt has decided to migrate its web rendering engine from WebKit to Chromium. First among the reasons listed is that "Chromium has a cross-platform focus, with the browser being available on all major desktop platforms and Android. The same is no longer true of WebKit, and we would have had to support all the OS’es on our own in that project." Knoll also cites Chromium's better support for recent HTML5 features, and says that "we are seeing that Chromium is currently by far the most dynamic and fastest moving browser available. Basing our next-generation Web engine on Chromium is a strategic and long-term decision. We strongly believe that the above facts will lead to a much better Web engine for Qt than what we can offer with Qt WebKit right now."



Introducing the Qt WebEngine

Posted Sep 13, 2013 1:15 UTC (Fri) by salimma (subscriber, #34460) [Link]

And Chromium has historically been hard to package (in a distribution-guideline-compliant way) because Google devs build against modified versions of upstream libs.

Hopefully the core renderer itself does not suffer from too much of this problem, but at the very least this might nudge the Chromium team into accommodating distributions more (or upstream devs to accept changes faster).

Introducing the Qt WebEngine

Posted Sep 13, 2013 2:05 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

Besides that, is there a library and API to link against that doesn't require living in the Chromium repo? I have yet to see a libblink package or a chromium-devel one.

Introducing the Qt WebEngine

Posted Sep 13, 2013 15:57 UTC (Fri) by khim (subscriber, #9252) [Link]

Reuse of Chromium code in other projects is an explicit non-goal, so I doubt you'll ever see something like that.

Of course it's certainly possible to pull code from the Chromium repo and adapt it to your needs, but then you are responsible for keeping it in a working state.

Introducing the Qt WebEngine

Posted Sep 13, 2013 2:36 UTC (Fri) by geofft (subscriber, #59789) [Link]

Honestly, that's a problem for distros to solve. Distros are primarily in the business of making software work together in a sane way -- it's their responsibility, if possible, to figure out what functionality Chromium needs and add it to their package with a runtime flag to enable or disable it.

If that's not possible (which can certainly happen in some cases), then... why can't there be multiple compilations of a library in a distro with different patchsets?

Introducing the Qt WebEngine

Posted Sep 13, 2013 3:49 UTC (Fri) by rahulsundaram (subscriber, #21946) [Link]

Because it is massively inefficient? Distributions using different patchsets are a huge maintenance burden and create friction with upstream. Upstream really needs to make the effort to unbundle libraries and avoid forking projects instead of pushing this work onto multiple distributions to deal with.

Introducing the Qt WebEngine

Posted Sep 13, 2013 4:45 UTC (Fri) by torquay (guest, #92428) [Link]

No, upstream doesn't need to do anything to please distributions. Any project by definition has access to only a limited amount of time and effort.

If a given distribution (a separate OS really) has a certain internal policy it wants to enforce (such as "no bundled libraries"), then the distribution itself needs to put in the effort to:
(1) unbundle the libraries and carry a modified version of the upstream project,
or
(2) unbundle the libraries and provide the necessary patch sets to the upstream projects, where the plural indicates the upstream project as well as the upstream libraries.

If the OS is not willing (or not able) to do the above, it should shut up and either ship the upstream project (including all the bundled libraries) as is, or not ship the project.

Introducing the Qt WebEngine

Posted Sep 13, 2013 6:07 UTC (Fri) by Homer512 (subscriber, #85295) [Link]

Sounds like a real win-win scenario to me. -.-

Introducing the Qt WebEngine

Posted Sep 13, 2013 8:20 UTC (Fri) by pabs (subscriber, #43278) [Link]

I very much disagree. In case people reading this are involved in upstreams that at least intend to be distro-friendly, please take a look at the Debian upstream guide and the links available in it; my favourite is "How you know your Free or Open Source Software Project is doomed to FAIL" by Tom Callaway.

http://wiki.debian.org/UpstreamGuide

Introducing the Qt WebEngine

Posted Sep 13, 2013 8:41 UTC (Fri) by alexl (subscriber, #19068) [Link]

I'm pretty sure Chrome has not FAILed. In fact, it's likely more popular than all the Linux desktop distros combined.

Introducing the Qt WebEngine

Posted Sep 13, 2013 10:32 UTC (Fri) by rsidd (subscriber, #2582) [Link]

+1. If you're writing your own small program intended mainly for Linux users, the advice to use system libraries applies. But the people at Google know what they are doing. It may not be optimal for Linux distro packagers -- tough, but that's one reason why Google distributes Chrome directly. It may hamper security or it may improve security: if a vulnerability is discovered in a bundled library, Google can push an update, but can they depend on the user upgrading the system libraries? (Of course, if the user is quick with security updates and Google is slow, it hurts security. But I wonder which situation is more common.)

Introducing the Qt WebEngine

Posted Sep 13, 2013 11:24 UTC (Fri) by dsommers (subscriber, #55274) [Link]

This is utter nonsense.

It is completely possible for Google to provide Windows builds which bundle what's lacking on Windows. But then they need to bundle the upstream versions which are otherwise found on other platforms. On platforms with these packages, they don't need to do this bundling at all; they benefit from what's already available there.

That's exactly what OpenVPN does (and many other projects, I just have hands-on experience with OpenVPN). The same source code is used to build for all platforms. Distributions can compile and package OpenVPN with all external dependencies as needed. The Windows build bundles the needed libraries, as they don't exist on Windows.

I suspect the core reason why Google does what it does is that it wants features in these third-party libraries which are not present. So instead of submitting patches and co-operating with the upstream projects, they just bundle it all together and ignore the upstream projects if they reject it. Otherwise this bundling is just poor engineering.

Think back to what happened in the very beginning with the Linux kernel in Android: they realised, bitterly, that it was a stupid thing to do, because maintaining a code base that diverges too much from upstream gets painful, and they did not benefit from input and reviews of patches from a broader community. Google have changed their working methods now. They're co-operating far more closely with the kernel community to resolve issues. Yes, some of their patches get rejected, but they can at least now get input on how to make things better for a broader user base than just what Google needs. And many of their patches also get accepted.

Introducing the Qt WebEngine

Posted Sep 13, 2013 16:13 UTC (Fri) by khim (subscriber, #9252) [Link]

On platforms with these packages, they don't need to do this bundling at all; they benefit from what's already available there.

“Benefit”? Didn't you mean to say “Suffer”? If you need to bundle some library in some cases then it's much easier to bundle it in all cases and only ever deal with one version of said library. Only when the library never needs to be bundled can you benefit “from what's already available there”. Chromium allows some unbundling as a concession to the distribution writers, but make no mistake: this is a concession, not an advantage. In particular, if some bug can only be reproduced with a system-provided library and cannot be reproduced with a bundled one, it's considered a very low-priority bug.

I suspect the core reason why Google does what it does is that it wants features in these third-party libraries which are not present.

Nope. Google just wants to reduce the number of configurations which need testing. In fact it tries to keep changes to the third-party code to a minimum, but it wants to know the exact version of each library which is used and it wants to make sure different builds of Chrome behave similarly to each other.

Think back to what happened in the very beginning with the Linux kernel in Android: they realised, bitterly, that it was a stupid thing to do, because maintaining a code base that diverges too much from upstream gets painful, and they did not benefit from input and reviews of patches from a broader community.

Sure, but all these arguments only imply that you should try to upstream all the changes in bundled libraries, that's all. It's easy to carry and update bundled libraries if they are not changed, and it's much easier to only deal with one version of any library at any given time rather than try to deal with a bazillion versions spread over a bazillion distributions.

Bundling of libraries

Posted Sep 14, 2013 8:31 UTC (Sat) by oldtomas (guest, #72579) [Link]

“Benefit”? Didn't you mean to say “Suffer”?

This is called pissing in the pond. It's more comfy... as long as you are the only one doing it.

Imho it's only the collaboration of distros and upstreams that can make dreams come true. And free software might be the facilitator.

Introducing the Qt WebEngine

Posted Sep 13, 2013 16:35 UTC (Fri) by NAR (subscriber, #1313) [Link]

Think back to what happened in the very beginning with the Linux kernel in Android: they realised, bitterly, that it was a stupid thing to do, because maintaining a code base that diverges too much from upstream gets painful.

The kernel changes quite rapidly and the Android developers modified it extensively; that's why it is not a good idea to fork it. What about those packages that Chrome bundles: do they change that much and that often? If Google barely touches that bundled code and upstream makes a release only every other year, then this is not much of a burden.

Introducing the Qt WebEngine

Posted Sep 13, 2013 17:54 UTC (Fri) by dashesy (subscriber, #74652) [Link]

There is all this talk about bundled libraries having a higher chance of leaving security vulnerabilities unfixed; is there any pointer to quantify this? On OS X most applications bundle their libraries, but the security is not that bad. On the other hand, at least applications always work as the developers intend.

In general I think Google has far more resources and willingness to keep its flagship software secure than a maintainer working in her free time.

Chrome != Chromium

Posted Sep 13, 2013 10:43 UTC (Fri) by tialaramex (subscriber, #21167) [Link]

Just in case you were maybe unclear, Chrome is proprietary software available only from Google and thus the distributors don't care, whereas this is a proposed Free Software project based on Chromium.

It is of course very common for proprietary projects to build up enormous technical debt and then eventually collapse under the resulting maintenance burden; lots of dead operating systems demonstrate that you can do this on a far bigger scale than Chrome has, at least for a few years. And that's fine for them, but the distributions have to take a longer view. Debian is twenty years old.

Chrome != Chromium

Posted Sep 13, 2013 14:39 UTC (Fri) by torquay (guest, #92428) [Link]

    but the distributions have to take a longer view. Debian is twenty years old.
The idea of Debian might be twenty years old, but I doubt there is an actively maintained version of Debian that's 20 years old.

Chrome != Chromium

Posted Sep 15, 2013 10:17 UTC (Sun) by Jonno (subscriber, #49613) [Link]

>but the distributions have to take a longer view. Debian is twenty years old.

>The idea of Debian might be twenty years old, but I doubt there is an actively maintained version of Debian that's 20 years old.

Actually, it is 17 years old, and is called Debian GNU/Linux Unstable, or (since 2000) informally "sid".

Even if you only count the age of the current incarnation of unstable, created by branching off from the just-released Debian GNU/Linux 2.2 ("potato") in August 2000, it is still over 13 years old, which imho still requires "a longer view".

Note: Until December 2000 "unstable" was an alias for the upcoming stable release, branching off from the previous one after it was released. In December 2000 "testing" was created, branching off from unstable, and took over that role, but unstable is still maintained and is where all new development takes place, before changes gradually migrate to testing.

Introducing the Qt WebEngine

Posted Sep 13, 2013 8:48 UTC (Fri) by torquay (guest, #92428) [Link]

Let's not forget that the open source world is plagued with API instability (including deprecations). Bundled libraries are a way of working around that issue. Obviously not perfect nor the preferred approach, but it is effective.

A given software project aims to achieve certain functionality, and in the process may rely on libraries provided by 3rd parties in order to save effort. The amount of effort required to provide the functionality is already high. The further effort required to track API and ABI changes in the 3rd party libraries is beyond what many developers/firms can (or want to) handle.

This is one of the main reasons why RHEL exists, and why Debian freezes everything roughly once every two years. In contrast, if an overwhelmingly large set of the APIs within the stack were stable, we wouldn't need the entire freezing silliness.

Right now we either have: (1) API stability without the possibility of new APIs, or (2) API instability and new APIs. This is a seriously wrong picture.

Introducing the Qt WebEngine

Posted Sep 13, 2013 11:42 UTC (Fri) by pabs (subscriber, #43278) [Link]

> why Debian freezes everything

Debian makes releases (freezes) for people who don't want to upgrade their OS every day. I can't speak for RHEL but I expect it is the same. API/ABI changes are a relatively small part of that.

Introducing the Qt WebEngine

Posted Sep 13, 2013 12:16 UTC (Fri) by pizza (subscriber, #46) [Link]

> Debian makes releases (freezes) for people who don't want to upgrade their OS every day. I can't speak for RHEL but I expect it is the same. API/ABI changes are a relatively small part of that.

On an individual package or application level, there's very little API or ABI churn from version to version. But when you aggregate $largeamount of software together, there's quite a bit more API/ABI churn when looked at as a whole.

The problem people have is that they expect nothing to ever change (ie write once and forget) and that is a very naive attitude.

The real question is: Do you want to deal with small amounts of change continually, or large amounts of change all at once?

The general Linux (and distro) approach is the former, with the likes of RHEL providing the latter if you're willing to pay the price. And FWIW that price isn't just up-front $$$, but also the opportunity cost of missing out on (often much) newer features and the maintenance cost of backporting/bundling newer libraries or simply supporting old ones.

Introducing the Qt WebEngine

Posted Sep 13, 2013 13:06 UTC (Fri) by torquay (guest, #92428) [Link]

    The real question is: Do you want to deal with small amounts of change continually, or large amounts of change all at once?

How about neither? API stability means that current API doesn't break. This does not preclude new APIs being introduced, or existing APIs being extended.

Nothing is stopping the old function blah() living beside the new function foo() in an existing library. Nothing is stopping multiple versions of libraries being installed, and nothing is stopping multiple versions of a given program being installed in parallel. Nothing, apart from people breaking APIs willy-nilly, because they finally found the OneTrueWay (tm) of doing things this week.

The entire "old = stable" and "new = unstable" dichotomy has to stop.

Introducing the Qt WebEngine

Posted Sep 13, 2013 14:29 UTC (Fri) by pizza (subscriber, #46) [Link]

>How about neither? API stability means that current API doesn't break. This does not preclude new APIs being introduced, or existing APIs being extended.

You're asking for considerably more discipline from library authors. Even in the best of circumstances, that sort of thing isn't free, and only tends to come about over time as a project matures and gains a critical mass of users/contributors.

Nevermind that getting APIs right is *hard*, even for folks who have been doing it a long time. And most of us haven't. :)

But all of this really is a red herring -- the stuff that folks seem to be gnashing their teeth over isn't API/ABI instability of individual libraries per se, but rather paradigm shifts in how things are aggregated together -- e.g. the rise of systemd or the old Win9x vs WinNT dichotomy.

An application developer ends up needing to support both the old and new paradigms at build or runtime, and that's where the real angst lies.

Introducing the Qt WebEngine

Posted Sep 13, 2013 16:38 UTC (Fri) by khim (subscriber, #9252) [Link]

You're asking for considerably more discipline from library authors.

Well, not really. Note how he offered a very simple solution, too: Nothing is stopping multiple versions of libraries being installed, and nothing is stopping multiple versions of a given program being installed in parallel.

Even in the best of circumstances, that sort of thing isn't free, and only tends to come about over time as a project matures and gains a critical mass of users/contributors.

Wrong again. Such things don't suddenly arrive over time. OpenSSL is quite a "mature" library, but it does not offer a stable ABI because, well, it's designed this way. To offer a stable ABI you either need to design the library that way from the start or, alternatively, you must provide a few versions of the library installed side by side. Bundling is an option, too.

But if a library is not designed to provide a stable ABI and you insist that one particular version of said library must be used, well… that's just bad practice and we should stop doing that.

An application developer ends up needing to support both the old and new paradigms at build or runtime, and that's where the real angst lies.

Right. If application developers need to support both old and new paradigms then it means the application platform has failed—even Apple cannot do that (I know quite a few developers who stopped supporting their programs on iOS when Apple started demanding that the Retina display be natively supported, e.g.) and the Linux market is significantly less lucrative.

Introducing the Qt WebEngine

Posted Sep 13, 2013 18:04 UTC (Fri) by pizza (subscriber, #46) [Link]

> Well, not really. Note how he offered a very simple solution, too: Nothing is stopping multiple versions of libraries being installed, and nothing is stopping multiple versions of a given program being installed in parallel.

Okay, I'm confused -- are you agreeing with that statement? Because if you are.. then what exactly is the problem with "newer versions having an incompatible ABI/API?" After all, this describes the general status quo.

> But if a library is not designed to provide a stable ABI and you insist that one particular version of said library must be used, well… that's just bad practice and we should stop doing that.

I don't believe I've ever advocated for this, and for what it's worth I agree with you here.

> Right. If application developers need to support both old and new paradigms then it means the application platform has failed—even Apple cannot do that (I know quite a few developers who stopped supporting their programs on iOS when Apple started demanding that the Retina display be natively supported, e.g.) and the Linux market is significantly less lucrative.

I'm guessing those developers didn't derive any meaningful income from their iOS apps so walking away didn't really cost them much -- but it's hard to blame Apple for that.

Anyway -- You can't support The Old Way(tm) forever, and no matter how long you bend over backwards to do so, it's not going to be long enough.

Witness the flak that MS got when they finally removed the deprecated-for-a-decade Win16 subsystem with the release of Vista. As it turned out, there was a boatload of otherwise-compatible 32-bit code out there that used 16-bit installers. You just can't win sometimes.

(Sony is another poster child for the benefits of backwards compatibility (PS1->PS2) and its costs (PS2->PS3). And interestingly, they're completely ignoring it for the PS4.)

Introducing the Qt WebEngine

Posted Sep 13, 2013 20:35 UTC (Fri) by khim (subscriber, #9252) [Link]

Because if you are.. then what exactly is the problem with "newer versions having an incompatible ABI/API?"

There are no problems here.

But if a library is not designed to provide a stable ABI and you insist that one particular version of said library must be used, well… that's just bad practice and we should stop doing that. I don't believe I've ever advocated for this, and for what it's worth I agree with you here.

Unfortunately that's how distributions traditionally handled libraries (well, except for RHEL). Today Ubuntu does much better, but still there are occasional glitches when Lucid only includes libffi.so.5 and Precise only includes libffi.so.6. That means that to support both you need to bundle libffi with your program and this information was not known when Lucid was released!

I'm guessing those developers didn't derive any meaningful income from their iOS apps so walking away didn't really cost them much -- but it's hard to blame Apple for that.

As is typical for games, they only produce “meaningful income” in the first year. They still produce some income a few years down the road, but often not a big enough income to warrant the large amount of work required to “natively” support Retina. And come on, that's Apple! With hundreds of millions of devices sold! Linux distributions demand a similar amount of work for what… ten million users… twenty…

Anyway -- You can't support The Old Way(tm) forever, and no matter how long you bend over backwards to do so, it's not going to be long enough.

Sure. But Linux distributions don't bother to offer even minimal support. You cannot build one package that uses libffi and is usable on both Lucid and Precise. Both are currently supported, and both offer libffi in their core list of packages. That's the failure I'm talking about.

Witness the flak that MS got when they finally removed the deprecated-for-a-decade Win16 subsystem with the release of Vista. As it turned out, there was a boatload of otherwise-compatible 32-bit code out there that used 16-bit installers.

Witness how Microsoft solved this problem in Windows 7. Is it a perfect solution? Probably not: it only supports InstallShield 5.x, but not e.g. InstallShield 4.x. But it works, and it means people can continue to use these programs.

You just can't win sometimes.

Sure, but you can try to support as many users and as many developers as you can. The Linux distributions' answer was traditionally “just rewrite your stuff”. The situation is slowly changing, but it's still not even close to what Apple is doing, let alone what Google (with Android) and Microsoft (with desktop Windows) are doing. Microsoft in its arrogance did almost the same thing with Windows Phone, with the predictable result: 3-4% market share after billions (literally) spent on promotion.

Introducing the Qt WebEngine

Posted Sep 13, 2013 22:29 UTC (Fri) by pizza (subscriber, #46) [Link]

> Unfortunately that's how distributions traditionally handled libraries (well, except for RHEL). Today Ubuntu does much better, but still there are occasional glitches when Lucid only includes libffi.so.5 and Precise only includes libffi.so.6. That means that to support both you need to bundle libffi with your program and this information was not known when Lucid was released!

FWIW Fedora's also pretty good about this (I'm involved in an effort to properly package one library that just went through a major API change), but it's worth remembering that it takes time and effort (i.e. work) to make this sort of thing happen, and there's a severe shortage of folks who are able (or willing) to do that for free.

Introducing the Qt WebEngine

Posted Sep 16, 2013 3:10 UTC (Mon) by pizza (subscriber, #46) [Link]

> As is typical for games, they only produce “meaningful income” in the first year. They still produce some income a few years down the road, but often not a big enough income to warrant the large amount of work required to “natively” support Retina. And come on, that's Apple! With hundreds of millions of devices sold! Linux distributions demand a similar amount of work for what… ten million users… twenty…

Incidentally, this is the perfect example of why libraries shouldn't be bundled. If the application isn't generating sufficient income to make it worth the effort to stay on top of security-related updates (not even counting shifting sands like Apple's guidelines), we'll rapidly end up with a large pile of vulnerable stuff with nothing we can do about it.

Games may seem relatively low-risk, but when you consider the sheer number of libraries they employ, plus the fact that they're all networked these days.. it's a recipe for considerable mayhem. (Just ask any console vendor!)

Introducing the Qt WebEngine

Posted Sep 16, 2013 13:05 UTC (Mon) by khim (subscriber, #9252) [Link]

Games may seem relatively low-risk, but when you consider the sheer number of libraries they employ, plus the fact that they're all networked these days.. it's a recipe for considerable mayhem. (Just ask any console vendor!)

Zero remote exploits and a few locally-exploitable ones (usually related to [mis]handling of saves). Reason enough to live with a few mediocre games instead of many thousands of popular games? Ah… right… also some id games which you can enjoy years after release. Well… perhaps that's enough for you. Certainly not for me.

P.S. I specifically excluded most Linux games from my count because, you know, all these “World of Goo”s and “DOTA”s bundle just as many libraries as their MacOS or Windows counterparts.

Introducing the Qt WebEngine

Posted Sep 16, 2013 13:57 UTC (Mon) by pizza (subscriber, #46) [Link]

> Zero remote exploits and a few locally-exploitable ones (usually related to [mis]handling of saves). Reason enough to live with a few mediocre games instead of many thousands of popular games? Ah… right… also some id games which you can enjoy years after release. Well… perhaps that's enough for you. Certainly not for me.

Don't forget the iOS/Android model of advertising-supported games; most vulnerabilities are through that path, and most of the time the advertising libraries are bundled rather than system-wide, so each app/game has to be updated individually. There are considerably more than "zero" network-based vulnerabilities out there.

As for "locally-exploitable" save game handling, just ask Microsoft, Sony, and Nintendo about those consequences.

FWIW, I don't consider games any differently than anything else -- a large blob of third-party binary code is a large blob of third-party binary code. If it's not distributed in source form, you're pretty much SOL if anything goes awry.

Introducing the Qt WebEngine

Posted Sep 17, 2013 19:21 UTC (Tue) by khim (subscriber, #9252) [Link]

Don't forget the iOS/Android model of advertising-supported games; most vulnerabilities are through that path

Do you have some data to support such a claim? I'm sure there are some vulnerabilities in iOS/Android games, but I was under the impression that most games are not attacked via their [vulnerable] ad libraries. Instead, .apks with bundled trojans are pushed around.

Consoles don't have this problem because applications must be signed, thus it's not easy to push a modified game with an added trojan (or some other malware).

As for "locally-exploitable" save game handling, just ask Microsoft, Sony, and Nintendo about those consequences.

Why would you need to ask “Microsoft, Sony, and Nintendo”? You can just take a look at the news and you'll see that usually a vulnerable game is patched in a hurry and the exploit can only be used for a few days. And somehow bundled libraries are not a problem.

FWIW, I don't consider games any differently than anything else -- a large blob of third-party binary code is a large blob of third-party binary code. If it's not distributed in source form, you're pretty much SOL if anything goes awry.

Well, there is a large difference in the incentives. Most other programs can be written once and used for years, but few games have this property. Usually they are only interesting for a short time. That's why there are very few open-source games (mostly strategy games, which can be enjoyed longer).

Introducing the Qt WebEngine

Posted Sep 17, 2013 21:42 UTC (Tue) by pizza (subscriber, #46) [Link]

> Why would you need to ask “Microsoft, Sony, and Nintendo”? You can just take a look at the news and you'll see that usually a vulnerable game is patched in a hurry and the exploit can only be used for a few days. And somehow bundled libraries are not a problem.

Because in those cases it's not the *user* that's being attacked, it's the user doing the attacking! (think breaking/bypassing DRM)

Incidentally, Nintendo in particular doesn't bundle libraries; instead they bundle the entire OS with each game. (They may have finally changed this with the WiiU, however)

Introducing the Qt WebEngine

Posted Sep 17, 2013 23:08 UTC (Tue) by khim (subscriber, #9252) [Link]

Because in those cases it's not the *user* that's being attacked, it's the user doing the attacking! (think breaking/bypassing DRM)

I understand that. But the end result is exactly the same as what is achievable with Linux (or Windows, or any other) distributions: when a new exploit is found it can be used for a short time (by the NSA or other attackers in the case of Linux/MacOS/Windows/etc., by the end user who wants to run homebrew or pirated games on the PlayStation, Xbox or Wii) and then it's patched out and cannot be used any longer.

Incidentally, Nintendo in particular doesn't bundle libraries; instead they bundle the entire OS with each game.

They only bundle an upgrade. If a newer version of the OS is already present, it's not used. And they can patch their OS as easily as Microsoft or Sony. Their problem was basically a fatally insecure OS, not an inability to fix security vulnerabilities in games. Apparently the Wii U is much better in that regard.

Introducing the Qt WebEngine

Posted Sep 18, 2013 20:43 UTC (Wed) by Arker (guest, #14205) [Link]

"Unfortunately that's how distributions traditionally handled libraries (well, except for RHEL). Today Ubuntu does much better, but still there are occasional glitches when Lucid only includes libffi.so.5 and Precise only includes libffi.so.6."

So where's the problem?

It shouldn't be any problem to install both libraries, keep both up to date, and let each program call the one it needs. I don't use Ubuntu; what are they doing wrong here?

Introducing the Qt WebEngine

Posted Sep 18, 2013 21:15 UTC (Wed) by khim (subscriber, #9252) [Link]

"Unfortunately that's how distributions traditionally handled libraries (well, except for RHEL). Today Ubuntu does much better, but still there are occasional glitches when Lucid only includes libffi.so.5 and Precise only includes libffi.so.6."

So where's the problem?

Huh? What do you mean? You really don't see a problem with that?

It shouldn't be any problem to install both libraries, keep both up to date, and let each program call the one it needs. I don't use Ubuntu; what are they doing wrong here?

Exactly what I've said: Ubuntu Lucid only offers libffi5 while Ubuntu Precise only offers libffi6. Which means that it's not possible to create a package which is based on libffi and is installable on both Ubuntu Lucid and Ubuntu Precise (well, unless you start doing complex tricks with dlopen(3) and stuff). Worse: if you've used libffi in Ubuntu Lucid then your package will suddenly break if you try to use it on Ubuntu Precise (it'll probably work fine on an Ubuntu Precise which was created from Ubuntu Lucid by upgrade, which makes the whole mess even more confusing). And both Ubuntu Lucid and Ubuntu Precise are current (as in: currently supported) versions. Still don't see a problem?
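For what it's worth, the “complex tricks with dlopen(3)” alluded to look roughly like the sketch below: try the newer SONAME first, fall back to the older one, and resolve the needed entry points at run time instead of linking against a particular libffi at build time. This is only an illustration of the idea, not how any particular package actually handles it.

	/* Sketch: load whichever libffi SONAME the running distribution provides.
	 * Build with: cc fallback.c -ldl */
	#include <dlfcn.h>
	#include <stdio.h>

	int main(void)
	{
	    void *ffi = dlopen("libffi.so.6", RTLD_NOW);
	    if (!ffi)
	        ffi = dlopen("libffi.so.5", RTLD_NOW);
	    if (!ffi) {
	        fprintf(stderr, "no usable libffi: %s\n", dlerror());
	        return 1;
	    }

	    /* Resolve entry points by name; ffi_call is one of libffi's exports. */
	    void *sym = dlsym(ffi, "ffi_call");
	    printf("ffi_call resolved at %p\n", sym);

	    dlclose(ffi);
	    return 0;
	}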

Introducing the Qt WebEngine

Posted Sep 18, 2013 21:41 UTC (Wed) by pizza (subscriber, #46) [Link]

> Exactly what I've said: Ubuntu Lucid only offers libffi5 while Ubuntu Precise only offers libffi6.

You know, given that nobody's reported this oversight/omission/bug to Ubuntu, it's not surprising they haven't done anything about it.

I just checked; 41 reports referencing 'libffi', 15 for 'libffi5', and 23 for 'libffi6' (with some overlap). Only one of those may be related (#1004295) and it was reported in May 2012.

Introducing the Qt WebEngine

Posted Sep 18, 2013 21:56 UTC (Wed) by khim (subscriber, #9252) [Link]

That's why I've called it an “occasional glitch”, not a “huge problem”. Ubuntu does much better these days; “disappearing libraries” used to be a much bigger problem. Now they are mostly caused by the GNOME2-to-GNOME3 transition (which is a PITA, but is so well-known that I've decided not to beat this dead horse).

Introducing the Qt WebEngine

Posted Sep 18, 2013 22:17 UTC (Wed) by Arker (guest, #14205) [Link]

"Ubuntu Lucid only offers libffi5 while Ubuntu Precise only offers libffi6. Which means that it's not possible to create package which is based on libffi and is installable on both Ubuntu Lucid and Ubuntu Precise (well, unless you'll start doing complex tricks with dlopen(3) and stuff). Worse: if you've used libffi in Ubuntu Lucid then your package will suddenly break if you'll try to use it on Ubuntu Precise (it'll probably work fine on Ubuntu Precise which was created from Ubuntu Lucid by upgrade which makes the whole mess even more confusing). And both Ubuntu Lucid and Ubuntu Precise are current (as in: currently supported) versions. Still don't see a problem?"

Sounds like the problem is the package management system.

Introducing the Qt WebEngine

Posted Sep 13, 2013 16:29 UTC (Fri) by khim (subscriber, #9252) [Link]

The real question is: Do you want to deal with small amounts of change continually, or large amounts of change all at once?

Wrong question. The right question is: who decides when to deal with change? Sane approach: the developer decides when to upgrade (usually when a new version of the software is pushed out and you can deal with a temporary period of instability). Insane (but FOSS-favored) approach: someone else decides when it must be done—distributor, library author, etc. Not the developer. It was already discussed years ago.

I find this attitude puzzling. Everyone else does it the same way. When TV goes from SDTV to HDTV there is a plethora of converters which guarantee that you can use old TV sets with the upgraded cable system. When people replace copper with optical fibre they make sure old phones are still usable.

But when FOSS developers issue a new ABI-incompatible version of a library they expect that others will change their code to make it compatible with the new version right away. Why? Who gave them this right? It's their responsibility to create and support compatibility shims. And it's a good idea to keep them alive for a few years at least.

Introducing the Qt WebEngine

Posted Sep 16, 2013 5:22 UTC (Mon) by FranTaylor (guest, #80190) [Link]

you are assuming that the changes are only to the API itself

if the semantics behind the APIs change then shims probably won't work

shims are just more opportunities for bugs and security issues

Introducing the Qt WebEngine

Posted Sep 17, 2013 22:58 UTC (Tue) by khim (subscriber, #9252) [Link]

if the semantics behind the APIs change then shims probably won't work

Sure. And the article under discussion is a perfect example: If you use the QObject bridge or the QWebElement API, we recommend you wait a bit longer with moving, as a replacement API for those will most likely not be part of the first version of Qt WebEngine.

It does not matter how you support your old API—by using shims, old libraries or even keeping a more-or-less complete version of the old OS. The only thing that matters: you must do that. Users and application developers must be in control, not library writers and packagers. The question is always about proportion: if some outliers are not ready to go along with your changes then you can always ignore them, but if the majority of users revolt then another platform can replace yours easily.

Introducing the Qt WebEngine

Posted Sep 18, 2013 10:21 UTC (Wed) by krake (subscriber, #55996) [Link]

Exactly.

Which is why libraries with a strict stability policy, like Qt in this case, will add new functionality while preserving the old one, even if that means maintaining the old one until an so-name change allows its removal.

Libraries

Posted Sep 13, 2013 15:30 UTC (Fri) by tialaramex (subscriber, #21167) [Link]

There are two ways to think about libraries.

One way is the Unix C library way (or the standard libraries of other languages, such as Java's standard class library, or the Python standard library), which is like the SI units. The SI units are stable, dependable on the whole. There is no danger that a metre will be doubled in length overnight or made into a unit of time. There are changes, but the changes are heavily signposted and quite modest in scope: the second has been re-defined, but only slightly, and it didn't affect the user of an alarm clock, or even the manufacturer of an alarm clock; only those few people who cared very precisely about the exact definition are affected. But they are _unavoidably_ affected: the second cannot remain the same for them while changing for everyone else.

I presume that no-one doubts such "standard libraries" are useful? And that no-one thinks we should be including them in every individual piece of software again as we might have done twenty years ago?

Another way is the Boost approach, where the library is seen as like a reference book of quotes. You find a quote you like, paste it into your document. Done. New editions of the book might hold different quotes, or change the attribution on a quote, but you probably don't care. Nobody expects that updating the reference book will fix all the places where authors have pasted a quote into another work. New versions of an algorithm with very different behaviour can be incorporated with no notice, or with only some small print in a Changelog. On the downside, if the reference contains a mistake, it will infect everything. Abraham Lincoln will be forever associated with the observation that not everything you read on the Internet is true. Huh.

These libraries have their place, but are they a good substitute for standard libraries? Did C++ benefit from the existence of a thousand similar yet different String classes rather than just one, standard class that did 99% of what most people would want?

Libraries

Posted Sep 19, 2013 9:10 UTC (Thu) by justincormack (subscriber, #70439) [Link]

The Unix C library however has no defined ABI and implementations of it do not always offer ABI stability.

Libraries

Posted Sep 20, 2013 20:36 UTC (Fri) by jhoblitt (subscriber, #77733) [Link]

Yes, but glibc has very effective symbol versioning, which is proof that ABI stability is possible with enough motivation.
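As a rough sketch of the technique (the library name, symbols and version nodes below are invented, not glibc's): two implementations of the same function coexist in one shared object, binaries linked earlier keep binding to the old version node, and newly linked ones get the new default.

	/* example.c -- built with something like:
	 *   gcc -shared -fPIC -Wl,--version-script=example.map -o libexample.so.1 example.c
	 * where example.map declares the EXAMPLE_1.0 and EXAMPLE_2.0 version nodes. */

	int example_fn_v1(int x)        { return x + 1; }   /* old semantics */
	int example_fn_v2(int x, int y) { return x + y; }   /* new semantics */

	/* Old binaries that bound to example_fn@EXAMPLE_1.0 keep getting v1;
	 * the "@@" marks v2 as the default for anything linked from now on. */
	__asm__(".symver example_fn_v1, example_fn@EXAMPLE_1.0");
	__asm__(".symver example_fn_v2, example_fn@@EXAMPLE_2.0");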

Libraries

Posted Sep 24, 2013 11:24 UTC (Tue) by tialaramex (subscriber, #21167) [Link]

Things are a lot better than they were historically. On x86-64 we have ABI compatibility between compilers, which means that if you've got a C data structure definition you've de facto got an ABI promise on the actual in-memory layout and alignment of that structure, and if you've got a C function prototype you can call an implementation of that function compiled by a different compiler and expect it to work as intended.

That means a lot of standard API from the C library becomes an ABI without any extra lifting. But you are right that in principle you aren't promised ABI stability even though glibc largely delivers it.

Introducing the Qt WebEngine

Posted Sep 13, 2013 16:35 UTC (Fri) by clopez (subscriber, #66009) [Link]

Completely agree.

I think FLOSS library developers should learn from the kernel development mantra: breakage of the API/ABI is never, *ever* allowed, even between major releases. You can add things to the API/ABI, but you can't remove or modify what is already there.

I think that Linux, as a kernel, is so successful thanks to this.

Introducing the Qt WebEngine

Posted Sep 13, 2013 17:05 UTC (Fri) by khim (subscriber, #9252) [Link]

I think FLOSS library developers should learn from the kernel development mantra: breakage of the API/ABI is never, *ever* allowed, even between major releases. You can add things to the API/ABI, but you can't remove or modify what is already there.

That's too harsh. You can only ever have one kernel. You can install a dozen libraries in parallel. Thus it's fine for a newer version of a library to have an incompatible ABI, but yes, if that happens then both versions must be included in the distribution.

I think that Linux, as a kernel, is so successful thanks to this.

That's certainly its major advantage. Since the *BSDs only keep compatibility at the libc ABI level you cannot just take their kernel and combine it with dietlibc to create a small distribution. But for the desktop or phone you need a somewhat richer platform than what the bare kernel offers.

Introducing the Qt WebEngine

Posted Sep 13, 2013 17:47 UTC (Fri) by clopez (subscriber, #66009) [Link]

> You can install a dozen libraries in parallel. Thus it's fine for a newer version of a library to have an incompatible ABI, but yes, if that happens then both versions must be included in the distribution.

That is only fine if the breakage implies an upgrading of the SONAME. The problem is that *many* libraries introduce API/ABI breakages without upgrading the SONAME. Just check http://upstream-tracker.org/statistics.html
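To illustrate the kind of breakage that tracker catches, here is a hypothetical example (library and struct names invented): the SONAME stays libfoo.so.1 across the release, but a public struct quietly grows a field, so binaries compiled against the old header pass a structure whose size and layout no longer match what the library expects.

	/* Hypothetical public header of "libfoo"; SONAME is libfoo.so.1 throughout. */

	/* libfoo 1.2 -- what existing binaries were compiled against. */
	struct foo_options {
	    int verbosity;
	    int timeout_ms;
	};

	/* libfoo 1.3 -- same SONAME, but the layout changed: old callers still
	 * pass the smaller struct above, while the library now reads use_color
	 * and looks for timeout_ms at a different offset. */
	struct foo_options_v13 {
	    int verbosity;
	    int use_color;    /* inserted field shifts timeout_ms */
	    int timeout_ms;
	};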

Introducing the Qt WebEngine

Posted Sep 13, 2013 18:08 UTC (Fri) by pizza (subscriber, #46) [Link]

> Just check http://upstream-tracker.org/statistics.html

Oh wow, that's an excellent resource. Thanks for the link!

Introducing the Qt WebEngine

Posted Sep 13, 2013 19:52 UTC (Fri) by khim (subscriber, #9252) [Link]

The problem is that *many* libraries introduce API/ABI breakages without upgrading the SONAME.

If a library does not bother to bump its SONAME when the ABI changes then it should not have a SONAME in the first place and should only be offered as a .a file. Problem solved.

Introducing the Qt WebEngine

Posted Sep 16, 2013 23:01 UTC (Mon) by nix (subscriber, #2304) [Link]

Most of these breakages are accidental, and are generally reverted as soon as discovered. Here and there you also have removal of a function impossibly hard to maintain that almost nobody was using, without bumping soname because the cost would be higher to the vast majority of non-users than the cost to the vanishingly small proportion of users: theoretically problematic, but in practice nobody but X and glibc bother to keep functions as useless as strfry() around just because of ABI stability concerns. So, occasionally, the generally good rules of library soname conformance are violated intentionally.

Introducing the Qt WebEngine

Posted Sep 17, 2013 22:52 UTC (Tue) by torquay (guest, #92428) [Link]

    Here and there you also have removal of a function impossibly hard to maintain that almost nobody was using, without bumping soname because the cost would be higher to the vast majority of non-users than the cost to the vanishingly small proportion of users:

Sorry, that's just plain wrong, and frankly, dangerous and arrogant behaviour (on the part of the (ir)responsible developers, not the poster named nix). Firstly, how does one reliably measure how "small" this proportion of users is, and secondly, why is this particular portion of users any less important?

For function blah() that's deemed "unimportant", user set A will be affected. For function foo() that's also deemed "unused", user set B is affected. There may be no overlap between sets A and B. Pretty soon all the "unimportant" and "unused" functions that get removed start to affect more and more people. Ergo, API/ABI instability.

An API break is an API break, no matter how it's dressed up. For API and ABI stability, no API or ABI is changed or removed within a major version. There is no other definition.

    So, occasionally, the generally good rules of library soname conformance are violated intentionally.
and thinking like that gets us on a very slippery slope, resulting in the current API/ABI instability we have in the distros. The best solution? Don't go on that slippery slope. DO NOT CHANGE THE API/ABI WITHOUT CHANGING THE SONAME. It's as simple as that.

Introducing the Qt WebEngine

Posted Sep 18, 2013 0:11 UTC (Wed) by nix (subscriber, #2304) [Link]

Firstly, how does one reliably measure how "small" this proportion of users is, and secondly, why is this particular portion of users any less important?
Generally, what is done is to grep an entire distro's source code for users (sometimes more than one). If nobody's used it, it's probably unused. If Google further returns no uses, or, worse, if every use that's shown is used wrong and introduces a security hole (as has happened before), it's best to drop the damn unusable function and write it off as an API design error. Then, as so often in real-world engineering, it's a tradeoff: do you inconvenience *every single user of the library*, forcing a recompile for the sake of the dropping of a function that essentially none of them were using, or do you inconvenience a couple of users who are so far in the dark that no source is visible, or who are, in all likelihood, using the function wrong anyway (because everyone else was)? I know what I'd pick.

Now the right thing to do here is obviously to use symbol versions and just make the symbol un-linkable against in the next release, but despite having been in glibc for a decade and a half and having been in Solaris in a cruder form for longer, almost no upstreams seem to use symbol versions to any real degree, and you can't add them retroactively, so they're left with the unfortunate Hobson's choice above.

Introducing the Qt WebEngine

Posted Sep 18, 2013 1:29 UTC (Wed) by torquay (guest, #92428) [Link]

    Generally, what is done is to grep an entire distro's source code for users (sometimes more than one). If nobody's used it, it's probably unused. If Google further returns no uses
Sorry, this is unreliable. Probably doesn't mean definitely. Not all source code is open. This doesn't simply mean proprietary source code, but stuff sitting in private repos that's only used internally. Even code that's in a user's home directory, where the user doesn't want to share it.
    If Google further returns no uses, or, worse, if every use that's shown is used wrong and introduces a security hole (as has happened before), it's best to drop the damn unusable function and write it off as an API design error.
It is still wrong to break the API. If every use is wrong, then provide a workaround within the code behind the API, where the wrong use is detected and passed to a different (safer) code path. Removing APIs for whatever reason, without bumping the major version number, is simply a no-no.

Working software is much more preferable to non-working software, even if such working software may have a security issue. Let the user decide how to deal with the security issue (eg. running in a sandbox, modifying the code, etc). It's not up to the library developer to break stuff that previously worked.

If one really wants to get pedantically philosophical about it, let's look at it from the freedom point of view, in the sense of Stallman's software freedom. Such freedom also involves users having the freedom to run software in whatever manner they want. We can certainly encourage them to upgrade to a newer version of the library, but breaking working software amounts to taking the users' freedom away.

Introducing the Qt WebEngine

Posted Sep 18, 2013 2:10 UTC (Wed) by khim (subscriber, #9252) [Link]

Removing APIs for whatever reason, without bumping the major version number, is simply a no-no.

Unless you actually have users for the ABI you don't know if it works or not. It's a pretty common situation that an ABI breaks for whatever reason (everyone does that: the Linux kernel, glibc, GTK+, Windows, MacOS, etc. - the only way to keep an API 100% compatible is to never change anything, and this option is always available: just use old software and never upgrade and you are golden), and then you need users to actually notice the breakage and fix it. If you don't have users then it's impossible to keep it in a working state anyway, thus it's OK to remove it and deal with never-before-seen users on a case-by-case basis. As Linus famously put it: Breaking user space is a bit like trees falling in the forest. If there's nobody around to see it, did it really break?

Working software is much more preferable to non-working software, even if such working software may have a security issue.

Right, but the only way to keep an ABI in a working state is to exercise it. If nobody does that then sooner or later it'll be broken. It's more honest to break it explicitly rather than send users on a wild goose chase if you cannot find anyone who uses it.

P.S. What I also find puzzling is that the whole discussion talks about API breakage. Why was the subject changed from ABI breakage? Yes, it's nice to keep API compatible, but it's so hard that nobody does that. Linux kernel, glibc, even Windows! And it's of little benefit to the typical user: if one knows how to compile stuff from scratch then one [hopefully] knows how to adjust the source to reflect an API change (otherwise what's the point?), but ABIs are often exercised by people who know nothing about programming (and don't want/need to know anything about programming!).

Introducing the Qt WebEngine

Posted Sep 18, 2013 22:44 UTC (Wed) by nix (subscriber, #2304) [Link]

Yes, it's nice to keep API compatible, but it's so hard that nobody does that. Linux kernel, glibc, even Windows!
It's harder than hard. I would venture to suggest that it is borderline impossible to keep an ABI completely compatible under the C linkage model (or any model without a packaging system at the soname boundary *and* some sort of scheme to prevent package name collisions).

The problem is that even the apparently ABI-compatible and routine operation of introducing new functions might break arbitrary other packages because of symbol name clashes. An example was provided a few years ago by the new getline() function in glibc. Yes, it's really useful, and now in POSIX -- and it's a name lots of packages, from TeX up, had already used for many years. All of those packages were broken, no longer compiled, and had to change. (If glibc hadn't used symbol versioning pervasively, non-hidden uses of any such names in shared libraries would have suddenly referred to the glibc symbol without even a recompilation, breaking things even more badly. -z now fixes this to some degree, but not entirely.)
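A small reconstruction of that getline() clash (the project-side code is invented for illustration): once <stdio.h> started exposing the POSIX getline() by default, any project that had used the same name for its own helper stopped compiling, because the two declarations conflict.

	/* Sketch of the clash described above; this translation unit is
	 * expected to fail to compile on a system whose <stdio.h> declares
	 * the POSIX getline(). */
	#define _GNU_SOURCE      /* or any recent POSIX feature level */
	#include <stdio.h>

	/* A project's long-standing private helper... now rejected with
	 * "conflicting types for 'getline'", since glibc declares
	 *   ssize_t getline(char **lineptr, size_t *n, FILE *stream); */
	char *getline(FILE *fp);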

Introducing the Qt WebEngine

Posted Sep 18, 2013 3:31 UTC (Wed) by pizza (subscriber, #46) [Link]

> If one really wants to get pedantically philosophical about it, let's look at it from the freedom point of view, in the sense of Stallman's software freedom. Such freedom also involves users having the freedom to run software in whatever manner they want. We can certainly encourage them to upgrade to a newer version of the library, but breaking working software amounts to taking the users' freedom away.

Oh come now, that interpretation of "Software Freedom" misses the whole point!

(see http://www.gnu.org/philosophy/free-sw.html)

The FSF's definition of freedom is one of *empowerment*, not convenience -- Users will always have at their disposal the (technical & legal) means to support themselves -- even if they lack sufficient knowledge/skill to do so, they could hand it all to someone who does.

What does the $std_disclaimer say? "This software is provided as-is, in the hope it will be useful, but without even the implied warranty of merchantability or fitness for a particular purpose."

Once a particular bit of Free Software is released, there is precisely *zero* obligation on the part of its authors to ever look at it again or support anyone in its use.

Introducing the Qt WebEngine

Posted Sep 18, 2013 8:29 UTC (Wed) by torquay (guest, #92428) [Link]

    "This software is provided as-is, in the hope it will be useful, but without even the implied warranty of merchantability or fitness for a particular purpose." Once a particular bit of Free Software is released, there is precisely *zero* obligation on the part of its authors to ever look at it again or support anyone in its use.

Sure, and if we follow this line of thinking, developers have precisely zero obligation to link with shared libraries instead of simply bundling them. This is what started the entire discussion in the first place.

But then distros start moaning about "bundling can be bad from a security point of view". No dispute there (unless the developers can actually provide security fixes in the bundled libraries faster than or on par with the distro developers, as illustrated by khim). So the distros want unbundled libraries, but they are not willing to provide a stable API & ABI within the libraries. The developers think this is a perverse deal, and decide to bundle the libraries.

Chicken and egg? No. The distros can stop this vicious cycle by enforcing strict policies for API & ABI stability, and this is not exactly hard to do. It might be even more palatable within a rings/layers framework.

Introducing the Qt WebEngine

Posted Sep 18, 2013 11:11 UTC (Wed) by lsl (subscriber, #86508) [Link]

> The distros can stop this vicious cycle by enforcing strict policies for API & ABI stability

Why do you think it's the distribution's job? If the upstream authors constantly break the ABI it's just not maintainable for a distro to offer a stable ABI anyhow. Sure, they shouldn't make it worse than it already is.

As an application developer why not pick your dependencies according to the desired interface stability? But yeah, it has to be the newest fancy (but constantly breaking) Ruby stuff instead of something else where the devs at least think about compat issues (Perl? Tcl?). ;-)

Introducing the Qt WebEngine

Posted Sep 18, 2013 11:54 UTC (Wed) by mpr22 (subscriber, #60784) [Link]

If the distros started routinely jettisoning libraries whose upstreams refuse to start (or consistently prove themselves incapable of) practising good interface hygiene, maybe the upstreams would get the message that this stuff matters.

Introducing the Qt WebEngine

Posted Sep 18, 2013 12:53 UTC (Wed) by pizza (subscriber, #46) [Link]

> If the distros started routinely jettisoning libraries whose upstreams refuse to start (or consistently prove themselves incapable of) practising good interface hygiene, maybe the upstreams would get the message that this stuff matters.

Or, in the real world, users would rapidly abandon said distro.

If you're going to demand others to do work on your behalf, you'd better compensate them somehow, because "exposure" doesn't pay the bills.

Introducing the Qt WebEngine

Posted Sep 18, 2013 13:14 UTC (Wed) by torquay (guest, #92428) [Link]

The distros can still push back by refusing to incorporate an updated library until its API/ABI breaks are fixed, and staying with the previous version of the library.

If a given distro doesn't want to push back, then this can be clearly interpreted that the distro doesn't care about API & ABI stability. In that case, the distro has no right to complain if projects bundle libraries in order to achieve API & ABI stability.

Introducing the Qt WebEngine

Posted Sep 18, 2013 14:03 UTC (Wed) by rahulsundaram (subscriber, #21946) [Link]

Distributions don't update to a new version of a library just for the sake of doing so. It is usually because applications require it, and holding back on updating an application can have serious negative consequences (e.g. unmarked security fixes in newer versions). If upstream is breaking compatibility, make them responsible for fixing it instead of having distributions work around it. If you want distributions to work around it, don't complain that they have control. You can't have it both ways.

Introducing the Qt WebEngine

Posted Sep 18, 2013 14:45 UTC (Wed) by khim (subscriber, #9252) [Link]

If upstream is breaking compatibility, make them responsible for fixing it instead of having distributions work around it.

WTF? If upstream is breaking compatibility but carries libraries with them then it's not a problem: they don't need to deal with ABI instability because they always deal with exactly one version of the library. If distribution makers insist on using system libraries then ABI breakage is their problem, not upstream's problem! If they cannot provide a stable ABI then they can always just refuse to provide a .so version of the library and ship only a .a version instead.

You can't have it both ways.

Exactly. If distribution makers ask people to use system libraries (for security or other reasons) then they should make sure these libraries have a usable and thus stable ABI (not just in a single distribution but preferably among most if not all distributions); if they cannot do that then they should stop insisting on the use of these libraries.

Distributions were created eons ago basically to hide that mess under the carpet, and that works to some degree. But only in the limited “we can always recompile the world” ivory tower. In the real world this basically precludes third-party apps from being developed, and when they are developed they bundle everything they can to solve this dependency problem, which then leads to complaints from packagers.

Introducing the Qt WebEngine

Posted Sep 18, 2013 15:19 UTC (Wed) by rahulsundaram (subscriber, #21946) [Link]

"WFT? If upstream is breaking compatibility but carries libraries with them then it's not a problem"

It is a problem unless they are maintaining it properly, and even then the additional resources (disk space, for instance) come at a cost. The zlib fiasco was the reason why distributions started focusing on unbundling in the first place, and many upstream projects *do not* do even a remotely reasonable job of dealing with what they bundle. They are essentially mini distributions.

Introducing the Qt WebEngine

Posted Sep 18, 2013 17:15 UTC (Wed) by khim (subscriber, #9252) [Link]

Many upstream projects *do not* do even a remotely reasonable job of dealing with what they bundle

…but they do a reasonable job of dealing with their own code? Sorry, but I don't buy that. If a project bundles a bunch of libraries and then does not update them when vulnerabilities are found in those libraries, why do you believe it'll deal correctly with vulnerabilities in the project's own code?

The zlib fiasco was the reason why distributions started focusing on unbundling in the first place

This was a natural knee-jerk reaction, but it only made the situation worse. Now developers have to cope with API changes instead of doing a remotely reasonable job of dealing with what they bundle. The result is basically a system which is neither secure nor stable.

Introducing the Qt WebEngine

Posted Sep 18, 2013 17:34 UTC (Wed) by rahulsundaram (subscriber, #21946) [Link]

"If project bundles bunch of libraries and then does not update them when vulnerabilities are found in these libraries then why do you believe it'll deal correctly with vulnerabilities of the code of the project itself?"

It is an observation and a question of degree. You don't necessarily know the security implications of all the code you bundle (and sometimes fork) as well as you know the code you have written from scratch.

Btw, zlib wasn't the only such case. There have been very many examples since then. It can work out both ways, sure, but it is naive to pretend that bundling doesn't have a cost to it.

Introducing the Qt WebEngine

Posted Sep 20, 2013 7:16 UTC (Fri) by kleptog (subscriber, #1183) [Link]

All this discussion and it seems to me that people have forgotten the reason distributions exist at all.

You have project Foo which has a number of developers which care very deeply about Foo, but not so much about other projects.

You have project Bar which has a number of developers which care very deeply about Bar, but not so much about other projects.

And you have user X, which cares about both Foo and Bar.

Project Foo develops rapidly and decides to use a new version of Baz. Unfortunately, this new version fixes a bug, and that fix causes Bar to no longer work. Whose fault is this?

The answer is that it's not really anybody's fault. But as user X, who wants to use Foo and Bar, you're screwed. It's not just shared library changes that trip you up. New versions of compilers, linkers, etc. are good at it too. A bugfix in automake might cause some packages that used to build fine to suddenly break. New versions of gcc/g++ reveal all sorts of issues.

Now multiply the problem by 10,000 upstreams and you have a serious issue. So some users got together to see if they could solve the integration issue in one place so not every user had to deal with every possible problem themselves, but could share and push back on upstreams to get their act together. Enter the distributions.

Distributions are there primarily to solve the integration problem. The role of actually distributing binaries is less important, which is why we also have source only distributions (Gentoo). And people complain they're not good platforms to release proprietary software on. No surprise, since that was never the goal.

People who say distributions get in the way were clearly not around 15 years ago when upstreams were horrible. Tarballs packaged in weird ways, unclear licensing, builds that only worked in some directories, hardcoded library names, DESTDIR was not supported, FHS did not exist, etc... It was horrible. The fact that most upstream software these days can actually be built by normal users without a distribution is a testament to the work that distributions have done to teach upstreams how to distribute software properly.

In fact, the sheer number of distributions points to how much better it has gotten. But that's because the elders (Debian, Red Hat, etc.) did all the hard work.

An interesting experiment would be to take the whole of archive.debian.org and plot the average size of the debian/rules file over the years. My guess would be a dramatic drop, showing better upstream packaging.

I don't think we can lose the distributions, because I'm pretty sure that without their pushback, upstreams will devolve back to their pre-2000 distribution methods and we'll never get any work done.

Introducing the Qt WebEngine

Posted Sep 20, 2013 8:28 UTC (Fri) by renox (subscriber, #23785) [Link]

>But as user X who wants to use Foo and Bar you're screwed.

As torquay wrote, if you can have Baz version n and version n+1 installed in parallel, all is good for the user.
Except when Baz's bugfix is a security bugfix of course..

Introducing the Qt WebEngine

Posted Sep 20, 2013 9:59 UTC (Fri) by kleptog (subscriber, #1183) [Link]

As torquay wrote, if you can have Baz version n and version n+1 installed in parallel, all is good for the user.
It's a start, but not enough. You need the SONAMEs to be different and also symbol versioning (which didn't exist 15 years ago). For example, the NSS modules were a particular problem. If the list of users was stored in LDAP, then potentially any program on the system could find itself linked against some version of libldap, and you can't determine this at build time.

Who is it that tells upstreams when they broke stuff? The distributions. Who teaches upstreams why symbol versioning is important? The distributions. Upstream developers, by and large, don't care whether their software is compatible with anyone else's. Users though, they do care.

But you're still talking shared libraries. Being able to parallel install the headers for two different versions of a library is very uncommon. If a program does #include <bar.h>, which Bar are they referring to? Should we be patching every include directive in existence to specify a library version?
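
(The usual dodge, as GLib and friends do it, is to version the include directory rather than every directive and let pkg-config pick one at build time; roughly, with a hypothetical "bar" library, something like the sketch below. It works, but the complexity just moves into every build system.)

    /* app.c -- the directive itself stays unversioned */
    #include <bar.h>

    int main(void)
    {
        return bar_version();   /* hypothetical function from libbar */
    }

    /*
     * The headers get installed into versioned directories:
     *     /usr/include/bar-1/bar.h
     *     /usr/include/bar-2/bar.h
     * and each application picks one at build time, e.g.:
     *     gcc app.c $(pkg-config --cflags --libs bar-2) -o app
     * which expands to roughly -I/usr/include/bar-2 -lbar-2.
     */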

Making everything parallel installable is an exercise of solving one problem by massively increasing complexity in another direction. You have to make a trade-off somewhere.

Introducing the Qt WebEngine

Posted Sep 20, 2013 13:52 UTC (Fri) by khim (subscriber, #9252) [Link]

All this discussion and it seems to me that people have forgotten the reason distributions exist at all.

Perhaps some of them did. But some still remember (me included). SLS and Yggdrasil existed for one sole purpose: to make it possible to install Linux on a new computer somehow. They had no dependency tracking (yes, seriously: you were expected to install whatever packages you needed and not forget to install all the pre-requisites, too). They did not pretend that what they were doing was intended to bring world peace^H^H^H^H^Hfree software to the masses. They just offered a ready-to-use UNIX-like OS (at a time when UNIX was still something people craved and not something people feared): nothing more, nothing less.

People who say distributions get in the way were clearly not around 15 years ago when upstreams were horrible.

Why do you think so? I was there and I still remember the times when distributions were happy to grab all sorts of stuff and pack shareware and often commercial programs (like XV, and even proprietary things like Abuse or Doom). Distributions were really helpful back then: they acted in a manner similar to Simtel or CTAN and did not try to impose rules on the upstream. Sure, they offered their changes to upstream, but it was up to upstream to accept the offer or reject it.

The fact that most upstream software these days can actually be built by normal users without a distribution is a testament to the work that distributions have done to teach upstreams how to distribute software properly.

Don't overestimate this effect, please. The guys who wrote Metaconfig and Autoconf did most of the work. Sure, some of the same people worked as packagers, but most of them didn't. And still today important packages (like bzip2 or perl5) don't support DESTDIR and/or use hardcoded paths (try creating a /usr/local/scripts directory on your system and see how it affects the perl build).

Worse: the rise of distributions made them the gatekeepers, and they started abusing their power. For years they claimed that users should not install random programs from the internet and should use distribution repositories instead, yet they offered no sane way to add programs to the distros. Not only do they refuse to package binary-only programs (which they happily did 15 years ago), they offer a “my way or the highway” dilemma to upstream, and when upstream chooses “highway” (as Google did with Chrome and Chromium) it raises a huge racket.

I don't think we can lose the distributions, because I'm pretty sure that without their pushback, upstreams will devolve back to their pre-2000 distribution methods and we'll never get any work done.

Sure. But they should decide for themselves what they are trying to produce: a pure “free software” system which can be used only on antiquated hardware (or, alternatively, in emulators), or something for the rest of the world.

For over a decade distributions claimed that they were trying to bring about the elusive “year of the Linux desktop”, yet did everything possible to make sure it would not happen.

Now the “year of the Linux desktop” is closer than ever, yet, ironically enough, it's not because of distributions but in spite of them. Someone else has created a usable distribution channel. We'll see how this plays out, but it'll be somewhat ironic to see it succeed after 20 wasted years, don't you think?

Introducing the Qt WebEngine

Posted Sep 18, 2013 14:51 UTC (Wed) by torquay (guest, #92428) [Link]

    Distributions don't update to a new version of a library just for the sake of doing so. It is usually because applications require it, and holding back on updating an application can have serious negative consequences (ex: unmarked security fixes in newer versions).

So the trade-off is perceived security, at the cost of no guarantee of API & ABI stability ? Sorry, I don't buy this. Distributions should be exercising judgement and caution in what they present to users. If they don't, it's a clear indication that they don't care about their users.

If an application requires an updated library (with an increased major version number), then yes, let's provide the updated library, along with the previous major version for compatibility with older software.

However, if the major version number hasn't been increased, and the library has API/ABI breaks, by definition it's broken. You're in effect shovelling crap around, for the sake of an updated application which may or may not have benefits. ("Oh look! It's new and shiny! It must be better!" doesn't count).

On top of that, you're potentially breaking other software which uses the library, and/or hoping that the other software JustMagicallyWorks (tm) with the broken library. Whether such software is or isn't part of the distro is immaterial: the API is broken, and developers will bundle libraries to avoid being affected by broken APIs.

    If upstream is breaking compatibility, make them responsible for fixing it instead of having distributions work around it. If you want distributions to work around it, don't complain that they have control.

Yes, upstream broke it and they are ultimately responsible. However, as a distro is putting a software collection together, it is in a very good position to detect such breaks. It can inform upstream about the break and politely request that they fix the stuff they've released. If the distro is unwilling to wait for upstream, the distro can always put in a patch to fix the break, and then send the patch upstream along with a "you've been naughty" message.

If the distro is not willing to liaise with upstream developers to ensure there are no API/ABI breaks, or has no way of detecting such breaks, then the distro has no right to complain that developers bundle libraries.

Introducing the Qt WebEngine

Posted Sep 18, 2013 15:12 UTC (Wed) by rahulsundaram (subscriber, #21946) [Link]

"So the trade-off is perceived security, at the cost of no guarantee of API & ABI stability ? Sorry, I don't buy this."

If an upstream library breaks ABI on a critical security update, it is not a question of "perceived" security anymore. Sometimes that is the only way to deal with the security issue. Backporting may be feasible, but only in limited circumstances unless it is a commercial distribution with more resources, cf. RHEL.

Introducing the Qt WebEngine

Posted Sep 18, 2013 22:51 UTC (Wed) by nix (subscriber, #2304) [Link]

Sorry, this is unreliable.
Well, yes, of course. As I said, it's a tradeoff. It's possible that removing an apparently completely unused symbol from a widely-used library might inconvenience some random user nobody's ever heard of -- but it's certain that removing it with a soname bump will inconvenience every other user, in that all those users will require a recompilation to pick up the bump. And if your shared library is used by other shared libraries, the fact that the soname scheme is not transitive means that all of a sudden those other libraries are plunged into hell, because they have no way to force their users who use your library to relink when they themselves are relinked. And all that to save a theoretical case: people who might be using a function of which no users can be found.

I am not encouraging the mass breaking of API/ABI: I am just pointing out that it is more of a tradeoff than you might think, and that sometimes dropping entirely unused symbols without bumping soname is acceptable. After all, as I mentioned just now in a comment I just made so you can't have responded to it, even introducing a new symbol might break arbitrary other libraries both at compile time and at runtime, yet nobody recommends bumping soname whenever a new symbol is introduced! Nobody even greps for other uses of the same name, so breakage due to name clashes is fairly often observed. So your state of theoretical perfection is actually unattainable. I wish it could be attained, but I can see no way to do it without a radical rethinking of the ELF linkage model (at the very least you'd need something like DT_SYMBOLIC plus DT_GROUP for every single shared library, no symbol interposition, and some sort of scheme whereby every single undefined symbol in a shared library is replaced with an soname-plus-symbol pair, so it can't be confused with a symbol of the same name appearing in any other library.)
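
To make the name-clash point concrete, here is a toy example (hypothetical library names and build lines, just to illustrate the mechanism):

    /* liba.c -- an established library */
    #include <stdio.h>
    void parse(const char *s)  { printf("liba parse: %s\n", s); }
    void a_work(const char *s) { parse(s); }   /* internal call, still resolved dynamically */

    /* libb.c -- a newer library that innocently adds a global parse() of its own */
    #include <stdio.h>
    void parse(const char *s)  { printf("libb parse: %s\n", s); }
    void b_work(const char *s) { parse(s); }

    /* main.c */
    void a_work(const char *);
    void b_work(const char *);
    int main(void) { a_work("x"); b_work("x"); return 0; }

    /*
     * gcc -fPIC -shared liba.c -o liba.so
     * gcc -fPIC -shared libb.c -o libb.so
     * gcc main.c ./liba.so ./libb.so -o demo
     *
     * With default ELF interposition both a_work() and b_work() end up
     * calling whichever parse() the dynamic linker finds first in the
     * global search order (liba's, here), so merely adding a symbol to
     * libb changed behaviour at runtime even though neither library
     * touched its own API. Symbol versioning, -Bsymbolic or hidden
     * visibility for internal symbols would each avoid this.
     */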

This is nontrivial stuff, and is not something that can be dealt with via religious 'never do this' sorts of rules. It's all tradeoffs. That's what engineering is.

Introducing the Qt WebEngine

Posted Sep 19, 2013 0:30 UTC (Thu) by torquay (guest, #92428) [Link]

I don't dispute that there will be corner cases. However, we shouldn't be using corner cases as excuses for not fixing the core problem: the open source world is plagued with API and ABI instability. It's no wonder some projects bundle libraries.

Linus Torvalds, speaking at today's LinuxCon kernel developer panel, agrees that user-space API & ABI instability is a big problem:
    Torvalds complained that many userspace projects break compatibility with other software, which is something he's steadfastly avoided in the kernel itself. "I'd really like to see some of the kernel culture spread into userspace," even if that means some developers leave the kernel project to go work in userland, he said.

Introducing the Qt WebEngine

Posted Sep 19, 2013 16:57 UTC (Thu) by nix (subscriber, #2304) [Link]

Oh, I completely agree with all of that. I was just pointing out that edge cases do exist. (I think it would have been reasonable to drop strfry() and memfrob() from glibc without an soname bump, for instance: glibc orders its object files in the shared library in collation order, so they're right in the middle of widely-used str*() functions, thus are essentially always loaded by everyone, but never used by anything ever.)

Introducing the Qt WebEngine

Posted Sep 19, 2013 19:42 UTC (Thu) by Arker (guest, #14205) [Link]

"I am not encouraging the mass breaking of API/ABI: I am just pointing out that it is more of a tradeoff than you might think, and that sometimes dropping entirely unused symbols without bumping soname is acceptable. After all, as I mentioned just now in a comment I just made so you can't have responded to it, even introducing a new symbol might break arbitrary other libraries both at compile time and at runtime, yet nobody recommends bumping soname whenever a new symbol is introduced!"

I think everyone understands there is a difference between inadvertently causing a problem when adding a new symbol, and deliberately removing one without a version bump. The former is an occasional accident, the latter is deliberate breakage. One does not justify the other.

And the notion that you can possibly tell that a symbol is 'unused' is completely bogus. That level of knowledge is not possible.

Introducing the Qt WebEngine

Posted Sep 26, 2013 17:13 UTC (Thu) by nix (subscriber, #2304) [Link]

Of course. All that is necessary for this tradeoff to become relevant is that a symbol is hard to keep around and is *rarely* used. Then, retaining the symbol costs the maintainers: removing it and bumping the soname breaks *every* user; removing it without bumping the soname costs only those rare things that actually use that symbol.

It is, as I keep saying, an engineering tradeoff, not a matter of religious absolutism. If you view it through absolutist lenses you will come to the wrong conclusions.

Introducing the Qt WebEngine

Posted Sep 26, 2013 17:19 UTC (Thu) by clopez (subscriber, #66009) [Link]

> removing it and bumping the soname breaks *every* user

Sorry, but bumping the soname breaks nothing. The applications that rely on your library can still use the old soname until they are re-compiled and linked with the new one.

You will have to keep the old soname library version on the system until all applications are recompiled, but I don't see the problem with that.

Introducing the Qt WebEngine

Posted Sep 29, 2013 18:10 UTC (Sun) by nix (subscriber, #2304) [Link]

The applications that rely on your library can still use the old soname until they are re-compiled and linked with the new one.
That breaks as soon as you have a library that uses your library and is itself used by at least one application which, directly or transitively, also uses the library. :(

Introducing the Qt WebEngine

Posted Sep 29, 2013 18:50 UTC (Sun) by khim (subscriber, #9252) [Link]

Note that this is a problem of the ELF file format and it is not present in some other environments (e.g. Windows). For ELF there is ELF Symbol Versioning.

Introducing the Qt WebEngine

Posted Sep 30, 2013 15:43 UTC (Mon) by nix (subscriber, #2304) [Link]

ELF symbol versioning is irrelevant to this, except inasmuch as in theory it can make it rarer to have to bump sonames (which is definitely a good thing: I'd very much like it if everyone used proper symbol versions, but instead we have only a few real users, one being, of course, glibc).
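
For the record, the glibc-style trick looks roughly like this (hypothetical names, sketched from memory); it is what lets you change a function's ABI without ever touching the soname:

    /* foo.c -- one shared object carries both ABIs of the same function */
    #include <stdio.h>

    /* old ABI, kept around for binaries linked before the change */
    void foo_old(int n) { printf("old foo: %d\n", n); }
    __asm__(".symver foo_old, foo@FOO_1.0");

    /* new, incompatible ABI; anything linked from now on gets this one */
    void foo_new(int n, int flags) { printf("new foo: %d, %d\n", n, flags); }
    __asm__(".symver foo_new, foo@@FOO_1.1");

    /*
     * foo.map:
     *     FOO_1.0 { global: foo; local: *; };
     *     FOO_1.1 { global: foo; } FOO_1.0;
     *
     * gcc -fPIC -shared -Wl,--version-script=foo.map foo.c -o libfoo.so.1
     *
     * Old binaries keep resolving foo@FOO_1.0, newly linked ones pick up
     * foo@@FOO_1.1 (the default version), and libfoo.so.1 never has to
     * become libfoo.so.2.
     */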

Introducing the Qt WebEngine

Posted Sep 30, 2013 16:01 UTC (Mon) by khim (subscriber, #9252) [Link]

Huh? Are you joking? If libexpat.so.0 uses the LIBEXPAT_0 version and libexpat.so.1 uses the LIBEXPAT_1 version, then this nicely solves the problem you are talking about.

Sure, it does not work if you need to pass an object created by libexpat.so.0 to another module where functions from libexpat.so.1 are used, but this problem is not solvable without some forward thinking.
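
Sketched out (hypothetical file and version-script names), all it takes is:

    /* expatish.c -- the same API, built twice as two incompatible majors */
    #include <stdio.h>
    void XML_Parse(const char *buf) { puts(buf); }

    /*
     * vers0.map, used for the old major:
     *     LIBEXPAT_0 { global: XML_Parse; local: *; };
     * vers1.map, used for the new major:
     *     LIBEXPAT_1 { global: XML_Parse; local: *; };
     *
     * gcc -fPIC -shared -Wl,-soname,libexpat.so.0 \
     *     -Wl,--version-script=vers0.map expatish.c -o libexpat.so.0
     * gcc -fPIC -shared -Wl,-soname,libexpat.so.1 \
     *     -Wl,--version-script=vers1.map expatish.c -o libexpat.so.1
     *
     * A binary linked against libexpat.so.1 records the undefined symbol
     * XML_Parse@LIBEXPAT_1, so even when some old plugin drags
     * libexpat.so.0 into the same process, each caller binds to the
     * definition from the major it was built against instead of whatever
     * comes first in the search order.
     */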

Introducing the Qt WebEngine

Posted Oct 1, 2013 16:44 UTC (Tue) by nix (subscriber, #2304) [Link]

So... it exactly solves the problem, except that now you have littered your program with time bombs. Sorry, I don't really consider a system in which multiple distinct major versions of the same shared library are linked into the same address space anything but 'broken'.

There's just too much danger of an object being passed from one to the other, and worse yet it's hard for the system administrator to tell the difference between the horribly dangerous case of multiple differently-sonamed shared libraries foo.so.1 and foo.so.2 exporting all-differently-versioned-symbols (which might work) versus the same pair of libraries exporting some or all of the same symbols (instant death unless you used DT_SYMBOLIC and extremely risky regardless: there are other resources that might be shared, as the wtmp and malloc-related fun around the glibc2 upgrade made clear).

Worse yet, virtually all libraries will be of the second class (unversioned), not the first.

Introducing the Qt WebEngine

Posted Oct 1, 2013 19:01 UTC (Tue) by khim (subscriber, #9252) [Link]

Sorry, I don't really consider a system in which multiple distinct major versions of the same shared library are linked into the same address space anything but 'broken'.

And I consider a system which cannot run old binaries 'broken'. It's your choice which kind of brokenness you prefer, but we know what the majority of users prefer (hint: they are not using GNU/Linux because it's 'broken' from their POV).

There's just too much danger of an object being passed from one to the other,

Well, sure, if you don't think about compatibility in software development, but then no one can save you. Somehow millions of “inferior lifeforms” can do that under Windows (where, e.g., each shared library has its own malloc and free) but the “superior developers” of Linuxland cannot? Paint me unimpressed.

and worse yet it's hard for the system administrator to tell the difference between the horribly dangerous case of multiple differently-sonamed shared libraries foo.so.1 and foo.so.2 exporting all-differently-versioned-symbols (which might work) versus the same pair of libraries exporting some or all of the same symbols (instant death unless you used DT_SYMBOLIC and extremely risky regardless: there are other resources that might be shared, as the wtmp and malloc-related fun around the glibc2 upgrade made clear).

The difference is much smaller than you think: since most libraries bump SONAME for no good reason, the second setup quite often works… till you call a function which has actually changed enough to crash the whole tower of Babel.

Worse yet, virtually all libraries will be of the second class (unversioned), not the first.

Well, if packagers have time to cripple upstream packages in other ways, they may as well add versioning, too. Note that it does not require a flag-day decision: you can add versioning to any library without breaking its ABI.

All that work is a natural consequence of the “you should not include or build against a local copy of a library that exists on the system” rule, BTW: unless the OS supplier can actually promise that said library will always be there in a compatible form and will always be usable, it has no right to demand the use of said library.

Introducing the Qt WebEngine

Posted Oct 1, 2013 19:29 UTC (Tue) by dlang (✭ supporter ✭, #313) [Link]

He wasn't saying that you shouldn't have multiple versions of the library on one system, just that you shouldn't have multiple versions of the library linked into one program.

As for the claim that other OSs have no problem with this, DLL hell shows how Windows has problems with multiple versions of the same library installed on the system, even if you are going to have completely independent programs using them.

You seriously undermine your message when you make claims that are so obviously wrong.

*nix has the mechanism to handle libraries correctly in all cases except when you include two libraries that each depend on different versions of a third library. The problem isn't the lack of a mechanism, it's that far too many developers don't use it, and regularly break compatibility.

Introducing the Qt WebEngine

Posted Oct 1, 2013 20:55 UTC (Tue) by khim (subscriber, #9252) [Link]

DLL hell shows how Windows has problems with multiple versions of the same library installed on the system, even if you are going to have completely independent programs using them.

Really? News to me. The Wikipedia article mentions a few subcategories of “DLL Hell”:
1. Incompatible versions (that's about incompatibilities of the implementations of the same version of library).
2. DLL stomping (problems with installers which tended to put older version of library on top of the newer one).
3. Incorrect COM registration (this also exists on Linux when things like dbus are used).
4. Shared in-memory modules (only existed in 16-bit Windows; it's no longer relevant).
5. Lack of serviceability (if you install 10 libraries in parallel then you need to support 10 libraries in parallel).
Note how the oh-so-problematic case of multiple versions of the same library installed on the system is only ever mentioned in the last item: if you install multiple versions of libraries then security fixes must be applied to all of them… Well, duh… what do you expect?

The Linux-style problem which we are discussing here is conspicuously absent! Do you know why? Because the problem of using two versions of the same library in one program is *NIX-exclusive; it does not exist on Windows. For a very simple reason: Windows records the name of the library which should supply each function in the .LIB file, and this information is transferred to the .EXE file, thus other, incompatible, libraries are not even considered. In a sense all libraries always use the model I've discussed. Other problems, yes, they do exist on Windows (and are documented in the aforementioned article), but that one… nope. It does not exist. It has never existed there. It was solved in Windows 1.0, released over a quarter-century ago.

*nix has the mechanism to handle libraries correctly in all cases except when you include two libraries that each depend on different versions of a third library.

As I've pointed out above, today's GNU/Linux systems do have a mechanism which can solve that problem, too. The only problem is that it was added to GNU/Linux in 1999 (while Windows had it in 1985) and that it's optional (in Windows it's mandatory).

Windows does many things wrong, but please don't pretend that Linux does everything better. Some things Windows got right and Linux got wrong. Dependency handling of shared libraries is done better in Windows, perhaps because it was important for Windows, whereas Linux had the mentality that one can always “recompile the world”.

Introducing the Qt WebEngine

Posted Oct 1, 2013 21:38 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

I remember "DLL Hell". Well, it wasn't that hellish.

Most users remember errors about missing msvcrtwhatever.dll, caused by installers not packaging required libraries or accidentally removing them. Solving it usually required hunting down an installable package (which was often tricky without the Internet back then).

Also, most of these packages were system libraries - parts of MS Runtime or DirectX. Almost all Windows applications simply bundle all other libraries along with the application in question.

DLL Hell caused by incompatible versions? I personally haven't encountered it.

Introducing the Qt WebEngine

Posted Oct 2, 2013 1:38 UTC (Wed) by hummassa (subscriber, #307) [Link]

> I remember "DLL Hell". Well, it wasn't that hellish.

Rose-colored goggles.

> Most users remember errors about missing msvcrtwhatever.dll caused by installers not packaging required libraries or accidentally removing them. Solving it usually required hunting down installable package (which was often tricky without Internet back then).

I had to hunt down an update that installed incompatible but identically-named libraries across a fleet of 2000 machines. That was tricky.

> Also, most of these packages were system libraries - parts of MS Runtime or DirectX. Almost all Windows applications simply bundle all other libraries along with the application in question.

Nah; actually, post-COM/ActiveX DLLs solved the problem by putting different interfaces in different UUID'd classes, which practically ended the DLL hell.

> DLL Hell caused by incompatible versions? I personally haven't encountered it.

As I said above, I have encountered it, it took a month or so to track, and both packages liked to put their versions of the DLLs in System32, like they all did back in the day.

Introducing the Qt WebEngine

Posted Oct 2, 2013 1:42 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

> As I said above, I have encountered it, it took a month or so to track, and both packages liked to put their versions of the DLLs in System32, like they all did back in the day.
It's certainly possible, but it was rare. Also, Microsoft solved this for legacy apps in Vista with the SxS system.

On the other hand, running something like the original (binary) Quake2 on Linux is... complicated.

Introducing the Qt WebEngine

Posted Sep 26, 2013 18:14 UTC (Thu) by khim (subscriber, #9252) [Link]

Sure, if you cannot keep the old function around, and if you cannot make two versions of your library parallel-installable, and if you really need the changes which lead to the ABI breakage, then it's probably OK to do what you are proposing. Or if it's absolutely impossible to create a program which works with the broken function.

But that's a lot of highly improbable "ifs". Most real world developers are doing such things just because they are lazy, not because they have to do them.

P.S. Practically speaking such cases are the norm when hardware drivers are involved: all OSes which have tried to create stable ABI for device drivers (Solaris, Windows, etc) have failed in one way or another. That's because new hardware often introduces completely new ways of doing things and expects that software will provide a compatibility layer. When you try to stuff binary blobs in said compatibility layer itself… it just does not work (if something is far enough removed from hardware to be usable over decades of changes then it's usually far enough removed from hardware to be pushable to userspace). Pure software libraries usually can be supported for years and probably decades.

Introducing the Qt WebEngine

Posted Sep 18, 2013 9:34 UTC (Wed) by clopez (subscriber, #66009) [Link]

> Here and there you also have removal of a function impossibly hard to maintain that almost nobody was using, without bumping soname because the cost would be higher to the vast majority of non-users than the cost to the vanishingly small proportion of users: theoretically problematic, but in practice nobody but X and glibc bother to keep functions as useless as strfry() around just because of ABI stability concerns. So, occasionally, the generally good rules of library soname conformance are violated intentionally.

And every time you do that, God kills a kitten.

Introducing the Qt WebEngine

Posted Sep 18, 2013 23:20 UTC (Wed) by nix (subscriber, #2304) [Link]

God is no engineer. :P :P

Introducing the Qt WebEngine

Posted Sep 14, 2013 0:46 UTC (Sat) by mpr22 (subscriber, #60784) [Link]

And the maintainers of those libraries should be mocked and derided in public every time they commit such a misdeed until they remedy their error.

Introducing the Qt WebEngine

Posted Sep 14, 2013 1:18 UTC (Sat) by clopez (subscriber, #66009) [Link]

> And the maintainers of those libraries should be mocked and derided in public every time they commit such a misdeed until they remedy their error.

So you have long work:

1) Go here: http://upstream-tracker.org/index.html

2) Open the 3 first links to the libraries on the table. Check the breakages on each one. Mock and blame developers.

3) Repeat until EOF

Introducing the Qt WebEngine

Posted Sep 16, 2013 23:04 UTC (Mon) by nix (subscriber, #2304) [Link]

Hm. Many of these are complaining about changes in sizes of structures, when the corresponding API provides no way to initialize a static instance of those structures (instead having a malloc-like API that returns a newly allocated instance) and all the API functions that take that structure only take pointers to it. i.e., in practice, utterly harmless, unless someone was bizarrely choosing to allocate a structure statically that they couldn't initialize or use.
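
I.e. the usual pattern (hypothetical names), where the structure's size simply never escapes the library:

    /* foo.h -- public header: callers only ever see a pointer */
    typedef struct foo foo;            /* incomplete type, size unknown outside */
    foo *foo_new(void);
    int  foo_count(const foo *f);
    void foo_free(foo *f);

    /* foo.c -- the library is free to grow the struct later */
    #include "foo.h"
    #include <stdlib.h>
    struct foo {
        int count;
        /* int shiny_new_field;  -- adding this changes sizeof(struct foo),
         * but since nothing outside foo.c ever allocates one, the size
         * change is invisible to every caller */
    };
    foo *foo_new(void)           { return calloc(1, sizeof(struct foo)); }
    int  foo_count(const foo *f) { return f->count; }
    void foo_free(foo *f)        { free(f); }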

Introducing the Qt WebEngine

Posted Sep 17, 2013 14:55 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

Changes in the size of a structure can be serious and lead to very odd bugs. If more code is needed to show that it's an opaque type, that'd be nice, but false positives here are, IMO, preferable to false negatives. This is especially important in C++: if liba.so allocates a class containing a string, then calls libb.so which assigns to it, you get all kinds of fun when the string assignment crashes because the string had 8 bytes before it in liba.so, but libb.so expects 16.
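
A sketch of the failure mode in plain C (hypothetical names; the C++ string case is the same mechanism with nastier symptoms):

    /* item.h, new version baked into libb.so */
    struct item {
        int   id;
        char *label;     /* new field: sizeof(struct item) grew */
    };

    void fill_item(struct item *it);   /* implemented in libb.so */

    /*
     * liba.so was built against the *old* item.h, which only had the
     * 'id' member, so when it does:
     *     struct item it;      // old, smaller size
     *     fill_item(&it);
     * libb.so's fill_item() writes it->label past the end of the
     * caller's object. Nothing complains at link time; you just get
     * corruption or a crash somewhere later.
     */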

Introducing the Qt WebEngine

Posted Sep 19, 2013 14:29 UTC (Thu) by kov (subscriber, #7423) [Link]

That's quite interesting! Looking at the clutter data from that site, though, I see that most of the API/ABI breakage happens inside development releases, which is expected. You can see that changes happened in or from a 1.X release where X is odd:

http://upstream-tracker.org/versions/clutter.html

What would be good to track is changes from one stable release to another, i.e. whether there were changes from 1.10.x to 1.12.x, and within those stable series.

Introducing the Qt WebEngine

Posted Sep 14, 2013 11:55 UTC (Sat) by Company (guest, #57006) [Link]

You are aware that the kernel is constantly changing loads of its APIs, right? I mean, where's /dev/hda1? And why doesn't /dev/oss output sound?

The only API the kernel usually mostly keeps the same is the system call interface...

Introducing the Qt WebEngine

Posted Sep 14, 2013 13:12 UTC (Sat) by clopez (subscriber, #66009) [Link]

> You are aware that the kernel is constantly changing loads of its APIs, right? I mean, where's /dev/hda1? And why doesn't /dev/oss output sound?
>

No man. What happens is that you change your hardware.

/dev/hd* is used for IDE disks.
/dev/sd* is used for SCSI disks.

Do you realize that your SATA disks have an SCSI-like interface?

http://www.tldp.org/LDP/sag/html/dev-fs.html

And /dev/oss doesn't work because you decided to use ALSA or PulseAudio. Configure OSS back in and you have it.

/dev is not an API. These are device file names that are expected to change as soon as you change your hardware or load a different driver to handle the hardware. This is both hardware- and driver-dependent.
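
Anything that really needs to find the disks enumerates them instead of hardcoding a name; a minimal sketch (error handling mostly omitted):

    #include <dirent.h>
    #include <stdio.h>

    /* Walk sysfs to list the block devices the kernel knows about right
     * now, instead of assuming some fixed /dev/hda-style name. */
    int main(void)
    {
        DIR *d = opendir("/sys/block");
        struct dirent *e;

        if (!d)
            return 1;
        while ((e = readdir(d)) != NULL) {
            if (e->d_name[0] == '.')
                continue;
            printf("/dev/%s\n", e->d_name);   /* sda, vda, nvme0n1, ... */
        }
        closedir(d);
        return 0;
    }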

> The only API the kernel usually mostly keeps the same is the system call interface...

Yeah, that's my point. That's the only *API* the kernel exposes to user space.

Introducing the Qt WebEngine

Posted Sep 16, 2013 8:21 UTC (Mon) by Company (guest, #57006) [Link]

Right. So no useful application from 10 years ago works anymore because it also needs the data of the kernel (like device nodes and /proc trees and whatnot).

But the kernel is still awesome because the API never changed!!!!111eleven

Introducing the Qt WebEngine

Posted Sep 16, 2013 9:39 UTC (Mon) by mpr22 (subscriber, #60784) [Link]

It turns out to be the case that quite a lot of useful applications care not at all about the layout of /dev or /proc.

Introducing the Qt WebEngine

Posted Sep 17, 2013 12:33 UTC (Tue) by HelloWorld (subscriber, #56129) [Link]

If your application relies on finding a hard disk at /dev/hda then it was broken in the first place since that was never guaranteed. /dev/dsp (which is probably what you meant when you typed /dev/oss) is still there if you need it, just modprobe snd-pcm-oss.

Really, try harder.

Introducing the Qt WebEngine

Posted Sep 18, 2013 1:59 UTC (Wed) by Company (guest, #57006) [Link]

Right. Finding hard disks is not something that's supported if you want your code to be portable.

And sound requires a module that emulates backwards compat but still doesn't work because sound must be output through pulseaudio for a desktop to work properly.

And it's I that needs to try harder?

Introducing the Qt WebEngine

Posted Sep 18, 2013 2:38 UTC (Wed) by dlang (✭ supporter ✭, #313) [Link]

> Right. Finding hard disks is not something that's supported if you want your code to be portable.

Well, given that hard drives were never guaranteed to be at /dev/hda in the first place, software that doesn't work when they are somewhere else is software that was broken to begin with.

Depending on your hardware, there are a whole lot of different paths that your drives may have been at, even from the very early days (my first computer that I ran Linux on, in 1993 or so, had SCSI so it was /dev/sda, and over the years I've had a lot of RAID cards that made the drives appear someplace other than /dev/[sh]d*).

Introducing the Qt WebEngine

Posted Sep 18, 2013 2:39 UTC (Wed) by khim (subscriber, #9252) [Link]

Linux's approach to ABI breakage is well-known: Breaking user space is a bit like trees falling in the forest. If there's nobody around to see it, did it really break?

Now let's consider your examples.

Finding hard disks is not something that's supported if you want your code to be portable.

First of all, /dev/hda is not a kernel interface. The kernel interface is block devices. Second (and probably more important): /dev/sda has been in use since March 1992 (when Linux 0.95a was released). According to the criteria cited above, we'd need at least one affected program which is still in active use. It would be nice to know where you've found such a program and what you use it for.

And sound requires a module that emulates backwards compat but still doesn't work because sound must be output through pulseaudio for a desktop to work properly.

That's a problem of your desktop, not the kernel. Nobody ever said that you can configure the kernel in some weird way and still expect it to work. Indeed, your perverse idea of ABI compatibility would mean that any configurable option in the kernel automatically breaks the ABI compatibility promise, because, you know, kernels with the option enabled and with it disabled will be incompatible. That's just absurd.

And it's I that needs to try harder?

Sure. There were some kernel-related breakages in the past and there will be more in the future, but they are very rare and usually don't pass the "If there's nobody around to see it, did it really break?" criterion. OSS is a major PITA, e.g. because ALSA is explicitly incompatible with OSS, but if someone wants to use OSS, emulation is there and you can even connect it to PulseAudio if you really want.

Introducing the Qt WebEngine

Posted Sep 18, 2013 9:36 UTC (Wed) by HelloWorld (subscriber, #56129) [Link]

> Right. Finding hard disks is not something that's supported if you want your code to be portable.
As others have pointed out already, you were *never* guaranteed to find a hard disk at /dev/hda, and if your program expects that, it always was broken.

> And sound requires a module that emulates backwards compat but still doesn't work because sound must be output through pulseaudio for a desktop to work properly.
It works just like OSS/Free always has: your program will be able to output sound, but not if something else is playing. This isn't a compatibility issue, this is you expecting to magically get new functionality for your old applications. Not to mention that PulseAudio is actually able to make that happen through padsp...

Introducing the Qt WebEngine

Posted Sep 18, 2013 12:33 UTC (Wed) by nye (guest, #51576) [Link]

>> Right. Finding hard disks is not something that's supported if you want your code to be portable.
> As others have pointed out already, you were *never* guaranteed to find a hard disk at /dev/hda, and if your program expects that, it always was broken.

Additionally, this problem could be worked around quite easily if it's really needed, by writing a simple udev rule to generate that device file.

>> And sound requires a module that emulates backwards compat but still doesn't work because sound must be output through pulseaudio for a desktop to work properly.
> It works just like OSS/Free always has: your program will be able to output sound, but not if something else is playing.

But this is not how it always worked, at least not in practice.

> This isn't a compatibility issue, this is you expecting to magically get new functionality for your old applications.

No, by 1999 or so it was expected that you'd be able to play more than one sound at once[0]. Recently (in the last few years) we keep hearing that the only way this ever happened before PA was to use ALSA with dmix, but this is historical revisionism; it worked perfectly fine with OSS back in the day. Arguably this could be attributed to regressions in the quality of audio hardware over the years, but regardless this is *functionally* a regression that would be avoidable if backward compatibility were a serious concern.

>Not to mention that PulseAudio is actually able to make that happen through padsp...

Except that it doesn't. The right way to do this would be to provide an actual /dev/dsp device file and plumb it in however PA wants it. In contrast, padsp is a multi-arch nightmare (at least on Debian Wheezy; I'm not sure if there's *any* way to get a 32-bit binary expecting OSS to produce sound on a 64-bit system with PA - certainly there was no documented way when I tried last year). This is particularly problematic because the predominant reason to want OSS compatibility is to run old binaries, which will be 32-bit, on a new system, which will be 64-bit.

[0] I could be wrong about this; it might have been with 2.4 that this worked in Linux, which would put it at 2001.

Introducing the Qt WebEngine

Posted Sep 18, 2013 12:53 UTC (Wed) by HelloWorld (subscriber, #56129) [Link]

> No, by 1999 or so it was expected that you'd be able to play more than one sound at once[0]. Recently (in the last few years) we keep hearing that the only way this ever happened before PA was to use ALSA with dmix, but this is historical revisionism; it worked perfectly fine with OSS back in the day.
It *never* worked with OSS/Free with mixer-less audio hardware.

> The right way to do this would be to provide an actual /dev/dsp device file and plumb it in however PA wants it.
osspd does that. It is thus the *third* possible alternative to make OSS programs work.

Introducing the Qt WebEngine

Posted Sep 18, 2013 16:57 UTC (Wed) by nye (guest, #51576) [Link]

>osspd does that. It is thus the *third* possible alternative to make OSS programs work.

Thanks for the pointer to that. It seems it's not in wheezy, but it is in sid so there's some progress being made here.

Introducing the Qt WebEngine

Posted Sep 18, 2013 12:58 UTC (Wed) by pizza (subscriber, #46) [Link]

> No, by 1999 or so it was expected that you'd be able to play more than one sound at once[0]. Recently (in the last few years) we keep hearing that the only way this ever happened before PA was to use ALSA with dmix, but this is historical revisionism; it worked perfectly fine with OSS back in the day. Arguably this could be attributed to regressions in the quality of audio hardware over the years, but regardless this is *functionally* a regression that would be avoidable if backward compatibility were a serious concern.

The OSS API technically supported multiple streams, but only if the hardware provided native multi-stream support. And even back in the day, very little hardware did.

BTW, don't confuse OSS/Free (in the kernel) with the commercial OSS drivers. Those were always considerably more featureful, but cost money.

Introducing the Qt WebEngine

Posted Sep 18, 2013 14:30 UTC (Wed) by khim (subscriber, #9252) [Link]

Recently (in the last few years) we keep hearing that the only way this ever happened before PA was to use ALSA with dmix, but this is historical revisionism; it worked perfectly fine with OSS back in the day.

Nope. I remember those days well enough to remember that you needed an advanced card with a hardware mixer to make it happen (the “SB Live!” was popular for that, if I remember correctly). If you can find hardware which can do that in today's world (on eBay, perhaps), then this should work with today's kernel, too.

Again: you want to magically get new functionality for your old applications. There is nothing wrong with that, but it's not a question of ABI stability.

Introducing the Qt WebEngine

Posted Sep 18, 2013 17:34 UTC (Wed) by nye (guest, #51576) [Link]

>Nope. I remember these days well enough to remember that you needed advanced card with hardware mixer to make it happen (“SB Live!” was popular for that if I remember correctly).

The cheapest budget card I could find in about 2000ish did it, so I completely disagree that we're talking exclusively about 'advanced' cards. I consider this a standard feature that one could expect to work with no shenanigans on a normal system.

>Again: you want to magically get new functionality for your old applications. There is nothing wrong with that but it's not question of ABI stability.

There is more to backward compatibility than just defining an interface and calling it job done. One of the things that needs to be preserved is the actual, *user-visible* behaviour. Depending on how strictly you define 'ABI', this may or may not be included. For the application to be unaware of any differences is certainly a good start, but it's not enough on its own unless the *user* of that application can be unaware of those differences.

The job of an operating system is essentially resource management, and one aspect of that is to abstract a machine's hardware into a common interface. If the hardware currently available doesn't provide a feature in the same way, then it is the job of the operating system - in an ideal world - to work around those differences.

Obviously we don't live in an ideal world, but that doesn't mean we shouldn't acknowledge our shortcomings where they exist.

And indeed it seems someone *has* decided that we can do better, and I'll remember to try osspd the next time I'm trying to coax Wine into making sound on Debian amd64.

(The road to multi-arch has been sufficiently brutal that I'm now about five years into a 'temporary' switch to Windows for my desktop 'just until things have stabilised a bit'. Maybe jessie.)

Introducing the Qt WebEngine

Posted Sep 18, 2013 21:04 UTC (Wed) by khim (subscriber, #9252) [Link]

There is more to backward compatibility than just defining an interface and calling it job done.

Sure. You also need to do testing to make sure the interface is actually behaving as the spec describes. That's it. Oh, sometimes you also need to clean up/clarify the spec, but these are just bugfixes similar to what's done to code.

One of the things that needs to be preserved is the actual, *user-visible* behaviour.

It's preserved. Take ten-year-old hardware, install a new kernel on it and you'll see that everything works just like it did before, hardware mixing and all.

For the application to be unaware of any differences is certainly a good start, but it's not enough on its own unless the *user* of that application can be unaware of those differences.
If the hardware currently available doesn't provide a feature in the same way, then it is the job of the operating system - in an ideal world - to work around those differences.

It's a job for the operating system, but it's certainly not a job for the kernel of the operating system. There are other layers for that. If your hardware dropped support for the fixed-function path (as allowed by OpenGL ES 2.0), then it's a good thing to emulate it at some layer. But do you really want to stuff shader recompilers into ring 0? This just makes no sense (even if nVidia apparently does that). You add new functionality to the kernel when you have to (for performance or other reasons), not when you want to.

And yeah, OSS… it's similar and it's not a new story - I wrote about it when it was new. Nonetheless it's a new feature as far as the kernel API is concerned. The fact that (as you've pointed out) said feature is actually needed to support old programs on real hardware by real users means that it should have been developed earlier, I agree with you, but it's the next step after a stable ABI.

The userspace crowd should have clamored for something like CUSE earlier (when sound cards started dropping hardware mixers), but they were stuck in “we'll just port everything to ALSA and forget about OSS” mode. When the OSSv4 approach (with a huge amount of complex code in kernel space) was rejected, they should have started to think about CUSE. In reality it took years.

Introducing the Qt WebEngine

Posted Sep 18, 2013 21:11 UTC (Wed) by pizza (subscriber, #46) [Link]

> The cheapest budget card I could find in about 2000ish did it, so I completely disagree that we're talking exclusively about 'advanced' cards. I consider this a standard feature that one could expect to work with no shenanigans on a normal system.

You may consider it "standard" but it wasn't; not by a very long shot.

A "normal" system circa 2000 had an onboard AC'97 audio codec, limited to a single stream at a fixed 48KHz sample rate. All stream mixing or sample rate conversion was handled at the OS/driver level (if at all) often at a considerable CPU hit.

Onboard AC'97 audio destroyed most of the add-in sound card market nearly overnight, leaving only the (vastly smaller) market for more advanced cards when users needed features like multi-stream, 3D positional audio and/or ADC/DACs less noisy than a jackhammer in heat.

Introducing the Qt WebEngine

Posted Sep 19, 2013 1:48 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

> And indeed it seems someone *has* decided that we can do better, and I'll remember to try osspd the next time I'm trying to coax Wine into making sound on Debian amd64.

I haven't had issues with WINE on Fedora (Rawhide) on x86_64. Limbo played fine (it's on Linux through a bundled WINE) and other games were playing sound without an issue. There's a wine-pulseaudio package in Fedora; does that not exist in Debian?

Introducing the Qt WebEngine

Posted Sep 19, 2013 10:58 UTC (Thu) by nye (guest, #51576) [Link]

>There's a wine-pulseaudio package in Fedora; does that it not exist in Debian?

It doesn't appear to. In principle Wine should work with PA (maybe they have that sound module as part of the base package - not sure), however in practice Wine was badly hit by the multiarch transition, and left broken in many ways that couldn't be fixed for about a year once the wheezy freeze hit.

There's a good chance it all works in unstable now - I've not tried since wheezy was released since I find the task completely demoralising.

Introducing the Qt WebEngine

Posted Sep 19, 2013 11:16 UTC (Thu) by cortana (subscriber, #24596) [Link]

> The cheapest budget card I could find in about 2000ish did it, so I completely disagree that we're talking exclusively about 'advanced' cards. I consider this a standard feature that one could expect to work with no shenanigans on a normal system.

This simply is not true. In those days I had several machines with onboard sound, and several SoundBlaster 64 and 128 cards. None of them had hardware mixing. I had to shell out for an expensive SoundBlaster Live in order to play two sounds at once.

These days, the situation has not changed. Onboard Intel HD Audio has no hardware mixing. Fortunately we have PulseAudio these days so it no longer matters for the vast majority of use cases.

Introducing the Qt WebEngine

Posted Sep 15, 2013 16:45 UTC (Sun) by krake (subscriber, #55996) [Link]

Most libraries used by desktop application developers or, maybe better, end-user application developers do have rules for keeping API and ABI stable within a major version.

GLib/GTK+, Qt, SDL, GStreamer, the KDE Platform, etc., just to name a few.

Introducing the Qt WebEngine

Posted Sep 15, 2013 17:10 UTC (Sun) by mathstuf (subscriber, #69389) [Link]

One interesting question for Qt from this decision: what happens to the QtWebKit API/ABI? Is it stuck at 5.1? Will both WebKit and Blink be needed until Qt6? Qt already isn't a cheap build, WebKit is ~10G and Blink is similar.

Introducing the Qt WebEngine

Posted Sep 15, 2013 18:13 UTC (Sun) by krake (subscriber, #55996) [Link]

The previous WebKit API will obviously still be around; Qt can't remove any API before Qt 6.

But I am pretty certain that the new Blink module will be a separate module, just like the current WebKit one.
So users of either API will only build the one they need.

Introducing the Qt WebEngine

Posted Sep 13, 2013 15:15 UTC (Fri) by rahulsundaram (subscriber, #21946) [Link]

Without distributions pushing their so-called internal policy, which is common across all major distributions AFAIK, many more upstream projects would be shipping forks and bundling a lot of libraries which you as a user would be paying for.

Also, distributions already push patches upstream, and in places where upstream doesn't accept them, distributions are already doing what you are recommending, but I don't think we need to shut up about it at all. On the contrary, this is a process of technical advocacy that needs to continue. In most issues I have worked on, it was just a matter of opening up that conversation; upstream projects readily agreed they needed to do it and we worked on getting it fixed. You can disagree all you want, but you cannot ask others to shut up about it.

Introducing the Qt WebEngine

Posted Sep 14, 2013 22:59 UTC (Sat) by bojan (subscriber, #14302) [Link]

Very true. Totally agree.

Google are, of course, playing the "we are so big and important, we can do whatever we want and get away with it" game here. Not very cooperative and totally against the spirit of open source, IMHO.

Cooperating with upstream and getting your patches merged into unbundled libraries is harder, no doubt about that. But it is the right thing to do, because everyone benefits that way, not just one project. That is kind of the point of open source.

Introducing the Qt WebEngine

Posted Sep 13, 2013 10:04 UTC (Fri) by exadon (guest, #5324) [Link]

This is one of those problems you can solve by ignoring them: Just treat the bundled version of, say, libpng like any other original chromium code, and ignore the fact that it's a fork of some other repository.

Introducing the Qt WebEngine

Posted Sep 13, 2013 10:56 UTC (Fri) by robert_s (subscriber, #42402) [Link]

And then when there's a vulnerability in libpng, you've got to a) know which projects have bundled versions of libpng and b) go around and make sure their authors know about the need to fix them or go and fix them yourself (this happens surprisingly often and causes significant trouble).

As opposed to just fixing the system libpng and being done with it.

Introducing the Qt WebEngine

Posted Sep 13, 2013 13:17 UTC (Fri) by rsidd (subscriber, #2582) [Link]

This is the comment I made earlier. You, as the distro, fix libpng, but how do you ensure your users are updating? Now imagine you're Google. If you fix the bundled libpng in chrome, your users get the update since chrome auto-updates. If you depend on the system libpng, how do you protect users who don't update system libraries? (True, those users are vulnerable anyway via other programs, but at least this way it's not Chrome's fault.)

Introducing the Qt WebEngine

Posted Sep 13, 2013 14:10 UTC (Fri) by niner (subscriber, #26151) [Link]

So Chrome's auto-updates are an argument, but the distro's auto-updates somehow don't count?

Introducing the Qt WebEngine

Posted Sep 13, 2013 16:47 UTC (Fri) by khim (subscriber, #9252) [Link]

It's the same argument. Chromium developers think it's their responsibility and thus bundle libpng. If distribution makers feel it's their responsibility, because they can keep up with the vulnerability reports better than Google, then they can unbundle libpng, but then they should deal with the fallout. In particular, they should do the QA work and deal with user reports.

But in reality distribution makers cannot even keep up with the unbundling work, yet they believe for some reason that they will be able to cope with the much larger and harder QA work. Why? Where exactly does this hubris come from?

Introducing the Qt WebEngine

Posted Sep 15, 2013 9:06 UTC (Sun) by alankila (subscriber, #47141) [Link]

From the relentless optimism that the software will work despite being changed in ways the authors didn't intend. My pet peeve is the way Eclipse has always been broken by Debian to the point of being unusable for actual work.

Introducing the Qt WebEngine

Posted Sep 13, 2013 23:00 UTC (Fri) by lsl (subscriber, #86508) [Link]

> Now imagine you're Google. If you fix the bundled libpng in chrome, your users get the update since chrome auto-updates.

Google might do this. How do other upstreams handle their bundled libraries? Hint: they don't. Walk up to any random Windows box and count the number of vulnerable zlib/msvcrt/whatever copies you can find.

Virtually no one gets this right on Windows and Mac. Mozilla and Google are rare exceptions.

Introducing the Qt WebEngine

Posted Sep 13, 2013 10:58 UTC (Fri) by seyman (subscriber, #1172) [Link]

> Just treat the bundled version of, say, libpng like any other original chromium code, and ignore the fact that it's a fork of some other repository.

You then need to ship a new copy of chromium every time there's a security fix made to libpng (repeat for every library bundled). And you end up with several copies of the same library on disk and in memory where one should suffice.

So by "solving" one problem, you've created several others.

Introducing the Qt WebEngine

Posted Sep 13, 2013 11:32 UTC (Fri) by dgm (subscriber, #49227) [Link]

> So by "solving" one problem, you've created several others.

Much smaller ones, though. Having users unable to run your application because of a library incompatibility is much worse than using an extra 100 KB of RAM, or having to frequently update your browser (which you have to do anyway).

Introducing the Qt WebEngine

Posted Sep 15, 2013 23:48 UTC (Sun) by HelloWorld (subscriber, #56129) [Link]

Do you trust Google to rebase their libpng fork every time a security fix shows up in libpng? I certainly don't.

Introducing the Qt WebEngine

Posted Sep 16, 2013 2:06 UTC (Mon) by khim (subscriber, #9252) [Link]

Why do you think so? Do you really believe understaffed and overworked packagers can do a better job than the security team at Google?

I know the plural of anecdote is not data, but we can still try to test your funny theory. Let's take libpng (since you've raised it) and Ubuntu (since it's one of the more popular distributions).

CVE-2008-1382. Google fix: not vulnerable, Ubuntu fix 1.2.15~beta5-3ubuntu0.1 => Mar 16 2010.

CVE-2009-0040. Google fix: Feb 20 2009, Ubuntu fix: 1.2.15~beta5-3ubuntu0.1 => Mar 16 2010.

CVE-2009-2042. Google fix: Jun 22 2009, Ubuntu fix: 1.2.15~beta5-3ubuntu0.2 => Jul 8 2010.

CVE-2009-5063. Google fix: Mar 18 2010, Ubuntu fix: 1.2.15~beta5-3ubuntu0.5 => Mar 22 2012.

CVE-2010-0205. Google fix: Mar 18 2010, Ubuntu fix: 1.2.15~beta5-3ubuntu0.2 => Jul 8 2010.

CVE-2010-1205. Google fix: Jun 26 2010, Ubuntu fix: 1.2.15~beta5-3ubuntu0.3 => Jul 26 2011.

CVE-2011-2690. Google fix: Jul 29 2011, Ubuntu fix: 1.2.15~beta5-3ubuntu0.4 => Feb 16 2012.

CVE-2011-2692. Google fix: Jul 29 2011, Ubuntu fix: 1.2.15~beta5-3ubuntu0.4 => Feb 16 2012.

CVE-2011-3026. Google fix: Feb 8 2012, Ubuntu fix: 1.2.15~beta5-3ubuntu0.5 => Mar 22 2012.

CVE-2011-3045. Google fix: Mar 7 2012, Ubuntu fix: 1.2.15~beta5-3ubuntu0.6 => Apr 05 2012.

Well, Chromium is not perfect, but it does not look like the choice of Ubuntu's libpng is safer, that's for sure.

Introducing the Qt WebEngine

Posted Sep 16, 2013 2:51 UTC (Mon) by pizza (subscriber, #46) [Link]

Your dates don't line up.

For example. I picked one entry off your list at random:

> CVE-2011-2692. Google fix: Jul 29 2011, Ubuntu fix: 1.2.15~beta5-3ubuntu0.4 => Feb 16 2012:

These are the URLs to the actual CVE entry and the equivalent Ubuntu security announcement:

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2011-2692
http://www.ubuntu.com/usn/USN-1175-1/

The date on Ubuntu's Security Notice (which included the released updated packages) is '26th July, 2011', which is actually three days earlier than the fix was applied to the Chromium repository (the actual date of a released Chrom[e|ium] build would be even later).

I'm sorry, but you basically proved yourself wrong with this one.

(In your defense, the "last updated" date on the page you linked matches the date you specified, but given the work you went through to generate that list, you really should have checked the "release date" instead)

Introducing the Qt WebEngine

Posted Sep 16, 2013 13:01 UTC (Mon) by khim (subscriber, #9252) [Link]

In your defense, the "last updated" date on the page you linked matches the date you specified, but given the work you went through to generate that list, you really should have checked the "release date" instead.

Mea culpa. That's what one gets for reading LWN after midnight. I usually used these links to find out when a package was released, but apparently for obsolete packages they specify the obsolescence date, not the release date! That makes some kind of twisted sense, but it's not really useful.

Let's try again.

CVE-2008-1382. Google fix: not vulnerable, Ubuntu fix usn-730-1 => Mar 5 2009.

CVE-2009-0040. Google fix: Feb 20 2009, Ubuntu fix: usn-730-1 => Mar 5 2009.

CVE-2009-2042. Google fix: Jun 22 2009, Ubuntu fix: usn-913-1 => Mar 16 2010.

CVE-2009-5063. Google fix: Mar 18 2010, Ubuntu fix: usn-1367-1 => Feb 16 2012.

CVE-2010-0205. Google fix: Mar 18 2010, Ubuntu fix: usn-913-1 => Mar 16 2010.

CVE-2010-1205. Google fix: Jun 26 2010, Ubuntu fix: usn-957-1 => Jul 23 2010.

CVE-2011-2690. Google fix: Jul 29 2011, Ubuntu fix: usn-1175-1 => Jul 26 2011.

CVE-2011-2692. Google fix: Jul 29 2011, Ubuntu fix: usn-1175-1 => Jul 26 2011.

CVE-2011-3026. Google fix: Feb 8 2012, Ubuntu fix: usn-1367-1 => Feb 16 2012.

CVE-2011-3045. Google fix: Mar 7 2012, Ubuntu fix: usn-1402-1 => Mar 22 2012.

Thanks for the fix, but now the results look even closer to what one would expect a priori: since Ubuntu does not do extensive QA it can push out some updates a few days earlier, but because it does not do extensive QA it also misses some of them, and vulnerabilities often survive for months or years. Google rarely leaves vulnerabilities unpatched for years, but yes, it sometimes releases the fix a few days later than Ubuntu.

What provides better security: an approach where some updates land a few days faster but others go unpatched for months, or one where a few updates are applied more slowly but almost none are missed altogether? Wasn't one of the prides of FOSS the fact that its developers don't act like Apple or Microsoft and don't wait until a vulnerability is exploited in the wild?
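
(For anyone who wants to redo the arithmetic, a quick sketch that turns the dates above into lag figures. It uses the Google commit dates and the Ubuntu USN release dates exactly as listed, so it inherits the commit-versus-release caveat raised below, and it skips CVE-2008-1382, where Chromium was not vulnerable.)

    from datetime import date

    # (Google libpng fix commit, Ubuntu USN release), copied from the list above.
    fixes = {
        "CVE-2009-0040": (date(2009, 2, 20), date(2009, 3, 5)),
        "CVE-2009-2042": (date(2009, 6, 22), date(2010, 3, 16)),
        "CVE-2009-5063": (date(2010, 3, 18), date(2012, 2, 16)),
        "CVE-2010-0205": (date(2010, 3, 18), date(2010, 3, 16)),
        "CVE-2010-1205": (date(2010, 6, 26), date(2010, 7, 23)),
        "CVE-2011-2690": (date(2011, 7, 29), date(2011, 7, 26)),
        "CVE-2011-2692": (date(2011, 7, 29), date(2011, 7, 26)),
        "CVE-2011-3026": (date(2012, 2, 8), date(2012, 2, 16)),
        "CVE-2011-3045": (date(2012, 3, 7), date(2012, 3, 22)),
    }

    for cve, (google, ubuntu) in sorted(fixes.items()):
        lag = (ubuntu - google).days
        if lag >= 0:
            print(f"{cve}: Ubuntu USN {lag} days after the Chromium commit")
        else:
            print(f"{cve}: Ubuntu USN {-lag} days before the Chromium commit")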

Introducing the Qt WebEngine

Posted Sep 16, 2013 13:40 UTC (Mon) by pizza (subscriber, #46) [Link]

Ah, much better. (And it would seem I'd randomly managed to pick one of the two data points where Ubuntu apparently responded faster. Guess I got lucky)

There is still one remaining problem with your methodology -- you're comparing the date the fix was *committed* into the Chromium repo with the date that Ubuntu *released* a fixed set of packages. To be a true apples-to-apples comparison we need to know when Google released an updated Chrome build to the general public. (Does Google even publish that information?)

Flipping through the Chromium commits in more detail, for about half of them Google just blindly updated to a newer libpng release without explicitly marking it as security-related. Of the commits marked as security related, only two referenced a CVE#.

Anyway, thanks for doing this legwork to support your claims; I wish more people backed themselves up with some actual evidence.

> What provides better security: an approach where some updates land a few days faster but others go unpatched for months, or one where a few updates are applied more slowly but almost none are missed altogether? Wasn't one of the prides of FOSS the fact that its developers don't act like Apple or Microsoft and don't wait until a vulnerability is exploited in the wild?

Unfortunately, just because Google may often respond faster with Chromium than Ubuntu does, that doesn't mean Google doesn't miss stuff too. And crucially, it certainly doesn't mean that the vast majority of the other "app developers" out there pay any attention at all -- if they're even around any more! Unfortunately my experience suggests that Google is an extremely rare exception to a truly depressing norm.

Introducing the Qt WebEngine

Posted Sep 17, 2013 19:09 UTC (Tue) by khim (subscriber, #9252) [Link]

To be a true apples-to-apples comparison we need to know when Google released an updated Chrome build to the general public. (Does Google even publish that information?)

It's kind of hard to find this information because there isn't even a single date! Such fixes are usually pushed out in the week after the fix is committed to the repo, but there is a process which starts with 1% of users; then the results are observed (mostly crash reports), the update is pushed to 10% of users and then slowly to 100%. If the new version is too unstable then another version is made, and so on.
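
(A generic sketch of how percentage-based staged rollouts are commonly implemented; this is an assumption about the general technique, not a description of Google's actual updater. Each install is hashed into a stable bucket and is only offered the new version once the rollout percentage reaches its bucket.)

    import hashlib

    def in_rollout(install_id: str, version: str, percent: float) -> bool:
        # Stable per-install bucket in [0, 1): the same install always lands in
        # the same bucket for a given version, so ramping 1% -> 10% -> 100%
        # only ever adds installs, never flip-flops them.
        digest = hashlib.sha256(f"{install_id}:{version}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 2**32
        return bucket < percent / 100.0

    # Ramp the same simulated population as crash reports come back clean.
    for percent in (1, 10, 100):
        offered = sum(in_rollout(f"install-{i}", "update-42", percent)
                      for i in range(10_000))
        print(f"{percent:3d}% target -> {offered:5d} of 10000 installs offered the update")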

Believe me, it's not easy to push an update when the number of users you support is near a billion.

I think the time of the commit to the repo is the last hard, observable date, even if it's biased by a week or so.

Flipping through the Chromium commits in more detail, for about half of them Google just blindly updated to a newer libpng release without explicitly marking it as security-related.

Not true at all. If you can't see such information, that does not mean the information does not exist. It's not in the commit message because it's not a good idea to advertise in a publicly accessible place that you've just closed a vulnerability.

Of the commits marked as security related, only two referenced a CVE#.

Right - these are the ones where Google was not fast enough and actually committed the fix after the CVE was issued (and even in these cases the CVE number doesn't really belong in the commit message).

Unfortunately my experience suggests that Google is an extremely rare exception to a truly depressing norm.

Well, there is a good reason for that. Browsers are not just as complex as operating systems nowadays; they also deal with malware on a constant basis. Think about it: even most local kernel exploits are considered low risk because most user systems are single-user ones in this day and age. But a browser executes malicious code on your system! That means the browser is probably the most security-critical piece of code in today's systems. That's why Chromium includes quite strict sandboxes, that's why there is a special team which tries to quickly patch things in the code (third-party code or core Chromium code), and so on. Even other Google-backed projects are not as paranoid as the Chromium developers. Chromium developers are the exception because their creation attracts an exceptional number of attackers.

Introducing the Qt WebEngine

Posted Sep 17, 2013 21:57 UTC (Tue) by pizza (subscriber, #46) [Link]

> Not true at all. If you can't see such information it does not mean that said information does not exist.

AKA "trust us", which is hard to to when they don't seem to be following any sort of publically-consistent process.

Introducing the Qt WebEngine

Posted Sep 17, 2013 22:28 UTC (Tue) by pizza (subscriber, #46) [Link]

> CVE-2010-1205. Google fix: Jun 26 2010, Ubuntu fix: usn-957-1 => Jul 23 2010.

With this one (picked at random) there's more detail discernable:

Bug originally reported to the Google Chromium team by a Chromium user on Ubuntu on June 6. The same user reported it to Mozilla on the same day.
Libpng folks notified on June 16.
Fix committed into mozilla repo on June 25.
Fixed libpng 1.4.3 released on June 26.
Google fix committed on June 26.
Google Chrome stable release on July 2.
Fedora fix released July 06.
Ubuntu fix released on July 08. (USN 960-1)
Debian fix released July 19.
Mozilla fix released July 20 via their standard release cadence.
Fedora released fixed firefox/etc on July 22nd.
Ubuntu released fixed firefox/etc on July 23rd, with USN 957-1, which you referenced above.

All I can take from this is that one vendor that bundled libpng released a fix more than two weeks after another vendor that bundled libpng.

Introducing the Qt WebEngine

Posted Sep 17, 2013 23:34 UTC (Tue) by khim (subscriber, #9252) [Link]

With this one (picked at random) there's more detail discernable.

Well, it's the second time you've "picked at random" one of the very few problematic cases. Sorry about that mistake, but your "random" is starting to look mightily suspicious (but then again, my first mistake looked even stranger: I should have noticed that the timings were suspicious in all cases, so I won't claim that you are just picking the worst examples from my list).

But even after your rightful correction the situation has not changed materially: yes, in this particular case Ubuntu decided for some reason to release the libpng and Firefox updates separately and I picked the wrong one; big deal. Even after all the corrections, Google's release of Chromium was still faster than Fedora's and Ubuntu's releases of the unbundled libpng. I'm not saying that Ubuntu is all that bad, but the Chromium guys are still doing better work.

Introducing the Qt WebEngine

Posted Sep 18, 2013 3:11 UTC (Wed) by pizza (subscriber, #46) [Link]

Okay, it wasn't truly at random -- I just picked the one in the middle of the list this time, but I didn't dive into anything else.

> Google's release of Chromium was still faster than Fedora's and Ubuntu's releases of the unbundled libpng. I'm not saying that Ubuntu is all that bad, but the Chromium guys are still doing better work.

You are correct; Chromium (i.e. Chrome) got an updated version into the hands of end users faster than anyone else in this case. And as you pointed out, there were still two cases where Ubuntu apparently took many months to issue an update for a published CVE.

But my other point remains -- just because Google (and to a lesser extent, Mozilla) is on the ball doesn't mean anyone else is. Fedora/Ubuntu/whatever updating the distro-supplied libpng automatically fixed *every other application on the system*, save for Firefox (and its familial relatives). (In Fedora's case, the system libpng update would have taken care of Chromium as well!)

But since we're now talking about exposure-window differences measured in days: as nothing forces users to restart their browsers to finish the update, users are likely to remain vulnerable for some time after the update is pushed.

Introducing the Qt WebEngine

Posted Sep 13, 2013 11:11 UTC (Fri) by dsommers (subscriber, #55274) [Link]

If you use third-party libraries based on the upstream versions packaged in most distros, you don't have to worry that much about bug and security updates. You mainly need to care about your own unique software.

If you instead do the "Google thing" and bundle a bunch of third-party libraries into one big package and ship it, then you as an end user cannot be sure it has the same security and bug fixes as the rest of your distro. And if more projects do this as well, then you can really end up in a fun scenario where some of your applications are more vulnerable than others, based on which versions they bundle. And do you as an end user pay attention to which third-party libraries an application uses, and whether each one is system-wide or bundled? I suspect most users don't care about that, they just want a safe product which works and gets the job done well.

I just hope Google at some point gives up this insanity and cooperates with upstream projects to resolve whatever issues are the reasons they do this bundling nowadays.

From a security and maintenance perspective, that should, from Google's point of view, mean far less hassle and less maintenance work in the long term.

From an end-user perspective, it gives you software packages which are smaller and take less space on your system (no need for multiple installations of the SQLite3 libraries, for example), and your system is likely safer due to more rapid updates from your distro vendor. And Google would only need to push out updates when something related to their own software needs fixing.

Introducing the Qt WebEngine

Posted Sep 13, 2013 17:00 UTC (Fri) by khim (subscriber, #9252) [Link]

If you use third-party libraries based on the upstream versions packaged in most distros, you don't have to worry that much about bug and security updates. You mainly need to care about your own unique software.

This only works if all distributions carry the same version of the library and don't introduce any changes (or at least introduce similar changes). Which is not true in practice.

From a security and maintenance perspective, that should, from Google's point of view, mean far less hassle and less maintenance work in the long term.

Nope. Today Google deals with exactly one version of each bundled library. You are asking it to deal with a bazillion versions floating around. How exactly is it “far less hassle and less maintenance work”?

I suspect most users don't care about that, they just want a safe product which works and gets the job done well.

Right. But the best security is achieved not when distributions do the updates, and not when Google does the updates, but when the most diligent party does the updates. Do you have any studies which show that Google is doing a worse job than, e.g., Debian or Ubuntu? Or are you just assuming that a small group of people which supports tens of thousands of packages does a better job than a larger group which deals with hundreds of packages?

And if more projects do this as well, then you can really end up in a fun scenario where some of your applications are more vulnerable than others, based on which versions they bundle.

Sure. But said “fun scenario” is actually more secure than the alternatives if the developers of the applications which bundle some libraries are more diligent than the distribution maintainers.

Introducing the Qt WebEngine

Posted Sep 13, 2013 17:39 UTC (Fri) by pizza (subscriber, #46) [Link]

> Nope. Today Google deals with exactly one version of each bundled library. You are asking it to deal with a bazillion versions floating around. How exactly is it “far less hassle and less maintenance work”?

Actually, the problem with Google isn't that they bundle third-party libraries with Chrome, but rather that they bundle *modified* third-party libraries, usually without even attempting to push their API/ABI-incompatible changes upstream.

They treat those privately-forked libraries as part of the Chrome codebase. Not unlike how they treat nearly everything else they touch, incidentally.

I think that attitude is misguided in the long run, but hey, it's their money to spend as they wish.

Introducing the Qt WebEngine

Posted Sep 13, 2013 19:49 UTC (Fri) by khim (subscriber, #9252) [Link]

They treat those privately-forked libraries as part of the Chrome codebase.

Nope. They treat these libraries in the exact same way distributions treat upstream: they include changes which make sense for them, and they always include a README.chromium file, too, which contains a detailed explanation of the local modifications. Things like "Added #ifdef'd definitions of a few symbols to support 10.5 SDK" or "Support for Windows (patches/re2-msvc9-chrome.patch)". Sometimes these patches are pushed upstream, sometimes they aren't pushed anywhere - again, exactly like distributions do.

I don't see anything wrong with this approach. What's good for the goose is good for the gander.

I think that attitude is misguided in the long run, but hey, it's their money to spend as they wish.

If this attitude is not misguided when used by package maintainers in distributions, then why is it misguided when used by Chromium developers?

Introducing the Qt WebEngine

Posted Sep 13, 2013 22:28 UTC (Fri) by pizza (subscriber, #46) [Link]

> Nope. They treat these libraries in the exact same way distributions treat upstream: they include changes which make sense for them, and they always include a README.chromium file, too, which contains a detailed explanation of the local modifications. Things like "Added #ifdef'd definitions of a few symbols to support 10.5 SDK" or "Support for Windows (patches/re2-msvc9-chrome.patch)". Sometimes these patches are pushed upstream, sometimes they aren't pushed anywhere - again, exactly like distributions do.

Yes, it's exactly the same as what distros are doing, except for the part where distros *aren't* making incompatible-with-upstream changes to the library and its APIs -- you know, so libffi.so.5 on Fedora is the same as libffi.so.5 on Ubuntu. Distros attempt to standardize on upstream, while Chromium attempts to standardize only within its own sandbox.

This reinvention of privately bundled libraries (aka "self-contained apps") scares the hell out of me, because I remember all too well the security fiascos (e.g. zlib) that led the distros to ardently unbundle everything, and the level of public embarrassment it took to get many $bigvendors to update their software.

Introducing the Qt WebEngine

Posted Sep 14, 2013 1:23 UTC (Sat) by xtifr (subscriber, #143) [Link]

This reinvention of privately bundled libraries (aka "self-contained apps") scares the hell out of me, because I remember all too well the security fiascos (e.g. zlib) that led the distros to ardently unbundle everything, and the level of public embarrassment it took to get many $bigvendors to update their software.
I agree. This thread has cured me of any interest I might have had in ever running Chrom{e,ium}.

The worst part is that they're attempting to work around a basically solved problem. Debian and Fedora and many others are able to coordinate gigantic masses of shared libraries, and make them work and be compatible. Because, y'know, this sort of thing used to be a big problem, and soversioning provided a very straightforward and easy fix, and nobody wants to return to the bad old days when different versions of libraries had random, hard-to-track incompatibilities.

Open/Libre Office, Mozilla, and countless other projects don't seem to have the sorts of problems that Google seems so worried about. In my experience, these horrible library incompatibilities seem only to occur in the most obscure and poorly maintained libraries. Which are probably not the best thing to rely on in the first place.
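
(A small, hedged illustration of what soversioning buys a dynamically linked consumer; libpng and its png_access_version_number() call are used only because they are familiar from this thread. The program asks for the library through its stable soname, so whichever patched build of that ABI the distro has installed is what actually gets loaded, with no rebuild of the application.)

    import ctypes
    import ctypes.util

    # Resolve the system libpng by its soname (e.g. "libpng12.so.0" or
    # "libpng16.so.16", depending on the distro); the application only
    # depends on the ABI, not on a bundled copy.
    name = ctypes.util.find_library("png")
    if name is None:
        raise SystemExit("no system libpng found")

    libpng = ctypes.CDLL(name)
    libpng.png_access_version_number.restype = ctypes.c_uint
    ver = libpng.png_access_version_number()  # e.g. 10244 for 1.2.44
    print(f"loaded {name}: libpng {ver // 10000}.{(ver // 100) % 100}.{ver % 100}")

When the distro ships a fixed libpng build under the same soname, every consumer like this one picks it up on its next start, which is exactly the property bundling gives away.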

Introducing the Qt WebEngine

Posted Sep 15, 2013 21:02 UTC (Sun) by Mook (guest, #71173) [Link]

Mozilla actually does appear to have these problems (I'm singling them out because they're what I observe, not because I think they're special in this regard). Heck, Mozilla developers are leaning away from making changes in NSPR, which is hosted on hg.mozilla.org (but has a separate ownership structure from Firefox, and Mozilla managers can't really lean on people to get reviews within their release cycles).

As an application developer: upstream doesn't always want your changes, sometimes because they would be detrimental to a different consumer. In the closed-source world, this would mean egregious workarounds, ceasing to use the upstream library, or giving up and not making the desired change. (Or, given enough money, licensing the upstream source code and keeping a private fork - see Flash in Chrome for an example.) In the open-source world, forking becomes an option. Though this is definitely not as good as having your changes upstream, it's often better than dropping the change in the first place - at least from the application developer's point of view.

For a random example: this directory appears to contain a bunch of patches to Cairo; I could find no mention of Direct2D/DirectWrite (Windows stuff) in upstream git or bugzilla.

Introducing the Qt WebEngine

Posted Sep 14, 2013 11:15 UTC (Sat) by nim-nim (subscriber, #34454) [Link]

> I don't see anything wrong with this approach. What's good for the goose
> is good for the gander.

Nope, that's not the same at all.
What app writers do not want to hear is that users *like* to have a single point of contact (the distro), and *like* to be able to check easily if there's a security (or other) problem in a package in a single place.

I *don't* care as a user whether a Google lib shares security fixes with the same Google lib on a gazillion other OSes I don't use. I *do* care that it shares security fixes with the rest of the system I do use.

The only people interested in app-centric security management are app authors; everyone else wants system-centric management. That's why all the zero-deployment app-centric technologies have been market failures. That's why the Apple store/Android marketplace/Mozilla extensions have been successes. Not because they enabled app authors to do stupid things like bundling libs (as app authors claim). But because they forced all the app authors that wanted to do their usual one-of-a-kind braindamaged private app-centric deployment thing to go through the single system-wide channel users wanted.

Introducing the Qt WebEngine

Posted Sep 14, 2013 15:54 UTC (Sat) by dashesy (subscriber, #74652) [Link]

I do not get your logic. I think central repositories could very well play the role of the "single system-wide channel" you praise.

App markets are successful because apps work; users care more about apps not crashing (because of an unexpected ABI change) than about security. That is a fact of life. Similarly, in countries with a tight grasp on security, life is not as vibrant; you need some room between the gears for the machinery to work smoothly.

Introducing the Qt WebEngine

Posted Sep 14, 2013 17:33 UTC (Sat) by khim (subscriber, #9252) [Link]

What app writers do not want to hear is that users *like* to have a single point of contact (the distro)

Are you sure? I know anecdote is not data, but I, for one, absolutely hate the fact that packagers insert themselves between me and developers and make interaction harder.

*like* to be able to check easily if there's a security (or other) problem in a package in a single place.

You don't need packagers for that. Appstore model works just fine, thank you very much.

The only people interested in app-centric security management are app authors

True.

everyone else wants system-centric management.

Not true. Most users don't care who exactly provides the updates as long as updates are provided. And a model where a central “system-centric” repo provides updates for all applications out there just does not scale, so most developers and users don't use it.

That's why the Apple store/Android marketplace/Mozilla extensions have been successes.

They were successes for the same reason Walmart was a success: people like one central store where you can buy stuff from a bazillion producers. And since the size of the store matters, the “system-centric” distro packages were unable to achieve in a quarter-century what Appstores achieved in a couple of years.

But because they forced all the app authors that wanted to do their usual one-of-a-kind braindamaged private app-centric deployment thing to go through the single system-wide channel users wanted.

You mean like Nintendo and Sony were doing with the DS and PSP? Sorry to disappoint you, but Nintendo sold as many DS systems in six years, combined, as Apple sells iPhones yearly. iPhones are more expensive, mind you. Android phones may be as cheap as a Nintendo DS, but they outsell the DS 20:1 - it's not even funny anymore.

No, centrally controlled distribution does not work. A free-for-all market works, but it's a different beast, and bundled libraries are part of this model.

Packages vs Appstore model

Posted Sep 14, 2013 19:06 UTC (Sat) by gioele (subscriber, #61675) [Link]

> You don't need packagers for that. Appstore model works just fine, thank you very much.

It does not in the general case. To solve DLL hell, most developers just embed libraries and do not care about security in the libraries they embed, nor do they care to test their applications in many different configurations.

The Appstore model requires more effort from the upstream developers, and this effort is put in only in the hope of making money from it. Most open source developers do not have such an incentive.

Introducing the Qt WebEngine

Posted Sep 13, 2013 10:23 UTC (Fri) by freetard (guest, #92836) [Link]

On Mac OS X and Windows, proprietary and open source software look much the same to end users.

On Linux, however, proprietary software has a major advantage. Proprietary software doesn't have to deal with arrogant packagers. They won't get massaged.

Introducing the Qt WebEngine

Posted Sep 13, 2013 10:54 UTC (Fri) by niner (subscriber, #26151) [Link]

Proprietary software developers may have this advantage. Proprietary software users, on the other hand, have a massive disadvantage: they are subject to vulnerabilities in bundled libraries that may long since have been fixed in the version supplied by the distribution. This is a huge problem on Windows and one of the very few real technical advantages of the Linux desktop with regard to security.

Introducing the Qt WebEngine

Posted Sep 14, 2013 4:53 UTC (Sat) by mrdocs (guest, #21409) [Link]

Huh ?

You have obviously not been exposed to some of the worst packaging I've seen on Windows. Some of it is so bad that an uninstall can break things so badly it causes problems only fixed by a complete re-image of Windows.

I'm no fan of Debian, but you will never hear me diss their packaging standards.

I've never seen proprietary software packaged better than the distro packaging.

Introducing the Qt WebEngine

Posted Sep 15, 2013 17:41 UTC (Sun) by freetard (guest, #92836) [Link]

> You have obviously not been exposed to some of the worst packaging I've seen on Windows. Some of it is so bad that an uninstall can break things so badly it causes problems only fixed by a complete re-image of Windows.

Easy to create a harmful DEB or RPM: http://lwn.net/Articles/367874/

Third party repositories can also screw up your system easily, given that they have root permission.

> I've never seen proprietary software packaged better than the distro packaging.

Compare Chrome, Iceweasel and Opera.

Bonus exercise: try installing the latest Firefox on CentOS 6.

Introducing the Qt WebEngine

Posted Sep 14, 2013 6:43 UTC (Sat) by Otus (guest, #67685) [Link]

> On Linux, however, proprietary software has a major advantage. Proprietary
> software doesn't have to deal with arrogant packagers. They won't get
> massaged.

On Linux, FOSS has an advantage when I can install directly from the
archive. I'm much more likely to try the software if it comes from my
distro, and not from a random web page.

It's not just that I wouldn't trust the software to be non-malicious; the
main problem is I don't trust random software not to screw with my system
even accidentally. You can see what happens when everything has its own
idea of how to do things if you look at a typical aged Windows install.

Introducing the Qt WebEngine

Posted Sep 15, 2013 17:32 UTC (Sun) by freetard (guest, #92836) [Link]

Distro software:
1. May not run at all
2. Generally outdated (unless Arch)
3. Generally fucked (for example Firefox VS Iceweasel)

A random third-party software repository is more dangerous than random Windows software: repository software has root permission to screw up your system, while random Windows software has to bypass ever-stricter Windows permissions and Windows's built-in protection features to screw up your system.

Introducing the Qt WebEngine

Posted Sep 16, 2013 2:30 UTC (Mon) by pizza (subscriber, #46) [Link]

>Distro software:
>1. May not run at all

...huh? If anything, things are considerably more likely to run when bundled with a distro.

> 2. Generally outdated (unless Arch)

In other words, "generally adheres to the release cadence of the distro" but some upstream stuff (eg kernel, high profile applications like web browsers, office suites) tend to be kept up to date. In any case, you have a wide choice of distros that adhere to different release philosophies/cadences.

> 3. Generally fucked (for example Firefox VS Iceweasel)

Given that only Debian ships Iceweasel, I fail to see how you can make this into a generalization -- And let's be honest, these days one's not likely to be using Debian unless they actually care about Debian's Social Contract, which makes your complaint about as valid as if you'd complained that the church choir goes to church on Sundays.

Introducing the Qt WebEngine

Posted Sep 16, 2013 3:33 UTC (Mon) by freetard (guest, #92836) [Link]

> In other words, "generally adheres to the release cadence of the distro" but some upstream stuff (eg kernel, high profile applications like web browsers, office suites) tend to be kept up to date. In any case, you have a wide choice of distros that adhere to different release philosophies/cadences.

Yes, users shouldn't choose software releases on a case-by-case basis; instead, they should accept whatever shit distros ship. Worse, some distro maintainers are zombies, making release cadences a joke.

> Given that only Debian ships Iceweasel, I fail to see how you can make this into a generalization

What about https://build.opensuse.org/package/show/mozilla:Factory/M...

Introducing the Qt WebEngine

Posted Sep 16, 2013 5:34 UTC (Mon) by FranTaylor (guest, #80190) [Link]

No doubt your excellent skill with personal relationships and your warm attitude will convince these people of the error in their ways.

I'm sure you enjoy great success in your daily life with the fine manner in which you treat people whom you have not met.

Introducing the Qt WebEngine

Posted Sep 16, 2013 6:52 UTC (Mon) by niner (subscriber, #26151) [Link]

> What about https://build.opensuse.org/package/show/mozilla:Factory/M...

Excellent example! openSUSE gives me a Firefox that's actually somewhat integrated into my KDE desktop. openSUSE's Firefox is thereby better for me than Mozilla's own. A perfect example of what distributions can do for the user. That I often notice the release of a new Firefox version only when Firefox checks its extensions, rather than by having to wait for the update itself to be installed, is a nice bonus.

Introducing the Qt WebEngine

Posted Sep 16, 2013 6:58 UTC (Mon) by torquay (guest, #92428) [Link]

    In other words, "generally adheres to the release cadence of the distro" but some upstream stuff (eg kernel, high profile applications like web browsers, office suites) tend to be kept up to date. In any case, you have a wide choice of distros that adhere to different release philosophies/cadences.

This is where distros (more correctly, distinct Operating Systems) actually get in the way. Whatever the "cadence", in each distro/OS release we quite often end up with almost everything being updated, which in turn causes a lot of breakage of user software due to API/ABI breakage in the underlying OS libraries. By user software I mean stuff that's not included in the OS (eg. the user's own software).

Furthermore, for a random piece of software within the OS that's been updated by the original developers, we need to wait for the appropriate OS overlords to grant us an update, which typically happens upon the next release of the entire OS (bar the exceptions noted above by pizza). I'd much rather get the updated software immediately (possibly directly from the vendor or through an app store), rather than be forced to wait x months and then upgrade my entire system, which in turn brings in a fresh set of API/ABI breakage.

The underlying idea of a distro is to cover up API/ABI breakage by pre-building a lot of stuff and putting in the effort to work around API/ABI breaks. In other words, the people doing the distro in effect become part of the developers of a bazillion programs and libraries. This is not a long-term or sustainable solution. It also needlessly delays software updates.

A much better alternative to a distro is to have a bare-bones OS, where the updates/upgrades move at a different speed to the applications sitting on top of the OS. The underlying OS will have a strict guarantee of no API/ABI breaks, which doesn't preclude new APIs being added.

Introducing the Qt WebEngine

Posted Sep 16, 2013 10:16 UTC (Mon) by Otus (guest, #67685) [Link]

> Whatever the "cadence", in each distro/OS release we quite often end up
> with almost everything being updated, which in turn causes a lot of
> breakage of user software due to API/ABI breakage in the underlying OS
> libraries.

I agree that upgrades too often break API/ABI, but "often" is very relative.
With Ubuntu LTS I get 5 years between releases. Within a release there's
~no breakage, IME.

Then there's of course RHEL with 10+ years of support.

I.e. if your distro has a too fast cadence, isn't it simply the wrong one
for you?

> I'd much rather get the updated software immediately (possibly directly
> from the vendor or through an app store)[...]

And you can... Isn't it the best of both worlds that you can either use the
distro version or go directly to the upstream to download either a tarball
or often packages? Many even have repos or PPAs for common distros.

Introducing the Qt WebEngine

Posted Sep 16, 2013 12:33 UTC (Mon) by torquay (guest, #92428) [Link]

    if your distro has a too fast cadence, isn't it simply the wrong one for you?

That's not the point. The current idea of a distro is broken. Instead of decoupling apps from the underlying OS, we have a massive mud ball of everything. The "long-term" releases of distros simply mask and postpone the problem of API/ABI instability.

Within the current distro model, everything will be updated at the next release (at whatever cadence). This causes a lot of problems. Instead I'm arguing for a decoupled OS, where there is a clear separation between the OS and apps, with the two being updated at varying speeds. (The separation can be in several layers, or rings if you like).

PPAs for Ubuntu are a step in the right direction, but they only do part of the job, and are sporadic at best in terms of software coverage.

Introducing the Qt WebEngine

Posted Sep 16, 2013 13:22 UTC (Mon) by anselm (subscriber, #2796) [Link]

Instead I'm arguing for a decoupled OS, where there is a clear separation between the OS and apps, with the two being updated at varying speeds.

If that's your itch, then go scratch it.

One more distribution can't possibly hurt, and if it does turn out to have an advantage over the mainstream model and become popular, then more power to you. (In the meantime I'll stay with my distribution, which works very nicely for me, thank you very much.)

Introducing the Qt WebEngine

Posted Sep 17, 2013 19:29 UTC (Tue) by khim (subscriber, #9252) [Link]

One more distribution can't possibly hurt, and if it does turn out to have an advantage over the mainstream model and become popular, then more power to you.

Are you really sure? Because it's essentially a solved problem: one such distribution exists, it's extremely popular, and now the people behind other distributions loudly complain about it, because they are being left behind and cannot compete.

P.S. I'm talking about Android here, of course. It's already on phones and tablets, and there are plenty of mini-desktops with it. You cannot currently use it for development, but hey, for the first 10 years after the PC was introduced a lot of software for it was developed on UNIX workstations, too!

Introducing the Qt WebEngine

Posted Sep 17, 2013 22:06 UTC (Tue) by anselm (subscriber, #2796) [Link]

I don't actually see the big Linux distributions slowing down on account of Android. It's too different a beast to be real competition in the space that the other distributions occupy already. It is true that Android is very popular on phones and tablets but so far not on (non-ARM) desktops and servers, which is where most of the other Linux distributions live – and I don't see it making inroads there anytime soon. (It doesn't seem to be part of Google's strategy, anyway, but that is neither here nor there.)

If torquay thinks that an Android-like model would work for a desktop- or server-type Linux distribution that actually competes on the same hardware with the likes of Ubuntu, Debian or openSUSE then he will have to do the legwork (unless he can get somebody else to do it for him). The success of Android on completely different types of hardware that didn't have an established presence of those distributions doesn't really prove anything either way.

(For the record, I have an ASUS Transformer Pad TF700, which is a high-end Android tablet with an optional physical keyboard. It is a great little machine for what it's worth and very useful indeed, but I would never consider replacing my Linux PC with it – Android isn't up to it and won't be for the foreseeable future, and even the Debian chroot I have on it sucks in various respects compared to a »real« Linux machine.)

Introducing the Qt WebEngine

Posted Sep 16, 2013 14:15 UTC (Mon) by pizza (subscriber, #46) [Link]

> That's not the point. The current idea of a distro is broken. Instead of decoupling apps from the underlying OS, we have a massive mud ball of everything. The "long-term" releases of distros simply mask and postpone the problem of API/ABI instability.

So... what do you propose as a "better" alternative? I'm genuinely asking here -- I mean, it's one thing to say what we have now isn't ideal (and I doubt anyone would disagree!) but I'm not aware of any non-trivial gains we could make on one axis without adversely affecting another.

Introducing the Qt WebEngine

Posted Sep 16, 2013 14:57 UTC (Mon) by torquay (guest, #92428) [Link]

  1. It'd be good to get away from user-facing apps being purely controlled by the distro. Let the vendors themselves deliver software more directly to the users, possibly via an appstore-like setup (which itself could be controlled by a distro). The software can rely on OS-provided libraries that are guaranteed to have no API/ABI breakage. Other libraries which do not have such guarantees can be bundled with the software. A sandbox approach (e.g. a quickstart/lightweight KVM) can be used to address security concerns.
     
  2. Implement the above within a software stack based on a rings/layers framework, where there are specific guarantees & policies at each layer. For example, the "More Agile Fedora" rings framework proposed by Matthew Miller. Matthew's presentation covers a lot of related points (use space bar to advance slides). IMHO this is the way forward.

Introducing the Qt WebEngine

Posted Sep 16, 2013 15:32 UTC (Mon) by niner (subscriber, #26151) [Link]

Sounds like you just want to use openSUSE. User-facing apps are _not_ purely controlled by the distro, and neither are other lower-level parts of my system. I use openSUSE 12.3 but have kernel 3.11, KDE 4.11 and PostgreSQL 9.3. I also run an up-to-date git version of Mesa. All this I got from my distribution's app store, which is called the openSUSE Build Service: http://software.opensuse.org/search

I installed this system more than a decade ago and have gone through countless distribution upgrades, despite generally using the latest versions of the most important packages. And it works. It works very well for me.

Introducing the Qt WebEngine

Posted Sep 16, 2013 15:48 UTC (Mon) by rahulsundaram (subscriber, #21946) [Link]

You are still going through a set of ad-hoc repos provided by the distribution. Moving away from that model to distro-neutral systems requires a set of core changes, including solid support for sandboxing, so that random untrusted software can be installed and used. The systemd project is doing some work in this regard, and GNOME Software Center is building on it with some DE- and distro-neutral specifications. Of course, torquay shouldn't pretend that this model is always better either. It is just a different set of tradeoffs.

Introducing the Qt WebEngine

Posted Sep 18, 2013 10:37 UTC (Wed) by krake (subscriber, #55996) [Link]

It will also require a store/repository that can deal with shared dependencies for a single vendor.

Most "app stores" currently lack the option for vendors with larger portfolios to share core components between their applications.

Which is fine for the simple apps we find on phones and tablets today, but would not work for anything like office or creativity suites.

Introducing the Qt WebEngine

Posted Sep 18, 2013 14:26 UTC (Wed) by khim (subscriber, #9252) [Link]

Which is fine for the simple apps we find on phones and tablets today, but would not work for anything like office or creativity suites.

Why? Shared components even for large “office or creativity suites” are smaller than most games!

Introducing the Qt WebEngine

Posted Oct 2, 2013 11:39 UTC (Wed) by krake (subscriber, #55996) [Link]

Maybe. I had the impression that for such suites the main part of the functionality is in the shared components, with the apps providing the different user interfaces and the really domain-specific extensions.

Bundling each app with the shared components is a huge overhead in space, download time, packaging effort, updating effort and so on.
Doable? Yes. Desirable? No.

Introducing the Qt WebEngine

Posted Sep 24, 2013 9:45 UTC (Tue) by jospoortvliet (subscriber, #33164) [Link]

Not entirely true; quite a few of the repos on build.opensuse.org are actually managed by the upstream projects themselves, and others have their own build service instance (https://obs.kolabsys.com/). We'd love to have more upstream projects use OBS to build packages for the distros they work with - you can use build.o.o to build your official packages for Fedora, Ubuntu, Mageia, Arch, openSUSE, RHEL, SLES, Debian and more.

Introducing the Qt WebEngine

Posted Sep 16, 2013 21:42 UTC (Mon) by lsl (subscriber, #86508) [Link]

> Let the vendors themselves deliver software more directly to the users,

Many upstreams don't want to get into the business of shipping binaries. Also, many users actually like the way software management is handled in most distros.

Please don't assume that the current distro model is broken just because it doesn't suit you.

Introducing the Qt WebEngine

Posted Sep 17, 2013 15:00 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

And don't assume that because it works for you, it's all ponies and rainbows. What about upstreams who would rather ship the bits themselves (e.g., Chromium, Steam, Humble Bundle games, etc.)? If all they can rely on is libc.so and the X11 libraries, it's no wonder that their own copies (as much as I hate it) of things like libSDL.so get bundled up. If they don't bundle, they either don't work on last year's release or won't work on next year's.

Introducing the Qt WebEngine

Posted Sep 16, 2013 15:13 UTC (Mon) by Otus (guest, #67685) [Link]

> Within the current distro model, everything will be updated at the next
> release (at whatever cadence). This causes a lot of problems. Instead I'm
> arguing for a decoupled OS, where there is a clear separation between the
> OS and apps, with the two being updated at varying speeds. (The
> separation can be in several layers, or rings if you like).

Certainly things could be better, but there's always a trade-off. If parts
of the system are upgraded more often, they also have a higher chance of
regressing.

Many of the things that would be classified as "apps" under a decoupled
OS/apps system are things I don't want to see upgrade often. Things like
file browser, email client, messengers, etc. There's innovation to be had,
but I don't need it, or when I do I'm fine with going through the effort of
upgrading manually.

Currently I can keep most of the system stable for up to five years with 
the LTS I use, still get kernel and X upgrades every six months, and track 
the bleeding edge of the few apps I really care about more closely.

I.e. I already have "OS" (95+% of the software on my system) vs. "apps".

Sure, every five years I must either upgrade with my fingers crossed or
rebuild the system on a new foundation, but that's seldom enough that I'm
fine with it. I seem to rebuild my computers more frequently than that in
any case.

Introducing the Qt WebEngine

Posted Sep 16, 2013 10:24 UTC (Mon) by Otus (guest, #67685) [Link]

> Distro software:
> 1. May not run at all

I don't know which distros you've been running, but that's not my experience at
all. The probability of it running on your system is >> that of random non-distro software.

> A random third-party software repository is more dangerous than random
> Windows software: repository software has root permission to screw up your
> system, while random Windows software has to bypass ever-stricter Windows
> permissions and Windows's built-in protection features to screw up your system.

Yes, third party repositories are dangerous. However, most random Windows
software seems to ship with their own as-root auto-updaters these days. And
of course the installer you run when you first get it usually requires root
as well (and you will grant it).

At least with package managers the list of places that can feed you malware
is centralized. You can always drop a repository from the list after the
initial install if you don't want updates.

Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds