Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 23:41 UTC (Mon) by mezcalero (subscriber, #45103)
In reply to: Poettering: Revisiting how we put together Linux systems by torquay
Parent article: Poettering: Revisiting how we put together Linux systems

"overly elaborate scheme"? I don't actually think it's that complex. I mean, it's just agreeing on a relative simple naming scheme for sub-volumes, and then exchanging them with btrfs send/recv, and that's pretty much it. We simply rely on the solutions btrfs already provides us with for the problems at hand, we push the de-dup problem, the packaging problem, the verification problem, all down to the fs, so that we don't have to come up with anything new for that!

I love the ultimate simplicity of this scheme. I mean, coming up with a good scheme to name btrfs sub-volumes is a good idea anyway, and then going one step further and actually packaging the OS that way is not that big a leap!

I mean, maybe it isn't obvious when one comes from classic packaging systems with all their dependency graph theory, but looking back, after figuring out that this could work, it's more like "oh, well, this one was obvious..."
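
To make that concrete, here is a minimal sketch of what exchanging a named sub-volume with btrfs send/receive could look like, driven from Python. The sub-volume name and paths below are invented for illustration; only "btrfs subvolume snapshot -r", "btrfs send" and "btrfs receive" themselves are real commands, and an incremental send ("btrfs send -p <parent>") is what would let a receiver share data with a runtime version it already has.

    import subprocess

    # Hypothetical sub-volume name following the colon-separated scheme
    # discussed in this thread; the real naming rules come from the proposal.
    SRC = "/var/lib/machines/usr:org.example.MyRuntime:x86_64:20"
    DEST_POOL = "/mnt/other-btrfs-pool"  # any other mounted btrfs filesystem

    def publish(subvol):
        """Make a read-only snapshot, which is what btrfs send requires."""
        snap = subvol + ".ro"
        subprocess.run(["btrfs", "subvolume", "snapshot", "-r", subvol, snap],
                       check=True)
        return snap

    def transfer(snapshot, dest_pool):
        """Stream the snapshot into another pool with btrfs send/receive."""
        sender = subprocess.Popen(["btrfs", "send", snapshot],
                                  stdout=subprocess.PIPE)
        subprocess.run(["btrfs", "receive", dest_pool],
                       stdin=sender.stdout, check=True)
        sender.stdout.close()
        if sender.wait() != 0:
            raise RuntimeError("btrfs send failed")

    if __name__ == "__main__":
        transfer(publish(SRC), DEST_POOL)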



Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 17:18 UTC (Tue) by paulj (subscriber, #341) [Link] (50 responses)

On the app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133 example: The idea is that multiple distros could provide that GNOME3_20:x86_64:133 dependency, is that correct?

If so, I'm wondering:

- Who assigns or registers or otherwise coördinates these distro-abstract dependency names?

- Who specifies the ABI for these abstract dependencies? I guess for GNOME3_20?

- What if multiple dependencies are needed? How is that dealt with?

The ABI specification thing for the labels seems a potentially tricky issue. E.g., should GNOME specify the one in this example? But what if there are optional features distros might want to enable/disable? That means labels are needed for every possible combination of ABI-affecting options that any dependency might have?


Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 18:55 UTC (Tue) by raven667 (subscriber, #5198) [Link] (49 responses)

Re-reading the proposal, the vendor id should include where it comes from: org.gnome.GNOME3_20 in one of the examples, which would be different from org.fedoraproject.GNOME3_20. So you might have app:org.libreoffice.LibreOffice:org.gnome.GNOME3_20:x86_64:4.3.0, which depends on the GNOME libraries built by gnome.org themselves, which is different from a distribution-provided app:org.fedoraproject.LibreOffice:org.fedoraproject.GNOME3_20:x86_64:4.2.6.

I think some of the vendor ids are getting truncated when making examples.
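
Purely to make the field layout in those examples concrete, here is a rough sketch of how such colon-separated image names might be pulled apart. The field names are guesses based on the examples in this thread, not something taken from a formal specification.

    from typing import NamedTuple

    class AppImage(NamedTuple):
        kind: str        # "app" in these examples
        vendor_id: str   # reverse-DNS name of whoever built the app
        runtime_id: str  # reverse-DNS name of the /usr runtime it expects
        arch: str
        version: str

    def parse_app_image(name):
        """Split app:<vendor>:<runtime>:<arch>:<version> into its fields."""
        fields = name.split(":")
        if len(fields) != 5 or fields[0] != "app":
            raise ValueError("not an app image name: %r" % name)
        return AppImage(*fields)

    upstream = parse_app_image(
        "app:org.libreoffice.LibreOffice:org.gnome.GNOME3_20:x86_64:4.3.0")
    distro = parse_app_image(
        "app:org.fedoraproject.LibreOffice:org.fedoraproject.GNOME3_20:x86_64:4.2.6")

    # Keeping the full vendor id in the runtime field means these two builds
    # of the same application declare different runtimes and never get mixed up.
    assert upstream.runtime_id != distro.runtime_id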

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 9:33 UTC (Thu) by paulj (subscriber, #341) [Link] (48 responses)

So basically this proposal is that applications have a way to specify what distro they have been built for? Additionally, it envisages that GNOME and KDE would start providing distros or at least fully specifying an ABI?

On which point: Lennart has used "API" in comments here quite a lot, but I think he means ABI. ABI is even more difficult to keep stable than API, and the Linux desktop people haven't even managed to keep APIs stable!

#include "rants/referencer-got-broken-by-glib-gnome-vfs-changes.txt"

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 14:46 UTC (Thu) by raven667 (subscriber, #5198) [Link] (47 responses)

ABI stability is much easier to have when you specify exactly what environment you built your binaries against and just ship the whole thing to the target system. A lot of people are solving this same problem in a similar fashion with Docker, by shipping a whole runtime with their application; this proposal perhaps allows more de-duping.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 22:34 UTC (Thu) by dlang (guest, #313) [Link] (31 responses)

That's not keeping the ABI stable, that's just selecting a particular ABI and distributing it. It's only stable if it's not going to change with future releases.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 4:14 UTC (Fri) by raven667 (subscriber, #5198) [Link] (30 responses)

That's the only style of ABI stability widely deployed in the Linux distribution world; it is the essential ingredient of what makes an Enterprise distro. What is being discussed is the same kind of ABI stability promised by RHEL, for example.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 6:08 UTC (Fri) by dlang (guest, #313) [Link] (29 responses)

and converting software from one RHEL version to another is a major pain, but if you only have to do it once a decade you sort of live with it.

But if every upgrade of every software package on your machine is the same way, it will be a fiasco.

Remember that the "base system" used for this "unchanging binary compatibility" is subject to change at the whim of the software developer; any update they do, you will (potentially) have to do as well, so that you have the identical environment.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 16:08 UTC (Fri) by raven667 (subscriber, #5198) [Link] (28 responses)

I'm not sure I understand your point. The purpose of this proposal is to have a standard for easily supporting different /usr base systems, so you can have long-term ABI compatibility: an RHEL/LTS-style long-term stable system installed alongside more quickly updated Fedora/Ubuntu-style releases. You get the best of both worlds, applications which you don't have to port but once a decade and the latest shiny toys, without dependency hell forcing upgrades to your working system.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 16:14 UTC (Fri) by paulj (subscriber, #341) [Link] (12 responses)

The problem is that ABIs extend outside of the base distro, into state this proposal intends to share across the multiple installed distros, e.g. in $HOME, /etc, and /var.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 17:07 UTC (Fri) by raven667 (subscriber, #5198) [Link] (11 responses)

I didn't think that was what dlang was referring to, but maybe that's my confusion. There is definitely work to maintain compatibility for config data in /home and IPC protocols in /var, but that is a much smaller and more well-defined set than /usr.

One thing this proposal does, if it is worked on, is put pressure on distros to define a stable subset, and pressure on app makers to standardize on a few runtimes. So even if this proposal does not become the standard, it may create the discussion that results in a new LSB standard for distro binary compatibility which is much more comprehensive than the weak-sauce LSB currently is. I think the discussion of what goes into /usr is very useful on its own, even if nothing else comes out of this proposal.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 18:02 UTC (Fri) by paulj (subscriber, #341) [Link] (10 responses)

Yes, it'd be good if this kind of thing gave app/desktop-env developers an incentive again to think about the compatibility issues of their state, so that they'd eventually fix things and we could move towards an ideal world.

However, note that this is the "This is how the world should be, and we're going to change our bit to make it so, and if it ends up hurting users then that will supply the pressure needed to make the rest of the world so" approach. An approach to making progress which I think has been tried at least a few times before in the Linux world, which I'm not sure always helps in the greater scheme of things. The users who get hurt may not be willing to stick around to see the utopia realised, and not everything may get fixed.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 18:53 UTC (Fri) by raven667 (subscriber, #5198) [Link] (9 responses)

Well, there is no real pressure to take this proposal on. It exists, and people can work on it if they choose and believe it solves problems for them, but there is no authority who can push such things; they only happen by consensus.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 10:40 UTC (Sat) by paulj (subscriber, #341) [Link] (8 responses)

Unless of course control of some important project is used to lever this idea into place. ;)

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 15:17 UTC (Sun) by raven667 (subscriber, #5198) [Link] (7 responses)

You can build it and they still might not come; this proposal doesn't exist, even if systemd implements it, without an economy of /usr providers willing to package their runtimes and apps in this fashion. While I know you are just being funny, it still feeds the trolls to think there is some magic evil power which compels and corrupts the distros, rather than them just independently coming to the conclusion that systemd is a good idea. Poettering can't be evil, I've seen pictures; he doesn't even have a moustache to twirl!

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 9:30 UTC (Mon) by paulj (subscriber, #341) [Link] (6 responses)

Poettering isn't evil, no. He writes a lot of useful code. Though, he's not perfect either of course.

If I had to make a criticism, it wouldn't be of him specifically, but of an attitude common amongst Linux user-space devs, that holds that it is acceptable to burn users with breakage, in the quest for some future code or system utopia. So code gets written that assumes other components are perfect, so that if they are not breakage will expose them and those other components will eventually get fixed. Code gets rewritten, because the old code was ugly, and the new code will of course be much better, and worth the regressions.

The root cause of this seems to be the fractured way Linux works. There is generally no authority that can represent the users' interests for stability. There is no authority that can coördinate and ensure that if subsystem X requires subsystem Y to implement something that was never really used before, subsystem Y is given time to do this before X goes out in the wild. And there is no authority to coördinate rewrites and release cycles.

Instead the various fractured groups of developers, in a way, interact by pushing code to users who, if agitated sufficiently, will investigate and report bugs or even help fix them.

You could also argue this is because of a lack of QA resources. As a result of which the user has to help fill that role. However, the lack of resources could also be seen as being in part due to the Linux desktop user environment never having grown out of treating the user as QA, and regularly burning users away.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 10:59 UTC (Mon) by dlang (guest, #313) [Link]

The distros are supposed to be the ones that make sure Y is working before they ship X.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 11:59 UTC (Mon) by pizza (subscriber, #46) [Link] (4 responses)

> If I had to make a criticism, it wouldn't be of him specifically, but of an attitude common amongst Linux user-space devs, that holds that it is acceptable to burn users with breakage, in the quest for some future code or system utopia. So code gets written that assumes other components are perfect, so that if they are not breakage will expose them and those other components will eventually get fixed. Code gets rewritten, because the old code was ugly, and the new code will of course be much better, and worth the regressions.

I've been on both sides of this argument, as both an end-user and as a developer.

On balance, I'd much rather have the situation today, where stuff is written assuming the other components work properly, and where bugs get fixed in their actual locations rather than independently, inconsistently, and incompatibly papered over.

For example, this attitude is why Sound and [Wireless] Networking JustWork(tm) in Linux today.

The workaround-everything approach is only necessary when you are stuck with components you can't fix at the source -- i.e. proprietary crap. We don't have that problem, so let's do this properly!

The days where completely independent, underspecified, and barely-coupled components are a viable path forward have been over for a long, long time.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 12:33 UTC (Mon) by nye (subscriber, #51576) [Link] (1 responses)

>For example, this attitude is why Sound and [Wireless] Networking JustWork(tm) in Linux today

Except they don't.

A couple of examples:

My home machine has both the front channels and the rear channels connected, for a total of four audio channels. In Windows, I go in to my sound properties and pick the 'quadrophonic' option; now applications like games or VLC will mix the available channels appropriately so that I get sound correctly coming out of the appropriate speakers - not only all four channels, but all four channels *correctly*, so that, for example, 5.1 audio is appropriately remixed by the application without any configuration needed.

I had a look at how I might get this in OpenSuSE earlier this year, and eventually concluded either that PA simply can't do this *at all*, or that if it can, nobody knows how[0]. I did find some instructions for how to set up something like this using a custom ALSA configuration, though that would have required that applications be configured to know about it (rather than doing the right thing automatically), and I never got around to trying it out before giving up on OpenSuSE for a multitude of reasons.

Another example:
I have a laptop running Kubuntu. Occasionally, I can persuade it to make a wireless connection, but mostly it just doesn't - no messages, just nothing happening. It also doesn't properly manage the wired network, especially if the cable is unplugged or replugged. NetworkManager is essentially just a black box that might work, and if it doesn't then you're screwed. I eventually persuaded it to kind-of work without NM, though it then required some manual effort to switch between wired and wireless networks.

A related example:
For several consecutive releases, the Ubuntu live CD images didn't have fully working wired networking out of the box, due to some DHCP cock-up. Sadly I no longer remember the specifics for certain, but I *believe* it was ignoring the DNS setting specified by the DHCP server, and manually disabling the DHCP client and editing resolv.conf fixed the problem. There may have been something more to it than that, like getting the default gateway wrong; I don't recall.

It's great that some people have more reliable wireless thanks to NM, but it's not great that this comes at the expense of fully-functional ethernet.

[0] Some more recent googling has turned up more promising discussion of configuration file options, but I no longer have that installation to test out.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 13:14 UTC (Mon) by pizza (subscriber, #46) [Link]

> My home machine has both the front channels and the rear channels connected, for a total of four audio channels. In Windows, I go in to my sound properties and pick the 'quadrophonic' option; now applications like games or VLC will mix the available channels appropriately so that I get sound correctly coming out of the appropriate speakers - not only all four channels, but all four channels *correctly*, so that, for example, 5.1 audio is appropriately remixed by the application without any configuration needed.

The last three motherboards I've had, with multi-channel audio, have JustWorked(tm) once I selected the appropriate speaker configuration under Fedora/GNOME. Upmixed and downmixed PCM output, and even the analog inputs are mixed properly too.

(Of course, some of the responsibility for handling this is in the hands of the application, even if only to query and respect the system speaker settings instead of using a fixed configuration)

> I have a laptop running Kubuntu. Occasionally, I can persuade it to make a wireless connection, but mostly it just doesn't - no messages, just nothing happening. It also doesn't properly manage the wired network, especially if the cable is unplugged or replugged. NetworkManager is essentially just a black box that might work, and if it doesn't then you're screwed. I eventually persuaded it to kind-of work without NM, though it then required some manual effort to switch between wired and wireless networks.

NM has been effectively flawless for me for several years now (even with switching back and forth), also with Fedora, but that shouldn't matter in this case -- I hope you filed a bug report. Folks can't fix problems they don't know about.

> For several consecutive releases, the Ubuntu live CD images didn't have fully working wired networking out of the box, due to some DHCP cock-up. Sadly I no longer remember the specifics for certain, but I *believe* it was ignoring the DNS setting specified by the DHCP server, and manually disabling the DHCP client and editing resolv.conf fixed the problem. There may have been something more to it than that, like getting the default gateway wrong; I don't recall.

I can't speak to Ubuntu's DHCP stuff here (did you file a bug?) but I've seen a similar problem in the past using dnsmasq's DHCP client -- the basic problem I found was that certain DHCP servers were a bit... special in their configuration, and the result was that the DHCP client didn't get a valid DNS entry. dnsmasq eventually implemented a workaround for the buggy server/configuration. This was maybe three years ago?

> It's great that some people have more reliable wireless thanks to NM, but it's not great that this comes at the expense of fully-functional ethernet.

Come on, that's being grossly unfair. Before NM came along, wireless was more unreliable than not, with every driver implementing the WEXT stuff slightly differently requiring every client (or user) to treat every device type slightly differently. Now the only reason things don't work is if the hardware itself is unsupported, and that's quite rare these days.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 15:38 UTC (Mon) by paulj (subscriber, #341) [Link] (1 responses)

“For example, this attitude is why Sound and [Wireless] Networking JustWork(tm) in Linux today.”

While I agree this path has led to sound and wireless working well in Linux today, I would disagree it was the only possible path. I'd suggest there were other paths available that would have ultimately led to the same result, but taken more care to avoid regressions and/or provide basic functionality even when other components hadn't been updated to match some new specification.

What's the point of utopia, if you've burned off all your users getting there? Linux laptops were once well-represented at über-techy / dev conferences, now it's OS X almost everywhere.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 17:12 UTC (Mon) by pizza (subscriber, #46) [Link]

> While I agree this path has led to sound and wireless working well in Linux today, I would disagree it was the only possible path.

...perhaps you are correct, but those other paths would have taken considerably longer, leading to a different sort of user burning.

> What's the point of utopia, if you've burned off all your users getting there? Linux laptops were once well-represented at über-techy / dev conferences, now it's OS X almost everywhere.

These days, Linux just isn't a cool counterculture status symbol any more. It's part of the boring infrastructure that's someone else's problem.

Anyway. The technical reasons basically boil down to the benefits of Apple controlling the entire hardware/software/cloud stack -- stuff JustWorks(tm). As long as you color within the lines, anyway.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 4:41 UTC (Sat) by dlang (guest, #313) [Link] (14 responses)

I agree that this proposal makes it easy to have many different ABIs on the same computer.

I disagree that that is a wonderful thing and everyone should be making use of it.

We are getting conflicting messages about who would maintain these base systems. If it's the distros, how are they any different from the current situation?

If it's Joe Random Developer defining the ABI that his software is built against, it's going to be a disaster.

The assertion is being made that all the random developers (who can't agree on much of anything today: not language, not development distro, not packaging standards even within a distro, etc.) are going to somehow standardize on a small set of ABIs.

I see this as being a way for app developers to care LESS about maintainability, because now they don't have to deal with distro upgrades any longer; they can just state that they need ABI X and stick with it forever. When they do finally decide to upgrade, it will be a major undertaking, of the scale that kills projects (because they stop adding new features for a long time, and even lose features, as they are doing the infrastructure upgrade).

building up technical debt with the intention to pay it off later almost never works, and even when it does, it doesn't work well.

anything that encourages developers to build up technical debt by being able to ignore compatibility is bad

This encourages developers to ignore compatibility in two ways.

1. the app developers don't have to worry about compatibility because they just stick with the version that they started with.

2. library developers don't have to worry about compatibility because anyone who complains about the difficulty in upgrading can now be told to just stick with the old ABI.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 11:22 UTC (Sat) by Wol (subscriber, #4433) [Link] (4 responses)

On the flip side there's going to be counter-pressure from

1) People who don't want their disk space chewed up with multiple environments.

2) People who don't want (or like me can't understand :-) btrfs.

3) Devs who (like me) like to run an up-to-date rolling distro.

4) Distro packagers who don't want the hassle of current packages that won't run on current systems.

Personally, I think any dev who ignores compatibility is likely to find themselves in the "deprecated" bin fairly quickly, and will just get left behind.

Where it will really score is commercial software which will be sold against something like a SLES or RHEL image, which will then continue to run "for ever" :-)

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 21:00 UTC (Sat) by dlang (guest, #313) [Link] (3 responses)

> Where it will really score is commercial software which will be sold against something like a SLES or RHEL image, which will then continue to run "for ever" :-)

Yep, that's part of my fear.

this 'forever' doesn't include security updates.

People are already doing this with virtualization (see the push from vmware about how it allows people to keep running Windows XP forever), and you are seeing a lot of RHEL5 in cloud deployments, with no plans to ever upgrade to anything newer.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 22:14 UTC (Sat) by raven667 (subscriber, #5198) [Link] (2 responses)

As you say, this is a dynamic that exists today, so I'm not sure how it can be a con of the proposal; one of the main reasons VMs have taken over the industry is this same ABI management problem (and consolidation). With VMs you can run a particular tested userspace indefinitely without impact to other software which needs a different ABI on the same system. This proposal doesn't really change that dynamic much; the same amount of pressure comes from end-users of proprietary vendor-ware to re-base and support newer OS releases, even given that you can run old software indefinitely in VMs or containers.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 23:00 UTC (Sat) by dlang (guest, #313) [Link] (1 responses)

Is this a dynamic that we should be encouraging, turning what is a misuse of virtualization by some Enterprise customers into the status quo for everyone?

I don't think so.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 15:58 UTC (Sun) by raven667 (subscriber, #5198) [Link]

I think it is a dynamic that exists because it fills a need; it's an equilibrium, and we don't have control over the needs that drive it, but we can change the friction of different implementations to make life easier. Being able to easily run multiple ABIs on a system reduces the friction for upgrading just as much as VMs allow you to hold on to old systems forever.

On desktops as well, being able to run older apps on newer systems rather than being force-upgraded because the distro updates, and also being able to run other newer apps (and bugfixes) on a cadence faster than what a distro that releases every 6mo or 1yr gives, is a benefit that many seem to be looking for: staying on GNOME2, for example, while keeping up with Firefox and LibreOffice updates or whatever. Being able to run multiple userspaces on the same system with low friction allows them to fight it out and compete more directly than dual-booting or VMs do, rather than being locked in to what your preferred distro provides.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 23:15 UTC (Sat) by raven667 (subscriber, #5198) [Link] (8 responses)

> who would maintain these base systems, if it's the distros, how are they any different than the current situation?

I think you answered that in your first paragraph:

> I agree that this proposal makes it easy to have many different ABIs on the same computer.

There is currently more friction in running different ABIs on the same computer; there is no standard automated means for doing so, so people have to build and run their own VMs or containers, with limited to non-existent integration between the VMs or containers.

The other big win is a standard way to do updates that works with any kind of distro, from Desktop to Server to Android to IVI and embedded, without each kind of system needing to redesign updates in its own special way.

> if it's joe random developer defining the ABI that his software is built against, it's going to be a disaster.

The proposal is that a developer would pick an ABI to build against, such as OpenSuSE 16.4 or, for a non-desktop example, OpenWRT 16.04, not that every developer would be responsible for bundling and building all of their library dependencies.

> The assertion is being made that all the random developers (who can't agree on much of anything today, not language, not development distro, not packaging standards even within a distro, etc) are going to somehow standardize on a small set of ABIs

This whole proposal is a way to try to use technology to change the social and political dynamic by changing the cost of different incentives, it is not guaranteed how it will play out. I think there is pressure though from end users who don't want to run 18 different ABI distros to run 18 different applications, to pick a few winners, maybe 2 or 3 at most, in the process there might be a de-facto standard created which re-vitalizes the LSB.

> I see this as being a way for app developers to care LESS about maintainability, because now they don't have to deal with distro upgrades any longer, they can just state that they need ABI X and stick with it forever. When they do finally decide to upgrade, it will be a major undertaking, of the scale that kills projects (because they stop adding new features for a long time, and even lose features, as they are doing the infrastructure upgrade)

I don't see it playing out that way; developers love having the latest and greatest too much for them to continue to deploy apps built against really old runtimes, so all of the pressure is for them to build against the latest ABI release of their preferred distro. The thing is that one of the problems this is trying to solve is that many people don't want to have to upgrade their entire computer with all new software every six months just to keep updated on a few applications they care about, or conversely be forced to update their main applications because their distro has moved on. It might actually make more sense to run the latest distro ABI alongside the user's preferred environment, satisfying both developers and end-users.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 8:20 UTC (Sun) by dlang (guest, #313) [Link] (7 responses)

so a developer gets to ignore all competing distros other than their favourite.

I can see why developers would like this, but I still say that this is a bad result.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 8:29 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

Speaking as an application developer, it's much better than doing nothing at all. Right now there's no standard set of packages at all, LSB is a joke that no one cares about.

And since there's no base ABI, everybody just does whatever suits them. Just last week we found out that Docker images for Ubuntu don't have libcrypto installed, for example.

Maybe this container-lite initiative will motivate distros to create a set of basic runtimes, that can actually be downloaded and used directly.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 9:48 UTC (Sun) by dlang (guest, #313) [Link] (5 responses)

so if you define these baselines to include every package, nobody is going to install them (disk space)

if you don't define them to include every package, you will run into the one that you are missing.

These baselines are no easier to standardize than the LSB or distros.

In fact, they are worse than distros because there aren't any dependencies available (other than on "RHEL10 baseline")

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 9:56 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

> so if you define these baselines to include every package, nobody is going to install them (disk space)
Not every package, but at least _something_. And dedup partially solves the space problem.

> These baselines are no easier to standardize than the LSB or distros.
There are no standards right now. None. And LSB has shown us that when a group of vendors dictate their vision of a standard, distros simply ignore them.

The ecosystem of runtimes might help evolve at least a de-facto standard. I'm pretty sure that it can be done for the server side (and let's face it, that's the main area of non-Android Linux use right now) but I'm less sure about the desktop.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 10:43 UTC (Sun) by dlang (guest, #313) [Link] (3 responses)

dedup only helps if the packages are exactly the same

>> These baselines are no easier to standardize than the LSB or distros.

> There are no standards right now. None. And LSB has shown us that when a group of vendors dictate their vision of a standard, distros simply ignore them.

so who is going to define the standards for the new approach? Every distro will just declare each of their releases to be a standard (and stop supporting them as quickly as they do today)

that gains nothing over the current status quo, except giving legitimacy to people who don't want to upgrade.

If the "standards" are going to be defined by anyone else, then they are going to be doing the same work that the distros are doing today, they will be just another distro group, with all the drawbacks of having to back-port security fixes (or fail to do so) that that implies.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 10:53 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> so who is going to define the standards for the new approach? Every distro will just declare each of their releases to be a standard (and stop supporting them as quickly as they do today)
I'm hoping that the "market" is going to dictate the standard. Application developers will prefer to use a runtime that is well-supported and easy to maintain. And perhaps in time it will become _the_ runtime.

> If the "standards" are going to be defined by anyone else, then they are going to be doing the same work that the distros are doing today, they will be just another distro group, with all the drawbacks of having to back-port security fixes (or fail to do so) that that implies.
Exactly. However, as an app developer I won't have to play against network effects of distributions - my software will run on ALL distributions supporting the /usr-based packaging.

I'm hoping that somebody like Debian or RHEL/CentOS can pick up the job of making runtimes. It shouldn't be hard, after all.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 11:18 UTC (Sun) by dlang (guest, #313) [Link] (1 responses)

> I'm hoping that the "market" is going to dictate the standard. Application developers will prefer to use a runtime that is well-supported and easy to maintain. And perhaps in time it will become _the_ runtime.

thanks for the laugh

there isn't going to be "the runtime" any more than there will be "the distro", for the same reason, different people want different things and have different tolerance of risky new features.

> I'm hoping that somebody like Debian or RHEL/CentOS can pick up the job of making runtimes. It shouldn't be hard, after all.

they already do, it's called their distro releases

> as an app developer I won't have to play against network effects of distributions - my software will run on ALL distributions supporting the /usr-based packaging.

no, your users may just have to download a few tens of GB of base packaging to run it instead.

Plus, if the baseline you pick has security problems, your users will blame you for them (because if you had picked a different base that didn't have those problems, their system wouldn't have been hit by X)

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 11:52 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

> there isn't going to be "the runtime" any more than there will be "the distro", for the same reason, different people want different things and have different tolerance of risky new features.
No. Most developers want pretty much the same basic feature set with small custom additions.

> they already do, it's called their distro releases
No, they don't. The distro model is exclusionary - I can't just ship RHEL along with my software package (well, I can, but it's impractical). So either I have to FORCE my clients to use a specific version of RHEL or I have to test my package on lots of different distros.

That's the crux of the problem - distros are wildly incompatible and there's no real hope that they'll merge any time soon.

> no, your users may just have to download a few tens of GB of base packaging to run it instead.
Bullshit. A minimal Docker image for Ubuntu is less than 100MB and it contains lots of software. There's no reason at all for the basic system to be more than 100MB in size.

>Plus, if the baseline you pick has security problems, your users will blame you for them (because if you had picked a different base that didn't have those problems, their system wouldn't have been hit by X)
Who cares. All existing software, except for high-profile stuff like browsers, is insecure like hell. Get over it.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 3:01 UTC (Fri) by paulj (subscriber, #341) [Link] (14 responses)

Right, so "Specify the precise distro to use, or ship your own runtime with your app".

The de-duping thing seems tenuous to me, for the "ship your own runtime" case. What are the chances that two different application vendors happen to pick the exact same combination of compiler toolchain, compile flags and libraries necessary to give identical binaries?

Having a system to make it possible to run different applications, built against different "distros" (or runtimes), at the same time, running with the same root/config (/etc, /var) and /home seems good. Though, I am sceptical that:

a) There won't be configuration compatibility issues with different apps using slightly different versions of a dependency that reads some config in /home (ditto for /etc).

This kind of thing used to not be an issue, back when it was more common to share /home across different computers thanks to NFS, and application developers would get more complaints if they broke multi-version access. However $HOME sharing is rare these days, and I got sick a long time ago of, e.g., GNOME not dealing well with different versions accessing the same $HOME (non-concurrently!).

b) Sharing /var across different runtimes similarly is likely fraught with multi-version incompatibility issues.

It's ironic that shared (even non-concurrent) $HOME support got broken / neglected in Linux, and now it seems we need it to help solve the runtime-ABI proliferation problem of Linux. :)

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 3:03 UTC (Fri) by rahulsundaram (subscriber, #21946) [Link] (2 responses)

It appears this discussion might be a relevant read

https://mail.gnome.org/archives/gnome-os-list/2014-Septem...

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 3:06 UTC (Fri) by martin.langhoff (subscriber, #61417) [Link] (1 responses)

If you are really going to lean on the de-dupe, then just ship the whole OS you need for your app and be done with it. Trust the magic pixie dust in the FS to de-dupe it all. Why do we have to burden the world with defining the part of the stack that is "base OS"?

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 3:11 UTC (Fri) by rahulsundaram (subscriber, #21946) [Link]

Not sure if the question is directed at me, but potentially because it is not magic pixie dust, and we would want to define a platform while allowing distributions to change things at other layers if needed.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 1:20 UTC (Mon) by Arker (guest, #14205) [Link] (10 responses)

"However $HOME sharing is rare these days, and I got sick a long time ago of, e.g., GNOME not dealing well with different versions accessing the same $HOME (non-concurrently!)."

That used to bother me too. I deleted GNOME. As GNOME is the source of the breakage (in this and so many other situations) that is the only sensible response. The herd instinct to embrace insanity instead, in order to keep GNOME (a process that is actually ongoing, and accelerating) is what I do not understand. Why are people so insistent on throwing away things that work and replacing them with things that do not?

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 10:14 UTC (Mon) by mchapman (subscriber, #66589) [Link] (9 responses)

> The herd instinct to embrace insanity instead, in order to keep GNOME (a process that is actually ongoing, and accelerating) is what I do not understand. Why are people so insistent on throwing away things that work and replacing them with things that do not?

What's there to understand? Clearly these people you're talking about are having a different experience with the software than you are. Why would you think your particular experience with it is canonical? Is it so hard to believe other people's experiences are different?

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 12:53 UTC (Mon) by Arker (guest, #14205) [Link] (8 responses)

That makes no sense. The objection to GNOME is broken, insane code. Are you seriously proposing that other people using GNOME are not using the same broken, insane code? If they were not they would not be using GNOME. You make no sense.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 13:18 UTC (Mon) by JGR (subscriber, #93631) [Link] (5 responses)

Other people using GNOME are mostly users. They're not going to look at the code itself, or really know or care if it could be described as insane and/or broken, as long as it meets their (often rather straightforward) use case.

Or to put it another way, not everyone necessarily shares your view of what is "broken" and what is not.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 14:07 UTC (Mon) by Arker (guest, #14205) [Link] (4 responses)

You're feeling around in the dark but you're not too far off.

This is a problem that affects the entire market for computers, worldwide. Markets work well when buyers and sellers are informed. Buyers of computer systems, on the other hand, are for the most part as far from well informed as imaginable. A market where the buyers do not understand the products well enough to make informed choices between competitors is a market which has problems. And Free Software is part of that larger market.

And that's what we see with GNOME. The example we were discussing above had to do with the $home directory. The GNOME devs simply refuse to even try to do it right. Since none of them used shared $home directories, they did not see the problem, and had no interest in fixing it. And here is where the broken market comes in - because there were enough end users who, like the GNOME devs, did not understand how $home works and how it is meant to be used, and so did not see why they should care.

And that's how the breakage created by a cluster of ADD spreads out to tear down the ecosystem generally. That's the path the herd is on right now. Anything that your 13-year-old doesn't want to take the time to understand - it's gone or going. A few more years of this and we will have computing systems setting world records for potential power and actual uselessness simultaneously.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 15:03 UTC (Mon) by pizza (subscriber, #46) [Link] (3 responses)

> Since none of them used shared $home directories, they did not see the problem, and had no interest in fixing it. And here is where the broken market comes in - because there were enough end users who, like the GNOME devs, did not understand how $home works and how it is meant to be used, and so did not see why they should care.

I'm not quite sure what your point here is... you're basically blaming GNOME for the fact that its users are uninformed, and further, it's also GNOME's fault because those same uninformed users don't know enough to care about a philosophical argument under the hood about a situation those same uninformed users will never actually encounter?

> And that's how the breakage created by a cluster of ADD spreads out to tear down the ecosystem generally.

Please, lay off on the ad honimem insults.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 16:17 UTC (Mon) by Arker (guest, #14205) [Link] (2 responses)

"I'm not quite sure what your point here is... you're basically blaming GNOME for the fact that its users are uninformed, and further, it's also GNOME's fault because those same uninformed users don't know enough to care about a philosophical argument under the hood about a situation those same uninformed users will never actually encounter?"

I was not assessing blame, I was simply making you aware of the progression of events.

"Please, lay off on the ad honimem insults."

Please, learn to distinguish between ad honimem(sic) insults and statements of fact you find inconvenient or embarrasing.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 17:20 UTC (Mon) by pizza (subscriber, #46) [Link] (1 responses)

Y> "..breakage created by a cluster of ADD.."

> Please, learn to distinguish between ad honimem(sic) insults and statements of fact you find inconvenient or embarrasing.

Personally, I would be embarrased(sic) if I was the one who considered the above a statement of fact, and petulantly pointed out a spelling error while making one of your own.

But hey, thanks for the chuckle.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 17:34 UTC (Mon) by Arker (guest, #14205) [Link]

Petulance is entirely in your imagination, I guess that's your right, enjoy it.

The pattern of behavior from the GNOME project is indeed a fact; it's not disputable, the tracks are all over the internet, and since it has been the same pattern for over a decade it certainly seems fair to expect it to continue. If you think you have a legitimate objection to that characterization, please feel free to put it forward concretely. Putting forward baseless personal accusations instead cuts no ice.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 14:12 UTC (Mon) by mchapman (subscriber, #66589) [Link] (1 responses)

> That makes no sense. The objection to GNOME is broken, insane code.

But that wasn't what you claimed used to bother you. You were talking about broken behaviour, not code. Now maybe GNOME's behaviour *is* broken, in that sharing a $HOME between multiple versions doesn't work properly, and it's just because I don't do that that I've never noticed.

So I'm having trouble following your argument. Are you saying people shouldn't be supporting GNOME -- that the only sensible thing to do with it is uninstall it -- because there are *some* use cases that for *some* people don't work properly? That seems really unfair for everybody else.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 14:30 UTC (Mon) by Arker (guest, #14205) [Link]

"You were talking about broken behaviour, not code."

A distinction with no difference. Behaviour is the result of code, and code is experienced as behaviour.

"Now maybe GNOME's behaviour *is* broken, that sharing a $HOME between multiple versions doesn't work properly, and it's just because I don't do that that I've never noticed."

That is correct, but also incomplete. Since GNOME screwed this up, they set a (bad) example that has been followed by others as well, and I am afraid that today you will find so many commonly used programs have emulated the breakage that it is widespread and this essential core OS feature is now practically defunct.

Of course YMMV, but in my universe, the damage done in this single, relatively small domain, done simply by not caring and setting a bad example and being followed by those who know no better, is orders of magnitude greater than their positive contributions. I am not trying to be mean; I am simply being honest.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 22:19 UTC (Wed) by nix (subscriber, #2304) [Link]

While this scheme seems by and large lovely, I'm a bit concerned that btrfs deduplication may not be quite up to the job. Everything I can see suggests that it is block-based, and based on rather large blocks at that (filesystem-block-sized), rather than something like the rsync / bup sliding hashes which can eliminate duplication at arbitrary boundaries.

Now *if* the majority of the data we're dealing with is either block-aligned at a similarly large block size or compressed (and thus more or less non-deduplicable anyway unless identical) we might be OK with a block-based deduplicator. I hope this is actually the case, but fear it might not be: certainly many things in ELF files are aligned, but not on anything remotely as large as fs block boundaries!

But maybe we don't care about all our ELF executables being stored repeatedly as long as stuff that, e.g. hasn't been recompiled between runtime / app images gets deduplicated -- and btrfs deduplication should certainly be able to manage that.
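
As a toy illustration of that difference (not how btrfs, rsync or bup actually implement anything): insert a handful of bytes near the front of a file and compare how much a fixed-block scheme can still share with the original, versus a naive content-defined chunker whose cut points follow the data rather than absolute offsets.

    import hashlib
    import os

    def fixed_blocks(data, size=4096):
        """Split into fixed-size blocks, as a block-based deduplicator sees data."""
        return [data[i:i + size] for i in range(0, len(data), size)]

    def content_chunks(data, window=48, mask=0x3FF):
        """Naive content-defined chunking: cut wherever a hash of the trailing
        window matches a pattern, so boundaries move with the content."""
        chunks, start = [], 0
        for i in range(window, len(data)):
            h = hashlib.blake2b(data[i - window:i], digest_size=4).digest()
            if int.from_bytes(h, "big") & mask == 0:  # ~one cut per KiB on average
                chunks.append(data[start:i])
                start = i
        chunks.append(data[start:])
        return chunks

    def reusable(old_chunks, new_chunks):
        """Bytes of the new file whose chunks already exist in the old one."""
        seen = {hashlib.sha256(c).digest() for c in old_chunks}
        return sum(len(c) for c in new_chunks
                   if hashlib.sha256(c).digest() in seen)

    original = os.urandom(256 * 1024)                         # stand-in for a binary
    rebuilt = original[:100] + b"\x90" * 16 + original[100:]  # 16 bytes inserted early

    for label, split in (("fixed 4 KiB blocks", fixed_blocks),
                         ("content-defined chunks", content_chunks)):
        share = reusable(split(original), split(rebuilt)) / len(rebuilt)
        print("%s: %.0f%% of the shifted file still deduplicates" % (label, share * 100))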

