Poettering: Revisiting how we put together Linux systems
Posted Sep 5, 2014 4:14 UTC (Fri) by raven667 (subscriber, #5198)
In reply to: Poettering: Revisiting how we put together Linux systems by dlang
Parent article: Poettering: Revisiting how we put together Linux systems
Posted Sep 5, 2014 6:08 UTC (Fri)
by dlang (guest, #313)
[Link] (29 responses)
But if every upgrade of every software package on your machine is done the same way, it will be a fiasco
Remember that the "base system" used for this "unchanging binary compatibility" is subject to change at the whim of the software developer; any update they do, you will (potentially) have to do as well, so that you have the identical environment.
Posted Sep 5, 2014 16:08 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (28 responses)
Posted Sep 5, 2014 16:14 UTC (Fri)
by paulj (subscriber, #341)
[Link] (12 responses)
Posted Sep 5, 2014 17:07 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (11 responses)
One thing about this proposal, if it is worked on, is that it puts pressure on distros to define a stable subset, and pressure on app makers to standardize on a few runtimes. So even if this proposal does not become the standard, it may start a discussion that results in a new LSB standard for distro binary compatibility which is much more comprehensive than the weak sauce the LSB currently is. I think the discussion of what goes into /usr is very useful on its own, even if nothing else comes out of this proposal.
Posted Sep 5, 2014 18:02 UTC (Fri)
by paulj (subscriber, #341)
[Link] (10 responses)
However, note that this is the "This is how the world should be, and we're going to change our bit to make it so, and if it ends up hurting users then that will supply the pressure needed to make the rest of the world so" approach. An approach to making progress which I think has been tried at least a few times before in the Linux world, which I'm not sure always helps in the greater scheme of things. The users who get hurt may not be willing to stick around to see the utopia realised, and not everything may get fixed.
Posted Sep 5, 2014 18:53 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (9 responses)
Posted Sep 6, 2014 10:40 UTC (Sat)
by paulj (subscriber, #341)
[Link] (8 responses)
Posted Sep 7, 2014 15:17 UTC (Sun)
by raven667 (subscriber, #5198)
[Link] (7 responses)
Posted Sep 8, 2014 9:30 UTC (Mon)
by paulj (subscriber, #341)
[Link] (6 responses)
If I had to make a criticism, it wouldn't be of him specifically, but of an attitude common amongst Linux user-space devs, that holds that it is acceptable to burn users with breakage, in the quest for some future code or system utopia. So code gets written that assumes other components are perfect, so that if they are not, breakage will expose them and those other components will eventually get fixed. Code gets rewritten, because the old code was ugly, and the new code will of course be much better, and worth the regressions.
The root cause of this seems to be because of the fractured way Linux works. There is generally no authority that can represent the users' interests for stability. There is no authority that can coördinate and ensure that if subsystem X requires subsystem Y to implement something that was never really used before, that subsystem Y is given time to do this before X goes out in the wild. Or no authority to coördinate rewrites and release cycles.
Instead the various fractured groups of developers, in a way, interact by pushing code to users who, if agitated sufficiently, will investigate and report bugs or even help fix them.
You could also argue this is because of a lack of QA resources, as a result of which the user has to help fill that role. However, the lack of resources could also be seen as being in part due to the Linux desktop user environment never having grown out of treating the user as QA, and regularly burning users away.
Posted Sep 8, 2014 10:59 UTC (Mon)
by dlang (guest, #313)
[Link]
Posted Sep 8, 2014 11:59 UTC (Mon)
by pizza (subscriber, #46)
[Link] (4 responses)
I've been on both sides of this argument, as both an end-user and as a developer.
On balance, I'd much rather have the situation today, where stuff is written assuming the other components work properly, and where bugs get fixed in their actual locations rather than independently, inconsistently, and incompatibly papered over.
For example, this attitude is why Sound and [Wireless] Networking JustWork(tm) in Linux today.
The workaround-everything approach is only necessary when you are stuck with components you can't fix at the source -- ie proprietary crap. We don't have that problem, so let's do this properly!
The days where completely independent, underspecified, and barely-coupled components are a viable path forward have been over for a long, long time.
Posted Sep 8, 2014 12:33 UTC (Mon)
by nye (subscriber, #51576)
[Link] (1 responses)
Except they don't.
A couple of examples:
My home machine has both the front channels and the rear channels connected, for a total of four audio channels. In Windows, I go into my sound properties and pick the 'quadrophonic' option; now applications like games or VLC will mix the available channels appropriately so that I get sound correctly coming out of the appropriate speakers - not only all four channels, but all four channels *correctly*, so that, for example, 5.1 audio is appropriately remixed by the application without any configuration needed.
I had a look at how I might get this in OpenSuSE earlier this year, and eventually concluded either that PA simply can't do this *at all*, or that if it can, nobody knows how[0]. I did find some instructions for how to set up something like this using a custom ALSA configuration, though that would have required that applications be configured to know about it (rather than doing the right thing automatically), and I never got around to trying it out before giving up on OpenSuSE for a multitude of reasons.
Another example:
I have a laptop running Kubuntu. Occasionally, I can persuade it to make a wireless connection, but mostly it just doesn't - no messages, just nothing happening. It also doesn't properly manage the wired network, especially if the cable is unplugged or replugged. NetworkManager is essentially just a black box that might work, and if it doesn't then you're screwed. I eventually persuaded it to kind-of work without NM, though it then required some manual effort to switch between wired and wireless networks.
A related example:
For several consecutive releases, the Ubuntu live CD images didn't have fully working wired networking out of the box, due to some DHCP cock-up. Sadly I no longer remember the specifics for certain, but I *believe* it was ignoring the DNS setting specified by the DHCP server, and manually disabling the DHCP client and editing resolv.conf fixed the problem. There may have been something more to it than that, like getting the default gateway wrong; I don't recall.
It's great that some people have more reliable wireless thanks to NM, but it's not great that this comes at the expense of fully-functional ethernet.
[0] Some more recent googling has turned up more promising discussion of configuration file options, but I no longer have that installation to test out.
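The discussion I found pointed at /etc/pulse/daemon.conf. A sketch of what I would have tried, had I still had the installation - untested on my part, and the option names are my assumption from PulseAudio documentation of roughly that era, so they may differ in other versions:

    ; /etc/pulse/daemon.conf - untested sketch
    ; Let PA up/downmix between differing channel counts:
    enable-remixing = yes
    ; Don't synthesize an LFE channel when upmixing:
    enable-lfe-remixing = no
    ; Open sinks with four channels by default:
    default-sample-channels = 4
    default-channel-map = front-left,front-right,rear-left,rear-right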
Posted Sep 8, 2014 13:14 UTC (Mon)
by pizza (subscriber, #46)
[Link]
The last three motherboards I've had, with multi-channel audio, have JustWorked(tm) once I selected the appropriate speaker configuration under Fedora/GNOME. Upmixed and downmixed PCM output, and even the analog inputs are mixed properly too.
(Of course, some of the responsibility for handling this is in the hands of the application, even if only to query and respect the system speaker settings instead of using a fixed configuration)
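For reference, the speaker-configuration switch that GNOME flips is the PulseAudio card profile; from a terminal it's roughly the following, where the card name is illustrative and the profile has to be one the card actually advertises:

    # list cards, then select the 4.0 analog profile on the right one
    pactl list short cards
    pactl set-card-profile alsa_card.pci-0000_00_1b.0 output:analog-surround-40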
> I have a laptop running Kubuntu. Occasionally, I can persuade it to make a wireless connection, but mostly it just doesn't - no messages, just nothing happening. It also doesn't properly manage the wired network, especially if the cable is unplugged or replugged. NetworkManager is essentially just a black box that might work, and if it doesn't then you're screwed. I eventually persuaded it to kind-of work without NM, though it then required some manual effort to switch between wired and wireless networks.
NM has been effectively flawless for me for several years now (even with switching back and forth), also with Fedora, but that shouldn't matter in this case -- I hope you filed a bug report. Folks can't fix problems they don't know about.
> For several consecutive releases, the Ubuntu live CD images didn't have fully working wired networking out of the box, due to some DHCP cock-up. Sadly I no longer remember the specifics for certain, but I *believe* it was ignoring the DNS setting specified by the DHCP server, and manually disabling the DHCP client and editing resolv.conf fixed the problem. There may have been something more to it than that, like getting the default gateway wrong; I don't recall.
I can't speak to Ubuntu's DHCP stuff here (did you file a bug?) but I've seen a similar problem in the past using dnsmasq's DHCP client -- the basic problem I found was that certain DHCP servers were a bit... special in their configuration, and the result was that the DHCP client didn't get a valid DNS entry. dnsmasq eventually implemented a workaround for the buggy server/configuration. This was maybe three years ago?
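For anyone stuck on a broken setup like that, the classic manual override looks something like this - dhclient syntax, with a placeholder address, not a recommendation:

    # /etc/dhcp/dhclient.conf - ignore the server-offered resolvers
    # and force a known-good one (the address is only an example)
    supersede domain-name-servers 192.168.1.1;

followed by releasing and re-acquiring the lease (dhclient -r eth0; dhclient eth0).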
> It's great that some people have more reliable wireless thanks to NM, but it's not great that this comes at the expense of fully-functional ethernet.
Come on, that's being grossly unfair. Before NM came along, wireless was more unreliable than not, with every driver implementing the WEXT stuff slightly differently, requiring every client (or user) to treat every device type slightly differently. Now the only reason things don't work is if the hardware itself is unsupported, and that's quite rare these days.
Posted Sep 8, 2014 15:38 UTC (Mon)
by paulj (subscriber, #341)
[Link] (1 responses)
While I agree this path has led to sound and wireless working well in Linux today, I would disagree it was the only possible path. I'd suggest there were other paths available that would have ultimately led to the same result, but taken more care to avoid regressions and/or provide basic functionality even when other components hadn't been updated to match some new specification.
What's the point of utopia, if you've burned off all your users getting there? Linux laptops were once well-represented at über-techy / dev conferences, now it's OS X almost everywhere.
Posted Sep 8, 2014 17:12 UTC (Mon)
by pizza (subscriber, #46)
[Link]
...perhaps you are correct, but those other paths would have taken considerably longer, leading to a different sort of user burning.
> What's the point of utopia, if you've burned off all your users getting there? Linux laptops were once well-represented at über-techy / dev conferences, now it's OS X almost everywhere.
These days, Linux just isn't a cool counterculture status symbol any more. It's part of the boring infrastructure that's someone else's problem.
Anyway. The technical reasons basically boil down to the benefits of Apple controlling the entire hardware/software/cloud stack -- Stuff JustWorks(tm). As long as you color within the lines, anyway.
Posted Sep 6, 2014 4:41 UTC (Sat)
by dlang (guest, #313)
[Link] (14 responses)
I disagree that that is a wonderful thing which everyone should be making use of.
we are getting conflicting messages about who would maintain these base systems. If it's the distros, how is that any different from the current situation?
if it's joe random developer defining the ABI that his software is built against, it's going to be a disaster.
The assertion is being made that all the random developers (who can't agree on much of anything today, not language, not development distro, not packaging standards even within a distro, etc) are going to somehow standardize on a small set of ABIs.
I see this as being a way for app developers to care LESS about maintainability, because now they don't have to deal with distro upgrades any longer; they can just state that they need ABI X and stick with it forever. When they do finally decide to upgrade, it will be a major undertaking, of the scale that kills projects (because they stop adding new features for a long time, and even lose features, as they are doing the infrastructure upgrade).
building up technical debt with the intention to pay it off later almost never works, and even when it does, it doesn't work well.
anything that encourages developers to build up technical debt by being able to ignore compatibility is bad
This encourages developers to ignore compatibility in two ways.
1. App developers don't have to worry about compatibility because they just stick with the version that they started with.
2. Library developers don't have to worry about compatibility because anyone who complains about the difficulty of upgrading can now be told to just stick with the old ABI.
Posted Sep 6, 2014 11:22 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (4 responses)
1) People who don't want their disk space chewed up with multiple environments.
2) People who don't want (or like me can't understand :-) btrfs.
3) Devs who (like me) like to run an up-to-date rolling distro.
4) Distro packagers who don't want the hassle of current packages that won't run on current systems.
Personally, I think any dev who ignores compatibility is likely to find themselves in the "deprecated" bin fairly quickly, and will just get left behind.
Where it will really score is commercial software which will be sold against something like a SLES or RHEL image, which will then continue to run "for ever" :-)
Cheers,
Wol
Posted Sep 6, 2014 21:00 UTC (Sat)
by dlang (guest, #313)
[Link] (3 responses)
Yep, that's part of my fear.
this 'forever' doesn't include security updates.
People are already doing this with virtualization (see the push from vmware about how it allows people to keep running Windows XP forever), and you are seeing a lot of RHEL5 in cloud deployments, with no plans to ever upgrade to anything newer.
Posted Sep 6, 2014 22:14 UTC (Sat)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Posted Sep 6, 2014 23:00 UTC (Sat)
by dlang (guest, #313)
[Link] (1 responses)
I don't think so.
Posted Sep 7, 2014 15:58 UTC (Sun)
by raven667 (subscriber, #5198)
[Link]
On desktops as well, being able to run older apps on newer systems, rather than being force-upgraded because the distro updates, and also being able to get other, newer apps (and bugfixes) on a cadence faster than a distro that releases every 6mo or 1yr allows, is a benefit many seem to be looking for - staying on GNOME2, for example, while keeping up with Firefox and LibreOffice updates. Being able to run multiple userspaces on the same system with low friction lets them fight it out and compete more directly than dual-booting or VMs do, rather than users being locked in to what their preferred distro provides.
Posted Sep 6, 2014 23:15 UTC (Sat)
by raven667 (subscriber, #5198)
[Link] (8 responses)
I think you answered that in your first paragraph:
> I agree that this proposal makes it easy to have many different ABIs on the same computer.
There is currently more friction in running different ABIs on the same computer: there is no standard automated means of doing so, so people have to build and run their own VMs or containers, with limited-to-nonexistent integration between them.
The other big win is a standard way to do updates that works with any kind of distro, from Desktop to Server to Android to IVI and embedded, without each kind of system needing to redesign updates in its own special way.
> if it's joe random developer defining the ABI that his software is built against, it's going to be a disaster.
The proposal is that a developer would pick an ABI to build against, such as OpenSuSE 16.4 or, for a non-desktop example, OpenWRT 16.04, not that every developer would be responsible for bundling and building all of their library dependencies.
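Concretely - and this is from memory of the blog post, so the exact naming scheme may be off - each of those ABI trees would live as a versioned btrfs subvolume that a system can download and mount read-only, something like:

    # hypothetical 'btrfs subvolume list -o /' output on such a system
    usr:org.fedoraproject.WorkStation:x86_64:24.7
    runtime:org.gnome.GNOME3_20:x86_64:3.20.1
    app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133
    home:lennart:1000:1000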
> The assertion is being made that all the random developers (who can't agree on much of anything today, not language, not development distro, not packaging standards even within a distro, etc) are going to somehow standardize on a small set of ABIs
This whole proposal is a way to try to use technology to change the social and political dynamic by changing the cost of different incentives; it is not guaranteed how it will play out. I think there is pressure, though, from end users who don't want to run 18 different ABI distros for 18 different applications, to pick a few winners, maybe 2 or 3 at most; in the process there might be a de-facto standard created which revitalizes the LSB.
> I see this as being a way for app developers to care LESS about maintainability, because now they don't have to deal with distro upgrades any longer; they can just state that they need ABI X and stick with it forever. When they do finally decide to upgrade, it will be a major undertaking, of the scale that kills projects (because they stop adding new features for a long time, and even lose features, as they are doing the infrastructure upgrade)
I don't see it playing out that way; developers love having the latest and greatest too much for them to continue to deploy apps built against really old runtimes, so all of the pressure is for them to build against the latest ABI release of their preferred distro. One of the problems this is trying to solve is that many people don't want to upgrade their entire computer, with all new software, every six months just to stay updated on the few applications they care about - or, conversely, be forced to update their main applications because their distro has moved on. It might actually make more sense to run the latest distro ABI alongside the user's preferred environment, satisfying both developers and end-users.
Posted Sep 7, 2014 8:20 UTC (Sun)
by dlang (guest, #313)
[Link] (7 responses)
I can see why developers would like this, but I still say that this is a bad result.
Posted Sep 7, 2014 8:29 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (6 responses)
Exactly. However, as an app developer I won't have to play against network effects of distributions - my software will run on ALL distributions supporting the /usr-based packaging.
And since there's no base ABI, everybody just does whatever suits them. Just last week we found out that Docker images for Ubuntu don't have libcrypto installed, for example.
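And the fix on the consumer's side is a one-liner, which is exactly why everyone papers over it in their own images instead of it getting fixed in the base - e.g., assuming 14.04-era package names:

    FROM ubuntu:14.04
    # libcrypto ships in the libssl runtime package on Ubuntu; the
    # package name is version-specific (assumption: 14.04 => libssl1.0.0)
    RUN apt-get update && apt-get install -y libssl1.0.0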
Maybe this container-lite initiative will motivate distros to create a set of basic runtimes, that can actually be downloaded and used directly.
Posted Sep 7, 2014 9:48 UTC (Sun)
by dlang (guest, #313)
[Link] (5 responses)
if you don't define them to include every package, you will run into the one that you are missing.
These baselines are no easier to standardize than the LSB or distros.
In fact, they are worse than distros because there aren't any dependencies available (other than on "RHEL10 baseline")
Posted Sep 7, 2014 9:56 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
> if you don't define them to include every package, you will run into the one that you are missing.
Not every package, but at least _something_. And dedup partially solves the space problem.
> These baselines are no easier to standardize than the LSB or distros.
There are no standards right now. None. And LSB has shown us that when a group of vendors dictates their vision of a standard, distros simply ignore them.
The ecosystem of runtimes might help to evolve at least a de-facto standard. I'm pretty sure that it can be done for the server side (and let's face it, that's the main area of non-Android Linux use right now) but I'm less sure about the desktop.
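To illustrate the dedup point: on btrfs this can be done offline today with the third-party duperemove tool, so N mostly-identical runtimes need not cost N times the space (the path below is a placeholder):

    # find duplicate extents under the runtimes tree and submit them to
    # the kernel for deduplication (-d), recursing into subdirs (-r)
    duperemove -dr /var/lib/runtimes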
Posted Sep 7, 2014 10:43 UTC (Sun)
by dlang (guest, #313)
[Link] (3 responses)
>> These baselines are no easier to standardize than the LSB or distros.
> There are no standards right now. None. And LSB has shown us that when a group of vendors dictates their vision of a standard, distros simply ignore them.
so who is going to define the standards for the new approach? Every distro will just declare each of their releases to be a standard (and stop supporting them as quickly as they do today)
that gains nothing over the status quo, except giving legitimacy to people who don't want to upgrade
If the "standards" are going to be defined by anyone else, then they are going to be doing the same work that the distros are doing today, they will be just another distro group, with all the drawbacks of having to back-port security fixes (or fail to do so) that that implies.
Posted Sep 7, 2014 10:53 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
> If the "standards" are going to be defined by anyone else, then they are going to be doing the same work that the distros are doing today, they will be just another distro group, with all the drawbacks of having to back-port security fixes (or fail to do so) that that implies.
I'm hoping that somebody like Debian or RHEL/CentOS can pick up the job of making runtimes. It shouldn't be hard, after all.
Posted Sep 7, 2014 11:18 UTC (Sun)
by dlang (guest, #313)
[Link] (1 responses)
thanks for the laugh
there isn't going to be "the runtime" any more than there will be "the distro", for the same reason, different people want different things and have different tolerance of risky new features.
> I'm hoping that somebody like Debian or RHEL/CentOS can pick up the job of making runtimes. It shouldn't be hard, after all.
they already do, it's called their distro releases
> as an app developer I won't have to play against network effects of distributions - my software will run on ALL distributions supporting the /usr-based packaging.
no, your users may just have to download a few tens of GB of base packaging to run it instead.
Plus, if the baseline you pick has security problems, your users will blame you for them (because if you had picked a different base that didn't have those problems, their system wouldn't have been hit by X)
Posted Sep 7, 2014 11:52 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link]
> there isn't going to be "the runtime" any more than there will be "the distro", for the same reason, different people want different things and have different tolerance of risky new features.
No. Most developers want pretty much the same basic feature set with small custom additions.
> they already do, it's called their distro releases
No they don't. Distro model is exclusionary - I can't just ship RHEL along with my software package (well, I can but it's impractical). So either I have to FORCE my clients to use a specific version of RHEL or I have to test my package on lots of different distros.
That's the crux of the problem - distros are wildly incompatible and there's no real hope that they'll merge any time soon.
> no, your users may just have to download a few tens of GB of base packaging to run it instead.
Bullshit. Minimal Docker image for Ubuntu is less than 100Mb and it contains lots of software. There's no reason at all for the basic system to be more than 100Mb in size.
> Plus, if the baseline you pick has security problems, your users will blame you for them (because if you had picked a different base that didn't have those problems, their system wouldn't have been hit by X)
Who cares. All existing software, except for high-profile stuff like browsers, is insecure like hell. Get over it.