
Poettering: Revisiting how we put together Linux systems

Lennart Poettering has posted a lengthy writeup of a plan put together by the "systemd cabal" (his words) to rework Linux software distribution. It is based heavily on namespaces and Btrfs snapshots. "Now, with the name-spacing concepts we introduced above, we can actually relatively freely mix and match apps and OSes, or develop against specific frameworks in specific versions on any operating system. It doesn't matter if you booted your ArchLinux instance, or your Fedora one, you can execute both LibreOffice and Firefox just fine, because at execution time they get matched up with the right runtime, and all of them are available from all the operating systems you installed. You get the precise runtime that the upstream vendor of Firefox/LibreOffice did their testing with. It doesn't matter anymore which distribution you run, and which distribution the vendor prefers."


Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 12:04 UTC (Mon) by colo (guest, #45564) [Link] (146 responses)

I can't help it (and therefore don't like it) - it sounds kind of like statically linking all binaries while de-duplicating their embedded object files by means of btrfs-only features. :/

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 12:40 UTC (Mon) by cjcoats (guest, #9833) [Link] (22 responses)

Yet another Embrace-Extend-Extinguish -- they seem to have learned their lessons very well from Microsoft...

Serious system architects understand that multiplying dependencies, the way "systemd" does and the way this proposal does, is a Bad Idea®.

And what about those few of us who write small-niche compile-and-run software like environmental models? I have only a few thousand clients worldwide; I don't have the resources for all these builds on who knows how many versions of what distributions, and the clients don't either. It looks like he's freezing us out: Linux for the masses (only), and niche users can just forget about it.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 12:52 UTC (Mon) by dvdhrm (subscriber, #85474) [Link]

> And what about those few of us who write small-niche compile-and-run software like environmental models? I have only a few thousand clients worldwide; I don't have the resources for all these builds on who knows how many versions of what distributions, and the clients don't either. It looks like he's freezing us out: Linux for the masses (only), and niche users can just forget about it.

I'm not sure where you got that from, but this is definitely not the intention of the proposal. On the contrary, "small-niche compile-and-run software" should benefit from this, as you can provide your application as a bundle that can just run, instead of requiring package-descriptions for each distribution.
Obviously, this relies on other distributions to support such bundles. Otherwise, it just becomes one more package you need to provide.

Also note that users are free to run package-managers on top of this system. Sure, /usr will be read-only, but you're free to install traditional packages into /opt/ or /home/<user>/.local/ just like you do right now.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 12:55 UTC (Mon) by Robin.Hill (subscriber, #4385) [Link] (20 responses)

You seem to be missing the point of this project altogether.

The idea would seem to be that you (as an app developer) only need to build it for a single platform (as you do at the moment, I guess). If the user wants to run your software then they may need to install the runtime for that platform in order to use the software, but can do that without needing to use that platform for everything else as well. They can (for example) use debian unstable for their desktop platform and still run apps built for RHEL/Ubuntu/Slackware without needing to worry about incompatibilities or differing dependencies.

I'm not sure how well it will work in practice, or what sort of overhead there'll be with all these differing versions, but it seems a pretty neat solution. Of course, it does depend on the stability of btrfs and I've had some issues with that in the past.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 13:48 UTC (Mon) by nix (subscriber, #2304) [Link] (2 responses)

Quite. This isn't *multiplying* dependencies: it's recognizing that things *have* lots of dependencies, and finding a way to make that less hellish than now. It all seems very sensible to me, and completely compatible with any package manager at all, and even with roll-yer-own systems. I want it! (Once btrfs is stable enough for production use, that is.)

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 11:17 UTC (Tue) by cjcoats (guest, #9833) [Link] (1 responses)

It is my experience that the claim "things have lots of dependencies" almost always comes from not having thought the thing through -- and I've seen lots of that over the course of thirty years' experience with systems architecture.

To be honest, "thinking things through" properly is quite rare ;-(

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 22:33 UTC (Wed) by nix (subscriber, #2304) [Link]

Uh. Things really *do* have lots of dependencies. Have a look at the dep list for gnumeric, or Chromium with all the third-party stuff possible split out, or Firefox, or almost *anything* other than games (which for cross-compatibility reasons try to avoid depending on anything but SDL and OpenAL much of the time).

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 13:54 UTC (Mon) by nim-nim (subscriber, #34454) [Link] (5 responses)

I'm pretty sure *I* don't want to be the one tasked with keeping the kind of house of cards this wants to enable in operational condition.

Which is pretty much why all the attempts to "help poor developers" at the expense of ops never went anywhere. Developers may decide what is cool, but without operational buy-in it stays in the R&D department (strangely, normal beings don't want to self-maintain their computing devices).

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 7:08 UTC (Tue) by drag (guest, #31333) [Link] (4 responses)

ALMOST NOTHING I have to use and maintain, professionally, is provided by distributions.

When I want to use python I can't rely on distro python at all. I can't use any of the packages they use internally.

The Apache servers we use are not distro-provided. The Java Tomcat installations are not the ones provided by distributions. I can't rely on distro-provided dependencies for any of the software I want to use, because distributions do not provide the versions I need, and even if they did they wouldn't maintain them in a way useful for me...

And this is not a small problem. I deal with hundreds of applications and several thousand servers. All of it is a mixture of in-house software and external software. I have to deal with data that gets exchanged between dozens of different applications made by half a dozen different application development groups. The languages in use range from C++ to Java to Node.js, Python, Ruby, and a massive amount of Perl and shell... Some of the stuff has been under development for almost 20 years, some of it is brand new and follows the latest trends, and some hasn't been touched in 7 years but is still expected to work as well as the first day it was deployed, on a secure OS on modern hardware. The complexity is mind-boggling sometimes.

What do I do about it in Linux? I can deal with it and make it work, but it's not easy. It needs custom ways to set up environments, custom ways to deploy application configs and metadata... lots of Perl, lots of Puppet, lots of custom packages.

(Oh, and I came from Ops background first.)

Business and user needs dictate how applications behave and are written. How applications are written dictates the environment that needs to be provided by the OS. If you think it's proper for the OS to dictate how applications look and behave, you are looking at things completely backwards; the job of the OS is to make it as easy to develop and run applications as possible...

Distributions really have only two choices in this sort of situation, long term: either embrace it and get their input into the design by contributing upstream, or see themselves minimized as users realize there are better tools to solve their problems. I have no doubt that the containerized approach will eventually find acceptance, though. It's just a better way.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 19:24 UTC (Thu) by NightMonkey (subscriber, #23051) [Link] (1 responses)

The answer you seek is Gentoo. It handles the wicked combination of 'complexity' and 'customization' in a managed fashion, like no binary distribution ever can. :)

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 1:11 UTC (Mon) by gerdesj (subscriber, #5446) [Link]

mmmm slotting

Cheers
Jon

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 23:02 UTC (Sun) by Arker (guest, #14205) [Link]

It sounds like you have a mighty mess that you are expected to keep running without actually having the power to do so. This is nothing but the mother of all migraines lying in wait for you, and I say that from experience.

I can certainly understand the desire for a tool that will let you just keep it running until you clock out at the end of the day -- anyone in that position would feel it -- but the real problem will simply continue to fester until it reaches the point where no tool can paper it over and the entire enterprise grinds to a halt.

The real problem here is not technical, it's social. You need to impose sanity and you do not have the juice to do it. That's not a problem with a technical solution.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 1:10 UTC (Mon) by gerdesj (subscriber, #5446) [Link]

@Drag: Sir(?), you really need to discover the word "No" and possibly "budget proposal". However, I know the world you inhabit by its description and sympathise - bollocks isn't it?

Custom Apache 'n' Tomcat installs, though: have the API changes for those really got you in a twist, or do you have to deal with rather unimaginative devs who refuse to look at later point releases ...

Cheers
Jon

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 7:52 UTC (Tue) by ncm (guest, #165) [Link] (3 responses)

So much pain to work around the lack of environment variable expansion when following symbolic links.

Seriously, Aegis, and then Domain/ix, had this in the '80s, and had BSD and SYSV personalities per shell session. DragonFly BSD cobbled up a version recently. No, it's not a security hole.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 12:28 UTC (Tue) by foom (subscriber, #14868) [Link] (2 responses)

> No, it's not a security hole.

Because it doesn't use normal environment vars, but a brand new kind of var.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 21:45 UTC (Tue) by ncm (guest, #165) [Link] (1 responses)

On DragonFly it's not an env var, but not for any spurious security reason; it is for backward compatibility.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 1:38 UTC (Wed) by foom (subscriber, #14868) [Link]

If it used env vars, it seems like it could lead to disclosure of env vars to an attacker. And a process's env vars are generally considered private today, potentially containing sensitive information, aren't they?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 15:15 UTC (Tue) by landley (guest, #6789) [Link] (6 responses)

Didn't lxc and kvm already exist?

Dear Redhat: Katamari Damacy was not an engineering roadmap.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 2:53 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (5 responses)

> Katamari Damacy was not an engineering roadmap.

But isn't that what the Unix Philosophy tells me I should do? Take small programs and string them together into larger tools?

Just because I can write a one-liner which uses find, sort, and awk piped to awk to generate a file from a directory tree doesn't mean I should.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 4:20 UTC (Wed) by rodgerd (guest, #58896) [Link]

*thunderous applause*

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 19:27 UTC (Thu) by NightMonkey (subscriber, #23051) [Link] (3 responses)

Doesn't mean you shouldn't, either. :)

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 14:42 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (2 responses)

True, but I've found that the threshold where it makes sense to move to non-shell code is very easy to hit. In fact, the case I'm referencing was actually *much* better in Python (even though it was 300+ lines afterwards) because we went from 3n+l stat calls (n == number of files, l == number of symlinks) to n+l, since we could remember more than one thing at a time. Plus, someone other than me could grok it in 5 minutes. That kind of change is just not possible in shell code. Hell, I had to do "awk | awk" because system() output in awk only goes to stdout, not to a variable.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 17:45 UTC (Sun) by jwakely (subscriber, #60262) [Link] (1 responses)

> Hell, I had to do "awk | awk" because system() output in awk only goes to stdout, not to a variable.

You can use awk's pipe operator instead:

cmd = "ls -l";
# "cmd | getline x" runs the command and reads one line of its
# output into x; it keeps returning 1 while lines remain.
while ((cmd | getline x) == 1) xx = xx x "\n";
# close the pipe so the same command could be re-run later
close(cmd);
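(A note on the return value: getline from a command returns 1 when it reads a line, 0 at the end of the command's output, and -1 on an error, which is why the loop tests for == 1 rather than simple truth.)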

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 5:15 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

That should have a "see also" near the docs for system() in awk's manpage. Thanks for the info at least.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 13:03 UTC (Mon) by ovitters (guest, #27950) [Link] (118 responses)

I saw someone on Google+ mentioning that Linus said in the Q&A at DebConf 2014 that he dislikes shared libraries; a recording of the Q&A should be available, but I haven't fact-checked this. Shared libraries are IMO good, but maybe shared libraries per "subvolume"? A little less shared than before, I guess.

Being btrfs-only seems like a step back. The various filesystems are better at some workloads than others. I guess you could have everything in btrfs except the data somehow. But then how would systemd automatically know that these things belong together? Hrm...

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 13:12 UTC (Mon) by cate (subscriber, #1359) [Link] (102 responses)

No, really, he said that he dislikes ABI breakage and the libraries that break ABI (and also the small libraries maintained by two people, one of whom is crazy).

So static libraries are only a workaround until people and distributions behave more stably.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 15:37 UTC (Mon) by torquay (guest, #92428) [Link] (100 responses)

    So static libraries are only a workaround until people and distributions behave more stably.

Which is going to be never, as almost all distributions have no qualms about breaking APIs and ABIs from one release to the next -- Fedora being the prime example, with Ubuntu not far behind in this broken state of affairs. (Hence it's no wonder many people have moved to Mac OS X, which provides a refreshingly stable environment, with OS updates being free on Mac machines.)

The distributions in turn try to shift the blame to "upstream", because they have no manpower to fix the breakage, nor the power or willingness to punish upstream developers. Many upstream developers behave well and try to maintain backwards compatibility, but on the scale of a distribution the number of broken and/or changed libraries (made by undisciplined kids with Attention Deficit Disorder) quickly accumulates. The constant mess created by Gnome and GTK comes to mind.

Hence we end up with the effort by the systemd folks to try to fix this mess, by proposing in effect a massive abstraction layer. While it seems to be an overly elaborate scheme with many moving parts, any effort in fixing the mess is certainly welcome.

Perhaps there's an easier way to skin the cat: have each app run inside its own Docker container, but with access to a common /home partition. All the libraries and runtime required for the app are bundled with the app, including an X or Wayland display server (*). The windows produced by the app are captured and shown by a "master" display server. It's certainly a heavy handed and size-inefficient solution, but this is the price to pay to tame the constant API and ABI brokenness.

(*) perhaps this requirement can be relaxed to omit components that are guaranteed to never break their APIs/ABIs; by default all libraries and components are treated as incompatible from one version to the next, unless explicitly shown otherwise through extensive regression tests.

Security

Posted Sep 1, 2014 16:43 UTC (Mon) by cyperpunks (subscriber, #39406) [Link] (9 responses)

There are some very, very good reasons for using a distribution which don't seem to be addressed in the blog post.

Let's use the Heartbleed issue as an example.

To get fully protected after the bug, all a distro user had to do was install the latest openssl package from the distro.

Now, in this new scheme of things, the user is forced to upgrade every single instance and check each for any possible Heartbleed issue.

The new scheme brings flexibility; from a security viewpoint, however, it seems like a nightmare.


Security

Posted Sep 1, 2014 17:00 UTC (Mon) by rahulsundaram (subscriber, #21946) [Link] (3 responses)

What you claim is true only if all apps are installed using distribution repositories. Real-world deployments often have ad hoc installations, and what is being proposed might help with that pain.

Security

Posted Sep 1, 2014 17:26 UTC (Mon) by cyperpunks (subscriber, #39406) [Link] (2 responses)

Not if the ad hoc installed apps are using system libraries.

Security

Posted Sep 1, 2014 21:12 UTC (Mon) by rahulsundaram (subscriber, #21946) [Link] (1 responses)

Which they often don't. Instead ad-hoc installations tend to bundle the libraries because they want to be independent of the distribution.

Security

Posted Sep 2, 2014 9:52 UTC (Tue) by NAR (subscriber, #1313) [Link]

I did use a piece of software that bundled the OpenSSL library because the vendor wanted to be independent of the distribution. Of course they failed, because a newer version of the same distribution had a newer glibc with new bugs, so after an OS upgrade the software stopped working...

Security

Posted Sep 1, 2014 19:45 UTC (Mon) by Wol (subscriber, #4433) [Link] (3 responses)

> To get fully protected after the bug, all a distro user had to do was install the latest openssl package from the distro.

For a non-distro user (or, like me, a gentoo user), all that was needed was to not switch on the broken functionality in the first place! The reports I've seen all said that - for most machines - the TLS heartbeat extension was functionality that wasn't wanted and should not have been enabled to start with.

Yes I know users "don't want" the hassle, but gentoo suits me fine. I switch things on if I need them. That *should* be the norm.

Cheers,
Wol

Security

Posted Sep 2, 2014 17:17 UTC (Tue) by rich0 (guest, #55509) [Link] (2 responses)

So, I run Gentoo, but I'm not sure I buy that argument. In this case the bug only occurred if TLS heartbeat was enabled. What if next time a bug only occurs if something you might not think you need is disabled?

I think you just got lucky, and running USE=-* has its own issues.

Security

Posted Sep 2, 2014 18:29 UTC (Tue) by Wol (subscriber, #4433) [Link]

Well, I gather one of the BIG reasons heartbleed was such a disaster was

(a) most people had it switched on
(b) most people weren't using it

That's a recipe for minimal testing and maximal problems.

Your scenario is one where most people need the functionality, so I'd be in a minority in not wanting or needing it. I don't think that is anywhere near as likely (although I could be wrong ...)

Cheers,
Wol

Security

Posted Sep 4, 2014 19:30 UTC (Thu) by NightMonkey (subscriber, #23051) [Link]

Gentoo would at least have given you a chance to disable the offending subcomponent (in a managed way), had a fix from the OpenSSL camp not come quickly enough.

Security

Posted Sep 2, 2014 2:26 UTC (Tue) by raven667 (subscriber, #5198) [Link]

This actually isn't any different in the proposed scheme, because the bases of the proposed runtimes _are_ the existing distros, which each have to apply security updates to the shared libraries they ship; we are already living that nightmare.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:46 UTC (Mon) by raven667 (subscriber, #5198) [Link]

> Perhaps there's an easier way to skin the cat: have each app run inside its own Docker container, but with access to a common /home partition. All the libraries and runtime required for the app are bundled with the app, including an X or Wayland display server (*). The windows produced by the app are captured and shown by a "master" display server. It's certainly a heavy handed and size-inefficient solution, but this is the price to pay to tame the constant API and ABI brokenness.

Sure, one of the reasons Docker exists and is so popular is to try to skin this cat; heck, the reason VMs are so popular is that they abstract away this software-dependency-management problem in a simple but heavy-handed way. The problem is that no one really wants to run nested kernels to make this work -- you lose a lot of information about the work to be scheduled when nesting kernels -- so this is a possible way to solve the fundamental software-compatibility-management problem on a shared kernel.

I'm sure that as others digest this proposal and try to build systems using it, they will discover corner cases which are not handled, compatibility issues, and ways to simplify it, so the final result may be somewhat different -- but this could be a vastly useful system.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 18:26 UTC (Mon) by HenrikH (subscriber, #31152) [Link] (17 responses)

Wouldn't it be possible instead to create shim libraries when new API/ABI versions come out? For example, when SDL-1.2.4.so is released, an SDL-1.2.3 shim is created that exposes the same ABI as the original 1.2.3 but calls 1.2.4.so behind the scenes, so to speak. It would of course be lots of work though :)
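For illustration, such a shim is just a small forwarding library built against the new version. A minimal sketch of the idea, using a hypothetical renamed function rather than SDL's real ABI:

// shim.cpp -- built as the old libmylib-1.2.3.so, linked against 1.2.4.
// Hypothetical example: suppose 1.2.4 replaced mylib_open(path) with
// mylib_open_ex(path, flags). The shim re-exports the old entry point.
extern "C" int mylib_open_ex(const char* path, int flags); // provided by 1.2.4

extern "C" int mylib_open(const char* path) // the old 1.2.3 ABI
{
    return mylib_open_ex(path, /*flags=*/0); // forward with the old defaults
}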

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 20:22 UTC (Mon) by Karellen (subscriber, #67644) [Link] (16 responses)

SDL-1.2.4 should expose the exact same ABI as SDL-1.2.3.

That's generally the point of major.minor.patch versioning, at least among (shared) libraries. An update which changes the "patch" level should not change the ABI *at all*; it should only change (improve) the functionality behind the existing ABI.

A change to the "minor" level should only *add* to the ABI, so that all users of 1.2.x should be able to use 1.3.0+ if it's dropped in as a replacement.

If, as a library author, you need to change the ABI, by for instance modifying a function signature, or deleting a function that shouldn't be used any more, that's when you change the "major" version to 2.0.0, and make SDL-2.a.b co-installable with SDL-1.x.y. That way, legacy apps linked against the old and busted SDL1 can continue to work, while their modern replacements can link with the new hotness SDL2 and run together on the same system.

It's not always easy, and requires care and discipline. But creating shims would be just as much work, and tools already exist to help get it right, like abicheck.
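To make those rules concrete, here is a sketch with a hypothetical libfoo (not SDL's actual headers):

// libfoo 1.2.x -- the existing ABI; must not change within the 1.x series
extern "C" int foo_frob(int value);

// libfoo 1.3.0 -- additions only; every 1.2.x caller still links and runs
extern "C" int foo_frob(int value);
extern "C" int foo_frob_checked(int value, int* error); // new entry point

// libfoo 2.0.0 -- changed signature, so the major version (and the soname,
// e.g. libfoo.so.1 -> libfoo.so.2) must bump; 1.x and 2.x can then be
// installed side by side, like SDL1 and SDL2
extern "C" long foo_frob(long value);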

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 20:26 UTC (Mon) by dlang (guest, #313) [Link] (15 responses)

The problem is that ABI discipline is almost non-existent among "Desktop Environment" developers; there are a few exceptions, but very few.

If ABIs were managed properly, we wouldn't be having these discussions.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 2:48 UTC (Tue) by raven667 (subscriber, #5198) [Link] (11 responses)

It's not just desktop environment libraries; most of the ecosystem does not have ABI discipline. See http://upstream-tracker.org/ -- it's not all desktop software.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 19:38 UTC (Tue) by robclark (subscriber, #74945) [Link] (8 responses)

> It's not just desktop environment libraries; most of the ecosystem does not have ABI discipline. See http://upstream-tracker.org/ -- it's not all desktop software.

that's kind of cool.. I hadn't seen it before. Perhaps they should add a 'wall of shame' ;-)

At least better awareness amongst devs about ABI compat seems like a good idea.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 10:13 UTC (Wed) by accumulator (guest, #95885) [Link] (7 responses)

What the Upstream Tracker shows is that Desktop Environment developers are actually remarkably ABI-disciplined.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 22:35 UTC (Wed) by nix (subscriber, #2304) [Link] (6 responses)

Quite, particularly given that it considers things like the addition of 'volatile' to a data structure member to be an ABI break (hint: it isn't).

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 19:06 UTC (Thu) by zlynx (guest, #2285) [Link] (5 responses)

Adding volatile or const does break ABI linking in C++ applications. It changes the symbol name. So perhaps they are smart to track it.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 13:58 UTC (Fri) by jwakely (subscriber, #60262) [Link] (4 responses)

The grandparent was talking about adding cv-qualifiers to structure members, which doesn't affect symbol names at all.
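A quick way to see the distinction (hypothetical names; the mangled forms are what Itanium-ABI compilers such as g++ produce):

// Qualifiers on structure members never appear in symbol names:
struct Config {
    volatile int counter; // adding/removing volatile here changes no symbol
};

// Qualifiers that change a parameter's type do, because they are mangled:
void set(int* p);          // mangles as _Z3setPi
void set(volatile int* p); // mangles as _Z3setPVi -- a different symbol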

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 14:38 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (3 responses)

Removing the qualifiers causes issues though, doesn't it (since optimizations done in previous builds may no longer be valid)?

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 16:05 UTC (Mon) by nix (subscriber, #2304) [Link] (2 responses)

The first thing I looked at was complaining about something adding a volatile to a structure member. Sorry, that's not an ABI break, and any tool that says it is is just wrong. (It might require code changes to avoid cast warnings in some cases, but the code should still compile and work.)

Poettering: Revisiting how we put together Linux systems

Posted Sep 9, 2014 12:47 UTC (Tue) by mpr22 (subscriber, #60784) [Link] (1 responses)

Am I missing something? It seems to me that any binaries that were compiled against the version of the header that didn't have the volatile in it may display incorrect behaviour with respect to accessing it, and on that basis it seems to me that it's reasonable to call it an ABI break (since the only way to fix the break is to recompile).
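A sketch of that failure mode (hypothetical struct; whether a given compiler actually hoists the load depends on optimization settings):

struct Status { int ready; }; // older header: member not volatile

// A binary compiled against the old header may legally cache s->ready
// in a register, turning this loop into an infinite spin:
void wait_until_ready(Status* s)
{
    while (!s->ready) { /* spin */ }
}

// If a later header makes the member 'volatile int ready;' because it is
// now written by another thread or by hardware, already-compiled callers
// keep the cached load and never observe the change until rebuilt.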

Poettering: Revisiting how we put together Linux systems

Posted Sep 9, 2014 13:54 UTC (Tue) by nix (subscriber, #2304) [Link]

Hm. You might be right, though I'd not normally call 'earlier versions may have been misoptimized' an ABI break, at least not without investigating why the qualifier was added.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 22:05 UTC (Tue) by jondo (guest, #69852) [Link] (1 responses)

Could a distro be frozen to such an extent that only packages get updated that pass such strong ABI testing? Quasi "Debian superstable".

Reality kicks in: This would simply stop all updates ...

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 15:05 UTC (Wed) by raven667 (subscriber, #5198) [Link]

Nah, that's pretty much why you have backports of security fixes today without changing the version of the software; this is the service that Red Hat makes its millions on.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 6:21 UTC (Tue) by krake (guest, #55996) [Link]

Not sure if you mean application developers or those creating workspace implementations (a.k.a. desktop environments), but for example KDE, as a vendor of both, has very strict policies on maintaining ABI and API stability.

Its wiki page on C++ ABI dos-and-don'ts has become one of the most often referenced resources on the matter.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 11:26 UTC (Tue) by cjcoats (guest, #9833) [Link] (1 responses)

Not just desktop developers. There were at least 147 ABI breakages in the WRF weather-forecast modeling system between WRF-3.0.1 and 3.0.2. Or at least, there were that many that I found and had to re-program for...

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 12:32 UTC (Wed) by rweir (subscriber, #24833) [Link]

you mean API?

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 19:38 UTC (Mon) by Wol (subscriber, #4433) [Link] (7 responses)

> with the OS updates being free on Mac machines).

So why, even in the last couple of days, have I been hearing moans from Mac users that they can't upgrade their systems (this chap was stuck on Snow Leopard)?

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 1:42 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (6 responses)

They are free provided you have given your monetary offering to the shrine of Jobs recently enough. Didn't you read the pamphlet?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 5:53 UTC (Tue) by torquay (guest, #92428) [Link] (5 responses)

    They are free provided you have given your monetary offering to the shrine of Jobs recently enough.

And that may not be such a bad thing, if it gives you peace of mind and freedom from the broken APIs/ABIs of the Linux world.

Constantly dealing with broken API/ABI is certainly not free: it takes up your time, which could have been used for more productive things.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 10:45 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (4 responses)

I've found that OS X upgrades are more annoying to deal with than any other system's (Windows "never" breaks; Linux is stable where we're concerned, for the most part, and there are canary distros anyway; and no one else really updates in the first place). For example, 10.9 introduced the wonderful behavior where the library loader uses a statically sized array to store the library path (it was dynamic before). We overflowed this and now our app breaks. The solution has been to add support into the build to "bundle" libraries together, which was wanted anyway, but being forced into it because Apple regressed on some stupid internal thing which worked just fine is a pain in the ass. This happens all the time, every release.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 14:34 UTC (Tue) by madscientist (subscriber, #16861) [Link] (1 responses)

Totally agree. I wonder how many of the people praising Mac OS really develop/support/maintain software that runs on Mac OS or iOS? Not only does Apple not seem to care much about backward compatibility, but they are extremely aggressive about pushing updates, so it's almost impossible to get a stable and consistent environment across a team of developers; code that builds and runs fine on one developer's system throws errors on another. Keeping software building and running properly on Mac OS/iOS is a never-ending treadmill for developers, IME.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 17:11 UTC (Tue) by pizza (subscriber, #46) [Link]

> Keeping software building and running properly on Mac OS/iOS is a never-ending treadmill for developers, IME.

I'm heavily involved with Gutenprint, and it is not an exaggeration to say that every major OSX release has broken (oh, sorry, "incompatibly changed") something we depended on.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 9:08 UTC (Wed) by dgm (subscriber, #49227) [Link] (1 responses)

Interesting. Were your path names longer than PATH_MAX?

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 11:32 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

No -- there are just 200+ shared objects in the bundle, which overflows it (they're all concatenated into a single buffer of libraries to look at).

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 22:02 UTC (Mon) by sramkrishna (subscriber, #72628) [Link]

> number of broken and/or changed libraries (made by undisciplined kids with Attention Deficit Disorder) quickly accumulates. The constant mess created by Gnome and GTK comes to mind.

Yet it is from that community that application sandboxing and the single-binary approach are coming.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 23:41 UTC (Mon) by mezcalero (subscriber, #45103) [Link] (52 responses)

"overly elaborate scheme"? I don't actually think it's that complex. I mean, it's just agreeing on a relative simple naming scheme for sub-volumes, and then exchanging them with btrfs send/recv, and that's pretty much it. We simply rely on the solutions btrfs already provides us with for the problems at hand, we push the de-dup problem, the packaging problem, the verification problem, all down to the fs, so that we don't have to come up with anything new for that!

I love the ultimate simplicity of this scheme. I mean, coming up with a good scheme to name btrfs sub-volume is a good idea anyway, and then just going on single step further and actually packaging the OS that way is actually not that big a leap!

I mean, maybe it isn't obvious when one comes from classic packaging systems with all there dependency graph theory, but looking back, after figuring out that this could work, it's more like "oh, well, this one was obvious..."
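For concreteness, the scheme in the blog post encodes type, vendor id, architecture, and version directly in the sub-volume name; the names look roughly like this (the LibreOffice example is the one quoted elsewhere in this thread, the others are of the same form):

usr:org.fedoraproject.WorkStation:x86_64:24.7
runtime:org.gnome.GNOME3_20:x86_64:3.20.1
app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133
home:lennart:1000:1000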

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 17:18 UTC (Tue) by paulj (subscriber, #341) [Link] (50 responses)

On the app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133 example: The idea is that multiple distros could provide that GNOME3_20:x86_64:133 dependency, is that correct?

If so, I'm wondering:

- Who assigns or registers or otherwise coördinates these distro-abstract dependency names?

- Who specifies the ABI for these abstract dependencies? I guess for GNOME3_20?

- What if multiple dependencies are needed? How is that dealt with?

The ABI specification thing for the labels seems a potentially tricky issue. E.g., should GNOME specify the one in this example? But what if there are optional features distros might want to enable/disable? That means labels are needed for every possible combination of ABI-affecting options that any dependency might have?

?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 18:55 UTC (Tue) by raven667 (subscriber, #5198) [Link] (49 responses)

Re-reading the proposal, the vendorid should include where it comes from -- org.gnome.GNOME3_20 in one of the examples -- which would be different from org.fedoraproject.GNOME3_20. So you might have app:org.libreoffice.LibreOffice:org.gnome.GNOME3_20:x86_64:4.3.0, which depends on the GNOME libraries built by gnome.org themselves, as distinct from a distribution-provided app:org.fedoraproject.LibreOffice:org.fedoraproject.GNOME3_20:x86_64:4.2.6.

I think some of the vendor ids are getting truncated in the examples.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 9:33 UTC (Thu) by paulj (subscriber, #341) [Link] (48 responses)

So basically this proposal is that applications have a way to specify what distro they have been built for? Additionally, it envisages that GNOME and KDE would start providing distros or at least fully specifying an ABI?

On which point: Lennart has used "API" in comments here quite a lot, but I think he means ABI. ABI is even more difficult to keep stable than API, and the Linux desktop people haven't even managed to keep APIs stable!

#include "rants/referencer-got-broken-by-glib-gnome-vfs-changes.txt"

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 14:46 UTC (Thu) by raven667 (subscriber, #5198) [Link] (47 responses)

ABI stability is much easier to have when you specify exactly what environment you built your binaries against and just ship the whole thing to the target system. A lot of people are solving this same problem in a similar fashion with Docker, by specifying a whole runtime to go with their application; this proposal maybe just has more de-duping.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 22:34 UTC (Thu) by dlang (guest, #313) [Link] (31 responses)

That's not keeping the ABI stable; that's just selecting a particular ABI and distributing it. It's only stable if it's not going to change with future releases.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 4:14 UTC (Fri) by raven667 (subscriber, #5198) [Link] (30 responses)

That's the only style of ABI stability widely deployed in the Linux distribution world; it is the essential ingredient of what makes an Enterprise distro. What is being discussed is the same kind of ABI stability promised by RHEL, for example.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 6:08 UTC (Fri) by dlang (guest, #313) [Link] (29 responses)

and converting software from one RHEL version to another is a major pain, but if you only have to do it once a decade you sort of live with it.

But if every upgrade of every software package on your machine is the same way, it will be a fiasco.

Remember that the "base system" used for this "unchanging binary compatibility" is subject to change at the whim of the software developer; any update they do, you will (potentially) have to do as well, so that you have the identical environment.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 16:08 UTC (Fri) by raven667 (subscriber, #5198) [Link] (28 responses)

I'm not sure I understand your point. The purpose of this proposal is to have a standard that makes it easy to support different /usr base systems, so you can have long-term ABI compatibility: an RHEL/LTS-style long-term stable system installed simultaneously alongside more quickly updated Fedora/Ubuntu-style releases. You get the best of both worlds -- applications which you don't have to port but once a decade, and the latest shiny toys, without dependency hell forcing upgrades to your working system.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 16:14 UTC (Fri) by paulj (subscriber, #341) [Link] (12 responses)

The problem is that ABIs extend outside of the base distro, to the state this proposal intends to share across the multiple installed distros -- e.g., in $HOME, etc., and in /var.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 17:07 UTC (Fri) by raven667 (subscriber, #5198) [Link] (11 responses)

I didn't think that was what dlang was referring to, but maybe that's my confusion. There is definitely work to maintain compatibility for config data in /home and IPC protocols in /var, but that is a much smaller and more well-defined set than /usr.

One thing this proposal does, if it is worked on, is put pressure on distros to define a subset as stable, and put pressure on app makers to standardize on a few runtimes. So even if this proposal does not become the standard, it may create the discussion that results in a new LSB standard for distro binary compatibility which is much more comprehensive than the weak-sauce LSB currently is. I think the discussion of what goes into /usr is very useful on its own, even if nothing else comes out of this proposal.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 18:02 UTC (Fri) by paulj (subscriber, #341) [Link] (10 responses)

Yes, it'd be good if this kind of thing gave app and desktop-environment developers an incentive again to think about the compatibility issues of their state, so that they'd eventually fix things and we could move towards an ideal world.

However, note that this is the "This is how the world should be, and we're going to change our bit to make it so, and if it ends up hurting users then that will supply the pressure needed to make the rest of the world so" approach. An approach to making progress which I think has been tried at least a few times before in the Linux world, which I'm not sure always helps in the greater scheme of things. The users who get hurt may not be willing to stick around to see the utopia realised, and not everything may get fixed.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 18:53 UTC (Fri) by raven667 (subscriber, #5198) [Link] (9 responses)

Well, there is no real pressure to take this protocol on; it exists, and people can work on it if they choose and believe it solves problems for them, but there is no authority who can push such things -- they only happen by consensus.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 10:40 UTC (Sat) by paulj (subscriber, #341) [Link] (8 responses)

Unless of course control of some important project is used to lever this idea into place. ;)

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 15:17 UTC (Sun) by raven667 (subscriber, #5198) [Link] (7 responses)

You can build it and they still might not come; this proposal doesn't exist, even if systemd implements it, without an economy of /usr providers willing to package their runtimes and apps in this fashion. I know you are just being funny, but it still feeds the trolls to think there is some magic evil power which compels and corrupts the distros, rather than them just independently coming to the conclusion that systemd is a good idea. Poettering can't be evil; I've seen pictures, and he doesn't even have a moustache to twirl!

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 9:30 UTC (Mon) by paulj (subscriber, #341) [Link] (6 responses)

Poettering isn't evil, no. He writes a lot of useful code. Though, he's not perfect either of course.

If I had to make a criticism, it wouldn't be of him specifically, but of an attitude common amongst Linux user-space devs, that holds that it is acceptable to burn users with breakage, in the quest for some future code or system utopia. So code gets written that assumes other components are perfect, so that if they are not breakage will expose them and those other components will eventually get fixed. Code gets rewritten, because the old code was ugly, and the new code will of course be much better, and worth the regressions.

The root cause of this seems to be because of the fractured way Linux works. There is generally no authority that can represent the users' interests for stability. There is no authority that can coördinate and ensure that if subsystem X requires subsystem Y to implement something that was never really used before, that subsystem Y is given time to do this before X goes out in the wild. Or no authority to coördinate rewrites and release cycles.

Instead the various fractured groups of developers, in a way, interact by pushing code to users who, if agitated sufficiently, will investigate and report bugs or even help fix them.

You could also argue this is because of a lack of QA resources. As a result of which the user has to help fill that role. However, the lack of resources could also be seen as being in part due to the Linux desktop user environment never having grown out of treating the user as QA, and regularly burning users away.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 10:59 UTC (Mon) by dlang (guest, #313) [Link]

The distros are supposed to be the ones that make sure that Y is working before they ship X.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 11:59 UTC (Mon) by pizza (subscriber, #46) [Link] (4 responses)

> If I had to make a criticism, it wouldn't be of him specifically, but of an attitude common amongst Linux user-space devs, that holds that it is acceptable to burn users with breakage, in the quest for some future code or system utopia. So code gets written that assumes other components are perfect, so that if they are not breakage will expose them and those other components will eventually get fixed. Code gets rewritten, because the old code was ugly, and the new code will of course be much better, and worth the regressions.

I've been on both sides of this argument, as both an end-user and as a developer.

On balance, I'd much rather have the situation today, where stuff is written assuming the other components work properly, and where bugs get fixed in their actual locations rather than independently, inconsistently, and incompatibly papered over.

For example, this attitude is why Sound and [Wireless] Networking JustWork(tm) in Linux today.

The workaround-everything approach is only necessary when you are stuck with components you can't fix at the source -- ie proprietary crap. We don't have that problem, so let's do this properly!

The days where completely independent, underspecified, and barely-coupled components are a viable path forward have been over for a long, long time.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 12:33 UTC (Mon) by nye (subscriber, #51576) [Link] (1 responses)

>For example, this attitude is why Sound and [Wireless] Networking JustWork(tm) in Linux today

Except they don't.

A couple of examples:

My home machine has both the front channels and the rear channels connected, for a total of four audio channels. In Windows, I go in to my sound properties and pick the 'quadrophonic' option; now applications like games or VLC will mix the available channels appropriately so that I get sound correctly coming out of the appropriate speakers - not only all four channels, but all four channels *correctly*, so that, for example, 5.1 audio is appropriately remixed by the application without any configuration needed.

I had a look at how I might get this in OpenSuSE earlier this year, and eventually concluded either that PA simply can't do this *at all*, or that if it can, nobody knows how[0]. I did find some instructions for how to set up something like this using a custom ALSA configuration, though that would have required applications to be configured to know about it (rather than doing the right thing automatically), and I never got around to trying it out before giving up on OpenSuSE for a multitude of reasons.

Another example:
I have a laptop running Kubuntu. Occasionally, I can persuade it to make a wireless connection, but mostly it just doesn't - no messages, just nothing happening. It also doesn't properly manage the wired network, especially if the cable is unplugged or replugged. NetworkManager is essentially just a black box that might work, and if it doesn't then you're screwed. I eventually persuaded it to kind-of work without NM, though it then required some manual effort to switch between wired and wireless networks.

A related example:
For several consecutive releases, the Ubuntu live CD images didn't have fully working wired networking out of the box, due to some DHCP cock-up. Sadly I no longer remember the specifics for certain, but I *believe* it was ignoring the DNS setting specified by the DHCP server, and manually disabling the DHCP client and editing resolv.conf fixed the problem. There may have been something more to it than that, like getting the default gateway wrong; I don't recall.

It's great that some people have more reliable wireless thanks to NM, but it's not great that this comes at the expense of fully-functional ethernet.

[0] Some more recent googling has turned up more promising discussion of configuration file options, but I no longer have that installation to test out.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 13:14 UTC (Mon) by pizza (subscriber, #46) [Link]

> My home machine has both the front channels and the rear channels connected, for a total of four audio channels. In Windows, I go in to my sound properties and pick the 'quadrophonic' option; now applications like games or VLC will mix the available channels appropriately so that I get sound correctly coming out of the appropriate speakers - not only all four channels, but all four channels *correctly*, so that, for example, 5.1 audio is appropriately remixed by the application without any configuration needed.

The last three motherboards I've had, with multi-channel audio, have JustWorked(tm) once I selected the appropriate speaker configuration under Fedora/GNOME. Upmixed and downmixed PCM output, and even the analog inputs are mixed properly too.

(Of course, some of the responsibility for handling this is in the hands of the application, even if only to query and respect the system speaker settings instead of using a fixed configuration)

> I have a laptop running Kubuntu. Occasionally, I can persuade it to make a wireless connection, but mostly it just doesn't - no messages, just nothing happening. It also doesn't properly manage the wired network, especially if the cable is unplugged or replugged. NetworkManager is essentially just a black box that might work, and if it doesn't then you're screwed. I eventually persuaded it to kind-of work without NM, though it then required some manual effort to switch between wired and wireless networks.

NM has been effectively flawless for me for several years now (even with switching back and forth), also with Fedora, but that shouldn't matter in this case -- I hope you filed a bug report. Folks can't fix problems they don't know about.

> For several consecutive releases, the Ubuntu live CD images didn't have fully working wired networking out of the box, due to some DHCP cock-up. Sadly I no longer remember the specifics for certain, but I *believe* it was ignoring the DNS setting specified by the DHCP server, and manually disabling the DHCP client and editing resolv.conf fixed the problem. There may have been something more to it than that, like getting the default gateway wrong; I don't recall.

I can't speak to Ubuntu's DHCP stuff here (did you file a bug?) but I've seen a similar problem in the past using dnsmasq's DHCP client -- the basic problem I found was that certain DHCP servers were a bit... special in their configuration, and the result was that the DHCP client didn't get a valid DNS entry. dnsmasq eventually implemented a workaround for the buggy server/configuration. This was maybe three years ago?

> It's great that some people have more reliable wireless thanks to NM, but it's not great that this comes at the expense of fully-functional ethernet.

Come on, that's being grossly unfair. Before NM came along, wireless was more unreliable than not, with every driver implementing the WEXT stuff slightly differently requiring every client (or user) to treat every device type slightly differently. Now the only reason things don't work is if the hardware itself is unsupported, and that's quite rare these days.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 15:38 UTC (Mon) by paulj (subscriber, #341) [Link] (1 responses)

“For example, this attitude is why Sound and [Wireless] Networking JustWork(tm) in Linux today.”

While I agree this path has led to sound and wireless working well in Linux today, I would disagree it was the only possible path. I'd suggest there were other paths available that would have ultimately led to the same result, but taken more care to avoid regressions and/or provide basic functionality even when other components hadn't been updated to match some new specification.

What's the point of utopia, if you've burned off all your users getting there? Linux laptops were once well-represented at über-techy / dev conferences, now it's OS X almost everywhere.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 17:12 UTC (Mon) by pizza (subscriber, #46) [Link]

> While I agree this path has led to sound and wireless working well in Linux today, I would disagree it was the only possible path.

...perhaps you are correct, but those other paths would have taken considerably longer, leading to a different sort of user burning.

> What's the point of utopia, if you've burned off all your users getting there? Linux laptops were once well-represented at über-techy / dev conferences, now it's OS X almost everywhere.

These days, Linux just isn't a cool counterculture status symbol any more. It's part of the boring infrastructure that's someone else's problem.

Anyway, the technical reasons basically boil down to the benefits of Apple controlling the entire hardware/software/cloud stack -- Stuff JustWorks(tm). As long as you color within the lines, anyway.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 4:41 UTC (Sat) by dlang (guest, #313) [Link] (14 responses)

I agree that this proposal makes it easy to have many different ABIs on the same computer.

I disagree that that is a wonderful thing and everyone should be making use of it.

We are getting conflicting messages about who would maintain these base systems. If it's the distros, how is this any different from the current situation?

If it's Joe Random Developer defining the ABI that his software is built against, it's going to be a disaster.

The assertion is being made that all the random developers (who can't agree on much of anything today -- not language, not development distro, not packaging standards even within a distro, etc.) are going to somehow standardize on a small set of ABIs.

I see this as a way for app developers to care LESS about maintainability, because now they don't have to deal with distro upgrades any longer; they can just state that they need ABI X and stick with it forever. When they do finally decide to upgrade, it will be a major undertaking, of the scale that kills projects (because they stop adding new features for a long time, and even lose features, while they are doing the infrastructure upgrade).

Building up technical debt with the intention of paying it off later almost never works, and even when it does, it doesn't work well.

Anything that encourages developers to build up technical debt by being able to ignore compatibility is bad.

This encourages developers to ignore compatibility in two ways:

1. The app developers don't have to worry about compatibility, because they just stick with the version that they started with.

2. Library developers don't have to worry about compatibility, because anyone who complains about the difficulty of upgrading can now be told to just stick with the old ABI.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 11:22 UTC (Sat) by Wol (subscriber, #4433) [Link] (4 responses)

On the flip side there's going to be counter-pressure from

1) People who don't want their disk space chewed up with multiple environments.

2) People who don't want (or like me can't understand :-) btrfs.

3) Devs who (like me) like to run an up-to-date rolling distro.

4) Distro packagers who don't want the hassle of current packages that won't run on current systems.

Personally, I think any dev who ignores compatibility is likely to find themselves in the "deprecated" bin fairly quickly, and will just get left behind.

Where it will really score is commercial software which will be sold against something like a SLES or RHEL image, which will then continue to run "for ever" :-)

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 21:00 UTC (Sat) by dlang (guest, #313) [Link] (3 responses)

> Where it will really score is commercial software which will be sold against something like a SLES or RHEL image, which will then continue to run "for ever" :-)

Yep, that's part of my fear.

This 'forever' doesn't include security updates.

People are already doing this with virtualization (see the push from VMware about how it allows people to keep running Windows XP forever), and you are seeing a lot of RHEL5 in cloud deployments with no plans to ever upgrade to anything newer.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 22:14 UTC (Sat) by raven667 (subscriber, #5198) [Link] (2 responses)

As you say, this is a dynamic that exists today, so I'm not sure how it can be a con of the proposal; one of the main reasons VMs have taken over the industry is this same ABI management problem (plus consolidation). With VMs you can run a particular tested userspace indefinitely without impact on other software that needs a different ABI on the same system. This proposal doesn't really change this dynamic much; the same amount of pressure comes from end users of proprietary vendor-ware to re-base and support newer OS releases, even given that you can run old software indefinitely in VMs or containers.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 23:00 UTC (Sat) by dlang (guest, #313) [Link] (1 responses)

Is this a dynamic that we should be encouraging, turning what is today a misuse of virtualization by some enterprise customers into the status quo for everyone?

I don't think so.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 15:58 UTC (Sun) by raven667 (subscriber, #5198) [Link]

I think it is a dynamic that exists because it fills a need; it's an equilibrium, and we don't have control over the needs that drive it, but we can change the friction of different implementations to make life easier. Being able to easily run multiple ABIs on a system reduces the friction for upgrading just as much as VMs allow you to hold on to old systems forever.

On desktops, too, there is a benefit many seem to be looking for: being able to run older apps on newer systems rather than being force-upgraded because the distro updates, and also being able to run newer apps (and bugfixes) on a cadence faster than a distro that releases every 6 months or 1 year provides -- staying on GNOME2, for example, while keeping up with Firefox and LibreOffice updates. Being able to run multiple userspaces on the same system with low friction allows them to fight it out and compete more directly than dual-booting or VMs do, rather than being locked in to what your preferred distro provides.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 23:15 UTC (Sat) by raven667 (subscriber, #5198) [Link] (8 responses)

> who would maintain these base systems? If it's the distros, how is that any different from the current situation?

I think you answered that in your first paragraph:

> I agree that this proposal makes it easy to have many different ABIs on the same computer.

There is currently more friction in running different ABIs on the same computer: there is no standard automated means for doing so, so people have to build and run their own VMs or containers, with limited-to-nonexistent integration between them.

The other big win is a standard way to do updates that works with any kind of distro, from desktop to server to Android to IVI and embedded, without each kind of system needing to redesign updates in its own special way.

> if it's Joe Random Developer defining the ABI that his software is built against, it's going to be a disaster.

The proposal is that a developer would pick an ABI to build against, such as OpenSuSE 16.4 or, for a non-desktop example, OpenWRT 16.04, not that every developer would be responsible for bundling and building all of their library dependencies.

> The assertion is being made that all the random developers (who can't agree on much of anything today: not language, not development distro, not packaging standards even within a distro, etc.) are going to somehow standardize on a small set of ABIs.

This whole proposal is a way to try to use technology to change the social and political dynamic by changing the cost of different incentives; it is not guaranteed how it will play out. I think there is pressure, though, from end users who don't want to run 18 different ABI distros for 18 different applications, to pick a few winners, maybe 2 or 3 at most; in the process there might be a de-facto standard created which re-vitalizes the LSB.

> I see this as being a way for app developers to care LESS about maintainability, because now they don't have to deal with distro upgrades any longer; they can just state that they need ABI X and stick with it forever. When they do finally decide to upgrade, it will be a major undertaking, of the scale that kills projects (because they stop adding new features for a long time, and even lose features, while they are doing the infrastructure upgrade)

I don't see it playing out that way; developers love having the latest and greatest too much for them to keep deploying apps built against really old runtimes, and all of the pressure is for them to build against the latest ABI release of their preferred distro. One of the problems this proposal is trying to solve is that many people don't want to upgrade their entire computer, with all new software, every six months just to stay updated on the few applications they care about -- or, conversely, be forced to update their main applications because their distro has moved on. It might actually make more sense to run the latest distro ABI alongside the user's preferred environment, satisfying both developers and end users.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 8:20 UTC (Sun) by dlang (guest, #313) [Link] (7 responses)

So a developer gets to ignore all competing distros other than their favourite.

I can see why developers would like this, but I still say that this is a bad result.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 8:29 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

Speaking as an application developer, it's much better than doing nothing at all. Right now there's no standard set of packages at all; LSB is a joke that no one cares about.

And since there's no base ABI, everybody just does whatever suits them. Just last week we found out that Docker images for Ubuntu don't have libcrypto installed, for example.

Maybe this container-lite initiative will motivate distros to create a set of basic runtimes that can actually be downloaded and used directly.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 9:48 UTC (Sun) by dlang (guest, #313) [Link] (5 responses)

So if you define these baselines to include every package, nobody is going to install them (disk space).

If you don't define them to include every package, you will run into the one that you are missing.

These baselines are no easier to standardize than the LSB or distros.

In fact, they are worse than distros, because there aren't any dependencies available (other than on "RHEL10 baseline").

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 9:56 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

> So if you define these baselines to include every package, nobody is going to install them (disk space).
Not every package, but at least _something_. And dedup partially solves the space problem.

> These baselines are no easier to standardize than the LSB or distros.
There are no standards right now. None. And LSB has shown us that when a group of vendors dictate their vision of a standard, distros simply ignore them.

An ecosystem of runtimes might help at least a de-facto standard evolve. I'm pretty sure that it can be done for the server side (and let's face it, that's the main area of non-Android Linux use right now), but I'm less sure about the desktop.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 10:43 UTC (Sun) by dlang (guest, #313) [Link] (3 responses)

Dedup only helps if the packages are exactly the same.

>> These baselines are no easier to standardize than the LSB or distros.

> There are no standards right now. None. And LSB has shown us that when a group of vendors dictate their vision of a standard, distros simply ignore them.

So who is going to define the standards for the new approach? Every distro will just declare each of their releases to be a standard (and stop supporting them as quickly as they do today).

That gains nothing over the current status quo, except giving legitimacy to people who don't want to upgrade.

If the "standards" are going to be defined by anyone else, then they are going to be doing the same work that the distros are doing today; they will be just another distro group, with all the drawbacks of having to back-port security fixes (or failing to do so) that that implies.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 10:53 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> So who is going to define the standards for the new approach? Every distro will just declare each of their releases to be a standard (and stop supporting them as quickly as they do today).
I'm hoping that the "market" is going to dictate the standard. Application developers will prefer to use a runtime that is well-supported and easy to maintain. And perhaps in time it will become _the_ runtime.

> If the "standards" are going to be defined by anyone else, then they are going to be doing the same work that the distros are doing today; they will be just another distro group, with all the drawbacks of having to back-port security fixes (or failing to do so) that that implies.
Exactly. However, as an app developer I won't have to play against the network effects of distributions -- my software will run on ALL distributions supporting the /usr-based packaging.

I'm hoping that somebody like Debian or RHEL/CentOS can pick up the job of making runtimes. It shouldn't be hard, after all.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 11:18 UTC (Sun) by dlang (guest, #313) [Link] (1 responses)

> I'm hoping that the "market" is going to dictate the standard. Application developers will prefer to use a runtime that is well-supported and easy to maintain. And perhaps in time it will become _the_ runtime.

Thanks for the laugh.

There isn't going to be "the runtime" any more than there will be "the distro", for the same reason: different people want different things and have different tolerances for risky new features.

> I'm hoping that somebody like Debian or RHEL/CentOS can pick up the job of making runtimes. It shouldn't be hard, after all.

They already do; it's called their distro releases.

> as an app developer I won't have to play against the network effects of distributions -- my software will run on ALL distributions supporting the /usr-based packaging.

No, your users may just have to download a few tens of GB of base packaging to run it instead.

Plus, if the baseline you pick has security problems, your users will blame you for them (because if you had picked a different base that didn't have those problems, their system wouldn't have been hit by X).

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 11:52 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

> There isn't going to be "the runtime" any more than there will be "the distro", for the same reason: different people want different things and have different tolerances for risky new features.
No. Most developers want pretty much the same basic feature set, with small custom additions.

> They already do; it's called their distro releases.
No they don't. The distro model is exclusionary -- I can't just ship RHEL along with my software package (well, I can, but it's impractical). So either I have to FORCE my clients to use a specific version of RHEL, or I have to test my package on lots of different distros.

That's the crux of the problem: distros are wildly incompatible, and there's no real hope that they'll merge any time soon.

> No, your users may just have to download a few tens of GB of base packaging to run it instead.
Bullshit. A minimal Docker image for Ubuntu is less than 100 MB, and it contains lots of software. There's no reason at all for the basic system to be more than 100 MB in size.

> Plus, if the baseline you pick has security problems, your users will blame you for them (because if you had picked a different base that didn't have those problems, their system wouldn't have been hit by X).
Who cares. All existing software, except for high-profile stuff like browsers, is insecure as hell. Get over it.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 3:01 UTC (Fri) by paulj (subscriber, #341) [Link] (14 responses)

Right, so "Specify the precise distro to use, or ship your own runtime with your app".

The de-duping thing seems tenuous to me, for the "ship your own runtime" case. What are the chances that two different application vendors happen to pick the exact same combination of compiler toolchain, compile flags and libraries necessary to give identical binaries?

Having a system that makes it possible to run different applications, built against different "distros" (or runtimes), at the same time, with the same root/config (/etc, /var) and /home, seems good. Though I am sceptical that:

a) There won't be configuration compatibility issues with different apps using slightly different versions of a dependency that reads some config in /home (ditto for /etc).

This kind of thing used to not be an issue, back when it was more common to share /home across different computers thanks to NFS, and application developers would get more complaints if they broke multi-version access. However $HOME sharing is rare these days, and I got sick a long time ago of, e.g., GNOME not dealing well with different versions accessing the same $HOME (non-concurrently!).

b) Sharing /var across different runtimes similarly is likely fraught with multi-version incompatibility issues.

It's ironic that shared (even non-concurrent) $HOME support got broken / neglected in Linux, and now it seems we need it to help solve the runtime-ABI proliferation problem of Linux. :)

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 3:03 UTC (Fri) by rahulsundaram (subscriber, #21946) [Link] (2 responses)

It appears this discussion might be a relevant read:

https://mail.gnome.org/archives/gnome-os-list/2014-Septem...

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 3:06 UTC (Fri) by martin.langhoff (subscriber, #61417) [Link] (1 responses)

If you are really going to lean on the de-dupe, then just ship the whole OS you need for your app and be done with it. Trust the magic pixie dust in the FS to de-dupe it all. Why do we have to burden the world with defining the part of the stack that is "base OS"?

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 3:11 UTC (Fri) by rahulsundaram (subscriber, #21946) [Link]

Not sure if the question is directed at me, but potentially because it is not magic pixie dust: we would want to define a platform while still allowing distributions to change things at other layers if needed.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 1:20 UTC (Mon) by Arker (guest, #14205) [Link] (10 responses)

"However $HOME sharing is rare these days, and I got sick a long time ago of, e.g., GNOME not dealing well with different versions accessing the same $HOME (non-concurrently!)."

That used to bother me too. I deleted GNOME. As GNOME is the source of the breakage (in this and so many other situations), that is the only sensible response. The herd instinct to embrace insanity instead, in order to keep GNOME (a process that is actually ongoing, and accelerating), is what I do not understand. Why are people so insistent on throwing away things that work and replacing them with things that do not?

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 10:14 UTC (Mon) by mchapman (subscriber, #66589) [Link] (9 responses)

> The herd instinct to embrace insanity instead, in order to keep GNOME (a process that is actually ongoing, and accelerating) is what I do not understand. Why are people so insistent on throwing away things that work and replacing them with things that do not?

What's there to understand? Clearly these people you're talking about are having a different experience with the software than you are. Why would you think your particular experience with it is canonical? Is it so hard to believe other people's experiences are different?

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 12:53 UTC (Mon) by Arker (guest, #14205) [Link] (8 responses)

That makes no sense. The objection to GNOME is its broken, insane code. Are you seriously proposing that other people using GNOME are not using the same broken, insane code? If they were not, they would not be using GNOME. You make no sense.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 13:18 UTC (Mon) by JGR (subscriber, #93631) [Link] (5 responses)

Other people using GNOME are mostly users. They're not going to look at the code itself, or really know or care if it could be described as insane and/or broken, as long as it meets their (often rather straightforward) use case.

Or to put it another way, not everyone necessarily shares your view of what is "broken" and what is not.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 14:07 UTC (Mon) by Arker (guest, #14205) [Link] (4 responses)

You're feeling around in the dark but you're not too far off.

This is a problem that affects the entire market for computers, worldwide. Markets work well when buyers and sellers are informed. Buyers of computer systems, on the other hand, are for the most part as far from well informed as imaginable. A market where the buyers do not understand the products well enough to make informed choices between competitors is a market with problems. And Free Software is part of that larger market.

And that's what we see with GNOME. The example we were discussing above had to do with the $HOME directory. The GNOME devs simply refuse to even try to do it right. Since none of them used shared $HOME directories, they did not see the problem and had no interest in fixing it. And here is where the broken market comes in: there were enough end users who, like the GNOME devs, did not understand how $HOME works and how it is to be used, and who simply did not see why they should care.

And that's how the breakage created by a cluster of ADD spreads out to tear down the ecosystem generally. That's the path the herd is on right now. Anything that your 13-year-old doesn't want to take the time to understand is gone or going. A few more years of this and we will have computing systems setting world records for potential power and actual uselessness simultaneously.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 15:03 UTC (Mon) by pizza (subscriber, #46) [Link] (3 responses)

> Since none of them used shared $HOME directories, they did not see the problem and had no interest in fixing it. And here is where the broken market comes in: there were enough end users who, like the GNOME devs, did not understand how $HOME works and how it is to be used, and who simply did not see why they should care.

I'm not quite sure what your point here is... you're basically blaming GNOME for the fact that its users are uninformed, and further, it's also GNOME's fault because those same uninformed users don't know enough to care about a philosophical argument under the hood about a situation those same uninformed users will never actually encounter?

> And that's how the breakage created by a cluster of ADD spreads out to tear down the ecosystem generally.

Please, lay off on the ad honimem insults.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 16:17 UTC (Mon) by Arker (guest, #14205) [Link] (2 responses)

"I'm not quite sure what your point here is... you're basically blaming GNOME for the fact that its users are uninformed, and further, it's also GNOME's fault because those same uninformed users don't know enough to care about a philosophical argument under the hood about a situation those same uninformed users will never actually encounter?"

I was not assessing blame; I was simply making you aware of the progression of events.

"Please, lay off on the ad honimem insults."

Please, learn to distinguish between ad honimem(sic) insults and statements of fact you find inconvenient or embarrasing.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 17:20 UTC (Mon) by pizza (subscriber, #46) [Link] (1 responses)

> "..breakage created by a cluster of ADD.."

> Please, learn to distinguish between ad honimem(sic) insults and statements of fact you find inconvenient or embarrasing.

Personally, I would be embarrased(sic) if I was the one who considered the above a statement of fact, and petulantly pointed out a spelling error while making one of your own.

But hey, thanks for the chuckle.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 17:34 UTC (Mon) by Arker (guest, #14205) [Link]

Petulance is entirely in your imagination; I guess that's your right, enjoy it.

The pattern of behavior from the GNOME project is indeed a fact; it's not disputable, the tracks are all over the internet, and since it has been the same pattern for over a decade, it certainly seems fair to expect it to continue. If you think you have a legitimate objection to that characterization, please feel free to put it forward concretely. Putting forward baseless personal accusations instead cuts no ice.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 14:12 UTC (Mon) by mchapman (subscriber, #66589) [Link] (1 responses)

> That makes no sense. The objection to GNOME is broken, insane code.

But that wasn't what you claimed used to bother you. You were talking about broken behaviour, not code. Now maybe GNOME's behaviour *is* broken, in that sharing a $HOME between multiple versions doesn't work properly, and it's just that I've never noticed because I don't do that.

So I'm having trouble following your argument. Are you saying people shouldn't be supporting GNOME -- that the only sensible thing to do with it is uninstall it -- because there are *some* use cases that for *some* people don't work properly? That seems really unfair for everybody else.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 14:30 UTC (Mon) by Arker (guest, #14205) [Link]

"You were talking about broken behaviour, not code."

A distinction with no difference. Behaviour is the result of code, and code is experienced as behaviour.

"Now maybe GNOME's behaviour *is* broken, that sharing a $HOME between multiple versions doesn't work properly, and it's just because I don't do that that I've never noticed."

That is correct, but also incomplete. Since GNOME screwed this up, they set a (bad) example that has been followed by others as well, and I am afraid that today so many commonly used programs have emulated the breakage that it is widespread, and this essential core OS feature is now practically defunct.

Of course YMMV, but in my universe the damage done in this single, relatively small domain, done simply by not caring, setting a bad example, and being followed by those who know no better, is orders of magnitude greater than their positive contributions. I am not trying to be mean; I am simply being honest.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 22:19 UTC (Wed) by nix (subscriber, #2304) [Link]

While this scheme seems by and large lovely, I'm a bit concerned that btrfs deduplication may not be quite up to the job. Everything I can see suggests that it is block-based, and based on rather large blocks at that (the fs block size), rather than something like the rsync/bup sliding hashes, which can eliminate duplication at arbitrary boundaries.

Now *if* the majority of the data we're dealing with is either block-aligned at a similarly large block size, or compressed (and thus more or less non-deduplicable anyway unless identical), we might be OK with a block-based deduplicator. I hope this is actually the case, but fear it might not be: certainly many things in ELF files are aligned, but not on anything remotely as large as fs block boundaries!

But maybe we don't care about all our ELF executables being stored repeatedly, as long as stuff that, e.g., hasn't been recompiled between runtime/app images gets deduplicated -- and btrfs deduplication should certainly be able to manage that.
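
To make the concern concrete, here is a minimal sketch (my own illustration, with an assumed 4 KiB granularity and hypothetical paths) that estimates how many aligned blocks two files have in common -- roughly the upper bound on what a purely block-based deduplicator could share:

    import hashlib
    from collections import Counter

    BLOCK = 4096  # assumed dedup granularity; the real block size may differ

    def block_counts(path):
        # Hash every aligned BLOCK-sized chunk of the file.
        counts = Counter()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(BLOCK), b""):
                counts[hashlib.sha256(chunk).digest()] += 1
        return counts

    def shareable_blocks(path_a, path_b):
        # Aligned blocks present in both files (multiset intersection) are
        # the most a block-based deduplicator could merge. A one-byte shift
        # in the data misaligns every following block -- exactly the failure
        # mode that rsync/bup-style rolling hashes avoid.
        return sum((block_counts(path_a) & block_counts(path_b)).values())

    print(shareable_blocks("/bin/app-v1", "/bin/app-v2"))  # hypothetical paths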

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 7:15 UTC (Tue) by ringerc (subscriber, #3071) [Link] (2 responses)

"And hence it's no wonder many people have moved to Mac OS X, which provides a refreshingly stable environment"

Sorry, but...

*doubles over laughing for several minutes*

I don't think there's yet been a Mac OS X release in which libedit has not been broken in one of several exciting ways. They change bundled libraries constantly, and often in non-backward-compatible ways. It's routine for major commercial software (think: Adobe Creative Suite) to break partially or fully a couple of OS X releases after the release of the package, so you just have to buy the new version. Their Cocoa APIs clearly aren't much more stable than their POSIX ones.

Then there's the system-level stuff. Launchd was introduced abruptly, and simply broke all prior code that expected to get started on boot. (Sound familiar?) NetInfo was abruptly replaced by OpenDirectory, and most things that used to be done with NetInfo stopped working, or had to be done in different (and usually undocumented) ways.

I had the pleasure of being a sysadmin who had to manage macs over the OS X 10.3 to 10.6 period, and I tell you, Fedora has nothing on the breakage Apple threw at us every release.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 7:21 UTC (Tue) by ringerc (subscriber, #3071) [Link] (1 responses)

Oh, also, the sudden switch from GCC to LLVM. Switching to bash as the default shell. Changing their CIFS client. Breaking AFP support repeatedly in every release (did you know network search over AFP used to work?).

About the only highly backward compatible platforms out there are the moribund ones (Solaris, AIX, etc); FreeBSD, which makes a fair bit of effort but still breaks things occasionally; and Windows, which suffers greatly because it's so backward compatible.

Part of why OS X is so successful is *because* it breaks stuff, so it's free to change.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 11:24 UTC (Tue) by jwakely (subscriber, #60262) [Link]

Stop it, you're not supposed to respond with facts when someone praises Mac OS X, that's not how it works.

The things that get broken probably needed to break and you should be grateful for it, you worthless non-believer.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 13:47 UTC (Tue) by javispedro (guest, #83660) [Link] (5 responses)

And then applications will want to use, say, Bluetooth, and then you realize library ABIs/APIs are only 10% of the problem.

It extends much further. Simply put, the Cascade of Attention-Deficit Teenagers problem prevents every other OSS project from ever committing to a specification. Gtk+ will change its library API. Or Bluez will change its D-Bus specification and all of your containers become useless (even though no library API changed). Or Gtk+ decides not to break the ABI, but starts rendering things in a slightly different way, and your window manager breaks (e.g. client-side decorations). Etc., etc.

I just don't see how having a new massive abstraction layer is going to help. In fact, I don't even see how a universal abstraction layer is feasible. Efforts like freedesktop.org have made the most progress (look, icons of GNOME applications now appear in KDE menus! tbh this had been unthinkable for me less than 10 years ago). But now they have been corrupted into furthering the agendas of some people with "visions" instead of trying to be a common ground for disparate APIs.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 15:52 UTC (Tue) by raven667 (subscriber, #5198) [Link]

Most of the breakage you describe already exists in the current system; this proposal is about routing around the damage caused by lack of ABI discipline. You are right that the next battle then moves to the IPC APIs, such as D-Bus services like Bluez, and the proposal already touches on that, but getting API discipline on IPC services may be a smaller problem than getting it for the entire set of shared libraries.
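
As a purely illustrative sketch of the kind of interface-versioning discipline being discussed (the service and interface names below are invented; only the Introspectable interface is standard D-Bus, and python-dbus is assumed), an app can probe for the interface revision it was built against while the service keeps the old revision around:

    import dbus

    BUS_NAME = "org.example.Bluetooth"      # hypothetical service
    OBJ_PATH = "/org/example/Bluetooth"     # hypothetical object path

    def get_adapter(bus):
        obj = bus.get_object(BUS_NAME, OBJ_PATH)
        # Ask the object which interfaces it implements (standard D-Bus
        # introspection), then pick the newest revision we know about.
        xml = obj.Introspect(dbus_interface="org.freedesktop.DBus.Introspectable")
        for iface in ("org.example.Bluetooth.Adapter2",   # incompatible revision
                      "org.example.Bluetooth.Adapter1"):  # kept for older apps
            if iface in xml:  # crude substring check, fine for a sketch
                return dbus.Interface(obj, iface)
        raise RuntimeError("no supported Adapter interface")

    adapter = get_adapter(dbus.SystemBus())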

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 22:33 UTC (Tue) by ovitters (guest, #27950) [Link] (3 responses)

> But now they have been corrupted into furthering the agendas of some
> people with "visions" instead of trying to be a common ground of
> disparaging APIs.

Can you give me any specifics?

For instance, GNOME makes use of logind because ConsoleKit is deprecated. We actually still support ConsoleKit, though it's probably pretty buggy. The logind work resulted in a clear specification and allows even more things to be shared across different desktop environments.

We still have stuff like gnome-session. What it does is pretty similar across various desktop environments. It was proposed to make use of the infrastructure provided by systemd user sessions, though that's not fully ready yet. This would then allow various desktop environments to handle their session bits in the same way. AFAIK this is something KDE, GNOME, and Enlightenment appreciate.

Regarding client-side decorations: GNOME is working with other desktop environments as well as window managers. I suggest reading the bug reports that the KWin maintainer linked to; it's not as doom-and-gloom as he makes it out to be. Further, it's fine that he dislikes the idea of CSD, but in his Google+ posts he sometimes goes too far into anti-CSD advocacy based more on feelings than on anything actually happening. That's just on Google+; in Bugzilla and on the mailing lists he's awesome (I don't recall the KWin maintainer's name off the top of my head -- I assume everyone knows who I am talking about).

The D-Bus APIs changing is a real problem. I'd suggest not calling people names. Lennart wrote up how to properly handle versioning in D-Bus interfaces. But yeah, just be an ass and say shit like "Cascade of Attention Deficit Teenagers problem", because that's how you'll get a friendly response from e.g. me (nope!).

> But now they have been corrupted

Get lost with calling me corrupted.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 14:24 UTC (Wed) by javispedro (guest, #83660) [Link] (2 responses)

> Can you give me any specifics?

When was the last XDG specification published? Most times I see the freedesktop.org domain referenced these days, it is because systemd is hosted there, for some reason.

In the meantime, the existing standards are being "deprecated" or ignored; e.g. notification icons are at this point not supported by 2 out of the 3 largest X desktops, and there's still no replacement, even though these two desktops have their own version of "notification icons".

But I do not really want to argue about FDO's mission. I just used how quickly its standards are becoming useless to show that library APIs are only a small part of the problem. The bigger problem is the lack of commitment to standards (I'm not saying I'm not part of this problem, too). Ideally, a good reason should be provided when dropping support for an existing and approved XDG or LSB standard -- not "it's just that we have a different vision for Linux". Without that, a generic abstraction layer is just infeasible.

> Regarding Client Side Decorations

I do not even dislike CSDs. But it's just yet another way in which _existing_ cross-desktop compatibility is being thrown down the drain for no good reason. I do not know about KDE, but there are plenty of other DEs out there, some of which don't use decorations at all.

And this compatibility change would not be fixed by the proposal discussed in the article, either.

> say shit like "Cascade of Attention Deficit Teenagers problem"

That is not my quote. E.g.
http://blogs.gnome.org/tthurman/2009/07/02/cascade-of-att...

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 17:07 UTC (Wed) by jwarnica (subscriber, #27492) [Link] (1 responses)

It's not Thurman's either; one needs to go to JWZ for the source:

http://www.jwz.org/doc/cadt.html

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 22:46 UTC (Wed) by nix (subscriber, #2304) [Link]

... and attacking JWZ for using hyperbolic terminology is like attacking water for being wet. To JWZ, ranting is performance art.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 11:22 UTC (Tue) by cjcoats (guest, #9833) [Link]

Let me give you a nightmare: a large Linux server cluster running the "module" run-time package-management system, for which there are over two dozen different compiler versions, all of them declared incompatible with each other by the Powers That Be, each with its own shared libraries, so that whether a program you've built will run or not depends upon whether the current state of "module load" is correct -- and the odds are that it is not!

I've got to do my work on that &#()^%!! thing.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:34 UTC (Mon) by raven667 (subscriber, #5198) [Link] (14 responses)

> btrfs-only seems like a step back. The various filesystems are better in some workloads than others. I guess you could have everything in btrfs except the data somehow. But then how would systemd automatically know that these things belong together? Hrm...

I would guess that naming LVM volumes using the same scheme would extend this to any filesystem; supporting ext4 and xfs on LVM alongside btrfs in the same management framework would cover most users, maybe with some degraded functionality and warnings as you moved into harder-to-support or less well tested configurations.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 23:45 UTC (Mon) by mezcalero (subscriber, #45103) [Link] (13 responses)

Nope. We want the file-level de-dup, we want btrfs send/recv, and LVM can offer neither of those.

The file-level de-dup is actually key here, because if you dump multiple runtimes and the OS into the fs, then you need to make sure you don't pay the price for the duplication. And the file-level de-dup is not only important to ensure that you can share the data on disk, but also later on in RAM.

So no, LVM/DM absolutely doesn't provide us with anything we need here. Sorry. It's technology from the last century, not a way to win the future. It conceptually cannot deliver what we need here.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 0:23 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link] (8 responses)

Would this tie the implementation to only btrfs, or would it be feasible to move to something else in the future?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 1:24 UTC (Tue) by dlang (guest, #313) [Link] (6 responses)

Once again btrfs is declared the one and only future filesystem for Linux. It would be nice to have it finish stabilizing first.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 1:33 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link] (1 responses)

There is no clear switch-over point from unstable to stable for filesystems. Some features in Btrfs are more stable than others, and it should be fine to use a subset of them directly (e.g. OpenSUSE is switching over to Btrfs while disabling some of the more advanced features). So that by itself doesn't concern me as much as the potential to move to something else in the future.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 2:33 UTC (Tue) by rodgerd (guest, #58896) [Link]

It's in RHEL (tech preview), and in SLES and (I think) OEL as a production filesystem. Red Hat expects it to become a production item within the lifecycle of RHEL 7.

That's hardly the land of wild and crazy any more.

(Anecdata-wise, I found it rubbish under 3.4 - 3.10, and am running data I care about on 3.12 in the RAID1 config. It's been very reliable and coped with a drive failure/rebuild, growing arrays, and so on and so forth.)

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 20:44 UTC (Tue) by mezcalero (subscriber, #45103) [Link] (3 responses)

Well, how do you stabilize it? By getting it into the hands of users. File systems don't stabilize themselves unless people actually start making use of them.

I think using btrfs in this scheme is a great way to speed up the stabilization process, because we will get it into people's hands that way, but we don't actually really care about the data placed in the btrfs volumes (at least initially): it is exclusively vendor-supplied, verified, read-only data. If the file system goes bad, we just download the stuff again, and nothing is lost. It's a nice slow adoption path we can take here...

Actually, we can easily start adopting this by just pushing the runtimes/OS images/apps/frameworks into a loopback btrfs pool somewhere in /var. This file system would only be mounted into the executed containers and apps, and would not even appear in the mount table of the host...
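
For the record, such a file-backed pool is only a few commands; a rough sketch (the paths and the subvolume name are made up, and it needs root):

    import subprocess

    POOL = "/var/lib/images.btrfs"   # hypothetical backing file
    MNT = "/run/images"              # hypothetical private mount point

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("truncate", "-s", "10G", POOL)      # sparse file, grows on demand
    run("mkfs.btrfs", POOL)                 # format the file as btrfs
    run("mkdir", "-p", MNT)
    run("mount", "-o", "loop", POOL, MNT)   # loop-mount the pool
    # One read-only subvolume per runtime/OS/app image would live in here,
    # visible to containers and apps but not in the host's main mount table.
    run("btrfs", "subvolume", "create", MNT + "/runtime-org.gnome.GNOME3")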

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 10:09 UTC (Wed) by dgm (subscriber, #49227) [Link] (2 responses)

No Lennart. Users, the kind that cannot diagnose and fix a kernel bug, do not help stabilize anything. They can only suffer, although some will not do it silently.

Put it in the hands of developers. Or volunteers. But please! Let users alone.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 18:49 UTC (Wed) by ermo (subscriber, #86690) [Link]

First of all, LWN is almost by definition not a forum where end users congregate. Second, I assume Lennart is talking about users in the "early-adopter" category, of which developers are implicitly a part, and not necessarily someone's tech-averse grandmother?

I also happen to think that Lennart is correct in taking the longer view that btrfs needs to be included gradually in the ecosystem if it is ever to become a mature, trusted, default Linux FS.

There will be bumps in the road, sure; that's a given. But Lennart's point that he wants to ease the pain by starting off with storing non-essential data (in the easily replaceable sense) in btrfs while this process is ongoing is, IMHO, both sound and valid.

Others may see it differently, of course.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 20:06 UTC (Fri) by HenrikH (subscriber, #31152) [Link]

I don't think that Lennart means that the end users should fix the problems. I assume that he, like everyone else who does large-scale distribution, understands that the real QA does not start until you get end users running your software.

You can perform all the QA you want internally, and yet some random user with his random setup and random hardware will find tons of bugs on the first day of use.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 20:40 UTC (Tue) by mezcalero (subscriber, #45103) [Link]

Well, the concepts we need all exist on ZFS too: dedup, snapshots, send/recv. No, btrfs didn't invent them. But btrfs is the only fs in the kernel that can do this. And if it isn't totally stable, it is still close enough, at least for the feature set we need.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 2:13 UTC (Tue) by raven667 (subscriber, #5198) [Link] (2 responses)

I understand that btrfs has all the features you want to make this work efficiently, but I think there will be increased uptake if people have the flexibility to run the filesystem they want and pay the increased overhead for their decision. I suppose some of those alternate-filesystem choosers will complain about the overhead of their choice and claim the whole scheme is stupid, but they are probably insincere in any case.

I am also interested in how this plays out with NFS-based NAS devices; it seems a lot like VDI, where you have a set of very hot gold-master images. Mixed with something like Docker, you could have a whole data center humming along with a very high level of deduplication and standardization.

If this makes any sort of sense then someone will try to implement it for sure; maybe everyone will end up with btrfs in the end, but the path there might go through stages of using block-level copy-on-write, and failing, before they are convinced.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 11:53 UTC (Tue) by Wol (subscriber, #4433) [Link] (1 responses)

Eggsackerly. I was looking at btrfs because I'm changing my disk setup, but I decided to stay with ext4 (over raid) because I couldn't understand btrfs.

And if people CAN run this stuff over ext4, or xfs, or reiser (does anyone still use it? :-), then maybe people will also be motivated to add these features to those file systems. Succeed or fail, it's all within the Linus philosophy of "show me you can do it, show me it's a good idea". That's the way you get new stuff into the Linux kernel, and that should be the way you get stuff into Linux distros.

And succeed or fail, it's good for the developers to have a go :-)

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 22:59 UTC (Tue) by rodgerd (guest, #58896) [Link]

btrfs is a lot easier to understand if you've run ZFS. It's a pretty radically different model.

Poettering: Revisiting how we put together Linux systems

Posted Sep 9, 2014 11:23 UTC (Tue) by alexl (subscriber, #19068) [Link]

> And the file-level de-dup is not only important to ensure that you can
> share the data on disk, but also later on in RAM.

This is unfortunately not true. The files on each btrfs subvolume have a per-subvolume st_dev (a different minor number), and the page cache is per-device. So block sharing between btrfs subvolumes is strictly an on-disk thing; they will be cached separately in RAM.

I know this because I wrote the btrfs Docker backend hoping to get this feature (among others), and it didn't work.
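
This is easy to see for yourself: stat the "same" deduplicated file in two subvolumes and compare the (st_dev, st_ino) pair that the page cache keys on. A quick sketch, with hypothetical paths:

    import os

    # Hypothetical paths: the same deduplicated file in two subvolumes.
    a = os.stat("/var/lib/images/runtime-a/usr/lib64/libc.so.6")
    b = os.stat("/var/lib/images/runtime-b/usr/lib64/libc.so.6")

    # A different (st_dev, st_ino) pair means the kernel caches the pages
    # twice, even if btrfs shares the underlying blocks on disk.
    print(hex(a.st_dev), a.st_ino)
    print(hex(b.st_dev), b.st_ino)
    print("shared cache entry?", (a.st_dev, a.st_ino) == (b.st_dev, b.st_ino))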

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 18:06 UTC (Mon) by kreijack (guest, #43513) [Link] (3 responses)

> I can't help it (and therefore don't like it) - it sounds kind of like
> statically linking all binaries while de-duplicating their embedded
> object files by means of btrfs-only features. :/

In the Lennart post, I don't see any btrfs-only feature, even if he seems to say the opposite.

A snapshot is a "photo" of a subvolume (or of another snapshot). After a snapshot, the subvolume and the snapshot are two independent trees, but btrfs transparently keeps the common data shared until the trees diverge. Send and receive are efficient because they are able to skip the common data when computing the diff.

But for this to work, the subvolume and the snapshot have to be in a parent-child relationship (in the historical sense) and have to live in the same filesystem.

The runtime/usr/app subvolumes, instead, aren't coupled at all. They may even be built on different computers (upstream vs. downstream). In this case btrfs send and receive aren't any different from, or more efficient than, rsync. The fact that these subvolumes share some files is like having a hard link between those files (remember these subvolumes are read-only, so a hard link works).
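
For reference, the parent-child incremental transfer looks roughly like this; a sketch with made-up subvolume names, where "snap-old" already exists on both sides:

    import subprocess

    SRC, DST = "/pool", "/backup"   # hypothetical btrfs mount points

    # Take a new read-only snapshot of the subvolume to be shipped.
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r",
                    SRC + "/runtime", SRC + "/snap-new"], check=True)

    # Incremental send: with -p, only the data that diverged from the shared
    # parent snapshot crosses the pipe. Without a common parent (the
    # upstream-vs-downstream case above), everything gets sent.
    send = subprocess.Popen(["btrfs", "send", "-p", SRC + "/snap-old",
                             SRC + "/snap-new"], stdout=subprocess.PIPE)
    subprocess.run(["btrfs", "receive", DST], stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()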

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 6:56 UTC (Sat) by mab (guest, #314) [Link] (2 responses)

File level de-dup is needed.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 11:32 UTC (Sat) by Wol (subscriber, #4433) [Link] (1 responses)

Is it *needed* (unlikely) or just a "stupid thing to do without"?

If I have loads of disk space, and am happy to tolerate the waste, would it work?

(My home system has multiple users, and photos are shared between user accounts. I make extensive use of hard links to do so. Would something like that work? I know deduping is hard work, but I've got a script that does MD5 hashes, checks for duplicates, and replaces duplicates with hard links -- roughly like the sketch below.)
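
A minimal sketch of that kind of hash-and-hardlink pass (hypothetical: it assumes everything lives on one filesystem, and ignores ownership, permissions, and files changing underneath it):

    import hashlib
    import os
    import sys

    def file_md5(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.digest()

    def dedup(root):
        seen = {}  # digest -> first path seen with that content
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                if os.path.islink(path) or not os.path.isfile(path):
                    continue
                first = seen.setdefault(file_md5(path), path)
                if first != path and not os.path.samefile(first, path):
                    os.remove(path)       # drop the duplicate...
                    os.link(first, path)  # ...and hard-link it to the first copy

    if __name__ == "__main__":
        dedup(sys.argv[1])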

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 16:17 UTC (Mon) by nix (subscriber, #2304) [Link]

You would waste lots of memory as well, since the binaries would all have distinct inode numbers, and distinct pages with identical content in memory. (I suppose ksmd would eventually fix that up, if it was running...)

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 12:28 UTC (Mon) by thexf (guest, #91471) [Link] (20 responses)

All that LP is doing is bullshit!

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 12:31 UTC (Mon) by corbet (editor, #1) [Link] (19 responses)

Please, can we do without this kind of stuff? If you have specific technical objections then by all means express them. But this kind of comment helps nobody.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 14:23 UTC (Mon) by bersl2 (guest, #34928) [Link] (18 responses)

> Please, can we do without this kind of stuff?

I'm not sure we can, unfortunately, not until the whole systemd thing gets resolved. It took me a lot of self-control not to hastily post something snarky about systemd seemingly capturing udev and other core parts of the system. Yes, it in fact helps nobody, but it does help us cope (badly) with how we see a future we do not want playing out.

Factual or not, it feels like systemd is threatening future compatibility for all distros which want to remain with modern Linux as a Unix-like OS, for gains which do not make any sense for a significant number of users. And it is a paralyzing feeling.

It's really hard to compartmentalize the different things a person or group does when one feels like the other has a sword or gun pointed in one's face. It does not matter whether the sword is rubber, or the gun is a fake, or even if there's nothing at all and it's all in our imaginations: we think it's real, so we act as though it's real.

Communication remains poor due to trust issues, I think. I don't think many of us actually trust Poettering, et al. with the parts of the core system being bulldozed and replaced figuratively overnight, compatibility with existing components be damned. Certainly we don't trust their words.

Nor can I personally provide factual backing for this lack of trust. This affects technical issues, but at its core, these are human issues.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 15:37 UTC (Mon) by ovitters (guest, #27950) [Link] (4 responses)

> This affects technical issues, but at its core, these are human issues.

Aside from pretending you're speaking on behalf of multiple people, your post is quite OK. However, the case you argue for is not.

Your post is totally fine. I think you're wrong, but I see nothing wrong in your post. It expresses how you feel things are going.

However, you're arguing that snarky comments are OK. That's an entirely unreasonable stance to take. Such comments provide no value at all and result in emotional responses; their value for this site is zero. The original poster expresses his/her emotions and likely feels a bit better, but that is IMO done at the expense of everyone reading this site. It's just not within reason to tolerate such behaviour.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 19:24 UTC (Mon) by SiB (subscriber, #4048) [Link] (3 responses)

> Aside from pretending you're speaking on behalf of multiple people, ..

He is.

> ... Such comments provides no value at all,

This one brought us to this post from bersl2, which helped me, at least.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 7:15 UTC (Tue) by oldtomas (guest, #72579) [Link]

> [ovitters] Aside from pretending you're speaking on behalf of multiple people,

> [SiB] He is.

Seconded. And Olav: you should know that.

Whenever I see something like this, I think "oh, noes! another Lennart Poettering thread" and turn away in disgust. That's most probably why those "multiple people" go unheard.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 9:07 UTC (Tue) by ovitters (guest, #27950) [Link] (1 responses)

> This one brought us to this post from bersl2

No, it did not. There was a warning from someone at LWN that such comments are not useful. A hit-and-run comment is terrible. That maybe, perhaps, there can be value downstream of such a comment: yeah, whatever. Let's get concrete: that comment is utterly useless, negative, and leads nowhere.

You value the post from bersl2; that is what should be on LWN -- not the initial comment, just because a totally crappy comment, followed by a warning from the LWN site owner, eventually led to a better one.

You're advocating terrible commenting. Start your own site.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 14:54 UTC (Wed) by ms-tg (subscriber, #89231) [Link]

+1

Very well said.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 19:18 UTC (Mon) by rgmoore (✭ supporter ✭, #75) [Link] (12 responses)

I think there's a big difference between what you did and what the grandfather post that corbet is complaining about did. Your worries may be more emotional than factual, but in theory somebody could write a reply that would address them. In contrast, the grandfather post is just slinging an insult without further details. There's no hope of any kind of productive discussion coming from it.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 16:34 UTC (Tue) by JoeBuck (subscriber, #2330) [Link] (11 responses)

Agreed, and LWN should stop tolerating this sort of thing. LWN comment threads on anything controversial have become a sewer, which is particularly disturbing because we are paying for this. If the LWN staff don't want to censor, they could let subscribers upvote and downvote comments Slashdot-style and people can then set a score threshold for what they want to see.

Comment quality

Posted Sep 2, 2014 16:42 UTC (Tue) by corbet (editor, #1) [Link] (10 responses)

Voting is one of those things that has been on the list for a while. One of these days.

That said, I'm not sure that the current system is working all that badly? We had one troll; I griped at them and we heard no more. Beyond that, what would you have us censor, were we willing to do so? The comment thread has been way too long and somewhat repetitive at times, but there has also been some good discussion. I don't really feel that we could have improved it by applying a heavy hand.

Comment quality

Posted Sep 2, 2014 18:18 UTC (Tue) by smoogen (subscriber, #97) [Link]

Well, I have found that being able to ignore many posters has been very useful. Sadly, I have to ask for a way to filter out guests versus paid subscribers, as I am finding that the 'guests' are increasingly trolling.

Comment quality

Posted Sep 2, 2014 18:31 UTC (Tue) by andresfreund (subscriber, #69562) [Link]

> That said, I'm not sure that the current system is working all that badly?

Yes, it is. The signal-to-noise ratio in here has made it increasingly pointless to even read the comments. There are so many repetitions of the same inflammatory bullshit that the significant number of very capable people here whom I very much want to read are a) completely drowned out, b) commenting far less, and c) understandably unable to always resist the trolls.

It also makes the 'Unread comments' feature far less useful, because there's always just lots of repetitive stuff in there. While sad, I'd much prefer being able to ignore certain articles' comments so I can read the rest in peace. Sifting through 50 comments on two flame-ridden articles just to find the two other comments is annoying.

Comment quality

Posted Sep 2, 2014 19:45 UTC (Tue) by gmaxwell (guest, #30048) [Link]

Experience elsewhere suggests that comment voting has many negative effects and few good ones. That LWN doesn't do that is one of the reasons I've thought that it's generally higher quality.

Back on topic here: rather than complaining, there is something you can do which provides absolute protection against this stuff: don't run it. E.g. Gentoo runs fine without you using the latest trendy mac/smartphone architectural imports.

If you're like me, there are alternatives that better meet your principles and work styles -- and perhaps they're only less attractive because they don't get the enormous resource investment that things like Fedora do, but there is only one way to fix that...

Comment quality

Posted Sep 2, 2014 19:53 UTC (Tue) by tjc (guest, #137) [Link]

> That said, I'm not sure that the current system is working all that badly?

The discussion here is significantly better than the "discussion" unfolding at That Other Site around the same topic, so I'm an advocate for leaving things as they are.

Comment quality

Posted Sep 3, 2014 0:49 UTC (Wed) by rgmoore (✭ supporter ✭, #75) [Link]

I think the current system is working OK, except for a handful of topics that bring out the worst in people. Unfortunately, "anything proposed by Lennart Poettering" seems to be on that list. I certainly wouldn't mind seeing topics that are likely to spark big arguments be set to subscribers only, since guest posters do seem to be more prone to angry, unproductive arguments.

Comment quality

Posted Sep 3, 2014 2:19 UTC (Wed) by leoc (guest, #39773) [Link] (1 responses)

I'd rather have a setting where I can just hide all the comments by guest accounts than have Slashdot-style comment voting. If people want to troll, make them contribute something to running LWN.

Comment quality

Posted Sep 3, 2014 9:04 UTC (Wed) by cladisch (✭ supporter ✭, #50193) [Link]

I agree that if anything is to be done, it should be the ability to filter guests' comments instead of voting.

Voting might be able to remove the few bad comments, but it definitely would change how (and how much) people write their comments for the intended audience, and I cannot see this as an improvement.

Comment quality

Posted Sep 3, 2014 4:46 UTC (Wed) by speedster1 (guest, #8143) [Link] (1 responses)

Are we getting a bit nostalgic perhaps, remembering all the wonderful articles over the years and forgetting the minority of rude and flame-provoking comments, most of which appear on hot-button issues of the day?

I still find the signal to noise ratio much higher here than almost anywhere else, and am pleased that my subscription is helping support this community with such excellent leadership.

Comment quality

Posted Sep 3, 2014 17:44 UTC (Wed) by Trelane (subscriber, #56877) [Link]

Right. That's what I'd like to preserve. It'd be sad to see LWN slide towards commentary irrelevance.

Comment quality - voting vs censoring vs social pressure

Posted Sep 5, 2014 7:39 UTC (Fri) by karath (subscriber, #19025) [Link]

This is a complex topic. As a generalisation, I think that voting on comments is a bad thing that, if anything, increases the level of trolling.

Censoring is both a very emotive word and an extremely complex topic. If a site publisher has a clear policy that abusive and/or spam postings will be removed then all users of the site have to accept that policy or go elsewhere. I believe that removing posts as part of the process of maintaining that policy is _not_ censorship. And of course, users, particularly paying users, are free to lobby for a change in policy.

However, I think the best policy of all is that the editorial team publicly call out the postings that they consider abusive. As has happened twice in the comments to this article. It makes it clear to all where the borders of taste are and generally most people are willing to go along with this kind of social pressure. Serial offenders may eventually have to have their posting privileges curtailed.

Like many suggestions, mine have the downside of requiring more effort from the editorial team, effort that would be better put towards identifying high-quality news and continuing to write high-quality articles.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 12:45 UTC (Mon) by roc (subscriber, #30627) [Link] (37 responses)

It's not clear how the scope of a "runtime" will be determined.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 12:56 UTC (Mon) by AlexHudson (guest, #41828) [Link]

Does it need to be? Isn't it effectively self-defining?

You can readily imagine that a popular Linux distro, dividing things up into reasonable run-times, will create the de facto notion. The run-time is basically just a label and a promise to maintain stability; what technically goes into it doesn't matter so much - Gtk plus OCaml could be an entirely valid runtime, and it will stand or fall on use and popularity.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 15:43 UTC (Mon) by mezcalero (subscriber, #45103) [Link] (35 responses)

My assumption is that in the end there will be a GNOME runtime, and a KDE runtime, and maybe a couple more, but that should really cover it.

The runtimes include everything you need to develop an app, all the way from glibc to gtk, glib, gstreamer, the crypto libs and everything else. An app picks one of these runtimes, and gets the full API they provide. This is not unlike how you'd do app development for Android, where you also pick one Android version you want to develop for and then get all its APIs. Of course we are Linux and we are pluralistic, hence the need for multiple parallel runtimes.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 18:15 UTC (Mon) by clopez (guest, #66009) [Link] (25 responses)

How are security and upgrades of the runtimes going to be handled?

1) Say that I have App1 and App2 that depend on the gnome-3.6 runtime. Then a new version of App2 goes out that requires the gnome-3.8 runtime. The user installs this new version of App2.

Will App1 continue to use the gnome-3.6 runtime, or will its runtime be forcibly upgraded to gnome-3.8 at the risk of breaking App1?

2) Who will take care of security upgrades on the runtimes?

Say that the developer of App1 doesn't care to upgrade his application to a newer gnome runtime. Who will fix the security bugs in the gnome-3.6 runtime? Do you really expect developers to have the patience and determination to update their runtimes each time a security bulletin is issued (probably weekly)?

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 22:41 UTC (Mon) by droundy (subscriber, #4559) [Link] (1 responses)

My understanding (from reading the article) is that each app will continue to use the same runtime until it is upgraded. This means that App1 will not be forcibly upgraded, but also means that each app developer must make new releases of their app each time their runtime has a security update, or users will perhaps be using an insecure system.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 23:26 UTC (Mon) by droundy (subscriber, #4559) [Link]

I take back what I said. I now see that *runtimes* can (and hopefully will) provide security updates, and apps won't need to bother with this. So the runtime is filling the role that the distros currently play of providing security updates for apps.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 23:54 UTC (Mon) by mezcalero (subscriber, #45103) [Link] (22 responses)

Again, you can have multiple runtimes installed at the same time, in multiple versions. They're just sub-volumes, and you can have as many of them as you like.

So if your App2 requires the gnome-3.8 runtime, and App1 wants gnome-3.6, that's totally fine: you will get both runtimes installed.

The guys who put the runtime together are responsible for the security. It's the same as for distros: the guys who put together the distros are responsible for the security updates. You can consider a runtime little more than a fixed set of packages, taken out of a distro, stabilized, with long-term security fixes, and with a strict focus on being a somewhat consistent set of APIs that is interesting for app developers to hack against. And then you can have multiple runtimes from different vendors, and you can have multiple versions of the same runtime around.

If you are an app developer, and want to write your app for GNOME, then you pick a specific major version of GNOME you want to focus on, let's say GNOME_3_38. Then you write your code for that and release it. It's then GNOME's job to do security fixes, and push out minimally modified updates to GNOME_3_38. Then one day, you actually invest some time, want to make use of newer GNOME features, so you rebase your app onto GNOME_3_40. This is a completely new runtime, which means it will be downloaded to the clients and also be available. Again, this runtime will need fixes over time, for CVE stuff. But the key here is that you can have many of the runtimes installed in parallel. While they stay mostly immutable after their release by the runtime vendor, they do receive minimal updates to cover for CVEs and stuff.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 1:09 UTC (Tue) by clopez (guest, #66009) [Link] (5 responses)

> The guys who put the runtime together are responsible for the security. It's the same as for distros: the guys who put together the distros are responsible for the security updates. You can consider a runtime little more than a fixed set of packages, taken out of a distro, stabilized, with long-term security fixes, and with a strict focus on being a somewhat consistent set of APIs that is interesting for app developers to hack against. And then you can have multiple runtimes from different vendors, and you can have multiple versions of the same runtime around.

That sounds good in theory. But in practice what is going to happen is that most runtimes are not going to have an acceptable level of security support.

Also, with this new setup, updating becomes much more complicated: instead of upgrading one runtime (your system), you have to upgrade dozens of runtimes (assuming the runtime provider cared to release an update).

Just imagine the pain of patching all your runtimes after a bug like Heartbleed...

To put some examples:

I install an application that uses the Fedora 18 runtime. For how long am I going to have security upgrades for the Fedora 18 runtime? What happens after that, if the application wasn't updated for a newer Fedora runtime? Am I on my own?

Even worse, say that a developer publishes an application using a custom Gentoo runtime. Do you trust the developer to provide security updates for that custom runtime? Really?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 4:54 UTC (Tue) by NightMonkey (subscriber, #23051) [Link] (4 responses)

Why would a Gentoo runtime be any worse or better than the others? Why slag Gentoo specifically here?

Like any software, you will have to carefully choose where you get it from. The straw man of some wild and crazy and irresponsible Gentoo-using compiling-even-on-Sunday developer being the ONLY ONE who supports the software that ONLY YOU AND A MILLION OTHERS need is preposterous.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 6:14 UTC (Tue) by dlang (guest, #313) [Link] (1 responses)

Actually, the number of people working on some of the low-level pieces is far smaller than you would like to think.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 7:35 UTC (Tue) by NightMonkey (subscriber, #23051) [Link]

Great point. Heartbleed and OpenSSL certainly unmasked that, sadly.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 10:32 UTC (Tue) by clopez (guest, #66009) [Link] (1 responses)

> Why would a Gentoo runtime be any worse or better than the others? Why slag Gentoo specifically here?

My point here is that a developer who releases an app using an Ubuntu or Debian runtime can rely on other developers (or even the distribution) upgrading the runtime. He doesn't have to be the one who applies security upgrades to the runtime; he can "outsource" that job to the distribution or to other developers.

However, for a developer using a Gentoo runtime, outsourcing that job is pretty much impossible. This is because Gentoo is a rolling release (package version numbers change constantly) and because each package can have very customized compilation flags or patches.

Everybody using the "Ubuntu X" runtime shares the same runtime, so outsourcing (or delegating) security upgrades to others becomes easier. However, every Gentoo runtime is different; no one is going to share a Gentoo runtime. So the responsibility for security upgrades of a Gentoo runtime falls solely on the developer of the application using that runtime.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 14:34 UTC (Tue) by Wol (subscriber, #4433) [Link]

But, as a gentoo user, I don't think gentoo fits this setup very well in some respects, while it fits very well in others ...

I can take a snapshot and then do an "emerge world" - great for keeping my system up-to-date, and makes a great development platform.

But any developer who develops for just the one distro - the one on his own system - is an idiot if he wants others to use it too. For testing purposes you really need to build it on a couple of distros. In my case, I'd build it on the latest SLES (provided it wasn't too long in the tooth).

Then the version that's released for general use is built against some LTS version. Those who want bleeding edge run the rolling release; those who want stable run it against an LTS.

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 1:23 UTC (Tue) by dlang (guest, #313) [Link]

So how many different versions of the GNOME runtime are going to get security patches? Is it going to be only the latest one, as is normal for all other support?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 1:43 UTC (Tue) by torquay (guest, #92428) [Link] (14 responses)

    It's then GNOME's job to do security fixes, and push out minimally modified updates to GNOME_3_38. Then one day, you actually invest some time, want to make use of newer GNOME features, so you rebase your app onto GNOME_3_40.

Except that this won't match up to reality. The current status quo is that as soon as Gnome version N is released, the Gnome kids don't want anything to do with Gnome version N-1, and certainly much less with version N-2 (i.e. Gnome versions < N essentially become abandonware). I suspect a similar situation occurs in the KDE camp.

So who maintains Gnome N-1 and KDE N-1? A given distro, to some extent, but then most distros are on a 6-12 month cycle (not including RHEL and Ubuntu LTS). In other words, a given run-time provided by a distro becomes outdated within a year. This is an awfully short time from both the developers' and users' points of view.

Sure, we can still develop against an "obsolete" run-time, but it will get no security fixes, nor fixes for critical bugs. So what exactly is the value of having multiple run-times, if essentially they're still forcing application developers to deal with broken APIs and ABIs in order to run on a security-supported run-time?

The proposal put forward by the systemd folks is certainly interesting, but I can only see it being useful for two run-times: (1) the Ubuntu LTS run-time, and (2) the RHEL/CentOS/Scientific run-time. Essentially it becomes an abstraction layer for the (allegedly) two most practical run-times. Every other run-time is pointless, as it provides no value over a separate distro.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 2:41 UTC (Tue) by raven667 (subscriber, #5198) [Link] (6 responses)

> The proposal put forward by the systemd folks is certainly interesting, but I can only see it being useful for two run-times: (1) the Ubuntu LTS run-time, and (2) the RHEL/CentOS/Scientific run-time. Essentially it becomes an abstraction layer for the (allegedly) two most practical run-times. Every other run-time is pointless, as it provides no value over a separate distro.

Shhh... don't tell everyone that most distros are redundant, they might get restless ... 8-)

If this scheme gets any traction, I think the next question everyone will have is why they have so many different runtimes installed to get the apps they want; they will try to minimize and standardize, asking some hard questions about why exactly the distros are different and the APIs/ABIs are so broken.

The next question is one of branding. People brand themselves as a Debian, Ubuntu, Red Hat, Gentoo, etc. person, like vi vs. emacs, but what is the point of this self-identification if the distros run co-equally on the same kernel and you can run a mix of them?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 4:00 UTC (Tue) by mgb (guest, #3226) [Link]

But the distros are not equal. And that is good. Ubuntu, RHEL, Gentoo, Fedora, Slackware, etc all have different use cases.

Until the TC drank the systemd kool-aid we were very happy with Debian Stable for its breadth, stability, security, and seamless upgrades between releases.

But allowing RH to leverage systemd to churn a distro into oblivion is not a smart move.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 4:50 UTC (Tue) by NightMonkey (subscriber, #23051) [Link] (2 responses)

Really, Gentoo is not like the others you list. It's not a distribution. It is a meta-distribution: a set of recipes for building binaries. It's not fair to either type to lump them together.

Gentoo's primary reason for existence is to avoid the pitfalls that apparently have been plaguing binary distros for a decade. The task of proper dependency management is what Gentoo is just fantastic at accomplishing.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 8:28 UTC (Tue) by Wol (subscriber, #4433) [Link] (1 responses)

:-)

I switched to gentoo, because when I was running the latest stable SuSE, I couldn't (for whatever reason) upgrade to the latest stable lilypond.

Now although I normally don't bother, I have full control if I need it ... (and I gather there are several linux developers who run gentoo, presumably for the same reason ...)

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 4:54 UTC (Wed) by speedster1 (guest, #8143) [Link]

> I gather there are several linux developers who run gentoo

I know Greg KH is a long-time gentoo dev who runs it on servers and build machines; just curious what other kernel devs have mentioned running gentoo?

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 19:40 UTC (Fri) by picca (subscriber, #90087) [Link] (1 responses)

And in the end GNOME maintainers will rely on only one runtime because it is time-consuming... and step by step people will decide to work only with the GNOME runtime, because maintaining a bundle is a pain...

and eventually only one runtime will remain.

Who will install a GNOME runtime not maintained by GNOME?

So it will reduce diversity in the end.

Poettering: Revisiting how we put together Linux systems

Posted Sep 21, 2014 14:39 UTC (Sun) by vonbrand (subscriber, #4458) [Link]

The Gnome developers I know do use different distributions...

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 9:19 UTC (Tue) by ovitters (guest, #27950) [Link] (1 responses)

You're totally correct. Though individual maintainers can still support their old modules if they wish, we highly encourage bugfixes to happen in the latest releases. And IMO it makes sense: 3.14 is the continued development effort based on 3.12. Sometimes a 3.14 version is not much more than 3.12 + new translations, called 3.14 for simplicity's sake.

Regarding this proposal: Lennart mentioned somewhere else that he only expects the bare minimum of fixes to go in. Security fixes and that's it. That's so minimal that I think it is something GNOME could take up.

We still run into all the other points you made, though. Maybe the solution is indeed to rely on LTS distributions: have two runtimes, an LTS-based one and a shorter-supported one.

I do see the usefulness of this though: when GNOME is released, anyone on any distribution can immediately make use of it. That's a question we get fairly often: how to use the latest GNOME in their current distribution. There are a lot of practicalities though; GNOME often relies on newer versions of lower-level stuff (e.g. Wayland).

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 11:05 UTC (Tue) by warmcat (guest, #26416) [Link]

Security assurance becomes a bit fuzzy like that.

Signed distro packages say "something"... maybe not much if some source packages came from sourceforge or somebody's USB stick or whatever, but something. People have rallied around distro security policy as their starting point for their system being clean, rightly or wrongly.

If Gnome put out a sort of layer of stuff I can install and run as a unit, that does sound useful; however, while they might sign the image, the process that sourced and created the contents is kind of opaque and unrelated to how a distro functions.

Obviously it differs, but at heart this is not a million miles from "some kind of filesystem apk", and Android has to expect such packages to be malicious, controlling their system access with an enforced manifest you can inspect before installation, etc. Something like that also seems to be needed here.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 10:34 UTC (Wed) by ebassi (subscriber, #54855) [Link] (3 responses)

I fully expect that if this scheme takes hold then we'll see upstreams coping with it, and coming up with new security teams. plus, I fully expect efforts like a Long Term Support GNOME release to happen. again, this is conditional on this scheme working: right up until now, there has been no need for upstream to cope with long term support or security rollouts, since the distributions insulated upstreams pretty much completely.

as a side note, could you please stop calling the GNOME project members "kids"? it comes off as patronizing and insulting.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 13:31 UTC (Mon) by Arker (guest, #14205) [Link] (2 responses)

I doubt very much that will ever happen. They've spent roughly the last two decades acting like the worst stereotypes of teenagers (e.g. http://www.jwz.org/doc/cadt.html) and at this point it seems a safe bet that the culture they have built up is firmly set in that mode and will never leave it. You may see it as insulting or derogatory but I suspect the people saying it see it as simply an acknowledgement of a disappointing fact.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 13:59 UTC (Mon) by ebassi (subscriber, #54855) [Link] (1 responses)

They've spent roughly the last two decades acting like the worst stereotypes of teenagers

if I had a nickel every time somebody linked Jamie's CADT not ironically, I'd be a millionaire. that page is not the gospel from on high, and if you think nobody, ever, declared "bug bankruptcy" and marked stuff as obsolete or "needs reproduction with a newer version", then you, like Jamie, are kidding yourself. plus, as a user and as a maintainer, I prefer upstreams closing bugs with OBSOLETE/NEEDINFO, as opposed to bugs lying around forever. it's not like Jamie couldn't re-open bugs at the time either: he just decided to be a prick about it (jwz acting like an emo teenager instead of an adult? that literally never happened in the history of ever!)

anyway, you'll note that for the past 20 years we had distributions, and for the past 20 years distributions did shield many upstreams. if things change, and responsibilities shift, processes will change — or projects will simply die. we are actually discussing this in GNOME, and have been doing that since we started generating continuous integration VM images. plus, the people doing the security updates downstream will just have to push their work upstream, like they already do. it's not like the people that comprise security and QA teams in distributions will magically cease to exist.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 14:24 UTC (Mon) by Arker (guest, #14205) [Link]

"if I had a nickel every time somebody linked Jamie's CADT not ironically, I'd be a millionaire."

I do not doubt that one bit. But it sounds like you need to think about why that is true.

"if you think nobody, ever, declared "bug bankruptcy" and marked stuff as obsolete or "needs reproduction with a newer version", then you, like Jamie, are kidding yourself."

And that is just a straw man. Neither I nor Jamie nor anyone else I can think of right off has said otherwise. The issue is not declaring bug bankruptcy; the problem is a long-term, consistent pattern of ignoring bugs, avoiding maintenance and refusing to fix, simply kicking every problem down the road till the next version comes out and 'bug bankruptcy' is invoked.

"it's not like Jamie couldn't re-open bugs at the time either"

There is little more pointless than re-opening or re-filing a bug with the same team that studiously ignored your bug for years already.

And this was really an old pattern already by the time jwz wrote that. Let me repeat that - 12 years ago, when jwz wrote that, this was already an old pattern.

Sure it would be different if this were a new project, or one that had a good reputation. But it's just not. GNOME has been on this path for nearly 15 years; expecting that to suddenly change seems quite irrational.

Poettering: Revisiting how we put together Linux systems

Posted Sep 9, 2014 15:19 UTC (Tue) by jonnor (guest, #76768) [Link]

>> Except that this won't match up to reality. The current status-quo is that as soon as Gnome version N is released, the Gnome kids don't want anything to do with Gnome version N-1, and certainly much less with version N-2 (ie. Gnome versions < N essentially become AbandonWare).

Currently there is not much point in updating older upstream releases, as getting the fixes out to users requires each of the NN distributions to be involved. This process is painful, slow and largely outside upstream's control.
If we had runtimes, which would be distributed directly to end-users by upstream, the potential benefit of fixes would increase significantly. Thus one would at least hope it would happen more frequently.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 21:24 UTC (Mon) by roc (subscriber, #30627) [Link] (5 responses)

Currently distributions select those libraries. If by "GNOME runtime" you really mean "GNOME" and not "Fedora-GNOME" etc, then these decisions will be the same across distributions. That's a large power shift. Who will make those decisions instead? The GNOME project? Hmm.

FTR I quite like the ideas here from the point of view of an app vendor (Mozilla). It's just a rather large change in the way Linux is organized (at the human level, not just the technical level), and I don't think this blog post makes those changes clear enough.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 22:08 UTC (Mon) by sramkrishna (subscriber, #72628) [Link]

Yes, the power shift will be towards the desktop communities. The runtime would be maintained by GNOME, KDE and whoever else. Moreover, they will be on the hook to provide those security fixes, bug fixes and whatnot. It is a pretty awesome responsibility for these projects.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 0:08 UTC (Tue) by mezcalero (subscriber, #45103) [Link] (3 responses)

Well, my examples use GNOME and KDE as the guys who put together runtimes. But anybody can do a runtime. I'd expect that in many cases the distributions provide a number of runtimes. Think of Fedora putting together a FedoraWebRuntime or so, which consists of the usual stuff to do web development with: PHP, Python, Ruby and so on...

But it's not just the desktop projects and the distributions that can put a runtime together. Let's say you are building a phone platform. Great! So you put together your PHONE_PLATFORM_1_0 runtime, and everybody who writes apps for your platform links against that. You do a couple of CVE fixes for that runtime, hence you do minor updates to it. But eventually, you want to introduce new functionality, so you do PHONE_PLATFORM_1_2, and then your apps link against that. But the old apps continue to work, because you can keep them both around easily.

And similarly, I figure the IVI people could agree on a runtime. Or if you are a TV manufacturer, you can do a runtime for your series of TVs, and people can hack against that.

And even certain smaller open source projects could define their own runtime, like, let's say, some media center thing like XBMC or so. They could do a runtime for their major releases that people can write plugins against, and then support a couple of the runtimes in parallel.

And so on, you get the idea.

And note that runtimes are not necessarily something you make up completely out of thin air. If you did, you would make yourself a lot of work, because then you have to do CVE fixes and shit, which most people don't want to be burdened with. So if I were KDE or GNOME I would build my runtime out of existing distro packages. That way, one can benefit from the good work the distros already do in the CVE area. And then I pick a couple of packages that I think should make up my runtime, and there you go. Or you could even base your runtime on some packaged stuff you get from a distro (so that you don't have to maintain glibc yourself), but then add compiled versions of the libraries you actually care about yourself. If you do that, you can benefit from the CVE work of the distro you built on, and only have to do the CVE work for the stuff you added on top.

That all said, I ultimately don't think that on the usual desktops we will really see that many different runtimes. My hope at least is that there will be KDE's and GNOME's and maybe a couple more, but that would be it. And I think this will be self-regulating a bit, since these will be well maintained, will get frequent CVE updates for a long time, and are likely already installed on your system when you first installed it. If apps otoh pick random exotic runtimes, then this would already mean a much bigger download, since you would have to get the runtime first.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 1:52 UTC (Tue) by torquay (guest, #92428) [Link] (1 responses)

    My hope at least is that there will be KDE's and GNOME's and maybe a couple more, but that would be it. And I think this will be self-regulating a bit, since these will be well maintained, will get frequent CVE updates for a long time, and are likely already installed on your system when you first installed it.
Going by the past behaviour of Gnome, this is wishful thinking.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 18:27 UTC (Tue) by daniels (subscriber, #16193) [Link]

Yes, we get the point. No need to reply to every single post mentioning GNOME, just in order to prove to everyone how much better a developer you are than them.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 8:02 UTC (Tue) by imunsie (guest, #68550) [Link]

If I'm understanding correctly this proposal basically requires one runtime per library or group of related libraries.

Apps choose one single runtime.

Any library the app needs that is not in that runtime must be provided by the app.

Therefore, the app is responsible for security updates for all the libraries it uses that were not provided by the runtime.

Fail.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 17:26 UTC (Tue) by rich0 (guest, #55509) [Link] (1 responses)

Only a few runtimes will cover it? Maybe if all you do is draw menus that might be true.

Suppose your software wants to do arbitrary-precision math. Do you think the Gnome devs will include libraries for that?

How about using some ANSI standard for data exchange written in the 70s?

How about OCR?

There are lots of libraries that do things other than draw menus and render the top 3 video/image/audio formats.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 19:12 UTC (Tue) by raven667 (subscriber, #5198) [Link]

It wouldn't surprise me if math and OCR were included in a runtime, although the runtime provider gets to pick which math and OCR libraries they want to include. As for some obscure library, you may end up including a packaged version from a distro in your application, but that's counterbalanced by all the other libraries that apps like browsers can stop bundling.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 17:54 UTC (Tue) by paulj (subscriber, #341) [Link]

If I take an app built on Fedora for some version of GNOME runtime, and I run it on another distro with the same version of GNOME runtime, will it work?

I haven't tried it recently, because my experience says "next to no chance".

Have the distros converged significantly ABI wise (for same abstract runtime) somehow, that I missed? Or do the ABI issues go much deeper?

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 12:58 UTC (Mon) by rsidd (subscriber, #2582) [Link]

Sounds awesome as a research project.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 13:08 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

This is kinda like the NixOS approach, except with binary containers and namespacing instead of rebuilding everything from source with changed paths. A neat idea.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 14:00 UTC (Mon) by gracinet (guest, #89400) [Link] (2 responses)

This sounds really interesting, especially in the way this is designed to work with distributions, not against them. It's hard to bet on the success of the proposal at this stage, with lots of competing schemes around.

If distributions provide an easy way to install these snapshots/building blocks, and if that can be automated by a descriptor in which the downloaded version of the end app prescribes the blocks it's been validated for…
Just hoping it won't become a case of perpetual motion, with the same story repeating in 10-20 years ;-)

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 22:10 UTC (Mon) by sramkrishna (subscriber, #72628) [Link] (1 responses)

That depends on what distributions see their roles as. In some ways, installation of apps would all be from app stores provided by the desktops; since they have their own runtime, it would make sense that they would also be in control of where they get their apps.

It would be harder to differentiate branding between distros, I think.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 19:19 UTC (Wed) by ermo (subscriber, #86690) [Link]

As I see it, the whole point of systemd-the-cabal is unification and streamlining of the basic plumbing associated with running a system based around a Linux kernel.

In theory at least, this could potentially free up currently duplicated resources across 'brands' to work on something more productive (like security updates), possibly to the benefit of the entire ecosystem.

For someone like Ubuntu, this means more resources can be allocated towards the UX, for instance, making for even stronger 'branding'.

For someone like Scientific Linux, this means that more resources can be allocated towards developing and including the software used in academic circles.

For someone like CentOS, this means that more resources can be allocated towards creating DevOps documentation and service bundles that sysadmins can leverage in the deployment of their services and infrastructure.

In other words, this will potentially help create and define fewer but stronger and better-supported brands (or frameworks/platforms/runtimes), which serve the needs of a particular set of users better than they do now, when each brand/distribution has to wear a lot of different hats.

At least that's my take on it in an ideal world. Things probably won't work out quite like that, but one can dream, can't he?

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 14:05 UTC (Mon) by flussence (guest, #85566) [Link] (3 responses)

I can't help but notice how this scheme closely resembles Android packaging superficially, and Android/Windows software installation in all other aspects - especially the complete lack of quality control it enables.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:28 UTC (Mon) by ovitters (guest, #27950) [Link]

Agreed; not every piece of upstream software is perfect. It might work for its developers, but actually do various broken things. During packaging, a lot of problems are found and fixed upstream. This is then shared across every distribution/user of that software.

This likely impacts the quality of the software itself. As in: you might more easily get the runtime, but the packaging might take more work than it did before.

I've read the Google+ comments, and there it was explained that the expectation is that a runtime is actually made using a distribution. Various distributions have as a feature/goal that they can be used to make a distribution; instead of ending up as the basis of another distribution, they'd be used for creating a runtime.

The other expectation is that there will be a limited number of runtimes. Taking this into account, I have fewer reservations about the idea.

Quality control

Posted Sep 1, 2014 16:50 UTC (Mon) by bjartur (guest, #67801) [Link]

especially the complete lack of quality control it enables

Don't force people into dependency on software repositories by breaking indie packages. Lure them by promising quality, simplicity and loyalty (i.e. less crapware and less malware).

Allow people to lock down their systems and install only signed packages, as Lennart et al. propose. Make that option attractive, please do. But don't stop that inexperienced kid from toying with software from a variety of sources unless you really must (e.g. to prevent him from accidentally spamming everyone in his contact list).

Free software is more easily locked down in a custom fashion. Ease of packaging will enable larger software repositories. More people will be able to package their software for the same platform. More vendors will be able to repackage applications targeting the same software stack, or repackage applications from a single, standard format into their own custom non-standard one to suit their niche. More software distributors will be able to recommend their own sub- or superset of packages to more administrators with access to standard tools. More software repositories competing in a larger market means you can hopefully choose the best binary repository, not just the best deb repository. Of course, those of us who like e.g. source packages will continue to do things differently. But we don't need to force Arch and Fedora to spend more time on repackaging common software when they really want to be the first ones to package some bleeding-edge library or application. Nor force Red Hat and users to risk anything when an exact application image has been widely tested already.

Debian didn't kill Ubuntu. It laid the ground for it. Downstream can change things if they wish. With standardization, those changes can hopefully be applied in a more systematic fashion.

Standardization doesn't necessarily kill competition. It shifts it. Especially if oddballs are allowed to break the standards. And unlike, say, an extended libc like glibc, this doesn't need developers to adopt as much of an incompatible interface. It just presents you with the option of skipping the whole repackaging step, if ever you so desire.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 3:46 UTC (Tue) by quotemstr (subscriber, #45331) [Link]

It resembles Windows more closely than you might imagine; it bears a stark resemblance to the side-by-side storage model that was put in place to resolve "DLL hell". Windows suffered from DLL hell because software distribution was decentralized and uncoordinated. Because major Linux distributions have package managers to manage dependencies, I don't think Poettering's proposal buys us much.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 14:13 UTC (Mon) by ken (subscriber, #625) [Link] (27 responses)

I wonder how well this will work with dependencies that are not filesystem based. Any IPC that relies on one server running on the system is still going to be problematic.

What do you do when the base OS runs a version that is incompatible with what the app is expecting? Somehow force two versions to run anyway?

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:00 UTC (Mon) by mezcalero (subscriber, #45103) [Link] (26 responses)

Yes, the design reduces the API compatibility question a lot, but does not remove it. I mean, there needs to be an interface somewhere, if we want components to talk to each other.

In our concept we put a lot of emphasis on the kernel/userspace interfaces, on the bus interfaces and on file system formats. So far the kernel API has probably been the most stable API of all we've ever had on Linux, so we should be good on that. File formats tend to stay pretty compatible too over a long period of time. Currently, though, we suck at keeping bus APIs stable. If we ever want to get to a healthy app ecosystem, this is where we need to improve things.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:21 UTC (Mon) by ken (subscriber, #625) [Link] (25 responses)

Well, I'm not so sure about the file formats, at least not when it comes to configuration data.

I once used to have home directories on NFS and mount them from different computers. But that has not worked well in years, as the "desktop" people apparently can't handle having different versions of the same program.

I won't repeat what came out of my mouth when I tested a new version of a distro with my old home directory: Evolution converted the on-disk storage format to a new one but failed to understand its own config, so nothing worked in the new version, and the conversion obviously totally broke everything for the old version. I don't run Evolution anymore, or try to use the same homedir from different distro versions.

In the 90s I used the same NFS home dir for Solaris and different Linux versions; now doing it with only Linux and different versions of the same distro is just asking for trouble.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 22:52 UTC (Mon) by droundy (subscriber, #4559) [Link]

I am also frustrated with the unwillingness of desktop developers to handle the shared-home-directory configuration. At least browsers refuse to run, but my experience has been that Bad Things happen if I run GNOME on multiple computers using the same account with an NFS-shared home directory.

Interestingly, if this plan were to take off, it might force desktop developers to be more considerate in what they do with your home directory, since at its essence the scheme uses a single home directory for multiple running operating systems.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 0:16 UTC (Tue) by mezcalero (subscriber, #45103) [Link] (23 responses)

Well, file formats is one thing. Concurrent random access to the same files another.

At parsing old file formats our software has generally been pretty good. Doing concurrent access on the same files is a much, much harder problem. And quite frankly, I don't think it is a really worthy goal, and it's something people seldom test. In particular since NFS setups are usually utter crap: you probably find more NFS setups where locking says it works but is actually a NOP than setups where it works.

If it were up to me, I'd change GNOME to try to lock the home directory as soon as you log in, so that you can only have a single GNOME session at a time on the same home directory. It's the only honest thing to do, since we don't test against this kind of parallel use. However, we can't actually do such a singleton lock, since NFS is a pile of crap, and as mentioned, locking more often doesn't work than it does. And you cannot really emulate locking with renames and stuff, because then you get no automatic cleanup of the locks when the GNOME session terminates abnormally.

Or in other words: concurrent graphical sessions on the same $HOME are fucked...
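
For illustration, a minimal sketch of what such a singleton lock could look like, assuming a hypothetical lock file in $HOME. The point is that flock() gives automatic cleanup, since the kernel drops the lock when the last fd closes, however the session died; but over NFS the call may claim success without actually excluding anyone, which is the objection above.

    /* session-lock.c: sketch of a per-$HOME singleton session lock.
     * The ".session-lock" file name is made up for illustration. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Returns the held lock fd, or -1 if another session owns $HOME. */
    static int acquire_session_lock(void)
    {
        const char *home = getenv("HOME");
        char path[4096];

        if (!home)
            return -1;
        snprintf(path, sizeof(path), "%s/.session-lock", home);

        int fd = open(path, O_RDWR | O_CREAT | O_CLOEXEC, 0600);
        if (fd < 0)
            return -1;

        /* Non-blocking: fail right away if a session already runs.
         * The kernel releases the lock when the fd closes, so a
         * crashed session never leaves a stale lock behind. */
        if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
            close(fd);
            return -1;
        }
        return fd;   /* keep open for the session's whole lifetime */
    }

    int main(void)
    {
        if (acquire_session_lock() < 0) {
            fputs("another session owns this $HOME (or locking failed)\n", stderr);
            return 1;
        }
        puts("session lock held");
        pause();     /* stand-in for the actual session */
        return 0;
    }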

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 9:34 UTC (Tue) by nix (subscriber, #2304) [Link] (17 responses)

And quite frankly I don't think it is really worthy goal, and something people seldom test. In particular, since NFS systems are usually utter crap, and you probably find more NFS setups where locking says it works but actually doesn't and is a NOP.
This might be true in some interoperable situations, but it hasn't been true of Linux-only deployments for a very long time indeed (well over a decade, more like a decade and a half).

I've had $HOME on NFS for eighteen years now, with mail delivery and MUAs *both* on different clients (neither on the server) for the vast majority of that time. Number of locking-related problems in all that time? Zero -- I can remember because I too expected locking hell, and was surprised when it didn't happen. NFS locking works perfectly well, or at least well enough for long-term practical use.

Really, the only problem I have with NFS is interfaces like inotify which totally punt on the problem of doing file notification on networked filesystems, and desktop environments that then assume that inotify is sufficient and that they don't need to find a way to ship inotifies on servers over to interested clients, when for people with $HOME on NFS single-machine inotify is utterly useless.

That's the only problem I have observed -- and because people like me exist, you can be reasonably sure that major NFS regressions will get reported fairly promptly, so there won't be many other problems either.

Oh yeah -- there is one other problem: developers saying 'NFS sucks, we don't support it' and proceeding to design their software in the expectation that all the world is their development laptop. Not so.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 20:23 UTC (Tue) by mezcalero (subscriber, #45103) [Link] (16 responses)

NFS locking on Linux is a complete disaster. For example, the Linux NFS client implicitly forwards BSD locks made on an NFS share to the server, where the kernel picks them up as POSIX locks. Hence: if you lock a file with BSD locks as well as POSIX locks on a local file system, that works fine, they don't conflict. If you do the same on NFS, you get a deadlock.

And yeah, this happens because BSD locks are per-fd and hence actually usable. POSIX locks are per-process, which makes them very hard to use (especially as *any* close() by the process of an fd on the file drops the lock implicitly), but then again they support byte-range locking. Hence people end up using both, inter-mixed, quite frequently; maybe not on purpose, but certainly in real life.

So yeah, file locking is awful on Linux anyway, and it's particularly bad on NFS.
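
To make the failure mode concrete, a small sketch follows; the file name is made up, and the NFS behavior assumes the client-side flock() emulation described above.

    /* lockmix.c: mixing flock() and fcntl() locks on one file.  On a
     * local filesystem the two lock types are independent and both
     * calls succeed.  On a Linux NFS mount, where the client forwards
     * the flock() to the server as a whole-file POSIX lock, the
     * F_SETLKW below can block forever against our own flock(): the
     * deadlock in question. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("testfile.lck", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        if (flock(fd, LOCK_EX) < 0) {        /* BSD lock, owned by the fd */
            perror("flock"); return 1;
        }
        puts("flock() acquired");

        struct flock fl = {
            .l_type   = F_WRLCK,             /* POSIX lock, owned by the process */
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 0,                   /* 0 means: the whole file */
        };
        if (fcntl(fd, F_SETLKW, &fl) < 0) {  /* local: instant; NFS: may hang */
            perror("fcntl"); return 1;
        }
        puts("fcntl() lock acquired too; no conflict on this filesystem");
        return 0;
    }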

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 21:36 UTC (Tue) by bfields (subscriber, #19510) [Link] (9 responses)

And yeah, this happens because BSD locks are per-fd and hence actually usable.

For what it's worth, note Jeff Layton's File-private POSIX locks have been merged now.
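
For readers who haven't seen them, a minimal sketch of the new API, assuming Linux 3.15+ and glibc 2.20 for the constants; the file name is made up. The lock belongs to the open file description rather than the process, so an unrelated close() of the same file no longer drops it.

    /* ofdlock.c: open-file-description ("file-private") POSIX locks,
     * taken with fcntl(F_OFD_SETLK/F_OFD_SETLKW/F_OFD_GETLK). */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("testfile.lck", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        struct flock fl = {
            .l_type   = F_WRLCK,
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 0,    /* whole file */
            .l_pid    = 0,    /* must be 0 for OFD locks */
        };
        if (fcntl(fd, F_OFD_SETLKW, &fl) < 0) {
            perror("F_OFD_SETLKW"); return 1;
        }

        /* With classic per-process POSIX locks, this open()+close()
         * pair would silently drop the lock we just took; with OFD
         * locks it is harmless, since the lock lives on fd's open
         * file description. */
        int other = open("testfile.lck", O_RDONLY);
        if (other >= 0)
            close(other);

        puts("OFD lock still held");
        return 0;
    }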

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 19:30 UTC (Wed) by ermo (subscriber, #86690) [Link] (8 responses)

I was going to naïvely ask why POSIX hadn't adopted the superior (in the context of e.g. NFS) BSD-style locking and then you posted that link, which contains this little gem at the very top:

"File-private POSIX locks are an attempt to take elements of both BSD-style and POSIX locks and combine them into a more threading-friendly file locking API."

Sounds like the above is just what the doctor ordered?

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 23:07 UTC (Wed) by nix (subscriber, #2304) [Link] (7 responses)

It doesn't really help. The problem is that local fses have multiple locks which do not conflict with each other, but the NFS protocol has only one way to signal a lock to the remote end. So there's a trilemma: either you use that for all lock types (and suddenly they conflict remotely where they did not locally), or you don't signal one lock type at all (and suddenly you have things not locking at all remotely where they did locally), or you use a protocol extension, which has horrible compatibility problems.

I don't see a way to solve this without a new protocol revision :(

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 14:03 UTC (Thu) by foom (subscriber, #14868) [Link]

It does help, because it removes any reason to use the BSD lock API (at least when running on Linux with a new enough kernel). Before that addition, the POSIX lock programming model was so broken that nobody sane would ever *want* to use it.

Yet, on Linux, local POSIX locks interoperate properly with POSIX locks via NFS, so, if software all switches to using POSIX locks, it'll work properly when used both locally and remotely at the same time.

Of course, very often, nothing is ever running on the NFS server that touches the exported data (or at least, nothing that needs to lock it) -- the NFS server is *just* a fileserver. In such an environment, using BSD locks over NFS on linux works properly too.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 0:44 UTC (Fri) by mezcalero (subscriber, #45103) [Link] (5 responses)

I think a big step forward would actually be if the NFS implementations were honest, and returned a clean error if they cannot actually provide correct locking. But that's not what happens; you have no way to figure out what is going on with a file system...

Just pretending that locking works, even if it doesn't, and returning success to apps is really the worst thing to do...

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 16:12 UTC (Mon) by nix (subscriber, #2304) [Link] (4 responses)

You're suggesting erroring if a lock of one type is held on a file when an attempt is made to take out a lock of the other type? I suspect this is the only possible fix, if you can call it a fix. Now we just have to hope that programs check for errors from the locking functions! But of course they will, everyone checks for errors religiously :P

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 18:56 UTC (Mon) by bfields (subscriber, #19510) [Link] (1 responses)

That wouldn't help. I think he's suggesting just returning -ENOLCK to BSD locks unconditionally. I agree that that's cleanest but in practice I suspect it would break a lot of existing setups.

I suppose you could make it yet another mount option and then advocate making it the default. Or just add NFS protocol support for BSD locks if it's really a priority; it doesn't seem like it should be that hard.

Poettering: Revisiting how we put together Linux systems

Posted Sep 9, 2014 13:56 UTC (Tue) by nix (subscriber, #2304) [Link]

That wouldn't help. I think he's suggesting just returning -ENOLCK to BSD locks unconditionally. I agree that that's cleanest but in practice I suspect it would break a lot of existing setups.
Given how awful POSIX locks are (until you have a very recent kernel and glibc 2.20), and how sane people therefore avoided using the bloody things, I'd say it would break almost every setup relying on locking over NFS at all. A very bad idea.

Poettering: Revisiting how we put together Linux systems

Posted Sep 9, 2014 14:43 UTC (Tue) by foom (subscriber, #14868) [Link] (1 responses)

I don't think he was suggesting that, but that's actually what BSD does with BSD/POSIX locks.
A BSD lock will block a POSIX lock, and vice versa. (At least that's what happens locally; no idea what the BSDs' NFS clients do.)

Linux also had that behavior a long time ago IIRC. Not sure why it changed, that was before I paid attention.

Poettering: Revisiting how we put together Linux systems

Posted Sep 9, 2014 15:27 UTC (Tue) by bfields (subscriber, #19510) [Link]

Huh. A freebsd man page agrees with you:
https://www.freebsd.org/cgi/man.cgi?query=flock&sektion=2

If a file is locked by a process through flock(), any record within the file will be seen as locked from the viewpoint of another process using fcntl(2) or lockf(3), and vice versa.

Recent Linux's flock(2) man page suggests the Linux behavior was an attempt to match BSD behavior that has since changed:

http://man7.org/linux/man-pages/man2/flock.2.html

Since kernel 2.0, flock() is implemented as a system call in its own right rather than being emulated in the GNU C library as a call to fcntl(2). This yields classical BSD semantics: there is no interaction between the types of lock placed by flock() and fcntl(2), and flock() does not detect deadlock. (Note, however, that on some modern BSDs, flock() and fcntl(2) locks do interact with one another.)

Strange. In any case, changing the local Linux behavior is probably out of the question at this point.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 23:05 UTC (Wed) by nix (subscriber, #2304) [Link] (5 responses)

Oh yes, I can see how that's problematic -- though TBH it sounds like a bug (the server should export BSD locks as BSD locks, though I can understand the protocol difficulties in doing so). The fact remains that it can't be that common: it has never happened to me at all, and everything I do, I do over NFS.

What I really want -- and it still seems not to exist -- is something that gives you the POSIXness of local filesystems (and things like ceph, IIRC) while retaining the 'just take a local filesystem tree, possibly constituting one or many or parts of local filesystems, and export them to other machines' property of NFS: i.e. not needing to make a new filesystem or move things around madly on the local machine just in order to export the fs. I know, this property is really hard to retain due to the need to make unique inums on the remote machine without exhausting local state, and NFS doesn't quite get it right -- but it would be very nice if it could be done.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 15:09 UTC (Thu) by bfields (subscriber, #19510) [Link] (4 responses)

"What I really want -- and still seems not to exist -- is something that gives you the POSIXness of local filesystems"

What exactly are you missing?

"not needing to make a new filesystem or move things around madly on the local machine just in order to export the fs. I know, this property is really hard to retain due to the need to make unique inums on the remote machine without exhausting local state"

I'm not sure I understand that description of the problem. The problem I'm aware of is just that it's difficult to determine given a filehandle whether the object pointed to by that filehandle is exported or not.

"NFS doesn't quite get it right"

Specifically, if you export a subtree of a filesystem then it's possible for someone with a custom NFS client and access to the network to access things outside that subtree by guessing filehandles.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 15:55 UTC (Mon) by nix (subscriber, #2304) [Link] (3 responses)

On the POSIXness side of things, I'd like the atomicity guarantees you get from a local fs, rather than having just rename() be atomic; I'd like not to have to deal with silly-rename leaving spew all over my disks that it's hard to know when it is safe to clean up; I'd like the same ACL system on the local and the remote filesystems, rather than one mapped through a crazy system designed to be interoperable with Windows... oh, and decent performance would be nice (like NFSv4 allegedly has, though I haven't yet managed to get NFSv4 to work -- haven't tried hard enough; I think its requirements for strong authentication are getting in my way).

Clearly NFS can't do all this: silly-rename and the rest are intrinsic to (the way NFS has chosen to do) statelessness. So I guess we need something else.

As for the not-quite-rightness of NFS's lovely ability to just ad-hoc export things, I have seen spurious but persistent -ESTALEs from nested exports and exports crossing host filesystems in the last year or two, and am still carrying round a horrific patch to make them go away (I was going to submit it, but it's a) horrific and b) I have to retest and make sure it's actually still needed: the underlying bug may have been fixed).

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 16:30 UTC (Mon) by rleigh (guest, #14622) [Link] (1 responses)

At least with NFSv4 and ZFS, ACLs are propagated to client systems just fine (it's storing NFSv4 ACLs natively in ZFS on disk). For a combination of FreeBSD server and client at least. With a FreeBSD server and Linux client, NFSv4 ACL support isn't working for me, though the standard ownership and perms work correctly. I put this down to the Linux NFS client being less sophisticated and/or buggy, but I can't rule out some configuration issue.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 18:49 UTC (Mon) by bfields (subscriber, #19510) [Link]

With a FreeBSD server and Linux client, NFSv4 ACL support isn't working for me, though the standard ownership and perms work correctly. I put this down to the Linux NFS client being less sophisticated and/or buggy, but I can't rule out some configuration issue.

The actual kernel client code is pretty trivial, so the bug's probably either in the FreeBSD server or the client-side nfs4-acl-tools. Please report the problem.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 18:46 UTC (Mon) by bfields (subscriber, #19510) [Link]

I think its requirements for strong authentication are getting in my way

The spec does require that it be implemented, but you're not required to use it. If you're using NFS between two recent Linux boxes then you're likely already using NFSv4. (It's the default since RHEL6, for example.)

silly-rename and the rest are intrinsic

See the discussion of OPEN4_RESULT_PRESERVE_UNLINKED in RFC 5661. It hasn't been implemented. I don't expect it's hard, so will probably get done some time depending on the priority, at which point you'll no longer see sillyrenames between updated 4.1 clients and servers.

spurious but persistent -ESTALEs from nested exports and exports crossing host filesystems

Do let us know what you figure out (linux-nfs@vger.kernel.org, or your distro).

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 11:01 UTC (Tue) by helge.bahmann (subscriber, #56804) [Link]

> If it was about me I'd change GNOME to try to lock the home directory as soon as you logged in, so that you can only have a single GNOME session at a time on the same home directory

How about vnc/nx & friends?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 18:11 UTC (Tue) by paulj (subscriber, #341) [Link] (2 responses)

Concurrent access wasn't the issue in the case you're responding to. It was access to the same $HOME with different versions of software, non-concurrently.

Note, version here doesn't just mean the release version of the software concerned, but also ABI issues like 64- vs. 32-bit. You might have some software where one ABI's version can read and upgrade files from the other, but not the other way around.

Does this mean $HOME may need to have dependencies on apps?

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 23:09 UTC (Wed) by nix (subscriber, #2304) [Link] (1 responses)

It does anyway, even on a single machine. A classic example is KDE, which has a subsystem which automatically upgrades configuration files (and has a little language to specify the modifications required). Update your system without quitting KDE, and it is at least theoretically possible that a newly launched application can trigger an upgrade, and then an already-running instance of the same application can try to read the upgraded configuration file and barf.

This is a really hard problem to solve as long as you permit more than one instance of an application (not a desktop!) requiring configuration to run at once :( and allowing that is clearly a desirable property!

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 21:54 UTC (Thu) by Wol (subscriber, #4433) [Link]

Why is it hard? The config file is best as text :-) and should contain a revision number.

And more to the point, old config versions shouldn't be wiped as a matter of course; they should exist in parallel. Of course, this then has the side effect that when the second, older copy of the application eventually gets upgraded, it doesn't upgrade the old config but will spot and use the pre-existing newer one. Is that good or bad? I don't know.

Cheers,
Wol
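
A minimal sketch of the scheme Wol describes, with every file and helper name hypothetical: each revision lives in its own file, an app creates its own revision's file only if it is missing, and older revisions are left in place.

    conf_dir=$HOME/.config/someapp
    want=2                                  # config revision this build understands
    if [ ! -f "$conf_dir/config.v$want" ]; then
        if [ -f "$conf_dir/config.v1" ]; then
            # hypothetical converter shipped with the app
            someapp-migrate-config <"$conf_dir/config.v1" >"$conf_dir/config.v$want"
        else
            someapp-write-defaults >"$conf_dir/config.v$want"
        fi
    fi
    # config.v1 stays around for any older installation that still reads it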

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 14:51 UTC (Mon) by Arker (guest, #14205) [Link]

"Or in other words: concurrent graphical sessions on the same $HOME are fucked..."

And yet they worked just fine until GNOME came along...

Reinventing bundled libraries?

Posted Sep 1, 2014 14:16 UTC (Mon) by NAR (subscriber, #1313) [Link] (10 responses)

I might miss something, but isn't it simply bundling libraries into runtime volumes instead of the application's lib directory? If applications rely on particular runtime volumes, I don't see that much gain. Or are they hoping that application developers will standardize on a couple of runtime volumes (like they didn't standardize on distributions)?

Reinventing bundled libraries?

Posted Sep 1, 2014 16:06 UTC (Mon) by mezcalero (subscriber, #45103) [Link] (9 responses)

Well, the distinction from the classic distro model is that you can have multiple runtimes around, and they are basically frozen to a set of very clear APIs that the upstream developers wrote their app against. You can even have runtimes around that originate from different distros. Classic distros generally provide one set of APIs at a time, and you never know what you will end up with, which makes it very hard for upstream developers to target.

Reinventing bundled libraries?

Posted Sep 2, 2014 0:54 UTC (Tue) by aquasync (guest, #26654) [Link] (7 responses)

But if developers really start to make use of this, you end up with Firefox installed (compiled against perhaps Debian's GNOME runtime), LibreOffice (on Ubuntu's GNOME), and an old version of Chromium (on Fedora 18, say). Even if they're all very current, different compiler versions, flags, etc. will lead to minor differences in binaries that will defeat the de-duping. That will in turn lead to a substantially larger disk (& memory) footprint.

Realistically I see this working best if a single distro provides these runtimes and commits to long-term support for these (maybe RHEL/Centos-based?). Anyway I think they're worthy goals and hope the project succeeds.

Reinventing bundled libraries?

Posted Sep 2, 2014 9:35 UTC (Tue) by nix (subscriber, #2304) [Link]

I think the point of the 'runtimes' is fairly clearly that they *are* distros, or rather distro snapshots.

Reinventing bundled libraries?

Posted Sep 2, 2014 10:01 UTC (Tue) by fmuellner (subscriber, #70150) [Link] (5 responses)

No, the idea is that all GNOME runtimes are provided by the GNOME project, so all applications that target the same runtime version will actually get the same runtime (e.g. GNOME_3_24). GNOME would likely pick a distribution to compose the runtimes, but that is irrelevant to the application - for applications, all that matters is that there is a single "vendor" for GNOME runtimes (likewise for KDE, Enlightenment etc.).

Reinventing bundled libraries?

Posted Sep 2, 2014 11:10 UTC (Tue) by NAR (subscriber, #1313) [Link] (4 responses)

I wonder - is it really feasible to produce a distribution-neutral GNOME/KDE runtime? Will the same binaries run on different kernel/glibc combination? I have doubts...

Reinventing bundled libraries?

Posted Sep 2, 2014 13:36 UTC (Tue) by raven667 (subscriber, #5198) [Link] (3 responses)

The runtime would include all the components of a distro that gnome needs to run. There is no problem running many distros with their own libc on the same kernel because Linus is a stickler for ABI discipline.

Reinventing bundled libraries?

Posted Sep 2, 2014 14:02 UTC (Tue) by NAR (subscriber, #1313) [Link] (2 responses)

So it means that the GNOME project will provide the libc? Because the parent comment says that GNOME runtime will be provided by the GNOME project. This would make the distributions redundant - which might not be a bad idea...

Reinventing bundled libraries?

Posted Sep 2, 2014 15:43 UTC (Tue) by raven667 (subscriber, #5198) [Link] (1 responses)

You are right, what I said didn't sound right. I re-read the proposal, and there are several related levels, so this is better abstracted. You have a root filesystem, which is unique (and you can have several of these); it depends on a /usr filesystem, which comes from a distro and is shared; that in turn is a dependency of various runtimes, which are the shared infrastructure that apps depend on; additionally you have frameworks, which are the devel libraries for building apps against. I'll have to read through again, but it might also be that runtimes are supposed to be able to run against multiple different /usr distros; that doesn't seem possible, though, because the ABIs of the different /usr trees differ.

Examples taken from the original article:

  • root:testmachine:org.fedoraproject.WorkStation:x86_64
    • usr:org.fedoraproject.WorkStation:x86_64:24.9
      • runtime:org.gnome.GNOME3_20:x86_64:3.20.5
        • app:org.mozilla.Firefox:GNOME3_20:x86_64:40
  • root:bar:org.archlinux.Desktop:x86_64
    • usr:org.archlinux.Desktop:x86_64:302.7.10
      • runtime:org.gnome.GNOME3_22:x86_64:3.22.0
        • app:org.libreoffice.LibreOffice:GNOME3_22:x86_64:166
        • framework:org.gnome.GNOME3_22:x86_64:3.22.0
  • home:lennart:1000:1000

Reinventing bundled libraries?

Posted Sep 2, 2014 15:56 UTC (Tue) by raven667 (subscriber, #5198) [Link]

Actually I'm wrong: the runtimes are mounted at /usr, and need to be at least complete enough for everything in the runtime to work, as well as any app which depends on the runtime. So it seems to me that runtimes will also have to be full distros, or based on a full distro.
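
To make "mounted at /usr" concrete: each app would see its runtime's subvolume as /usr inside a private mount namespace. This is only a sketch of the idea, not the proposal's actual tooling; the device, the Firefox path, and the mount point are hypothetical, and the subvolume name is taken from the list above.

    # as root: give one app the GNOME3_20 runtime as its /usr
    unshare -m sh -c '
        mount -o subvol=runtime:org.gnome.GNOME3_20:x86_64:3.20.5 \
            /dev/sda2 /usr &&
        exec /apps/org.mozilla.Firefox/firefox
    '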

Reinventing bundled libraries?

Posted Sep 2, 2014 18:00 UTC (Tue) by paulj (subscriber, #341) [Link]

Clear APIs are one thing, but for this to work surely it requires clear ABIs?

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 14:50 UTC (Mon) by guus (subscriber, #41608) [Link] (21 responses)

I have set up chroots with specific versions of specific distributions in the past, just to be able to run some third-party closed-source apps that only worked in the right environment. This proposal seems to want to automate that way of working. However, I would hardly call it optimal.
One of the problems is that it removes any pressure on upstreams to adhere to any overarching standard like the LSB; they can just code against whatever distribution they have. It also removes the pressure to make sure their code runs on newer versions of that distribution. That may not seem so important, but in fact there are many libraries that need to be updated when the kernel and its drivers are updated. Just an example: the binary nVidia driver needs to be the same version as the nVidia GLX library, otherwise 3D acceleration does not work. Old versions of glibc might not work correctly on the latest Linux kernel. Or what about when you want to run multiple apps, and some use ALSA directly, others PulseAudio? There are good reasons why you want to stick to a single distribution.

And of course, the obligatory XKCD: http://xkcd.com/927/

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 15:15 UTC (Mon) by mpr22 (subscriber, #60784) [Link] (17 responses)

Old versions of Glibc might not work correctly on the latest Linux kernel.

In that case, you file a bug against the kernel.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:24 UTC (Mon) by amck (subscriber, #7270) [Link] (16 responses)

That's missing the point: filing and fixing the bugs is the procedure for fixing the individual problem, but there is no guarantee that there will ever be a set of software (kernel, runtime, app) that works together.

The "runtime" in this picture is a large set of libraries (all of the libs on a distro?). This will break several times a week (look at the security releases on lwn.net). Hence there is no guarantee that this stuff stays stable.

This is what distros do. It's essentially a guarantee: "we've tested this stuff to make sure it all works together, fixed it when it didn't, and froze it when it did, to get you release X. There will be point releases of X as there are security fixes, but they won't break your apps' ABI."

This includes the kernel. Now you're breaking that by removing the kernel, in order to avoid fixing a problem that has to be fixed within the distro anyway (versioning, compatibility checking): why not look at the work that goes on in distros like Debian to ensure that library ABIs and APIs work, and learn from it?

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:31 UTC (Mon) by ovitters (guest, #27950) [Link]

This is explained in more detail in the Google+ comments. The expectation is that the runtime would be based on a distribution. So the runtime is basically just the collection of packages from a distribution. There is an overlap, but you'd not duplicate the efforts that the distributions are making.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 17:04 UTC (Mon) by ggiunta (guest, #30983) [Link] (14 responses)

I kinda agree that this might lead to less-stable and less-observable systems in the long run, even though it can make life simpler for many people and thus see quick widespread adoption.

What if app X relies on runtime A-1 which has known security bugs and does not claim compatibility with the current runtime A-99? End user will just run unsafe libraries to get X running while the developer of X can still claim his app is fully functional and has little incentive to fix it.

Bloat: even in the age of large SSDs, keeping 5 versions times 5 OS-packages installed "just for fun" is not something I'd like to do. Heck, I already resent the constant stream of updates I get on Android for apps I barely use; I really do not need to clog the pipe with 25x downloads from security.linuxDistributionZ.org.
I have seen the rise of "composer" in PHP-land, which uses a somewhat-related scheme (each app magically gets all the dependencies it needs), and the times for dependency resolution and download are ugly.

What about userland apps which keep open ports? Say LibreOffice plugin X which integrates with Pidgin-29, while Audacity plugin Y integrates with Pidgin-92. Even if there was a namespace for sockets, I'd not like to run 2 concurrent copies of the same IM application.

I wish there were a magical hammer allowing us to move in the other direction instead, and force-push the ABI-stability concept into the mind of each OSS developer... (In fact I use Windows as my everyday OS, mainly because its core APIs are stable, and I can generally upgrade any app independently of the others and expect them to work together. True, it is nowhere near Linux in flexibility.)

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 18:11 UTC (Mon) by xtifr (guest, #143) [Link] (13 responses)

Bloat: even in the age of large SSDs, keeping 5 versions times 5 OS-packages installed "just for fun" is not something I'd like to do.

And it's not just disk use that will skyrocket. One of the advantages of shared libraries on Linux is shared memory use. If my browser, editor, and compiler each use a different version of glibc, that means a lot more memory used up on different copies of glibc. Not to mention the various applets and daemons I have running. Then factor in the various versions of all the other libraries these various things use. The term "combinatorial explosion" comes to mind.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 19:01 UTC (Mon) by mjthayer (guest, #39183) [Link]

I haven't tested this personally, but I have heard claims that correctly optimised static libraries can actually beat shared ones on disk and memory usage due to the fact that each application only pulls in the parts which it really needs.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 20:03 UTC (Mon) by robclark (subscriber, #74945) [Link]

> And it's not just disk use that will skyrocket. One of the advantages of shared libraries on Linux is shared memory use. If my browser, editor, and compiler each use a different version of glibc, that means a lot more memory used up on different copies of glibc. Not to mention the various applets and daemons I have running. Then factor in the various versions of all the other libraries these various things use. The term "combinatorial explosion" comes to mind.

so.. running things in a separate VM or chroot (which is what this is an alternative for) is somehow less wasteful?

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 16:15 UTC (Wed) by nye (subscriber, #51576) [Link] (10 responses)

In practice, the idea that using shared libraries reduces memory usage is basically irrelevant - even assuming it's true at all, which it may not be, as michaeljt points out.

I've just had a look at the nearest Linux machine to hand: this is only a rough estimate based on what top reports, but it appears that, out of a little under 30GB RSS, there's about 30MB shared - and that's just by adding up the 'shared' column, so I guess it's probably counting memory multiple times if it's used by multiple processes(?)

Either way, I'm not going to lose much sleep over a memory increase on the order of a tenth of a percent if it makes other things simpler.
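
top's SHR column is indeed a blunt instrument; the per-mapping numbers in /proc/*/smaps are more informative. A rough sketch (run as root so every process is readable; the cat suppresses errors from processes that exit mid-scan):

    # sum the pages each process maps that are also mapped elsewhere,
    # i.e. each genuinely shared page is counted once per sharer
    cat /proc/[0-9]*/smaps 2>/dev/null |
    awk '/^Shared_(Clean|Dirty):/ { kb += $2 }
         END { printf "%.1f MiB shared\n", kb/1024 }'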

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 17:47 UTC (Wed) by Trelane (subscriber, #56877) [Link] (9 responses)

It would be interesting to have two Gentoo installs on the same machine: one compiled statically and one not, and otherwise identical.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 20:50 UTC (Wed) by mjthayer (guest, #39183) [Link] (8 responses)

I will just point out to both that I was talking about correctly optimised static libraries. I suspect that these days the only correctly optimised ones are those which specifically target embedded development. I just tried statically linking the X11 libraries (all other libraries were dynamically linked) to a pretty trivial client, xkey for those who know it, and the resulting binary was one megabyte in size after stripping. I actually expected that X11 would be reasonably well optimised, though that probably only applied before the days when libX11 was a wrapper around libxcb.
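
For anyone who wants to repeat the experiment, a sketch; the exact library list depends on which static archives the distro ships, and extra ones (libXau, libXdmcp) may be needed behind libX11:

    # link only the X11 stack statically, everything else dynamically
    gcc -o xkey xkey.c \
        -Wl,-Bstatic -lX11 -lxcb -lXau -lXdmcp -Wl,-Bdynamic
    strip xkey
    ls -l xkey      # compare against a fully dynamic build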

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 21:52 UTC (Wed) by Trelane (subscriber, #56877) [Link] (7 responses)

Pardon my ignorance, but what does "correctly optimized" mean, precisely?

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 22:03 UTC (Wed) by zlynx (guest, #2285) [Link] (6 responses)

I believe that a properly put together static library has multiple .o files inside it. Each .o file should be one function, possibly including any required functions that aren't shared.

This is because the static linker reads .a libraries and includes only the required .o files.

A badly put together static library has one, or just a few .o files in it. Using any function from the library pulls in all of the unrelated code as well.
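
A concrete sketch of the layout zlynx describes, with all file names hypothetical: one public function per source file, so the linker's .o-level granularity works in your favour.

    # each source file defines exactly one public function
    gcc -c foo_open.c foo_read.c foo_close.c
    ar rcs libfoo.a foo_open.o foo_read.o foo_close.o

    # a program calling only foo_open() links in foo_open.o alone;
    # with a single monolithic foo.o the whole library would be pulled in
    gcc -o demo main.c -L. -lfoo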

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 5:51 UTC (Thu) by mjthayer (guest, #39183) [Link] (5 responses)

Exactly. That's actually something I would expect the compiler and linker to be able to handle, say by the compiler creating multiple .o files, each containing as few functions as possible (one, or several if there are circular references).

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 6:55 UTC (Thu) by Wol (subscriber, #4433) [Link] (1 responses)

That sounds like libraries and the loader on Pr1mos. If your library was recursive it could catch you out (and if you had different functions with the same name in different libraries).

Each time you loaded a library, it checked the list of unsatisfied functions in the program against the list of functions in the library, and pulled them across.

So if one library function referenced another function in the same library, you often had to load the library twice to satisfy the second reference.

I've often felt that was better than the monolithic "just link the entire library", but it does prevent the "shared library across processes" approach.

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 7:20 UTC (Thu) by mjthayer (guest, #39183) [Link]

That is how static linking works today with the standard GNU toolchain. If you are linking a binary statically you sometimes have to include a given library twice on the linker command line for that reason.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 14:14 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

Yeah. -ffunction-sections -fdata-sections -Wl,--gc-sections can do that, in theory, but it makes binaries a lot bigger (and thus *more* memory-hungry) due to ELF alignment rules, and is rarely tested, complicated, and extremely prone to malfunction as a result. Use only if you are a wizard, or highly confident.
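
For reference, the flags are used like this (a sketch; measure the result, since the alignment cost mentioned above is real):

    # one ELF section per function/datum, then let the linker
    # discard anything unreferenced
    gcc -ffunction-sections -fdata-sections -c foo.c main.c
    gcc -o demo main.o foo.o -Wl,--gc-sections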

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 7:54 UTC (Fri) by mjthayer (guest, #39183) [Link] (1 responses)

Yes, I can see that. Is there any reason though (I am asking you as you know considerably more about the subject than I do) why the linker would not be able to merge ELF sections during the final link if they were not yet relocated?

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 15:42 UTC (Mon) by nix (subscriber, #2304) [Link]

No reason that I can see (though obviously this must be an optional behaviour: some programs would really *want* one section per function, unlike people who are just using it for GCing.)

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:05 UTC (Mon) by bokr (guest, #58369) [Link] (1 responses)

I have put a few things into /etc/rc.d/rc.local, conditional on the booted kernel name and a few other things, but not chrooted, wondering what needful thing I might make inaccessible.

Is there a way to get something like a dmesg/ftrace log for the entire boot process, up to the presentation of the console login prompt?

I'd like to see exactly what was accessed and how.

Also how the comparable trace would look under the proposed system.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:20 UTC (Mon) by amacater (subscriber, #790) [Link]

Dependency management depends on proper dependency resolution - sadly, that's a lost cause in most distributions (with one notable exception).

Notably, Lennart is used to Red Hat - enough said about the problems of running any software requiring multiple third party repositories ... :(

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 7:21 UTC (Tue) by imunsie (guest, #68550) [Link]

> And of course, the obligatory XKCD: http://xkcd.com/927/

I was just about to post the same comic.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:43 UTC (Mon) by LightBit (guest, #88716) [Link] (1 responses)

Great! A system full of outdated, vulnerable, and broken libraries.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 16:14 UTC (Thu) by hkario (subscriber, #94864) [Link]

That's progress! Just see what Java, Ruby and Windows binaries have been doing. This certainly is no recipe for disaster, no way.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 17:10 UTC (Mon) by mgb (guest, #3226) [Link] (25 responses)

How many incompatible versions of systemd_networking_emperor are going to be running in one of these terabyte rats' nests?

lennart.shark_jumped_p = true;

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 17:39 UTC (Mon) by Trelane (subscriber, #56877) [Link] (24 responses)

Yet another troll comment from a guest account.

Perhaps future Poettering / systemd articles should be paywalled.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 20:44 UTC (Mon) by edomaur (subscriber, #14520) [Link]

It would be nice.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 1:44 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

Can guests comment on paywall articles from a subscriber link (though I guess it'd be possible to see who is running such a (presumably) sock puppet show in that case)?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 13:32 UTC (Tue) by javispedro (guest, #83660) [Link]

I don't think that would help.

Poettering: Revisiting how we put together Linux systems

Posted Sep 11, 2014 11:19 UTC (Thu) by ksandstr (guest, #60862) [Link] (20 responses)

Yet another troll-calling comment from a subscriber account. What value do you add to this thread, to have justified your posting such a thing? For that matter, what value do the "+1, I agree" comments add, and why don't they attract calls to ban, moderate, restrict, and control? Certainly they don't try to use the word "fuck" in a serious screenplay -- but no one's going to throw a wobbly at scatological emphasis; we're all adults here.

What's more, I don't see your response to the actual question being asked: what if a per-app btrfs subvolume depends on a version of Lennartware that's fundamentally incompatible with the One True systemd In The Sky which the outer system is based around? Will there be multiple concurrent instances? How does the division of authority work? "Where's the spec, Eggman?"

It's usually the case that when a difficult question is posed, a bright young spark comes along and responds with practiced derision in order to conceal his/her inability to feel like the question has been adequately addressed previously, let alone make an answer that could be discussed further. Yet it's not these people that attract demands for bannination but the so-called "trolls", who (we're supposed to accept) are infinitely worse for reasons that've gone entirely unstated.

As it stands, systemd continues to be backed by this particular branch of schoolyard debate tactics, and anyone who doesn't appreciate that fact will be branded a troll and (as the self-nominated inquisition would have it) excluded by administrative fiat. This alone should cause immediate and severe dissatisfaction with the systemd movement's actions.

Poettering: Revisiting how we put together Linux systems

Posted Sep 11, 2014 23:12 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (19 responses)

> What's more, I don't see your response to the actual question being asked: what if a per-app btrfs subvolume depends on a version of Lennartware that's fundamentally incompatible with the One True systemd In The Sky which the outer system is based around?
systemd has an actual ABI compatibility promise: http://www.freedesktop.org/wiki/Software/systemd/Interfac...

That can't be said about SysV scripts.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 1:53 UTC (Fri) by mgb (guest, #3226) [Link] (18 responses)

The table looks very impressive but it doesn't include the D-bus interfaces of the main systemd daemon, nor anything that RH decides in future to designate as "legacy".

http://www.freedesktop.org/wiki/Software/systemd/Interfac...

However systemd's ABIs are a relatively minor problem, as are systemd's bugs. The serious problem with systemd is the endless churning of the plumbing layer every time RH decides that all systemd distros must downgrade yet another feature to match Fedora.

"One of the main goals of systemd is to unify basic Linux configurations and service behaviors across all distributions."

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 2:00 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (16 responses)

They are promising that systemd's main daemon interface will eventually be covered by the stability promise.

>The serious problem with systemd is the endless churning of the plumbing layer every time RH decides that all systemd distros must downgrade yet another feature to match Fedora.
Forcing everyone (including Fedora!) to reimplement broken features properly is a nice side effect of systemd's adoption.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 2:23 UTC (Fri) by mgb (guest, #3226) [Link] (15 responses)

If people want to use Fedora they can.

But people who value Debian Stable shouldn't have to suffer endless churn as systemd drags it down to Fedora's level.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 2:30 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (14 responses)

What is this 'churn' you're speaking about and why is it not suitable for Debian Stable?

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 3:54 UTC (Fri) by mgb (guest, #3226) [Link] (13 responses)

Some of the problems you'll find on bugs.debian.org under systemd - but certainly not all. The openvpn problem is not there and only one small facet of the inittab problem is mentioned.

For a better perspective you'll have to read a few months of debian-user and debian-devel.

Or you can save some time by deducing the scope of the churn from RH's admission that "One of the main goals of systemd is to unify basic Linux configurations and service behaviors across all distributions." RH can't do that without throwing away great features and millions of person hours in every non-RH systemd distro.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 4:29 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (12 responses)

> Some of the problems you'll find on bugs.debian.org under systemd - but certainly not all. The openvpn problem is not there and only one small facet of the inittab problem is mentioned.
Care to point out this problem?

>For a better perspective you'll have to read a few months of debian-user and debian-devel.
I'm browsing debian-devel, I don't see anything worse than the usual bugfixing cycle.

>Or you can save some time by deducing the scope of the churn from RH's admission that "One of the main goals of systemd is to unify basic Linux configurations and service behaviors across all distributions." RH can't do that without throwing away great features and millions of person hours in every non-RH systemd distro.
These 'great features' being? Fragmentation for the sake of fragmentation? Buggy init scripts?

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 4:54 UTC (Fri) by mgb (guest, #3226) [Link] (11 responses)

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 6:07 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (9 responses)

So we have a crusty old system held together with duct tape (the OpenVPN config, shudder). It works OK with systemd in pure SysV mode (it really does; I'm using it RIGHT NOW), but the maintainer wants to write a native systemd unit.

A good solution requires a bit of rethinking of how the services are organized, resulting in a more robust system: https://lists.debian.org/debian-devel/2014/09/msg00403.html which is accepted by the author: https://lists.debian.org/debian-devel/2014/09/msg00434.html And it really is better, because it's possible to introspect the running system with the usual tools, without knowing the magic OpenVPN-specific init.d arguments.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 13:52 UTC (Fri) by mgb (guest, #3226) [Link] (8 responses)

"Reorganizing" Debian Stable functionality and reliability down to Fedora's level is a serious waste of everybody's time and the opposite of progress.

If DDs want to spend their time making things work with systemd they are of course at liberty to choose how they spend their time.

But using unnecessary dependencies to force Debian users to switch to systemd is a serious violation of Debian's Social Contract and very very wrong.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 14:26 UTC (Fri) by johannbg (guest, #65743) [Link] (7 responses)

"Reorganizing" Debian Stable functionality and reliability down to Fedora's level is a serious waste of everybody's time and the opposite of progress.

Please provide a reference to information showing that maintenance within Fedora is less functional and reliable than it is in Debian, as you are indicating with this remark.

Thanks

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 14:42 UTC (Fri) by mgb (guest, #3226) [Link] (6 responses)

If you haven't tried both Fedora and Debian Stable you might want to do so.

Fedora is bleeding edge which is appropriate for some use cases.

Debian Stable is magnificently reliable and upgrades smoothly from one release to the next without reinstalling.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 15:00 UTC (Fri) by anselm (subscriber, #2796) [Link] (4 responses)

Debian Stable is magnificently reliable and upgrades smoothly from one release to the next without reinstalling.

What gives you the idea that this will no longer be the case once systemd is Debian's default init system? If there are snags with the upgrade from wheezy to jessie then bugs will be filed and the problems will be fixed, like Debian has worked for the last 20 years. That's what the pre-release freeze is for, after all.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 15:14 UTC (Fri) by mgb (guest, #3226) [Link] (3 responses)

systemd ignores inittab and therefore any claims of "drop-in compatibility" are meaningless.

Forcing systemd on unwilling Debian users is an egregious violation of Debian's Social Contract.

Leaving servers inaccessible or even unbootable after an upgrade is distinctly below the standard achieved by previous Debian upgrades.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 16:55 UTC (Fri) by anselm (subscriber, #2796) [Link]

systemd ignores inittab and therefore any claims of "drop-in compatibility" are meaningless.

Inittab isn't exactly rocket science. If there was sufficient demand it would certainly be possible to come up with a program that looks at a system's /etc/inittab and generates a corresponding set of systemd configuration files, at least for the common use cases. This already works for other legacy configuration files like /etc/fstab.

Note that, even before systemd, Debian never guaranteed you a “seamless upgrade” if you'd tweaked the hell out of your /etc/inittab file, or for that matter any configuration file. Debian does make an honest effort not to break things but we are not, and never were, in the business of promising miracles.
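
As a concrete illustration: the most common inittab customisation, a respawning getty, already translates mechanically. A sketch (systemd ships the template unit):

    # classic /etc/inittab entry:
    #   T0:23:respawn:/sbin/getty -L ttyS0 115200 vt100
    # systemd equivalent:
    systemctl enable serial-getty@ttyS0.service
    systemctl start serial-getty@ttyS0.service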

Forcing systemd on unwilling Debian users is an egregious violation of Debian's Social Contract.

So far the only change is that Debian has (for a variety of reasonable reasons) decided that new installs of the distribution will use systemd unless otherwise specified. SysV init still exists as a Debian package. Package maintainers are still free to include SysV init support in their packages, and users who would rather use SysV init are still free to contribute SysV init scripts for packages that don't come with them (while maintainers are encouraged to include these). So far nothing is being “forced” on anybody.

As far as the Social Contract is concerned, nothing in it says that Debian shouldn't use systemd. If anything, it stipulates that, in the interest of its users, the software in Debian – especially the software that is installed by default – should embody the appropriate state of the art. In 2014, the state-of-the-art solution for basic Linux plumbing seems to be systemd, and this is further corroborated by the fact that all other mainstream Linux distributions seem to concur with this.

Leaving servers inaccessible or even unbootable after an upgrade is distinctly below the standard achieved by previous Debian upgrades.

You may have noticed that so far no stable Debian release actually involved systemd in an upgrade. Therefore there is no evidence whatsoever that upgrading Debian from one stable version to the next will, in fact, leave “servers inaccessible or even unbootable“ on account of systemd. On the contrary, it is safe to say that the upgrade from wheezy to jessie will be extensively tested by Debian maintainers, and showstopping problems will hopefully be corrected before jessie is actually released.

Poettering: Revisiting how we put together Linux systems

Posted Sep 13, 2014 1:20 UTC (Sat) by mchapman (subscriber, #66589) [Link] (1 responses)

> systemd ignores inittab and therefore any claims of "drop-in compatibility" are meaningless.

Why is this such a problem? Upstart ignores all service definitions in inittab too, and yet I don't remember much complaint about that.

Different init systems are different, just as different window managers are different and different text editors are different. There is no reason they should be "compatible" with one another in any sense. Where they *are* compatible is simply a bonus.

Poettering: Revisiting how we put together Linux systems

Posted Sep 13, 2014 1:23 UTC (Sat) by mchapman (subscriber, #66589) [Link]

> There is no reason they should be "compatible" with one another in any sense. Where they *are* compatible is simply a bonus.

To clarify, I too think claims of "drop-in compatibility" are meaningless. But that's OK, since I don't consider "drop-in compatibility" to be a necessary feature anyway.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 15:33 UTC (Fri) by johannbg (guest, #65743) [Link]

Irrelevant whether I have tried Debian or Fedora or any other distribution, for that matter.

Fedora is not "bleeding edge" (Rawhide is, and is comparable to Debian unstable, I suppose), nor "first" for that matter, even if it claims to be (Arch took that title a long time ago), and people have been (proudly) upgrading Fedora since FC1 and boasting about doing so with every new release of Fedora.

On the one hand, Fedora ships newer releases of a component, containing bugfixes and enhancements, more often than Debian does, given that it has a shorter release cycle than Debian; Debian, with its longer release cycle, instead chooses to backport those fixes into the release.

You are claiming that the maintainership and quality-assurance community in Fedora is of lesser quality than Debian's, yet both of those distributions work closely with their upstreams to the best of their ability, as far as I know. So please, by all means, enlighten me: elaborate on how Debian manages to maintain its components "better" than maintainers within Fedora do, and help the maintainers within Fedora, as well as its quality-assurance community, understand what they need to improve to be on par with Debian's maintainership.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 8:44 UTC (Fri) by anselm (subscriber, #2796) [Link]

So? As far as I can tell that thread came up with a few sensible suggestions on how to make OpenVPN work with systemd, and the Debian OpenVPN maintainer seemed to like them.

As far as »systemd makes every distribution into Fedora« is concerned: The systemd developers seem to be looking for good solutions, not necessarily Fedora solutions. In point of fact some of systemd's approaches had been patterned on Debian (rather than Fedora) long before Debian declared that systemd would be the new default init system. If some distribution finds that whatever systemd does is too limited they can (a) lobby the systemd developers to adapt, which if there are good technical reasons they probably will, or (b) replace that particular bit of systemd (which, you know, is pretty modular as these things go) with one that is closer to their requirements.

Poettering: Revisiting how we put together Linux systems

Posted Sep 12, 2014 10:50 UTC (Fri) by johannbg (guest, #65743) [Link]

"The table looks very impressive but it doesn't include the D-bus interfaces of the main systemd daemon, nor anything that RH decides in future to designate as "legacy".

I have to ask: why are you under the impression that Red Hat decides anything that happens in the systemd community?

Where does that thought originate from?

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 17:38 UTC (Mon) by dashesy (guest, #74652) [Link]

This is great! I was hoping to see an app-bundle solution based on Docker for Linux, but this approach seems more elegant. I hope to see it in Fedora Workstation, and hope to see smart vendors pick up on it and start selling pre-loaded Linux desktops/laptops. Yes, desktops are going to die, but perhaps not for us developers any time soon.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 17:58 UTC (Mon) by ibukanov (subscriber, #3942) [Link] (6 responses)

A lot of problems with API stability come from the need to upgrade to the latest versions of libraries, where there is a chance of having security-related bugs fixed quickly. If the system architecture is sufficiently resilient, with very little chance of bugs turning into exploits that can harm the user beyond simple annoyances, then API stability is no longer an issue. An application developer can link everything statically, and the job of the distribution would be to ensure the stability of the kernel interface and GUI protocols, rather than maintaining all the applications and their libraries.

As Lennart's proposal does not explain how the new architecture can provide such resilience against bugs, I do not see how it would simplify the job for Linux distributions. They still need to fix critical bugs in all applications they provide one way or another.

The good part of the proposal is that it wants to ensure that all updates are atomic and can be safely reverted. But then again, a 100% safe revert is not that useful if the only choice it offers is between a working but exploitable application and one that does not work.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 18:12 UTC (Mon) by mjthayer (guest, #39183) [Link] (1 responses)

If I see things correctly, the application developer would still need to test against all updates (security or otherwise) to the run-time they depend on, but the advantage would be not having to do this for all the distributions on which their application might run.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 19:15 UTC (Mon) by ibukanov (subscriber, #3942) [Link]

I doubt that this new promise of test-once, run-anywhere can do any better than previous ones. Ideally the application should not need any additional testing after the release of a particular version, while the developer still has an incentive to test on different platforms or at least pay attention to bug reports. But this again requires full compatibility in evolving system interfaces and resilience to harm from bugs, issues that the proposal does not address.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 0:25 UTC (Tue) by mezcalero (subscriber, #45103) [Link] (3 responses)

Well, we can certainly improve the technical measures to make security bugs less problematic (and we do, because we'll do sandboxing of desktop apps you download according to this scheme), but software will *always* have bugs; it's written by humans, after all. There is no system that can make all security bugs disappear.

Security fixes must happen, there is no way around that. However, we need to make sure that we allow them to be done by people who have the expertise and focus on fixing them. Hence: programs like Firefox or Google Earth that you download from their respective websites usually come with a ton of bundled libraries, in the versions Mozilla or Google has tested their stuff with. Now, these vendors are actually not that interested in those libraries; they are primarily interested in their own app code. So, the runtime concept is about attempting to put together a fixed set of libraries, in a fixed set of versions, that is basically immutable (modulo the minimal changes necessary to do CVE fixes), maintained by people who actually care about the library code. This way, you give the app vendors what they want (a fixed set of libraries, in specific versions, that they can test stuff with and where they know that it is exactly this version the stuff will ultimately run on), but at the same time you retain the ability to minimally update the libraries for CVEs, because the runtimes are still maintained by the runtime vendor, and not by a mostly-disinterested app vendor.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 5:59 UTC (Tue) by ibukanov (subscriber, #3942) [Link] (2 responses)

The question is why Google and Mozilla bundle libraries in the first place. This happens precisely because distributions failed to provide stable interfaces to maintained libraries with CVE fixes. I do not see how the proposal changes the situation.

And that is the reason I am rather skeptical about the compatibility claims in the proposal. On the other hand, anything that can deliver 100% reliable and revertible updates, together with goodies like a read-only /usr, is extremely welcome.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 10:08 UTC (Tue) by roc (subscriber, #30627) [Link] (1 responses)

We bundle libraries for various reasons:
a) To use later versions of libraries than distros are shipping. This lets us fix security and other bugs faster.
b) To expose interfaces and functionality that aren't widely deployed yet and possibly won't ever go upstream.
c) To increase consistency across platforms. This helps reduce our bug load.

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 17:51 UTC (Sun) by pabs (subscriber, #43278) [Link]

Unfortunately embedding makes more work for distributions, which is why they have policies against it.

https://wiki.debian.org/EmbeddedCodeCopies

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 18:16 UTC (Mon) by mjthayer (guest, #39183) [Link]

This is slightly tangential, but as the maintainer of a package which provides a statically linked copy of Qt, I am painfully aware of how badly it integrates visually with Qt applications provided by the distribution. If various runtimes, each providing their own GUI libraries, will be present on the system, this is something it would be good to see solved.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 18:45 UTC (Mon) by NightMonkey (subscriber, #23051) [Link] (35 responses)

And, patiently, Gentoo sits, waiting for your tired, your poor, your huddled SysAdmins, yearning to breathe easy with OpenRC. :)

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 20:13 UTC (Mon) by Wol (subscriber, #4433) [Link] (34 responses)

Except, as a gentoo user, I want to use systemd!

Not because I use Gnome (I personally can't stand it), but because I'd like to have multiple stations on a single pc. Comes by default with systemd apparently, but OpenRC well I don't know - I got the impression it couldn't.

Dunno when, but I suspect I will be upgrading my dev system soon, in preparation for upgrading the main system later (I'm currently upgrading both systems to raid).

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 22:57 UTC (Mon) by NightMonkey (subscriber, #23051) [Link] (3 responses)

"...because I'd like to have multiple stations on a single pc."

Pardon? You mean like multiboot? Or different runlevels? Or radio stations? ;)

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 23:16 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

He probably means multiseat support.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 0:01 UTC (Tue) by NightMonkey (subscriber, #23051) [Link] (1 responses)

Thank you. That makes sense.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 8:34 UTC (Tue) by Wol (subscriber, #4433) [Link]

Yes I did. Can be a bit difficult remembering the correct technical term for all these things :-)

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 19:18 UTC (Tue) by jackb (guest, #41909) [Link] (29 responses)

Except, as a gentoo user, I want to use systemd! Not because I use Gnome (I personally can't stand it), but because I'd like to have multiple stations on a single pc. Comes by default with systemd apparently, but OpenRC well I don't know - I got the impression it couldn't.

Likewise.

Except in my case, I really just want the feature of automatically restarting crashed daemons, and automatically funnelling all their console output into the journal.

I've made a lot of progress converting a lot of my (virtualized guest) machines to systemd, but it's been incredibly difficult and I can't convert the host machines due to bugs with systemd's dmcrypt/mdraid/lvm setup. (apparently they only test on Fedora or something)

I've had to deal with problems like one release of systemd-networkd working perfectly, while the next release consistently failed to set an IP address as a DHCP client. No errors, no warnings, no indication of what the problem might have been at all. After one update I had ~30 guest machines that couldn't get networking parameters, and I had to manually log into each one and set up static IP addressing to get them working again.

If it wasn't for the two features I listed at the beginning of this comment, I wouldn't even bother with systemd at all, and even given those improvements it's barely worth it to switch.
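
For reference, the two features named at the top of that comment take only a few lines of unit file; a minimal sketch, with the daemon name and path hypothetical (journald captures stdout/stderr by default, but it is spelled out here):

    cat > /etc/systemd/system/mydaemon.service <<'EOF'
    [Unit]
    Description=Example daemon

    [Service]
    ExecStart=/usr/local/bin/mydaemon --foreground
    Restart=on-failure
    StandardOutput=journal
    StandardError=journal
    EOF
    systemctl daemon-reload
    systemctl start mydaemon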

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 20:20 UTC (Tue) by NightMonkey (subscriber, #23051) [Link] (22 responses)

"Except in my case, I really just want the feature of automatically restarting crashed daemons, and automatically funneling all their console output into the journal."

Those are behaviors that shouldn't be in the init system, if you like UNIX philosophical models of "do one thing and one thing well." These complicate an already complex job, better done by task-specific narrowly-scoped tools. Monit, Puppet, Chef, watchdog, and many other programs can do that simply defined task and do it well. And fix those crashing daemons! Crashing should never become accepted, routine program behavior! :)

For the console output, can't syslog-ng (or rsyslog, or similar) do that?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 20:51 UTC (Tue) by dlang (guest, #313) [Link] (6 responses)

No, syslog-ng and rsyslog do not capture stdout and stderr from programs that start on the system (they assume that if an application is generating output, it's because the programmer wants the user to see it, instead of it going into some log somewhere).

There's nothing preventing you from piping the output of any program into logger (or equivalent) to have that data sent to syslog-ng or rsyslog (in bash, something like: mydaemon 2> >(logger -t appname.err) | logger -t appname.out, or similar).

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 15:20 UTC (Thu) by xslogic (guest, #97478) [Link] (5 responses)

Surely most daemons (...although I don't know about the systemd varieties...) close off stdin/stdout/stderr when they daemonise?

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 15:51 UTC (Thu) by raven667 (subscriber, #5198) [Link] (4 responses)

"most" things you try to run as a daemon do the whole daemonization song-and-dance but not "all" things you want to run as daemons do so, systemd handles all cases correctly, not just the ones where a well-behaved daemon does all the right things.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 22:10 UTC (Thu) by Darkmere (subscriber, #53695) [Link] (3 responses)

systemd actually encourages writing "daemons" that don't fork or close their descriptors, but simply linger until done, which is wonderful, as it copies the way "proper" daemons work with runit and similar systems.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 22:36 UTC (Thu) by dlang (guest, #313) [Link] (2 responses)

so a"proper systemd" daemon acts differntly that a "proper Linux" daemon for any other init system?

why do so few people see the problems with this sort of thing?

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 23:25 UTC (Thu) by anselm (subscriber, #2796) [Link]

The non-forking daemon approach as recommended for systemd is what basically every init system other than System-V init prefers (check out Upstart or, e.g., the s6 system mentioned here earlier). It allows the init system to notice when the daemon quits because it will receive a SIGCHLD, and the init system can then take appropriate steps like restart the daemon in question. In addition, it makes it reasonably easy to stop the daemon if that is necessary, because the init process always knows the daemon's PID (systemd uses cgroups to make this work even if the daemon forks other processes).

The »double-forking« strategy is needed with System-V init so that daemon processes will be adopted by init (PID 1). The problem with this is that in this case the init process does receive the notification if the daemon exits but has no idea what to do with it. The init process also has no idea which daemons are running on the system in the first place and where to find them, which is why many »proper Linux daemons« need to write their PID to a file just so the init script has a fighting chance of being able to perform a »stop« – but this is completely non-standardised, requires custom handling in every daemon's init script, and has a certain (if low) probability of being wrong.

In general it is a good idea to push this sort of thing down into the infrastructure rather than to force every daemon author to write it out themselves. That way we can be reasonably sure that it actually works consistently across different services and is well-debugged and well-documented. That this hasn't happened earlier is only too bad but that is not a reason not to start doing it now. People who would like to run their code on System-V init are free to include the required song and dance as an option, but few if any systems other than Linux actually use System-V init these days – and chances are that the simple style of daemon that suits systemd better is also more appropriate for the various init replacements that most other Unix-like systems have adopted in the meantime.
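
To make the contrast concrete, a new-style daemon can be as small as this sketch (name and path hypothetical); the init system remains its parent, reaps it when it exits, and captures everything it writes:

    #!/bin/sh
    # /usr/local/bin/heartbeatd: no fork(), no setsid(), no pidfile;
    # just run in the foreground and write to stdout
    while :; do
        echo "still alive"
        sleep 60
    done

Run it under systemd as a Type=simple service (or under runit/s6 as-is); exiting non-zero on failure is then all it needs to do to cooperate with restart policies.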

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 8:34 UTC (Fri) by cortana (subscriber, #24596) [Link]

It seems unrealistic to expect all daemons to correctly implement all the steps mentioned under "SysV Daemons" here: http://www.freedesktop.org/software/systemd/man/daemon.html; particularly when some of them are written in languages that go out of their way to make such operations difficult or impossible (e.g., Java). Compare that list to the much simpler and easier-to-implement list under "New-Style Daemons".

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 22:47 UTC (Tue) by jackb (guest, #41909) [Link]

Those are behaviors that shouldn't be in the init system, if you like UNIX philosophical models of "do one thing and one thing well." These complicate an already complex job, better done by task-specific narrowly-scoped tools. Monit, Puppet, Chef, watchdog, and many other programs can do that simply defined task and do it well. And fix those crashing daemons! Crashing should never become accepted, routine program behavior! :)

That's the kind of philosophy that's useless to me.

I have a lot of work to get done. I don't have time to fix all the broken daemons in the world.

I welcome tools that help me get my work done and reject tools that get in my way.

Unfortunately systemd is complicated because it contains a mixture of both so I'm always ambivalent.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 10:26 UTC (Wed) by cortana (subscriber, #24596) [Link] (13 responses)

> Monit, Puppet, Chef, watchdog, and many other programs can do that simply defined task and do it well.

If they do it by running '/etc/init.d/foo status' then, no, they can't.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 15:26 UTC (Wed) by NightMonkey (subscriber, #23051) [Link] (12 responses)

If your init scripts suck, then sure. In that case, you'd also have Nagios/Icinga checks making sure that N processes matching the name of the daemon are running at all times. But this misses the overarching point I was making: there are better ways to achieve the goal than making your init system more fragile and brittle.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 15:46 UTC (Wed) by raven667 (subscriber, #5198) [Link] (11 responses)

The init system is not fragile; it is widely deployed and does not seem to be falling over, and most of the ongoing work is in the ancillary tool box, like logind, not in PID 1. Init scripts universally suck, even compared to service monitors like daemontools from the last century. Claiming that periodic Nagios checks are a replacement for having a real parent/child relationship with started services is silly; that's the near-beer version of service monitoring, not the real thing. Capturing and logging stdout/stderr and the exit code from a service are great features that don't exist in a standard fashion with init scripts; sure, you could hack some of that together on an individual script basis, but it's not a feature provided to you.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 16:09 UTC (Wed) by NightMonkey (subscriber, #23051) [Link] (10 responses)

Icinga/Nagios is not 'near-beer'. It has parent/child relationships, too. You are shouting "we must have a SYSTEM", when I don't know if that's the answer. People have just not used the tools correctly. This is just One More Thing To Go Wrong. And, yay, there will likely be an LPI exam level for figuring out what went wrong. ;)

More work is needed to make the relationship between users and developers LESS obscured, not more. When I reported a core Apache bug to the Apache developers in 1999, I had a fix in 24 hours, and so did everyone else. Now, if instead I just relied on some system to restart Apache, that bug might have gone unnoticed and unfixed, at least for longer.

Systems like this are a band-aid. Putting more and more complex systems in as substitutes for bug fixing and more human-to-human communication is the problem.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 16:23 UTC (Wed) by raven667 (subscriber, #5198) [Link] (9 responses)

> Icinga/Nagios is not 'near-beer'. It has parent/child relationships, too

I'm not sure what you are talking about. Nagios check_procs, for example, just shells out to ps to walk /proc; it is not the parent of services and doesn't have the same kind of iron-clad handle on their execution state that something like runit or daemontools or systemd has.

> Systems like this are a band-aid.

You are never going to fix every possible piece of software out in the world to never crash. The first step is to admit that such a problem is possible; then you can go about mitigating the risks, not by building fragile systems that pretend the world is perfect and fall apart as soon as something doesn't go right, and especially not as some form of self-punishment to cause pain to force bug fixes.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 16:34 UTC (Wed) by NightMonkey (subscriber, #23051) [Link] (8 responses)

I am prepared to be wrong about my experience-based opinions. :)

I just think a lot of this is because of decisions made to keep the binary-distro model going.

I'm not interested in fixing all the softwares. :) I am interested in getting the specific software I need, use and am paid to administer in working order. There are certainly many ways to skin that sad, sad cat. ;)

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 22:05 UTC (Thu) by Wol (subscriber, #4433) [Link] (7 responses)

Well, can I just point you at an example (reported here on LWN first hand ...) of a bog-standard bind install. (Dunno the ref, you'll have to search for it.)

The sysadmin did a "shutdown -r". The system (using init scripts) made the mistake of shutting the network down before it shut bind down. Bind - unable to access the network - got stuck in an infinite loop. OOPS!

The sysadmin, 3000 miles away, couldn't get to the console or the power switch, and with no network couldn't contact the machine ...

If a heavily-used program like bind can have such disasters lurking in its sysvinit scripts ...

And systemd would just have shut it down.

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 23:02 UTC (Thu) by NightMonkey (subscriber, #23051) [Link] (6 responses)

That's a system architecture problem! Never put a system into production (or, hell, into 'staging') without remote power switches. And practice what happens when critical daemons go down (aka outage drills).

http://www.synaccess-net.com/

I don't care how "robust" the OS is. It's just being cheap that gets your organization into these kinds of situations (and that *is* an organizational problem, not just yours as the sysadmin.)

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 23:15 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

Yeah, sure. So then you practice everything (and we did test it), and it works fine in testing. You schedule a time when nobody is around to upgrade the system without interruptions.

And then it breaks because of a race condition that happens in only about 10% of cases.

That sysadmin in this dramatization was me, and the offending script was: http://anonscm.debian.org/cgit/users/lamont/bind9.git/tre... near line 92.

Of course, REAL Unix sysadmins must live in the server room, spending 100% of their time tweaking init scripts and auditing all the system code to make sure it can NEVER hang. NEVER. Also, they disable memory protection because their software doesn't need it.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 23:20 UTC (Thu) by NightMonkey (subscriber, #23051) [Link] (4 responses)

No need to take it personally. But, sad to say, you learned a good lesson the hard way in sysadmining - never rely on one system for anything vital. :)

Again, the answer to system hangs (which are *inevitable* - this is *commodity hardware* we're talking about most of the time, not mainframes) is remote power booters. I don't like living in the DC, myself.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 23:33 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

Lots of companies can't afford to build a completely redundant infrastructure, especially if you have a fairly powerful server (that one was handling storage for CCTV streams and lots of other tasks). And it's non-trivial in any case.

> Again, the answer to system hangs (which are *inevitable* - this is *commodity hardware* we're talking about most of the time, not mainframes) is remote power booters.
Oh, I've witnessed mainframe hangups. Remote reboot is nice, but that server was from around 2006 so it didn't have IPMI and the datacenter where it was hosted offered only manual reboots.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 4:09 UTC (Fri) by raven667 (subscriber, #5198) [Link] (2 responses)

> - never rely on one system for anything vital. :)

That sounds dangerously close to internalizing and making excuses for unreliable software rather than engineering better systems that work even in the crazy imperfect world. Duct-taping an RPS to the side of a machine is in addition to, not a replacement for, making it work right in the first place.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 4:43 UTC (Fri) by NightMonkey (subscriber, #23051) [Link] (1 responses)

It might sound close, but it isn't. :)

I don't think the things you are describing are actually separate tasks. All software has bugs.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 14:45 UTC (Fri) by raven667 (subscriber, #5198) [Link]

Sure all software has bugs but the risk and impact are not equally distributed.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 23:46 UTC (Thu) by flussence (guest, #85566) [Link] (5 responses)

> Except in my case, I really just want the feature of automatically restarting crashed daemons, and automatically funnelling all their console output into the journal.

I've found runit handles both those things flawlessly. Half of my system daemons are running under it — the other half's still on OpenRC, but that's mostly due to laziness.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 8:50 UTC (Fri) by cortana (subscriber, #24596) [Link] (4 responses)

Runit fails to deal with daemons that themselves fork and manage their own children.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 0:21 UTC (Sat) by flussence (guest, #85566) [Link] (3 responses)

That's true, but I've yet to encounter anything that misbehaves in that way. If and when that happens I'll be sure to complain upstream.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 0:47 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

A Python daemon using the multiprocessing module. Yes, I've seen it personally.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 9:38 UTC (Sat) by cortana (subscriber, #24596) [Link]

Jenkins and pretty much anything else written in Java.

Poettering: Revisiting how we put together Linux systems

Posted Sep 6, 2014 15:45 UTC (Sat) by rahulsundaram (subscriber, #21946) [Link]

Complaining about this is pointless unless the upstream you are referring to is the init system itself. There is a ton of software out there that does it. If your init system cannot handle it, it is a failure of the init system.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 21:02 UTC (Mon) by roskegg (subscriber, #105) [Link] (34 responses)

Good grief. Lennart is reinventing Plan9... badly. Plan9 is small and tight and does all of this stuff the Unix Way. It is more Unix than Unix. Simple, small, very discoverable. All the cool features that were supposed to be in GNU/Hurd; Plan9 had from the beginning. D-Bus is a poor rip-off of the 9P2000 protocol.

Around 2007, the Linux desktop was just about sweet, and poised to start taking on MacOS and Windows. Then, like a flood, Lennart and friends started dumping half-baked bloatware into the OS.

In the last 7 years, things haven't gotten better, we've just experienced a lot of code churn in the Linux world.

The BSDs don't have this problem, because they have integrated systems. Kernel and base go together, and things get fixed at the right abstraction layer. Coupling and cohesion are properly handled. It is doubtful that Lennart and his friends even know about the Law of Demeter.

It seems Lennart is trying to impose a "base system" for the Linux kernel. Which is a good idea, but he is doing it badly, not in the Unix/BSD spirit, but in the spirit of VMS/Win32 and OSX.

If there was a simple way to port a modern web browser to Plan9, I'd seriously consider switching. When everyone is trying to re-implement Plan9 poorly, why not go back to the real deal? It fulfills the vision of the Unix creators.

Oh, and as of recently, Plan9 source is available under the GPLv2 license.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 21:22 UTC (Mon) by roskegg (subscriber, #105) [Link] (27 responses)

I believe if LLVM was ported to Plan9, a full featured modern web browser based on WebKit would be ported fairly quickly. Making it run inside acme, and fit with the Plan9 scheme of mouse cutting/pasting/executing would take a little more work. But, perhaps if uzbl was ported over, that would be much simpler. uzbl is already following the proper Unix Way for browsers. Just some acme integration needs adding.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 1:48 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (26 responses)

Patches welcome[1]; I certainly don't care all that much about plan9 personally. There can certainly be comments in the default config file for things like this (or a config.plan9 example file if it is extensive enough). Uzbl already has Xembed support (used in uzbl-tabbed so presumably it works).

[1]https://github.com/uzbl/uzbl

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 2:41 UTC (Tue) by roskegg (subscriber, #105) [Link] (25 responses)

Thank you. If C++ support comes to plan9, I'll look into porting Webkit and then uzbl itself. Why oh why wasn't webkit written in C... The pain of writing a fully compliant HTML5/CSS3/JavaScript rendering stack is blocking a lot of interesting development in new operating systems. Network effects as barrier to entry.

Imagine if every new operating system had to implement PL/I and COBOL and FORTRAN to be usable. It's kind of like that.

The pcc compiler was a valiant effort, but network effects defeated it. Network effects are so strong that the top coders, even those who invent the technology, can't make much headway. Backward compatibility is a corker.

How hard is it to write an HTML5 renderer anyway? I mean, parsing HTML5 is easy. Parsing CSS is easy. But rendering... really, how much C code would it take? Seriously, why is C++ creeping into and infecting everything?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 7:44 UTC (Tue) by ncm (guest, #165) [Link]

How hard is it to make a plan9 c++ runtime? Or, more aptly, an llvm target? Clearly the hurdle is ideological, not technical.

C++ exists and grows because it *uniquely* meets a real need. It's far from a perfect solution to that need, but it persists because hardly anyone else seems to be trying. Rust seems to be trying, but it's a decade or two away from sufficient maturity. Worse, it could make some fatal mistake any day, not recognize it for a little too long, and then never get there. (It might have done so already; I haven't kept up.)

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 10:49 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

> parsing HTML5 is easy. Parsing CSS is easy

You say this, forgetting that both specify what happens when the syntax is broken (e.g., <i><b><p>text</i></b>) and that CSS is such a mess in the semantics department. Take a dive into the WebKit codebase sometime and tell me how C would have been simpler, easier to read, and shorter. I'm sure the world would be thrilled to get that monstrosity simplified.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 10:51 UTC (Tue) by ibukanov (subscriber, #3942) [Link]

> parsing HTML5 is easy. Parsing CSS is easy. But rendering... really, how much C code would it take?

10 man-years is a minimum if you want to render a substantial portion of the web...

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 13:04 UTC (Tue) by jwakely (subscriber, #60262) [Link]

> Seriously, why is C++ creeping into and infecting everything.

Maybe developers choose to use it. You know, the people who are actually doing the work not just complaining and criticising.

But no, probably not that, it's probably some kind of fungal infection like Ophiocordyceps unilateralis that corrupts them away from your idea of purity.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 13:53 UTC (Tue) by mpr22 (subscriber, #60784) [Link]

Seriously, why is C++ creeping into and infecting everything.

An assortment of reasons, frequently involving one or more of the features that your tone suggests you think should be set on fire, interred six feet down, and covered with dirt.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 15:28 UTC (Tue) by JGR (subscriber, #93631) [Link] (18 responses)

> Seriously, why is C++ creeping into and infecting everything.

Because it enables developers to get more done for less effort than C, with minimal performance penalties, better type safety/compile-time checking and fewer bugs (or at least, different bugs).
In particular, memory management and string handling are fairly key activities in a browser engine, and in C these are labour-intensive and therefore error-prone. (Not that C++ is some language of perfection, but it is better in this regard.)
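
As a toy illustration of the string-handling point (the join() helper is hypothetical, not from any library), even a trivial concatenation in C carries all of its bookkeeping by hand:

    #include <stdlib.h>
    #include <string.h>

    char *join(const char *a, const char *b)
    {
        size_t la = strlen(a), lb = strlen(b);
        char *out = malloc(la + lb + 1);  /* the +1 is easy to forget */
        if (!out)
            return NULL;                  /* so is the error path */
        memcpy(out, a, la);
        memcpy(out + la, b, lb + 1);      /* copies b's NUL terminator */
        return out;                       /* caller must remember free() */
    }

The C++ equivalent is just a + b on std::string; the length bookkeeping, allocation, error path, and cleanup all move into the type.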

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 22:05 UTC (Tue) by zlynx (guest, #2285) [Link] (8 responses)

Oh, FAR better!

Anyone who has ever seen shared_ptr implemented in pure C will run to C++ and grab it with open arms.

Sure it is done in Python and Perl runtime interpreters. And it sucks. Macros, macros everywhere, and so very easy to make mistakes. So very easy to forget an increment or decrement at some important point or use the wrong macro.

And GObject. Come on. GObject is an argument in favor of C++ if I've ever seen one!
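
For flavor, a bare-bones sketch of that macro style (the object type and macros here are made up, but they mirror what interpreter code bases do by hand):

    #include <stdlib.h>

    typedef struct object {
        long refcnt;
        /* ... payload ... */
    } object;

    #define OBJ_INCREF(o) ((o)->refcnt++)
    #define OBJ_DECREF(o)              \
        do {                           \
            if (--(o)->refcnt == 0)    \
                free(o);               \
        } while (0)

    object *object_new(void)
    {
        object *o = calloc(1, sizeof(*o));
        if (o)
            o->refcnt = 1;   /* the caller owns one reference */
        return o;
    }

    int main(void)
    {
        object *o = object_new();
        OBJ_INCREF(o);   /* hand a second reference to someone... */
        OBJ_DECREF(o);   /* ...who later releases it */
        OBJ_DECREF(o);   /* one DECREF too few is a leak; one too
                            many is a use-after-free */
        return 0;
    }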

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 9:50 UTC (Wed) by dgm (subscriber, #49227) [Link] (7 responses)

> shared_ptr implemented in pure C

I would be interested in seeing some example of this. Really. I'm tempted to say it's simply impossible, because C lacks destructors.

Unless... you mean plain old reference counting, which is rather trivial and easy to understand. Much easier than, for instance, the subtle differences between all the smart pointer templates in the STL. And you can add BOOST and/or Qt or Microsoft's for extra fun. Easy-peasy.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 18:26 UTC (Wed) by Trelane (subscriber, #56877) [Link]

The nearest thing I know of is the gcc/clang extension that adds a function to be automatically called when an automatic variable goes out of scope (the __attribute__((cleanup)) attribute).
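
A minimal sketch of that extension; the helper receives a pointer to the variable going out of scope:

    #include <stdio.h>
    #include <stdlib.h>

    static void free_ptr(char **p)
    {
        free(*p);   /* called automatically at end of scope */
    }

    int main(void)
    {
        /* buf is freed when main returns, destructor-style */
        __attribute__((cleanup(free_ptr))) char *buf = malloc(16);
        if (!buf)
            return 1;
        snprintf(buf, 16, "hello");
        printf("%s\n", buf);
        return 0;   /* free_ptr(&buf) runs here */
    }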

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 18:52 UTC (Wed) by zlynx (guest, #2285) [Link] (2 responses)

I meant plain old reference counting. I've seen it done a lot in script language interpreters. It's done manually, it's prone to mistakes and is buggy as hell.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 8:56 UTC (Fri) by quotemstr (subscriber, #45331) [Link] (1 responses)

> It's done manually, it's prone to mistakes and is buggy as hell.

I don't agree that manual reference counting is particularly hard. Practically the entire world does it, and it works fine.

I've read my share of interpreters. Reference counting isn't particularly hard, although you want to use tracing GC if you don't want your users ripping their hair out over cycles.

I've seen far more problems with shared_ptr.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 18:13 UTC (Fri) by zlynx (guest, #2285) [Link]

Ah then, tell me if adding a value to a Python list increments the ref count or not. What about if you add it to a list?

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 5:25 UTC (Thu) by jra (subscriber, #55261) [Link] (2 responses)

Look at the talloc library we use in Samba. It's written in C. It has destructors. Helps keep the Samba C code sane :-).
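
A condensed sketch of the talloc style (the destructor and strings here are illustrative, while talloc_new/talloc_strdup/talloc_set_destructor/talloc_free are the real API; build with -ltalloc):

    #include <stdio.h>
    #include <talloc.h>

    /* runs when the string is freed; returning 0 allows the free */
    static int on_free(char *s)
    {
        printf("destroying: %s\n", s);
        return 0;
    }

    int main(void)
    {
        void *ctx = talloc_new(NULL);           /* hierarchical context */
        char *s = talloc_strdup(ctx, "hello");  /* child of ctx */
        talloc_set_destructor(s, on_free);      /* a "destructor" in C */
        talloc_free(ctx);   /* frees ctx and all children, running
                               destructors along the way */
        return 0;
    }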

Poettering: Revisiting how we put together Linux systems

Posted Sep 23, 2014 13:05 UTC (Tue) by dgm (subscriber, #49227) [Link] (1 responses)

I'm looking at it right now, and the only thing I can think of is: Clever. Very clever.

Thanks for the pointer (pun intended).

Poettering: Revisiting how we put together Linux systems

Posted Sep 23, 2014 16:56 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link]

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 10:47 UTC (Wed) by jb.1234abcd (guest, #95827) [Link] (8 responses)

@ncm
"C++ exists and grows because it *uniquely* meets a real need."

@jwakely
"Maybe developers choose to use it. You know, the people who are actually doing the work not just complaining and criticising."

@mpr22
"An assortment of reasons, frequently involving one or more of the features that your tone suggests you think should be set on fire, indented six feet down, and covered with dirt."

@JGR

Why do you spread misinformation?
This LWN.net site has some ambition to become a source of good technical knowledge about Linux, UNIX, and Computer Science in general.

Now, as to the support for your claims:
http://imagebin.org/318679
C++ popularity as a language of choice has declined from 18% in 2004 to 4.7% as of today.
If anything, this is a disaster!

jb

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 11:29 UTC (Wed) by jb.1234abcd (guest, #95827) [Link] (4 responses)

This has happened despite an assault from the C++ Standards Committee and other apologists who for more than 10 years tried to cover C with dirt (C++, misinformation, and politics) in order to deny it improvements where needed and make it ready for the graveyard.

Well, the market has spoken, thanks to a minority of people in the know.

Now be nice, and please, last one out turn off the light -:)

jb

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 23:22 UTC (Wed) by nix (subscriber, #2304) [Link] (3 responses)

Simple question: do you actually *know* anyone on the C++ committee? These are not particularly political animals, and many of them are extremely knowledgeable about C as well: a good few of them use it often. The likelihood of such people trying to "cover C with dirt", at least unjustified dirt, is... low. They know perfectly well that C has its place, as does C++.

(Given that you clearly have no idea even how long the committee has been in existence -- hint, it's more than twice as long as you suggested -- the likelihood of this seems low.)

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 10:14 UTC (Thu) by jb.1234abcd (guest, #95827) [Link] (2 responses)

Why don't you educate yourself a bit in the facts that are available in the public domain, but obviously require some mental effort to find? Otherwise you will not understand what people are talking about and will react like a cat whose tail was stepped on.

Firstly, the C++ Standards Committee is a technical, but also a political body. You should understand the origin of the term "designed by committee".

Secondly, you have to understand what C++ is, and its history. C++ was built on C; Stroustrup originally called it "C with Classes". What this means is that the majority of C became a "subset" and a hostage of C++. So it is clear that C++, through its governing body the C++ Standards Committee, suffers from a split personality disorder: letting C evolve would shake the C++ boat. It would create C and C++ incompatibilities (C99, anybody?) that are not desired. This works both ways.

Thirdly, there is an interesting inverse relationship between the expansion of the semantics and syntax of C++ (C++11, soon C++14), called "featurism" by some, and the rapid decline in C++ acceptance shown on the chart I quoted. The OOP part of "a new paradigm" contributed to it as well.
According to Stroustrup, there is another language trying to emerge from C++. The question is: with or without the C "subset" held hostage by C++?

jb

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 14:29 UTC (Thu) by nix (subscriber, #2304) [Link]

OK, so it's pretty clear that you don't know anyone who attends committee meetings, and thus you're calumniating people when you have no idea who they even are. (Hint: some of them read LWN.)

Parroting bits of D&E at me would be more impressive if there were any sign you'd understood it -- Stroustrup doesn't exactly display any signs there of wanting to cover any parts of C with dirt (other than decrying the use of the C preprocessor in C++ programs, which is pretty justified, I'd say).

btw, C *has* evolved since C++ was created: you even mention one example. Nobody much likes having the languages drift into incompatibility, but not because of some nefarious plot on the part of either committee: rather because nobody wants 'extern "C"' and link-compatibility to break.

If the C++ committee wanted to cover C with dirt, would the two committees really have spent so much time and effort making sure their newly-formalized memory models were to some degree compatible? And yes, though C11 did incorporate the model from C++11 rather than the other way round there was definitely attention paid on the part of the people defining the C++11 memory model to make sure they weren't specifying something that made no sense for C.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 14:03 UTC (Fri) by jwakely (subscriber, #60262) [Link]

Erm, C has its own standards committee, WG14, who are not held hostage by the C++ committee and do their own thing. The fact that WG14 don't make many changes to the language nowadays is nothing to do with the C++ committee.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 11:32 UTC (Wed) by niner (subscriber, #26151) [Link] (1 responses)

Because some graph from an unknown origin, made from an unknown source of data with an unknown methodology showing that _all_ languages except Objective-C are on a decline on an undefined "Ratings" axis really makes an argument...

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 11:59 UTC (Wed) by jb.1234abcd (guest, #95827) [Link]

OK, I understand you want to be in the "in the know" group ...
http://www.tiobe.com/index.php/content/paperinfo/tpci/ind...

Now please spread the knowledge instead of misinformation.

jb

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 7:04 UTC (Thu) by Wol (subscriber, #4433) [Link]

> C++ popularity as a language of choice has declined from 18% in 2004 to 4.7% as of today.
> If anything, this is a disaster!

Is that because of all the new web script kiddies that have appeared? Really, I don't know. But a shrinkage in %age can easily hide a rise in absolute numbers. And if the target audience hasn't grown, then those stats are lying.

"Statistics tell you how to get from A to B. What they don't tell you is that you're all at C."

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 18:38 UTC (Tue) by daniels (subscriber, #16193) [Link]

No matter; since all these problems are already solved for you much more elegantly, and they're the obvious choice of those who _really_ know UNIX, they'll surely have all these features any day now as developers come flocking.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 21:33 UTC (Mon) by JGR (subscriber, #93631) [Link] (4 responses)

> If there was a simple way to port a modern web browser to Plan9, I'd seriously consider switching.
This frankly does not encourage me to look to Plan9 for a solution.

> Around 2007, the Linux desktop was just about sweet, and poised to start taking on MacOS and Windows.
Personally I suspect that this is somewhat optimistic.

> In the last 7 years, things haven't gotten better, we've just experienced a lot of code churn in the Linux world.
All that "code churn" has resulted in better functionality and usability for end users. This is more important that paying homage to the "Unix Way".

It's not 1980 any more, and if some other "Way" turns out to be better for modern systems/requirements than the "Unix Way" of old, then so be it. This implies some new ideas, experimentation and pushing of boundaries, rather than just sticking with whatever was cool 30 years ago.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 22:18 UTC (Mon) by sramkrishna (subscriber, #72628) [Link]

> It's not 1980 any more, and if some other "Way" turns out to be better for modern systems/requirements than the "Unix Way" of old, then so be it. This implies some new ideas, experimentation and pushing of boundaries, rather than just sticking with whatever was cool 30 years ago.

Blind allegiance to the Unix Way is an injustice to.. the Unix Way.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 1:25 UTC (Tue) by efitton (guest, #93063) [Link] (1 responses)

Some honestly and sincerely feel like there is less usability for end users now than 7 years ago. I certainly fall in that camp. Only now as a reaction to KDE and Gnome is there again a move to a full featured desktop that isn't experimental design.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 6:36 UTC (Tue) by eru (subscriber, #2753) [Link]

Some honestly and sincerely feel like there is less usability for end users now than 7 years ago. I certainly fall in that camp.

Me too. I have sadly found that the last Mandrivas to ship with KDE 3.* (2008 or thereabouts) were probably the best desktop distributions, ever. Featureful, the software was well integrated, easy to administer, yet lightweight enough to run satisfactorily on a Pentium-M Thinkpad.

I guess one problem is the people developing the desktops want to have fun and an interesting time doing it, therefore change things. But for end users the desktop (and the OS in general) is a "necessary evil". What they really are interested in are the applications. The desktop system is nothing but a way to manage them, and arbitrate screen space and other "desktop peripherals" (which may include removable disks, speakers, cameras or USB sticks). Otherwise it should stay out of the way.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 14:12 UTC (Wed) by jb.1234abcd (guest, #95827) [Link]

> > Around 2007, the Linux desktop was just about sweet, and poised to start taking on MacOS and Windows.
> Personally I suspect that this is somewhat optimistic.

I assume you were around at the time of the introduction of Gnome 3?
There was a downloads counter on Fedora's site. It showed their default Gnome 3 desktop edition in a distant place, after KDE and XFCE.
Btw, the current counter does not show actual download numbers and does not include Gnome 3, just the spins. Oh well, if we do not like the message, let's kill the messenger ...

It is judged that half of former Gnome 2 users switched to other spins (KDE, XFCE, even LXDE, and others), most never to return.
The other effect was that the officially unsupported Gnome 2 was resurrected, which tells us a lot.

This is what happens when the "new wave" of devs in the Linux OS ecosystem think that users are just a bunch of exorcists.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 9:47 UTC (Tue) by k3ninho (subscriber, #50375) [Link]

I disagree that it's being invented badly, I mean, Git is Venti applied to version control, so the features of Plan 9 from Bell Labs will appear evolutionarily over time in other spaces. The problem space and the computer science constraints which shape the solutions haven't changed, so the solutions will look similar -- and I think it's great that you're championing a helpful approach like Brenda brought us. I've already said that cloudy stuff, containers and Docker probably could learn from Plan 9, with the very words that 'those that don't understand Plan 9 are doomed to reinvent it, poorly'.

In the details, the idea that [storage] is a file and [program] is a server, that's a clear sense of what Unix wanted anyway. We probably want to go from Bell Labs into Outer Space next, with distances and comms times factored into scheduling the work that [program] has to do.

Now that I think about it, why dedup and btrfs-send/recv when there exists Git Annex or Venti? Particularly when you can use the hash of the library's interfaces, graphics and sounds to find it within the git or Venti storage.

K3n.

Poettering: Revisiting the fragmentation

Posted Sep 1, 2014 23:32 UTC (Mon) by bojan (subscriber, #14302) [Link] (13 responses)

This is a little bit like the current desktop situation: total fragmentation.

According to this plan, in order to be able to run things properly, I will have to have a number of copies of pretty much the same thing loaded into memory, doing pretty much the same thing, all at once. Great.

And why? Because, after over a decade, we cannot agree on some basic stuff.

Communications breakdown indeed.

Poettering: Revisiting the fragmentation

Posted Sep 2, 2014 0:28 UTC (Tue) by mezcalero (subscriber, #45103) [Link] (12 responses)

Not entirely correct. Same files will be de-dupped by btrfs. And not only on disk, but also when they are mapped into RAM. That said, if you have the same library version, but possibly compiled by a different vendor, you might get different files, so they will not be shared... This is not too different from having the same shared lib around with two different soversions....

Poettering: Revisiting the fragmentation

Posted Sep 2, 2014 1:17 UTC (Tue) by bojan (subscriber, #14302) [Link]

Possibly different files? I'll bet $10 that glibc on Fedora and Debian do not have the same checksum. Ditto pretty much any other lib.

Poettering: Revisiting the fragmentation

Posted Sep 2, 2014 8:59 UTC (Tue) by oldtomas (guest, #72579) [Link] (9 responses)

> Same files will be de-dupped by btrfs.

So "lib compatibility" for an app means hash-equality of libs. Thanks for making that clear. Now I know this whole thing ain't for me.

I still cling to the old dream that an app has the responsibility of working with a whole range of environments (file system layout, minor variances in lib versions, etc.)

I don't care about the lazy app developers whose bloated monsters stop working because some file ain't in /etc/foo or because $libbar went from 1.2.7.15 to 1.2.7.16. I don't want to cater to that -- not on the boxes I'm responsible for.

Poettering: Revisiting the fragmentation

Posted Sep 2, 2014 16:00 UTC (Tue) by dskoll (subscriber, #1630) [Link] (3 responses)

+1

We work very hard to ensure our product runs on any Linux distro out there, as well as FreeBSD and pretty much any UNIX-like system. That's what proper programmers do.

Poettering: Revisiting the fragmentation

Posted Sep 3, 2014 2:57 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (2 responses)

> That's what proper programmers do.

Depends on the layer you're targeting. If you're writing a Linux driver, caring about FreeBSD doesn't make much sense.

Poettering: Revisiting the fragmentation

Posted Sep 3, 2014 6:58 UTC (Wed) by oldtomas (guest, #72579) [Link]

> Depends on the layer you're targeting. If you're writing a Linux driver, caring about FreeBSD doesn't make much sense.


I beg to differ: programming is abstraction. It always pays off to (a) make as few assumptions as makes sense and (b) make as many of those assumptions explicit as you ever can.

It not only makes other's lives easier (the poor FreeBSD gal/guy willing to use said device will thank you if she can rip off parts of your code, which for in-house use would be perfectly OK), but it ends up making the code clearer, more readable, and in the long term healthier.

Now I'd grant you that writing a kernel driver might put the tradeoff at another point than writing (say) an SMTP daemon, but "doesn't make much sense" seems to me too strong a claim for any case.

It's more work (as dskoll put it), but it's definitely worth it. And every time I see code like that, I thank the likes of dskoll.

Poettering: Revisiting the fragmentation

Posted Sep 3, 2014 15:44 UTC (Wed) by dskoll (subscriber, #1630) [Link]

Well, yes. :) I'm referring to applications, not kernel or driver programming.

Poettering: Revisiting the fragmentation

Posted Sep 3, 2014 22:31 UTC (Wed) by nix (subscriber, #2304) [Link] (4 responses)

> I still cling to the old dream that an app has the responsibility of working with a whole range of environments (file system layout, minor variances in lib versions, etc.)

It would be nice... but in practice this means an exponential explosion of test environments, and what it really means is that your personal environment has never been tested by anyone but you, ever. Which means you get your own personal bugs. Now, I like this -- it means I get to help fix those bugs, and improve the quality of the software for everyone -- but for end users? Not so good.

Poettering: Revisiting the fragmentation

Posted Sep 4, 2014 7:04 UTC (Thu) by oldtomas (guest, #72579) [Link] (3 responses)

> but in practice this means an exponential explosion of test environments [...]

I think this is a very valid concern. Still, I think it's worth to take a step back and look at it from some distance: Tests, after all, are just a last line of defense. To keep software correct (or "as correct as possible"), we need first and foremost good interfaces. Meaning simple, understandable, well-designed. Small is paramount here -- you can't fulfill a contract you don't understand (and bad examples abound!).

By all means, test -- but first you gotta get a feeling that your software is doing the right thing.

Poettering: Revisiting the fragmentation

Posted Sep 4, 2014 14:15 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

Agreed -- but given how hard it is to design good interfaces (much harder than writing the actual software, IMNSHO) and how often they churn... do you think we're anywhere near your utopia yet? Not really...

Poettering: Revisiting the fragmentation

Posted Sep 5, 2014 6:35 UTC (Fri) by oldtomas (guest, #72579) [Link] (1 responses)

> do you think we're anywhere near your utopia yet? Not really...

Strongly agree: not yet, and by a far stretch.

But utopia is a place to "move towards" and not to "be in", anyway. So watch me making uncomfortable noises whenever I think the direction is wrong.

And yes, designing a good interface is definitely the hard part. But it's rewarding. And we as a profession should insist on getting that reward :-)

Poettering: Revisiting the fragmentation

Posted Sep 5, 2014 16:17 UTC (Fri) by raven667 (subscriber, #5198) [Link]

It's hard to insist on anything when there is no barrier to entry for writing library code: anyone can write what they want, and anyone can use it regardless of its ABI discipline. If enough people use it then there is a lot of pressure to package it up for the major distros, and the distros have shown only a limited amount of pushback in enforcing quality standards on upstreams.

Poettering: Revisiting the fragmentation

Posted Sep 2, 2014 21:35 UTC (Tue) by martin.langhoff (subscriber, #61417) [Link]

@mezcalero -- is btrfs de-duping post-facto, as a netapp filer does? I have never seen that feature as an integrated thing! Can you shed more light on that track?

Is this up to date? https://btrfs.wiki.kernel.org/index.php/Deduplication -- it seems fairly limited. Yes, you can run "hardlink"-style programs telling btrfs that files are dupes instead of hardlinking them. However that does not scale very well at all: (a) to get savings across VMs/containers you need to see "everything", and (b) "everything" in a large system is far too many files to use this strategy.

Netapp filers have a fast-and-small hash for each block, computed and saved at write time, and use those to get a hint of dedupe candidates. This solves the issue of finding dedupe candidates across large volumes, without having a "user land" that "can see everything". The cost is a ~7% slowdown in writes...
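
(For reference, the way those "hardlink-style" tools tell btrfs about dupes is a dedupe ioctl. A minimal sketch using FIDEDUPERANGE, the current name of the btrfs extent-same ioctl, with error handling trimmed and assuming the whole file fits one call:)

    #include <fcntl.h>
    #include <linux/fs.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s SRC DEST\n", argv[0]);
            return 1;
        }
        int src = open(argv[1], O_RDONLY);
        int dst = open(argv[2], O_RDWR);
        struct stat st;
        fstat(src, &st);

        /* one destination range appended after the header struct */
        struct file_dedupe_range *r =
            calloc(1, sizeof(*r) + sizeof(struct file_dedupe_range_info));
        r->src_offset = 0;
        r->src_length = st.st_size;
        r->dest_count = 1;
        r->info[0].dest_fd = dst;
        r->info[0].dest_offset = 0;

        /* the kernel compares the bytes itself before sharing the
         * extents, so a bad userspace hint cannot corrupt data */
        if (ioctl(src, FIDEDUPERANGE, r) < 0) {
            perror("FIDEDUPERANGE");
            return 1;
        }
        if (r->info[0].status == FILE_DEDUPE_RANGE_SAME)
            printf("deduped %llu bytes\n",
                   (unsigned long long)r->info[0].bytes_deduped);
        return 0;
    }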

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 2:42 UTC (Tue) by ofranja (guest, #11084) [Link] (4 responses)

I fail to see how using a systemd+btrfs-centric approach is giving any real advantage to anyone.

It seems to me that everything outlined in this article can already be done by using bind mounts (w/some specialized filesystem hierarchy), LVM (for snapshots) and namespaces/chroot. No hard dependency on any init system or any specific filesystem feature seems required. In fact, I wonder if the current systemd interfaces are not responsible for making it inherently harder to do.
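
(A rough sketch of that approach, assuming a runtime tree unpacked under the hypothetical /srv/runtimes path and root privileges:)

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/mount.h>
    #include <unistd.h>

    int main(void)
    {
        /* new mount namespace: mounts below stay invisible outside */
        if (unshare(CLONE_NEWNS) < 0) {
            perror("unshare");
            return 1;
        }
        /* keep mount events from propagating back to the parent */
        if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) < 0) {
            perror("make / private");
            return 1;
        }
        /* bind an alternative runtime over /usr, then remount it
         * read-only (bind mounts need a second pass for RO) */
        if (mount("/srv/runtimes/distro-x/usr", "/usr", NULL,
                  MS_BIND, NULL) < 0) {
            perror("bind /usr");
            return 1;
        }
        if (mount(NULL, "/usr", NULL,
                  MS_REMOUNT | MS_BIND | MS_RDONLY, NULL) < 0) {
            perror("remount ro");
            return 1;
        }
        execl("/bin/sh", "sh", (char *)NULL); /* shell sees the new /usr */
        perror("execl");
        return 1;
    }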

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 5:36 UTC (Tue) by kloczek (guest, #6391) [Link] (1 responses)

Did you try to use LVM with more than one snapshot?
Each snapshot adds its own overhead and slows down everything.
LVM is, in its fundamentals, almost two decades old technology.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 17:56 UTC (Wed) by ofranja (guest, #11084) [Link]

Maybe I should have been more specific and say "dm-multisnap".

Well, the Linux kernel is much more than two decades old technology, but here we are. :)

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 0:21 UTC (Wed) by kreijack (guest, #43513) [Link] (1 responses)

> It seems to me that everything outlined in this article can be already
> done by using bind mounts (w/some specialized filesystem hierarchy), LVM
> (for snapshots) and namespaces/chroot.

The snapshots have a totally different scope: they are needed to take an *atomic* photo of a filesystem.
For what Lennart needs, it is simpler to hardlink the common files during the "package" installation: you need a database of hashes and paths, and when two files share a hash you create a hard link instead of a copy of the file. This should work because these trees are RO.
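
(A toy sketch of that scheme; FNV-1a and a fixed-size in-memory table stand in for a real content hash and database, and a real tool would compare the bytes before linking and link via a temporary name:)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define MAX_FILES 4096

    static struct { uint64_t hash; char path[512]; } db[MAX_FILES];
    static int db_len;

    static uint64_t hash_file(const char *path)
    {
        uint64_t h = 1469598103934665603ULL;   /* FNV-1a offset basis */
        FILE *f = fopen(path, "rb");
        int c;
        if (!f)
            return 0;
        while ((c = fgetc(f)) != EOF)
            h = (h ^ (uint8_t)c) * 1099511628211ULL;
        fclose(f);
        return h;
    }

    static void dedupe_one(const char *path)
    {
        uint64_t h = hash_file(path);
        for (int i = 0; i < db_len; i++) {
            if (db[i].hash == h) {     /* seen before: link, don't copy */
                unlink(path);
                link(db[i].path, path);
                return;
            }
        }
        if (db_len < MAX_FILES) {      /* first occurrence: record it */
            db[db_len].hash = h;
            snprintf(db[db_len].path, sizeof db[db_len].path, "%s", path);
            db_len++;
        }
    }

    int main(int argc, char **argv)
    {
        for (int i = 1; i < argc; i++)
            dedupe_one(argv[i]);
        return 0;
    }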

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 18:09 UTC (Wed) by ofranja (guest, #11084) [Link]

That's what snapshots are for, atomic photos of the system.

The specific technology used is actually not that important, as long as it stays as a different layer - if you are creative enough, you could even use union mounts for that.

My point is: there are mechanisms to implement snapshots which do not create any dependency on a specific filesystem feature.

You might not agree that this is important - but for me, it's a must.

This is what Lennart consistently Misses:

Posted Sep 2, 2014 2:51 UTC (Tue) by roskegg (subscriber, #105) [Link] (4 responses)

Quote from http://harmful.eugenics-research.org/software/dynamic_lin...

in the late 70s, Edsger Dijkstra and Tony Hoare advocated the
"humble-programmer" philosophy, which says that humans tend to
overestimate their ability to handle complexity in software and
consequently one should strive (in addition to one's other objectives)
to minimize the complexity (measured in lines of code) of the software
one relies on. often, this is achieved by finding a novel way of
viewing or conceptualizing the problem (like per-process namespaces).
they pointed out the programmer who can meet a set of requirements with
fewer lines of code is the better programmer because a smaller program
will usually be easier for the user to control, more likely to behave
the way the programmer thinks it will behave and easier for future
programmers to modify to do something the original programmer did not
provide for.

in summary, a "humbly-written" program will not unnecessarily waste the
time and the attention of the programmer trying to modify it or the user
trying to control it.

End of Quote

Lennart and his crowd aren't humble. And it is damaging the whole Free Software movement, and will continue to do so until we route around the damage.

This is what Lennart consistently Misses:

Posted Sep 2, 2014 17:42 UTC (Tue) by flussence (guest, #85566) [Link] (1 responses)

> Lennart and his crowd aren't humble. And it is damaging the whole Free Software movement, and will continue to until we route around the damage.

Allow me to respond to your biblical quotation tirade with another from the same mythology:

"Patches welcome."

This is what Lennart consistently Misses:

Posted Sep 3, 2014 23:27 UTC (Wed) by nix (subscriber, #2304) [Link]

There are patches to make people humble?!

Awesome! Can I have one? I think I may need it. (The mind-patching technology would be very useful too.)

This is what Lennart consistently Misses:

Posted Sep 2, 2014 18:42 UTC (Tue) by daniels (subscriber, #16193) [Link] (1 responses)

Random internet commenters who claim to speak on behalf of the whole free software movement are destroying the free software movement.

This is what Lennart consistently Misses:

Posted Sep 4, 2014 16:43 UTC (Thu) by luzy (subscriber, #90204) [Link]

Who is claiming to speak for the whole Free Software movement?

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 2:55 UTC (Tue) by BradReed (subscriber, #5917) [Link] (1 responses)

I don't think I understand this well at all, as it seems to require a massive developer buy-in, or a massive distro buy-in to make things work.

Say some game company writes a new game they distribute via Steam or HumbleBundle. How is this made into a "runtime?" Who keeps it updated?

If Kovid Goyal releases a new version of Calibre every week, who makes the "runtime?"

I personally don't see the problem this runtime-based system is "fixing."

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 3:32 UTC (Tue) by raven667 (subscriber, #5198) [Link]

In the case of Steam the runtime is probably Ubuntu, whereas for Calibre maybe the runtime is also Ubuntu, or maybe it is Fedora because that supports some newer infrastructure that Calibre uses, different from what Steam tests against. The problem this is fixing is giving you the flexibility to have shared per-application user spaces, so you are able to mix and match, running bleeding-edge dependencies for one application without forcing upgrades or breaking your other applications, and without the overhead of VMs.

OMG are they going to blow it up?

Posted Sep 2, 2014 5:32 UTC (Tue) by kloczek (guest, #6391) [Link]

"The classic Linux distribution scheme is frequently not what end users want, either. Many users are used to app markets like Android, Windows or iOS/Mac have. Markets are a platform that doesn't package, build or maintain software like distributions do, but simply allows users to quickly find and download the software they need, with the app vendor responsible for keeping the app updated, secured, and all that on the vendor's release cycle. Users tend to be impatient. They want their software quickly, and the fine distinction between trusting a single distribution or a myriad of app developers individually is usually not important for them."

Bollocks. The main problem is not doing something "quickly" but handling any install and upgrade issue so as not to leave installed or half-installed resources in an unknown state.
BTW: these guys should really have a look at Solaris IPS, which has been around for more than 5 years. Incredible, but it seems none of these "inventors" looked at what was done up to now to solve similar issues.
Again the NIH Linux syndrome .. what a shame :->

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 7:53 UTC (Tue) by suckfish (guest, #69919) [Link] (2 responses)

One downside of any attempt to impose how subvolumes are used on people's systems is that it will immediately break people who want to use subvols for administering their own systems.

I rely heavily on maintaining various parts of my systems in subvols that are regularly cloned & manipulated in various ways (chroots/containerisation, backups, major upgrades). I believe I am far from unique here.

This relies on me being able to decide what constitutes a subvolume. As soon as a different idea gets imposed on my system (e.g., fragmenting my OS install into multiple subvols) my ability to use subvols to manage my system will be impeded (e.g., I could no longer create a standalone environment just by "btrfs subvol clone"ing my OS).

OTOH a stow-like system (remember that?) of isolating packages into their own directories sounds great to me, not least because I can choose to put a package into its own subvol if I wish.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 18:49 UTC (Tue) by mezcalero (subscriber, #45103) [Link] (1 responses)

Sub-volumes carry flexible names. You can always pick names for your private sub-volumes that don't clash with the naming scheme we impose, and you should be good.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 6:37 UTC (Wed) by suckfish (guest, #69919) [Link]

No, naming is not the issue I was talking about.

If upstream packaging imposes a decision on how the filesystem is split into subvolumes, then the sysadmin no longer gets to choose.

This makes it much harder for the sysadmin to use subvolumes to maintain their system.

The bottom line is sysadmins want to choose what 'btrfs subvol clone ...' clones, not have that choice imposed by upstream.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 11:13 UTC (Tue) by helge.bahmann (subscriber, #56804) [Link] (3 responses)

And how will integration work out? You download something in chrome, which is inside its compartment, and expect "show download folder" to delegate to the system file viewer, or launch the "correct" libreoffice instance (out of the possibly multiple ones in different compartments)? Interface to print dialogue/network manager/desktop notification mechanism?

Standardizing containers for runtimes to catch odd-ball applications is one thing, declaring it the primary paradigm for application distribution is another thing. There is no consideration of integration (which is the really hard problem), only of isolation (which is the trivial problem).

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 18:48 UTC (Tue) by mezcalero (subscriber, #45103) [Link] (2 responses)

Well, there are two options for sandboxed apps: grant full access to specific dirs in $HOME, or do so only indirectly via "portals", where an app simply invokes some bus call to generically tell the system that something is supposed to be done, and the system then figures out a way to do it, involving the user, so that the apps can't do bad stuff...

Chrome in your example would probably get full access to the XDG userdir download directory. It would be mounted into the app's sandbox at the exact same place it appears externally, so chrome wouldn't have to care...
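
(To make the "portals" idea concrete, the bus plumbing might look like the following sketch; the portal service, path, and method names are invented for illustration, only the libdbus calls are real. Build with pkg-config --cflags --libs dbus-1:)

    #include <dbus/dbus.h>
    #include <stdio.h>

    int main(void)
    {
        DBusError err;
        dbus_error_init(&err);

        DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
        if (!conn) {
            fprintf(stderr, "bus: %s\n", err.message);
            return 1;
        }

        /* hypothetical portal service: "open a file for me" */
        DBusMessage *msg = dbus_message_new_method_call(
            "org.example.Portal", "/org/example/Portal",
            "org.example.Portal", "OpenFile");

        /* the system side shows the chooser to the user and replies */
        DBusMessage *reply = dbus_connection_send_with_reply_and_block(
            conn, msg, 30000 /* ms; the user has to click */, &err);
        dbus_message_unref(msg);
        if (!reply) {
            fprintf(stderr, "call: %s\n", err.message);
            return 1;
        }

        const char *path = NULL;  /* path the user granted access to */
        dbus_message_get_args(reply, &err, DBUS_TYPE_STRING, &path,
                              DBUS_TYPE_INVALID);
        printf("user picked: %s\n", path ? path : "(nothing)");
        dbus_message_unref(reply);
        return 0;
    }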

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 23:32 UTC (Wed) by nix (subscriber, #2304) [Link]

Of course, the problem with Chrome is less its userdir and more its hundreds of megabytes of version-dependent, non-downgradeable configuration state. (But then, this proposal isn't about version migration, only dependency management -- one assumes that all the Chromes you were choosing between would be the same version and would in some way conflict with all later versions, such that installing even one later container upgrades all the others... which is suddenly looking very like a package manager again.)

Poettering: Revisiting how we put together Linux systems

Posted Sep 7, 2014 18:19 UTC (Sun) by helge.bahmann (subscriber, #56804) [Link]

The problem is less about sharing of storage than about sharing of behavior that is in turn dependent on common code -- see the example about maintaining a consistent print dialog throughout the system. Full sandboxing makes this problem even worse, and nothing in the proposal even acknowledges this (let alone hints at an intended direction for a solution).

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 14:08 UTC (Tue) by mst@redhat.com (subscriber, #60682) [Link]

It would be interesting to see a comparison with Android's model.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 16:21 UTC (Tue) by bokr (guest, #58369) [Link]

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 18:51 UTC (Tue) by kim (subscriber, #73716) [Link] (1 responses)

From a technical standpoint this seems fine (or at least worth trying). I also agree that the problems that distros suffer from (as listed by Lennart) are real, but I see one "social" downside (with technical consequences) and one big problem with the proposed approach (and one reality check).

1) Requiring btrfs for the "runtime" is fine, but if say Gnome or KDE devs switch their development model to "we provide binaries and the runtime and that's it", then I can foresee developers saying... "well, since btrfs (or any other dependency) is required for the runtime and we are the ones releasing the runtime, then we can make use of btrfs features in our binaries as well (i.e. in gnome, kde, libreoffice) directly". All that maybe with the attitude of "hey, if you want OUR program to run on YOUR system, then YOU have to provide a shim or an emulation layer for the btrfs features that are missing".

2) If again say KDE or Gnome or Libreoffice is only released by means of a binary runtime (and of course source code that, with time, will not make any effort to compile with/for anything else than the runtime), then I find it doubtful that distros will be able to package gnome/kde/libreoffice the old-fashioned way. They will just bundle a huge "runtime_libreoffice.deb" for instance with the whole image inside, thus defeating their purpose as a package distribution.

So when Lennart says that distros serve a certain use case, that's all fine, but his solution de facto means that distros will not be able to provide runtime-based apps (unless they do so in a clumsy way).

3) I don't believe for one second that KDE/GNOME/Libreoffice can commit to providing N and N-1 and LTS security support for their runtime. Debian can do it because of the sheer number of Debian volunteers (seriously, thank you guys and girls). Red Hat and Canonical can do it because they have a business model and can somehow pay for it (and Canonical also benefits from the work of Debian in that area).
Upstream projects are notoriously understaffed and underfunded.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 9:59 UTC (Wed) by HelloWorld (guest, #56129) [Link]

How do you know Debian can *actually* pull this off? I think that Debian suffers from a developer shortage just as much as any other free software project, which is also why Ian Jackson's anti-systemd ranting was so utterly misguided: whenever somebody pointed out something that systemd can do that others can't, his response was that Debian could reimplement it themselves. No you can't, don't even try. And I certainly *don't* trust Debian to backport all relevant fixes to the outdated versions of the software they ship, also because bugs are often fixed without the relevant developer even realizing that it's a security issue.

I think that the only way to do this properly is to get upstream involved, and if Lennart's proposal achieves that, I'm all for it.

Poettering: Revisiting how we put together Linux systems

Posted Sep 2, 2014 22:35 UTC (Tue) by martin.langhoff (subscriber, #61417) [Link] (12 responses)

So this has several points of contact with OSX's "DMG" files for application distribution. Package an app with its libraries in a "disk image", apply a magic action that makes that disk image available. It is also similar in many points to Klik and ZeroInstall -- the application bundle rules.

Of course, with btrfs you get some pixie dust (dedupe, "mountpoints" without a disk partition) that makes it work well; but the underlying problems persist...

* bundled libraries and their (unpredictable level of) maintenance
* infrastructure and ABI changes in the underlying OS
* trust -- instead of trusting one distro team, I have to trust N number of teams dealing with bundling, security and distribution

It is the first move from the "systemd cabal" that leaves me scratching my head. Everything else so far has been IMO very well defined.

As an alternative view into a related problem-space...

Back at Sugar/OLPC we ended up building our own "bundles" for sugar apps (.xo packages). Our goal was to install sandboxed apps in the users' homedirs, not requiring root or sudo access.

The main alternative at the time, in my view, was to teach the yum/rpm toolchain to make truly "relocatable" packages, which could be installed under an arbitrary prefix. So you could install rpms in your homedir, without system-wide privileges.

This would allow you to install a base OS, then say "yum install --relocateprefix /foo/postgres9.2" and have all the dependencies under /foo/postgres9.2 . It would combine very well with Copr (PPAs in Debian/Ubuntu) to install experimental versions of an app, toolchain or desktop; and it would not be hard to imagine it being "yum install --relocateprefix /foo/pg9.2 --makeitasnapshot"; at which point we get to Lennart's dream scenario with a toolchain improvement that has many valuable uses.

A similar thing would be doable with the dpkg toolchain. It will take a long list of fixups (path, ld, a secondary yum/apt db, etc...), but there are no mysteries to solve, it is essentially a SMoP.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 14:36 UTC (Wed) by mjthayer (guest, #39183) [Link]

I think that some of the underlying problems with application bundles would go away if they were more widely used (by popular FLOSS projects I mean, not just by commercial ones). On the one hand people would gain more experience with creating them. A few examples:

* Experience of which libraries link well statically and which not. E.g. glibc vs uclibc.
* Experience of what ABIs one can depend on on a random system. E.g. the Linux kernel system call interface, the glibc dynamic interface (as long as one knows a few tricks).
* Avoiding statically linking to high-frequency update libraries. E.g. piping and shelling out to openssl(1) rather than linking in the library.

On the other hand I can also imagine popular hosting services adding build services which would improve the security problem. A developer who did not have the resources to follow all security updates could just let the service re-build and re-package the software whenever there was a security update to a bundled library, and they could use a standard (statically linked) library to check for and download updates at the hosting service on software start-up.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 15:17 UTC (Wed) by raven667 (subscriber, #5198) [Link] (10 responses)

> It will take a long list of fixups (path, ld, a secondary yum/apt db, etc...), but there are no mysteries to solve, it is essentially a SMoP.

I think the idea is that with this proposal, by using mounts and mount namespaces, you don't have to retro-fit the whole world to use some new lookup scheme to find what they want on the filesystem, every service sees a standard and consistent filesystem that just works as it always has.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 16:02 UTC (Wed) by martin.langhoff (subscriber, #61417) [Link] (9 responses)

That is clear. The downside, OTOH, is in library maintenance and the implied trust. Some app bundles are well maintained, others are not so well maintained.

Am I going to trust OpenSSL libs bundled in a dozen apps? When a significant bug is found, I will be depending on the responsiveness and expertise of a dozen teams -- some will patch/update early and correctly, some will mess it up, some teams will be dormant and never get updated.

This is not just theory; it happens on OSX today. I have had OSX as a secondary desktop env for ~10 years now, alongside Debian/Ubuntu/Fedora desktops.

On these mainstream Linux OSs we have a fantastic thing: even old, lightly-maintained applications get active care from packagers working as a team (with some level of consistency). Apps get updates to their libraries, and the packagers are knowledgeable enough to sort out minor compatibility issues, or to bring them to upstream's attention with good diagnostics in hand.

"Fat" app bundles forgo all that. It is a gigantic loss.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 16:13 UTC (Wed) by raven667 (subscriber, #5198) [Link] (8 responses)

This all depends on where the lines are drawn for what is included in the standard /usr filesystems that apps use and what needs to be bundled with the apps. Part of the purpose of this whole thing is to _reduce_ the desire to bundle libraries with apps by making the common frameworks include a maximal /usr: apps specify which /usr they want, and the provider of that /usr (likely one of the existing distros) processes security updates much the same way they do now. If people choose to bundle common libraries like openssl with their apps, then they have failed utterly to take advantage of this proposal.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 18:13 UTC (Wed) by HelloWorld (guest, #56129) [Link] (7 responses)

But a maximal /usr is *not* what users want. You either ship too much, resulting in wasted space (not what you want on an embedded device), or you ship too little, resulting in bundled libraries and the associated problems. This whole thing doesn't solve any real problem, and that's no surprise, given that the underlying problem is basically unsolvable: you want bugfixes for the libraries, but no regressions or incompatibilities. Given that there's no way to tell them apart, you're hosed.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 19:02 UTC (Wed) by raven667 (subscriber, #5198) [Link]

> But a maximal /usr is *not* what users want. You either ship too much, resulting in waste of space (not what you want on an embedded device), or you ship too little, resulting in bundled libraries and the associated problems.

That remains to be seen; there is room in this scheme for a whole marketplace of different pre-baked /usr filesystems with as much or as little included, and how popular those runtimes are can help determine where the sweet spot is. There will surely be many different runtimes for different workloads under this scheme, very minimal ones for embedded and maximal ones for desktops, which can be built using the same tools that distros use now for yum grouplists and such.

There is definitely a group of people, though, who have a visceral reaction against having anything they are not actively using on their system; they will be resistant to a generic /usr because of the amount of stuff that must go into it. I figure a generic desktop can afford to spend maybe 10G on system libraries and apps, given that most desktops have at least a 128G SSD; that is enough for two, maybe three, maximal /usr filesystems. For a generic server maybe 1-2G and a single /usr is appropriate, although for a Docker-like container server it may be appropriate to have 50G or more, with dozens of /usr filesystems for each distro and server framework, much like AWS images.

> This whole thing doesn't solve any real problem, and that's not a surprise given that the underlying problem is basically unsolvable: you want bugfixes for the libraries, but no regressions or incompatibilities. Given that there's no way to tell them apart, you're hosed.

You are right in that it doesn't solve this problem; it sidesteps it entirely, because the problem is practically unsolvable, as you state. By having a standardized scheme for managing multiple /usr filesystems, it lowers the friction and increases the integration of a mix-and-match system, compared to running each app in a VM with nested kernels and limited shared system state (no shared services like D-Bus, X, or Wayland). Instead of picking one way or the other, do both; cut the Gordian knot.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 19:08 UTC (Wed) by martin.langhoff (subscriber, #61417) [Link] (5 responses)

The battle for what should be in that "base" and what should not be there is a significant one. Linux distros have sidestepped it with good package management and deps.

OSX and Windows have not, and the price they have paid is a "base" install that is bloated (it includes much of the graphical stack even for servers), yet so lean that it omits important libraries, so app authors have to bundle them.

Wherever you draw the line, you are doing it wrong :-)

Fedora seems poised to draw some line between a "base" and "stacks", but that line is a lot more fluid because it is still underpinned by yum. The promise is that each stack will be better integrated and easy to install "whole", providing a better-defined platform. And still, you get your openssl security patches from one high-quality source.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 19:33 UTC (Wed) by raven667 (subscriber, #5198) [Link] (4 responses)

> Wherever you draw the line, you are doing it wrong :-)

This whole thing is about not having to make a singular choice: you can have 2 or 3 or more different /usr systems which draw the line in different places for different apps. Over time, under this proposed system, I would expect a few natural groupings to fall out, and a feedback loop between app and runtime developers to negotiate what makes sense at which layer, so app developers are not ultimately responsible for system libraries they don't care about.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 20:46 UTC (Wed) by martin.langhoff (subscriber, #61417) [Link] (3 responses)

So we expect a new role in the ecosystem, "app packagers", and we are giving them a rather hellish test matrix: not only every major distro's LTS and cadence releases, but 3 different "base systems" for every release.

Looking at it from this perspective, things look even weirder. If application (and app stack) developers were keen on this kind of distribution, tools like Klik and ZeroInstall would be much more popular amongst app developers and users than they are today.

I honestly believe that most projects are happy to leave packaging, with all the specialized knowledge it entails about ABI changes in different distro releases, to folks on the distro side. The exceptions are very large projects, with the manpower and "ecosystem" to sustain that role. Those projects are hosting their PPA/Copr style repositories already -- but it's not that many, and you can see those repos are not that well maintained.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 21:04 UTC (Wed) by raven667 (subscriber, #5198) [Link] (2 responses)

> So we expect a new role in the ecosystem: "app packagers"; and we are giving them a rather hellish test matrix. Not only every major distro LTS and cadence releases, but 3 different "base systems" for every release.

I think the proposal is for the exact opposite: an app package will depend on exactly one runtime and will not claim to work with anything else, reducing the test matrix from all the popular distros a user might conceivably have installed to just the one type of system the app was built on in the first place.

> Klik and ZeroInstall would be much more popular

This does cover some of the same ground as those utilities; this is a discussion about how to solve the problem in a generic and standard way across all of the different kinds of Linux, maybe avoiding some of the pitfalls those utilities have had.

> I honestly believe that most projects are happy to leave packaging, with all the specialized knowledge it entails about ABI changes in different distro releases, to folks on the distro side.

Which is a system this proposal preserves, as the distros are the ones maintaining the /usr volumes. As an application developer you get to choose which /usr fits your needs and can rely on the end user being able to easily get a copy of that /usr, supported by its distributor, when they want to run your application, without disturbing the rest of their system preferences.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 21:17 UTC (Wed) by martin.langhoff (subscriber, #61417) [Link] (1 responses)

@raven667, can you clarify whether you mean

a - that each bundle will match only one "base OS runtime", but that you expect app packagers to produce one bundle for each popular "base OS runtime"? In this case the test matrix is 1:1 for each bundle, but large for the app packaging team...

b - that each app dev team will publish _one_ bundle, matched to one "base OS runtime". In this scenario, it is the "end user"/sysadmin that might be in a situation of having to install and run a particular base OS runtime because the app bundle demands it.

Or perhaps both depending on manpower. I don't like an ecosystem that spans the gamut between these two dynamics.

Most app projects are under-resourced, so I suspect case "b" will be the most common. If on my desktop I'm running a dozen apps, each might be coupled to a specific OS runtime, so perhaps I'd be running bits and pieces of 6 OS runtimes.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 21:36 UTC (Wed) by raven667 (subscriber, #5198) [Link]

Yes, the proposal describes situation B in your listing. Situation A is the current status quo, where apps are already packaged separately for each base OS runtime, and few app packages are maintained by their upstreams because dealing with the quirks of N possible base OS runtimes is too difficult; the upstreams that do maintain their own app packages today tend to have to bundle a lot of libraries, because there is no ABI discipline between base OS runtimes.

So yes, you might have 6 different base OS /usr runtimes installed to run 6 different apps, although it seems clear that there would be a lot of pressure to standardize on just a few /usr runtimes in each use case (Desktop, Server, Phone, Tablet, IVI, Network appliance, other embedded, etc.). And since this reduces the friction of maintaining different /usr spaces (no re-installs or booting of VMs), it makes the process of shaking out what developers and users really want from /usr smoother.

Maybe this could lead to a resurgence of the LSB, defining the ABI for an entire /usr for applications to target, rather than a uselessly small subset. By changing where the pain points are for supporting applications, a very different marketplace could stably emerge.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 5:32 UTC (Wed) by mrdocs (guest, #21409) [Link] (4 responses)

One thing no one has brought up is that, in my experience, most developers are not really good packagers.

And to be provocative, most of them suck at it and find it a point of pain. I'd love to be proven wrong.

Packaging is an afterthought. I've dealt with this in two prior jobs: undoing an unholy mess because developer A set up his dev environment in such-and-such a fashion and then developer B cannot get stuff to run right.

Moreover, developers and coders often approach packaging as a programming paradigm, when there are long-settled rules about how to package things properly on Linux. This creates unneeded complexity and fragility. I had one developer with a Ph.D. completely redefine every rpm macro in a spec file because he thought he was smarter than the enterprise distros ;-)

Lastly, I'm wondering how my Fortune 1000 customers are going to look at this and ask themselves: "How can I audit this properly? How can I maintain golden images without going insane? Now I need another way to manage what is on my systems?"

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 20:37 UTC (Fri) by sjj (guest, #2020) [Link] (1 responses)

+1000000

I haven't really thought this thing through, but I'm cautiously positive by default on rethinking systems. I do like systemd, because it brings sanity to the twisted nest of hacks that is SysV init.

That being said, this smells of some kind of desktop-environment-oriented hackery that I'm not at all sure is useful on stable servers.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 21:03 UTC (Fri) by raven667 (subscriber, #5198) [Link]

I could see something like this tied to something like Docker for building large-scale systems, using these read-only mounts to efficiently cache whatever base OS the container wants.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 14:06 UTC (Mon) by ebassi (subscriber, #54855) [Link] (1 responses)

> One thing no one has brought up is that, in my experience, most developers are not really good packagers.

Shockingly, when "packaging" is a set of policies that changes in order to ensure that packaging cannot be automated like it should be (in order to keep fiefdoms and OCD levels of control over them), the people actually writing the software have issues complying with all the little rules and incompatibilities.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 19:02 UTC (Mon) by tao (subscriber, #17563) [Link]

Debian (I'm a DD) has a lot of packaging policies. But I'd also argue (I'm of course biased) that we have the most consistent distribution. Some of Debian's policies are very strictly enforced, others are merely suggestions on how to ensure a better experience for the end user.

The policies are there to make sure that all packages can either co-exist or, in cases where they're by nature conflicting, said conflicts are formalised in terms of package dependencies/conflicts/etc.

Sure, there are things you'll need to do manually (generally only once per package though, unless the package drastically changes), but for the average Debian developer most of the effort is spent either on:

* Fixing things that should've been done upstream (symbol versioning, portability, ... even things you'd think would obviously be included with every piece of software, such as manual pages)

* Backporting fixes when upstream cannot be bothered to release security fixes for older versions of their software

* Modifying code to be able to run with older versions of libraries, to ease situations where software A, B, C all depend on library version 2.4 but software D depends on library version 2.5

* Reinstating functionality that upstream has removed with the intent to replace it with something better (but have not yet done so)

* Ensuring that the end product is actually legally redistributable (care must always be taken so that the license of software A is compatible with the licenses for library B, C, D, E, ...)

Most of all though, the main reason upstream developers are generally not good packagers is that the developer has software A to care for (and only needs to make sure that it works as long as its dependencies are available), but the packager needs to ensure not only that A works, but also that A doesn't cause breakage in totally unrelated packages B, C, D, ...

The packagers also need to worry about annoying little details such as unrelated software sharing the same name (git, node, epiphany for instance).

PS: I cannot speak for other distributions, but none of the packaging policies in Debian change to "ensure that packaging cannot be automated". If you have something you believe to be a counter-example, please share.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 8:45 UTC (Wed) by lottin (guest, #98688) [Link] (1 responses)

Is it me or has the GNU package manager already solved all these problems?

I don't know the details, but in a recent demo I saw, it looked like it was based on "profiles", which allowed you to install multiple incompatible versions of a package system-wide or in the user's home directory.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 22:44 UTC (Fri) by thoughtpolice (subscriber, #87455) [Link]

If you're referring to Guix, then yes. Guix is based on the Nix package manager (but uses Guile as the configuration language rather than the Nix language), and they solve many of the same problems.

But rather than using btrfs subvolumes, you calculate hashes of system dependencies (roughly speaking) based on their own input dependencies, and use those to separate things on the filesystem. This means 'firefox 29' and 'firefox 30' can coexist, because they have separate hashes and are stored on the FS in separate locations.

The final 'system' is then 'built' using symlinks to these paths; as POSIX (IIRC) guarantees symlink renames are atomic on any filesystem, the package manager is by design transactional on any filesystem. This means you can roll back any transaction, like installing a package that might break.
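
The trick is small enough to sketch in shell (the idea only, not Nix's actual code; store paths invented):

    # build the new profile as a fresh symlink, then rename it over the old one
    ln -s /nix/store/abc123-user-env /profiles/default.tmp
    mv -T /profiles/default.tmp /profiles/default   # rename(2) is atomic: readers see old or new, never neither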

It also means you can do things like take the 'closure' of a package (the package + the transitive closure of its dependencies) and ship them to other machines using SSH or whatnot.

The Guix developers are also working on a distro (I don't remember the name) based on Guix and the GNU Daemon Managing Daemon (dmd), an alternative to systemd.

In contrast, Nix has NixOS, built on the original Nix package manager. NixOS uses systemd for its init system (and used upstart before).

(For full disclosure, I'm a NixOS developer as well.)

How does this solve any problem?

Posted Sep 3, 2014 13:11 UTC (Wed) by HelloWorld (guest, #56129) [Link] (1 responses)

It seems to me that this proposal doesn't address the fundamental problem: on the one hand, library upgrades sometimes break programs because they inadvertently break compatibility. On the other hand, people do want to get bugfixes and security updates that come with new versions of the libraries.

In Lennart's new model, an “app” is essentially allowed to depend on a single collection of libraries (“runtime”), which will receive updates from the vendor of that collection, leading to the same problem as before: inadvertent compatibility breaks. And of course the “runtime” is never going to ship all the libraries a given app needs, so libraries will have to be bundled with the app, leading to the aforementioned problem of missing security updates and bugfixes. Really, how does this help anyone? And that's not the only problem I see. It says that a “framework” is supposed to ship everything that is necessary to develop an “app” for a given “runtime”, including compilers and the like. What if I develop an application in a language that the “framework” doesn't provide for, or if I need a newer compiler than what ships with the “framework”?

I think I must be missing something, because so far most of what the systemd crew has shipped seemed reasonable and well thought-out. But to me, it doesn't seem that way this time around.

How does this solve any problem?

Posted Sep 3, 2014 16:08 UTC (Wed) by raven667 (subscriber, #5198) [Link]

It does try to solve that problem, the same way that RHEL and Ubuntu LTS do: by supporting apps which depend on /usr filesystems that are stable with back-ported fixes, while simultaneously supporting apps that use different /usr filesystems with the latest and greatest everything. Instead of having to choose between running an LTS and dealing with years-old libraries, or living on the bleeding edge and dealing with ABI breakage, you can have it both ways with mount namespaces per service.

You are right that, because the /usr runtime is fixed and read-only, it needs to define what is included and what is left out. Not every library in existence is going to be in every runtime, so for some apps this will lead to library bundling; but for others, the ability to specify which /usr the app runs on means that they can now depend on it and stop bundling. It is unclear how this will play out until it is tried and some lines are drawn as to what is commonly included in /usr.

As far as developing goes, if you want to develop using particular tools then you need to use a framework (a whole /usr filesystem) that includes those tools, or you need to include those tools with your app. How much of a problem this actually is in practice depends on how maximal the frameworks are in what tools they include. The /usr filesystems will be provided by distros under this scheme, so if your tools are currently packaged by a distro you have a better chance that they will be commonly included in frameworks.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 19:19 UTC (Wed) by xxiao (guest, #9631) [Link] (2 responses)

Based on all these 'revolutionary top-down-design' ideas, I feel more and more that it's time to get back to the *BSDs. This guy wanted to own audio, then init, and now the whole thing?

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 19:35 UTC (Wed) by raven667 (subscriber, #5198) [Link]

I encourage you to use the system that makes you the most happy, life is too short to do otherwise.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 21:00 UTC (Thu) by mezcalero (subscriber, #45103) [Link]

Actually, I am mostly in it for the love I am repaid in! ;-)

Lennart

regarding 'cabal' folks

Posted Sep 3, 2014 21:57 UTC (Wed) by gvy (guest, #11981) [Link] (4 responses)

People like Lennart might be intelligent, but they lack any sort of wisdom.

Please.

Posted Sep 3, 2014 22:11 UTC (Wed) by corbet (editor, #1) [Link]

Do I really have to ask again for people to cool it with this kind of stuff? Nothing is accomplished with personal attacks, please take them somewhere else.

regarding 'cabal' folks

Posted Sep 3, 2014 22:18 UTC (Wed) by mjg59 (subscriber, #23239) [Link] (2 responses)

On the contrary, I have it on good authority that Lennart is a wizard.

regarding 'cabal' folks

Posted Sep 4, 2014 1:55 UTC (Thu) by mathstuf (subscriber, #69389) [Link] (1 responses)

But I've never seen him with a monocle.</obscure-reference>

regarding 'cabal' folks

Posted Sep 5, 2014 12:33 UTC (Fri) by carenas (guest, #46541) [Link]

I got to meet Lennart once, and while he quickly dismissed me for just being a no-name guy not worth sharing a beer with (even if I got my name into some of his git logs), I agree he is smart (probably too much so for his own good) and can confirm "no monocle".

The funny thing is that I censored myself when he was introducing systemd and decided to deploy it in /usr/bin; I have to admit I was not sporting the required Unix beard to support my point, either, but it was obvious that there could be less friction around this if we were a little more willing, in general, to listen to users' (or other developers') concerns in a positive light.

I'm looking forward to meeting Junio Hamano sometime, and buying him a beer instead.

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 21:44 UTC (Thu) by jb.1234abcd (guest, #95827) [Link] (3 responses)

Extra! Extra!
The "systemd cabal" under attack.
New Group Calls For Boycotting Systemd.
http://www.phoronix.com/scan.php?page=news_item&px=MT...

jb

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 22:51 UTC (Thu) by JGR (subscriber, #93631) [Link] (2 responses)

I doubt that anyone is quaking in their boots at the prospect of such a "boycott".

Poettering: Revisiting how we put together Linux systems

Posted Sep 21, 2014 8:21 UTC (Sun) by jb.1234abcd (guest, #95827) [Link] (1 responses)

They are moving in the right direction.
If they receive more support, half of the original systemd's misfeatures
will be gone soon.

http://boycottsystemd.org/

http://uselessd.darknedgy.net/

jb

Poettering: Revisiting how we put together Linux systems

Posted Sep 21, 2014 20:36 UTC (Sun) by anselm (subscriber, #2796) [Link]

What we have here are the buggy whip manufacturers railing against the ascent of the automobile. Give them another two or three years and the issue will have taken care of itself.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 1:09 UTC (Fri) by ms-tg (subscriber, #89231) [Link] (2 responses)

Because there are many folks who don't have a positive reaction to this proposal, I feel compelled to post my extremely positive reaction.

I perceive that adoption of this proposal will facilitate the construction of systems out of separately maintained, read-only components, each of which can be separately distributed and updated.

My favorite implication of this proposal is the market effects it would have. Popular "substrates" of a system will have many others depending on them, focusing community support and security attention in useful places.

Distros can become sources of some of these substrates, but perhaps get out of the business of being responsible for all of them.

I hope it happens!

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 1:35 UTC (Fri) by dlang (guest, #313) [Link] (1 responses)

> I perceive that adoption of this proposal will facilitate the construction of systems out of separately maintained, read-only components, each of which can be separately distributed and updated.

Or this proposal will facilitate the construction of systems out of separately unmaintained, read-only, components, each of which is separately distributed and abandoned.

I wonder how well experience correlates with opinion between the two extremes.

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 4:17 UTC (Fri) by raven667 (subscriber, #5198) [Link]

I think you will have the same effort put into maintenance as the current Linux distros provide; the status quo already has a large number of unmaintained also-ran distros and a few well-maintained ones. The same groups would be the main providers of /usr.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 11:04 UTC (Mon) by etienne (guest, #25256) [Link] (2 responses)

I wonder why it would not simply be implemented by having directories:
/usr/include-rhel6.5
/usr/lib-rhel6.5
and pointing "gcc"/"ldd" at the right directory tree for those stable applications.
Anyway, there won't be any identical files (unless you are running rhel6.5 itself; then use symbolic or hard links).
But the problems of managing one set of libraries (incompatibilities, ...) may not be solved by adding another set, and it seems people are statically linking because they need *newer* versions than are available...

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 14:47 UTC (Mon) by raven667 (subscriber, #5198) [Link] (1 responses)

Sure, you could do that; other Unix systems even implemented a way to do variable expansion in path names so that you could do this cleanly with less change to ld. But you'd still have to make the whole system aware of your path standard, changing how userspace works, which will break some programs when things aren't what they expect. Using mount namespaces, you can make this setup completely invisible to applications, using primitives which are already available in Linux rather than inventing new ones.

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 15:30 UTC (Mon) by etienne (guest, #25256) [Link]

> with less change to ld

I was speaking of "gcc -Wl,-rpath=/usr/lib-rhel6.5" so no change needed to ld.
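
That is, something like this (a sketch; the parallel directories are the hypothetical ones from my earlier comment):

    gcc -I/usr/include-rhel6.5 app.c \
        -L/usr/lib-rhel6.5 -Wl,-rpath=/usr/lib-rhel6.5 -o app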

> changing how userspace works

Point taken, but it will be difficult to maintain and keep in sync...

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 18:45 UTC (Mon) by jb.1234abcd (guest, #95827) [Link]

This vision of putting together Linux systems could only be implemented in a highly controlled environment (basically in-house), behind which a formal organization stands, with responsibilities and accountability, and whose customer base and its requirements are focused, predictable, and served in a controlled manner.

Adapting it to the Linux, UNIX, and *BSD ecosystems (independent projects, distros, OSs) would be a disaster, a chaos of components and people (many of whom are volunteers).

The argument that the current state of that ecosystem is no better does not take into account the model of its development, which is mostly voluntary, contributory, and of the bazaar type. The also-ran and unmaintained projects or distros are a natural part of it if you accept the idea of market forces at work, or just the freedom to experiment and educate.
This model does not prevent the formation of professional organizations around it, which are free to pick and choose and mold it all according to their idea of the next best "Slowaris", on their own terms, but outside of it. The point is to not allow them to monopolize it or force their ideas onto it.

So, learn from the systemd "voluntary enforcement" debacle, please.
The title of this article should be instead:
"Poettering: Revisiting how we put together Linux systems at Red Hat"

jb


Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds