
Yet another systemd fiasco


Posted Nov 19, 2014 2:05 UTC (Wed) by tomegun (guest, #56697)
In reply to: Yet another systemd fiasco by andresfreund
Parent article: Russ Allbery leaves the Debian technical committee

> Just to clarify my POV: I like and use systemd.

Nice to hear!

> It's more realistic to contribute to/convince the maintainer of udev as a separate project, because the project's concerns aren't as wide. And more importantly, it's much more realistic to have a closely following fork that adds back features if it's just tracking udev, without all the other stuff in systemd going on. You can't really just track src/udev and be done with it.

I see where you are coming from, but as someone who has worked quite closely with udev both before and after the merge, I don't think that is the case in this particular instance. Convincing Kay to take a patch has not become any more or less difficult, and the code is still pretty much separate (and the code that is shared is not really in the 'problem' areas), so tracking src/udev should really get you very far.

> But I also think that you shouldn't forget that the systemd developers having an easier life (quite the worthy goal!) might just shift the work to several other people - who somewhat understandably might be grumpy about that.

Absolutely, there is a trade-off. Most of the time we (like any open source project) happily do work to ease the work of others even when it does not benefit us directly (usually because it "makes sense"). But sometimes we (like anyone) will say: "no, this makes no sense, we shouldn't be doing things this way, we can't test this stuff, and there are better solutions out there; if someone wants this problem solved in this particular way, they'll have to do the work themselves".

In the case of the firmware loader we have a situation where no systemd developer would ever test it (as we all run on kernels where it is not used any longer), it was known to be buggy with no nice solution in sight, and the proper solution has been in the kernel for some time. On top of all that, a third-party solution could trivially be maintained outside of udev (please ask on the ml if you want hints on how to get started, but basically you just have to copy the sample bash script from the kernel docs and call that from a udev rule).
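For the curious, a rough sketch of what such an out-of-tree fallback loader could look like, adapted from the sample script in the kernel documentation. The function name `load_firmware` and the `SYSFS_ROOT`/`FIRMWARE_DIR` variables are my own additions for illustration and testability; udev supplies `DEVPATH` and `FIRMWARE` in the environment when the rule fires:

```shell
#!/bin/sh
# Sketch of a userspace firmware-fallback helper, adapted from the sample
# in the kernel docs. Not the udev implementation; names below are invented.
# Matching udev rule (one line, e.g. in /etc/udev/rules.d/50-fw-fallback.rules):
#   SUBSYSTEM=="firmware", ACTION=="add", RUN+="/usr/local/bin/firmware.sh"

SYSFS_ROOT="${SYSFS_ROOT:-/sys}"               # overridable for testing
FIRMWARE_DIR="${FIRMWARE_DIR:-/usr/lib/firmware}"

load_firmware() {
    devdir="$SYSFS_ROOT$DEVPATH"
    if [ ! -e "$FIRMWARE_DIR/$FIRMWARE" ]; then
        echo -1 > "$devdir/loading"            # tell the kernel the load failed
        return 1
    fi
    echo 1 > "$devdir/loading"                 # begin the transfer
    cat "$FIRMWARE_DIR/$FIRMWARE" > "$devdir/data"
    echo 0 > "$devdir/loading"                 # commit
}
```

The whole protocol is just those three sysfs writes, which is why this is trivial to maintain outside of udev.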

The cgroup situation is a bit more sticky, as there is no nice way to "make this work" outside of patching systemd itself. Though, as you recognise, the work involved on the systemd side is much larger there, so the fact that we don't support it is hopefully more understandable. That said, my suggestion would be to keep a downstream patch if anyone wants this, and if it is shown over time to be viable and useful, I guess people may reconsider merging it...

> I think the kernel is complex enough and developing fast enough that that's just not entirely preventable. So we need to be able to deal with that.

That we can agree on. But this is ultimately a debugging problem. I.e., you (or your customer, or your support contractor, or kernel devs) need to figure out which kernel commit broke your setup, so you probably need to do a bisect. I agree that this may be painful in specific cases, and we should look at ways to ease that pain.

However, we should not make the mistake of making the bisection problem into a maintenance nightmare. I.e., yes, you may want to boot an old kernel with a new userspace to test something, but you absolutely do not want to work around a kernel bug by deploying new userspace on an old kernel. If you cannot upgrade your production kernel, then don't upgrade your production userspace. If we start pretending that this is in fact possible, we both know that that is exactly what people will be doing (and just leave the kernel regressions to rot), and then come complaining that our old code (which we have long since stopped testing, and probably was known to be semi-broken in the first place) blew up their production machines.

We should not forget that the reason we usually depend on newer kernels is that they provide some API which fixes a problem we had with the old API (security or otherwise). So just keeping support for the old APIs around almost always means keeping known semi-broken code around, and moreover this code would then only end up actually being used by end-users stuck with old kernels, and never by developers or early adopters (as we always use new kernels anyway).

> Don't forget that outside the Redhat/Fedora world upgrading existing systems without a reinstall is an officially supported thing and has been for a long time.

I think this absolutely makes sense (being a rolling-release person myself), but we should not get stuck on the precise mechanism by which this is done today. I.e., I don't think upgrading a running userspace on top of an ancient kernel makes any sense, but there are technical solutions that let you still upgrade without a reinstall while never having to run an old kernel + new userspace. ChromeOS/CoreOS does this in one way, systemd comes with upgrade hooks that allow you to do it in a different way, and we are working on a third proposal for achieving the same goal. And then there is of course the Arch way: upgrade your system all the time, and then you'll be fine doing it in-place, as your kernel will not be ancient, but still an upstream-maintained one.

Thanks for your comments!




Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds