
Distributors ponder a systemd change

Posted Jun 9, 2016 17:20 UTC (Thu) by pizza (subscriber, #46)
In reply to: Distributors ponder a systemd change by ksandstr
Parent article: Distributors ponder a systemd change

> So how do you justify systemd's new default explicitly breaking what even you recognize, above, having worked before?

FYI, on my personal systems I explicitly turned KillUserProcesses *on*, because I actually want that behavior. A couple of years ago I replaced my regular uses of nohup and screen with native systemd units or timers or whatever was appropriate, and haven't looked back since.
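Roughly, the nohup replacement looks like this (the unit name and command are made up; keeping a job alive across logout also needs lingering enabled for the user):

    # Start a long job as a transient unit under the user manager instead of
    # "nohup ./build.sh &"; it no longer depends on the terminal or SSH session.
    systemd-run --user --unit=nightly-build ./build.sh

    # Check on it or stop it later:
    systemctl --user status nightly-build
    systemctl --user stop nightly-build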

On the old-school shell server I administer, I've left that feature off, and I will do so until at least screen and tmux are shimmed to request proper login sessions. Once that's done, I'll flip the switch there too, and then I can finally get rid of my periodic process reapers that have to clean up after misbehaving crap.
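For anyone wanting to flip the same switch, it's one line in logind's configuration; new sessions pick it up after systemd-logind is restarted:

    # /etc/systemd/logind.conf, or a drop-in under /etc/systemd/logind.conf.d/
    [Login]
    KillUserProcesses=yes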

This is a change, yes. But it's a change that, after a very minor amount of learning, leaves me with a more robust system that requires less ongoing attention than before. (Call me strange, but I believe in using the best tools for the task at hand.)



Distributors ponder a systemd change

Posted Jun 12, 2016 13:31 UTC (Sun) by jspaleta (subscriber, #50639) [Link]

Exactly... starting to do this too... shutting down process lingering as much as I possibly can, and only enabling it when I'm sure I need it.
And I really love not having to run bolt-on process reapers.

I'm trying to keep my development system and even my workstation as locked down as my production environment now... and tracking the configuration differences, so I know exactly why I'm relaxing constraints on the dev system. I want to push against production constraints using non-production workloads in unexpected ways and see what breaks. The amount of relearning isn't that bad. I mean, it's not like the relearning needed to jump to python3... this is minor.
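For reference, "linger" is the per-user switch that lets a user's services keep running with no open sessions; it's toggled with loginctl (the username below is made up):

    # Let this user's units keep running after logout:
    loginctl enable-linger alice

    # Back to the locked-down default:
    loginctl disable-linger alice

    # Check the current setting:
    loginctl show-user alice --property=Linger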

Distributors ponder a systemd change

Posted Jun 19, 2016 3:53 UTC (Sun) by zblaxell (subscriber, #26385) [Link] (2 responses)

My first (and last) encounter with systemd three years ago revolved around this behavior. Some Yocto distribution or other had decided to turn this on by default, and it ruined my day.

Since then I've copied the behavior for myself, in the form of a half dozen five-line shell scripts that replicate systemd's cgroup behavior. Every aspect of the cgroups' lives--how much RAM, CPU, and IO they can use, and making sure processes run, live, and die when they're told to--can be handled this way. It's _awesome_, and it's definitely one of the better ideas coming out of the systemd project.
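For anyone curious, one of those five-line scripts might look something like this, assuming cgroup v1 with the memory controller mounted in the usual place (the group name and limit are made up):

    #!/bin/sh
    # Create a cgroup, cap its memory, move ourselves into it, then exec the
    # workload so it and all of its children inherit the limit.
    CG=/sys/fs/cgroup/memory/myjob
    mkdir -p "$CG"
    echo $((256*1024*1024)) > "$CG/memory.limit_in_bytes"
    echo $$ > "$CG/cgroup.procs"
    exec "$@"

Killing everything in the group on demand is the same trick in reverse: read the PIDs back out of cgroup.procs and signal them.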

The other thing I realized was that sysvinit had been ruining my days for years, and systemd was going to continue that pattern. To isolate myself from upstreams that should know better, but make breaking behavior changes anyway, I replaced init with a shell script. It's a little longer than five lines--ranging from 55 to 155 lines of code depending on whether it's a desktop, embedded, or server workload--but I haven't looked back since.

It was a painful transition with a bit of a learning curve, but it needs much less attention than before. Apparently the best tools for the task at hand were the Unix shell, the & operator, and some small syscall wrapper programs.
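The skeleton is nothing more than a trap, a few backgrounded services, and a reaping loop. A rough sketch (the service commands are placeholders, and it relies on the shell reaping reparented orphans, which is discussed further down the thread):

    #!/bin/sh
    # Minimal PID-1 sketch: handle shutdown signals, start services with &,
    # then loop reaping children, including orphans reparented to PID 1.
    trap 'kill -TERM -1; sleep 2; kill -KILL -1; sync; poweroff -f' TERM INT

    /sbin/syslogd -n &
    /sbin/getty 38400 tty1 &

    while :; do
        wait        # reaps children (and reparented orphans) as they exit
        sleep 1     # don't spin once everything known has exited
    done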

Distributors ponder a systemd change

Posted Jun 20, 2016 7:56 UTC (Mon) by zlynx (guest, #2285) [Link] (1 responses)

I'd have to double check, but I am pretty sure the shells don't do init's job properly. Signal handling and child reaping, if I recall correctly.

You can get away with it for system rescue, but long term?

Distributors ponder a systemd change

Posted Jun 20, 2016 14:01 UTC (Mon) by zblaxell (subscriber, #26385) [Link]

To be clear, this was never intended to be a rescue system. We did a pilot project and the results were so successful (QA particularly enjoyed having a much more repeatable testing experience) that we promoted it to production and formally terminated plans to switch to anything else. We deploy everything this way now.

I'm not sure if _any_ shell works, but bash and dash do. Any shell that can trap signals (i.e. all of them) and that passes -1 to waitpid, so it reaps any child including reparented orphans (all of them written after 1987), does this just fine. The kernel ignores most fatal signals sent to PID 1 anyway, unless a handler is installed. If /bin/sh is segfaulting you have big problems and you should probably panic the kernel to stop them from getting worse.

