
Poettering: The Biggest Myths

Posted Jan 31, 2013 22:15 UTC (Thu) by khim (subscriber, #9252)
In reply to: Poettering: The Biggest Myths by dlang
Parent article: Poettering: The Biggest Myths

> Actually, on servers the world is still pretty close to static; you may have a thumb drive plugged into USB once in a while, but that's about it.

Nonetheless, systemd makes perfect sense on servers, too: the hardware remains static, but services come and go regularly. Today this is solved by a horrible hack (KVM and nested virtual machines), but with systemd it should be possible to develop a sane solution.

I've chased runaway processes from various complex daemons often enough to say that even if reliable service stopping is the only advantage systemd brings to servers, it'll be enough to justify all the problems it brings. And the fact that I can now reliably limit resource consumption per service is a nice bonus, too.
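(For illustration, a minimal sketch of such a per-service limit in a unit file - the unit name and paths are hypothetical, using the cgroup-backed directives of that era:)

    # /etc/systemd/system/example.service (hypothetical)
    [Unit]
    Description=Example daemon with per-service resource limits

    [Service]
    ExecStart=/usr/sbin/exampled --foreground
    # cgroup-backed limits, enforced by the kernel for the whole service
    CPUShares=512
    MemoryLimit=256M

    [Install]
    WantedBy=multi-user.target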

> the vast majority of Linux systems out there are servers (even including Android as Linux systems).

There are 700 million Android systems out there and fewer than 100 million servers (including virtual ones). It's not even a contest.



Poettering: The Biggest Myths

Posted Jan 31, 2013 22:23 UTC (Thu) by raven667 (subscriber, #5198) [Link]

> even if reliable service stopping is the only advantage systemd will bring to servers it'll be enough to justify ...

As someone who used daemontools for years, reliable service stop/start/status with automatic restart and capturing of stdout/stderr for logging are the main features I need that existing init systems don't provide. I'd love to stop requiring third-party infrastructure like daemontools/runit or monit, or even hackier stuff like cron jobs, to health-check and restart critical services.
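(A rough sketch of how those features map onto a unit file - names here are hypothetical, not anyone's actual configuration:)

    # /etc/systemd/system/critical-app.service (hypothetical)
    [Unit]
    Description=Critical app supervised by systemd

    [Service]
    ExecStart=/usr/local/bin/critical-app --no-daemonize
    # restart automatically, much like daemontools/runit supervision
    Restart=on-failure
    # stdout/stderr are captured into the journal for logging
    StandardOutput=journal
    StandardError=journal

    [Install]
    WantedBy=multi-user.target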

Poettering: The Biggest Myths

Posted Jan 31, 2013 22:23 UTC (Thu) by dlang (subscriber, #313) [Link]

> I've chased around runaways processes from various complex daemons enough to say that even if reliable service stopping is the only advantage systemd will bring to servers it'll be enough to justify all problems it brings.

Since systemd didn't invent the way to do this (it uses cgroups provided by the kernel), why not just create a service launcher command that creates the cgroups as needed, instead of taking over everything else that systemd does?

According to you, we would gain almost all the benefit while avoiding all the cost.
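(A minimal sketch of what such a standalone launcher could look like against the cgroup v1 interface of that era - the helper name, hierarchy, and daemon are all hypothetical:)

    #!/bin/sh
    # launch-in-cgroup.sh NAME COMMAND...  (hypothetical helper)
    # Create a dedicated cgroup, move this shell into it, then exec the daemon.
    # Every process the daemon forks stays inside the group, so it can later be
    # stopped by killing everything listed in the group's tasks file.
    set -e
    NAME="$1"; shift
    CG="/sys/fs/cgroup/cpu/$NAME"
    mkdir -p "$CG"
    echo $$ > "$CG/tasks"
    exec "$@"

Stopping the service then amounts to repeatedly killing every PID in that tasks file until it is empty - which is essentially the part systemd automates.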

Poettering: The Biggest Myths

Posted Jan 31, 2013 23:35 UTC (Thu) by khim (subscriber, #9252) [Link]

> Since systemd didn't invent the way to do this (it uses cgroups provided by the kernel), why not just create a service launcher command that creates the cgroups as needed, instead of taking over everything else that systemd does?

You assume I know in advance which service will start misbehaving. And I don't. Sure, I have some suspects, but in reality any of them can misbehave (I've seen many pretty innocuous services turn into fork bombs because of configuration mistakes). And if you run all your services using such a launcher, then why do you need to keep the useless sysvinit baggage in PID 1?

P.S. Cron scripts are especially problematic - that's why I like the fact that systemd offers timer units to replace them.
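(For the curious, a minimal sketch of a cron-replacing timer/service pair - hypothetical names, and assuming a systemd new enough to have calendar timers:)

    # backup.timer (hypothetical)
    [Timer]
    OnCalendar=*-*-* 03:00:00

    [Install]
    WantedBy=timers.target

    # backup.service (hypothetical) - the unit the timer activates
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/run-backup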

Poettering: The Biggest Myths

Posted Feb 1, 2013 0:42 UTC (Fri) by dlang (subscriber, #313) [Link]

anyplace you start services, it would be useful to have this capability.

this includes high availability and load balancing managers.

Should systemd now take over those functions as well?

the "unix way" that you are so dismissive of would be to provide a tool to use when starting things, and then that tool could be used by lots of different users, including ones that you don't think of.

The biggest proof of true 'unix way' is when you show the developer of a component what you are doing with it and they are so startled that they exclaim "I didn't know that was even possible".

I've seen this happen a few times, and heard of it happening more times over the years with Unix/Linux based software.

I'm not sure I've ever heard of it happening with other systems.

Poettering: The Biggest Myths

Posted Feb 1, 2013 1:03 UTC (Fri) by raven667 (subscriber, #5198) [Link]

You seem to be making an argument for systemd: your HA manager should be able to very simply stop and start systemd units to do its job reliably, reducing its complexity and making the system as a whole more reliable. systemd is the tool to use when starting things that you describe.

Poettering: The Biggest Myths

Posted Feb 1, 2013 1:17 UTC (Fri) by dlang (subscriber, #313) [Link]

The HA manager also needs to do the monitoring of those systems; are you saying that it now needs to do that through systemd as well?

Poettering: The Biggest Myths

Posted Feb 1, 2013 1:42 UTC (Fri) by khim (subscriber, #9252) [Link]

Obviously. Systemd is the tool you asked for in your previous message. You can run multiple separate instances of systemd to manage these processes, and you can interact with them to know what is going on with the services that systemd manages.

It's the same program that runs as your PID 1, but that's kind of obvious: why create a separate tool for the exact same task?

Poettering: The Biggest Myths

Posted Feb 1, 2013 2:21 UTC (Fri) by mgb (guest, #3226) [Link]

The function being discussed is to run a group of processes within a cgroup.

PID 1 could use such a function, but that is not the function of PID 1.

A properly engineered design would have factored out that functionality into a separate tool.

Which someone working on one of the less hyped, more reliable distros will no doubt do.

Poettering: The Biggest Myths

Posted Feb 1, 2013 8:22 UTC (Fri) by smurf (subscriber, #17840) [Link]

What would be the advantage of having a separate launcher program, besides conforming to the "one tool per job" dogma for no good reason?

How would you make sure that the launcher undoes literally everything you did in your session so far, in order to start the job in a consistent environment? (Hint: This is no longer possible.)

Poettering: The Biggest Myths

Posted Feb 1, 2013 12:02 UTC (Fri) by mgb (guest, #3226) [Link]

Consider how much the one-tool-per-job FLOSS world has accomplished compared to the monoliths of M$ with their orders of magnitude more resources.

Cgroups are hierarchical and do not need PID 1. Indeed systemd itself despite all its baggage is sometimes used outside PID 1 to manage a cgroup until a better designed tool comes along.

Monoliths such as BusyBox and systemd can be useful tools but they are evolutionary leaf nodes. The distros which retain their ability to evolve are the future of FLOSS.

Poettering: The Biggest Myths

Posted Feb 1, 2013 13:23 UTC (Fri) by smurf (subscriber, #17840) [Link]

By your argument, Linux itself is a monolith and an evolutionary dead end.

Whether something is a big 400-pound gorilla or a swarm of ants matters much less, in the grand scheme of things, than you think. Your M$ argument is a straw man: one is closed source, the other is not, thus you're comparing apples with rocks.

I would surmise that the ease of adding new and important features (or just fixing bugs, particularly when they require nontrivial changes) is far more important. And that's precisely where having one common repository shines -- you can make one branch with your change, test it, merge it, you're done, instead of coordinating between N repositories and handling the resulting version mismatch.

NB: Please either answer my question, or admit that you can't.

Poettering: The Biggest Myths

Posted Feb 1, 2013 13:47 UTC (Fri) by mgb (guest, #3226) [Link]

The only reason systemd needs to update a massive repository for a simple change is because poor design has resulted in excessive coupling.

Poettering: The Biggest Myths

Posted Feb 1, 2013 16:20 UTC (Fri) by smurf (subscriber, #17840) [Link]

You cannot do all, or even most, of what systemd does in a loosely-coupled way. Even the supposedly simple job of (re)starting a daemon in a consistent environment (i.e., one that's the same as when it is run during system startup) cannot be reliably done any more – unless you're PID 1.

You know … either prove me wrong with actual code, or shut up.

Or at least stop replying selectively. (As if nobody else would notice.)

Seriously.

Poettering: The Biggest Myths

Posted Feb 1, 2013 17:36 UTC (Fri) by mgb (guest, #3226) [Link]

I hope this isn't your homework assignment.

PID 1 can spawn one or more cgroup controller processes where available and if desired, just like it spawns any other process. PID 1 does not need to be dependent upon cgroups, nor indeed any of the other things that Poettering crammed into his monolith.

Happily the UNIXy way, the properly engineered design, and the mechanism most propitious for future FLOSS evolution all coincide in one simple design which just happens to be the opposite of systemd.

Poettering: The Biggest Myths

Posted Feb 1, 2013 18:32 UTC (Fri) by anselm (subscriber, #2796) [Link]

By all means go ahead and implement it then. If it is really so much better than systemd people will be happy to take it on; the major distributions have been switching init systems so much recently that one more time won't matter if – thanks to you – they finally Get It Right.

Poettering: The Biggest Myths

Posted Feb 1, 2013 19:21 UTC (Fri) by mgb (guest, #3226) [Link]

Of the major distros, Debian and Ubuntu have not been and will not be borged. Gentoo has not been borged. Mint has not been borged and probably will not. I doubt Slackware will ever make the mistake of switching. Suse switched but may switch back when Poettering pulls support:

... systemd currently provides compatibility with the special non-standardized "boot" and "S" runlevels covering early boot which are used on Suse and Debian systems. We expect to remove this eventually ...
http://www.freedesktop.org/wiki/Software/systemd/Incompatibilities

In general it's the exciting high-churn buggier distros that switched, not the distros that people use for serious work, although eventually RHEL will probably switch.

Poettering: The Biggest Myths

Posted Feb 1, 2013 19:34 UTC (Fri) by rahulsundaram (subscriber, #21946) [Link]

The idea of SUSE switching away from systemd because of that note is just fictional. SUSE agrees completely with that removal.

Poettering: The Biggest Myths

Posted Feb 1, 2013 19:56 UTC (Fri) by anselm (subscriber, #2796) [Link]

> Of the major distros, Debian and Ubuntu have not been and will not be borged.

With Debian, I wouldn't be so sure. Debian already offers systemd as an alternative to System-V init; the question is really whether systemd will be made the default init system on Debian GNU/Linux installs. The main counterargument is that systemd isn't available on Debian GNU/kFreeBSD, but there are various conceivable approaches the project could take to sort this out. There are many people within Debian who would like Debian GNU/Linux to default to systemd like most other major distributions do now.

As far as Ubuntu is concerned, the issue isn't whether systemd is a good init system or not; it's that they funded the development of Upstart and seem to find it difficult to go over to something else. Thus, arguably Ubuntu isn't as concerned with saving FLOSS by eschewing systemd as they are with rallying behind Upstart (which isn't all that different from systemd as far as being »Unix« is concerned). Again, systemd is, in fact, available for Ubuntu; it is just not something that Canonical is pushing at the moment but that may well change in the future.

I personally would consider neither Gentoo nor Mint »major distributions«. Once more, Gentoo does offer systemd but not as the default (yet). Mint will probably coast along with whatever Debian or Ubuntu will eventually do, so it is not out of the question that Mint will eventually go over to systemd.

> In general it's the exciting high-churn buggier distros that switched, not the distros that people use for serious work, although eventually RHEL will probably switch.

I think we can pretty safely assume that RHEL and SLES – the main »serious-work« distributions – will be switching to systemd in the very foreseeable future (with CentOS &c. tagging along). As you note, OpenSUSE is already using systemd and SLES usually follows OpenSUSE.

Again, let's wait a year or five and see where we stand then. My money is on systemd.

Poettering: The Biggest Myths

Posted Feb 1, 2013 23:01 UTC (Fri) by rgmoore (✭ supporter ✭, #75) [Link]

> In general it's the exciting high-churn buggier distros that switched, not the distros that people use for serious work, although eventually RHEL will probably switch.

This could more or less equivalently be described as the very conservative, release-once-in-a-blue-moon distributions being the ones that haven't switched yet. In those instances, it's not at all clear that they're holding back on systemd for technical reasons. RHEL hasn't made a major release since before systemd was an option, and they're planning on switching for their next release. I think SuSE is in more or less the same boat. Debian stable is having a disagreement based on systemd's cross-kernel compatibility, not its technical suitability. The only big distribution that is clearly rejecting systemd is Ubuntu, and that appears to be more a case of NIH syndrome than a technical decision.

Poettering: The Biggest Myths

Posted Feb 2, 2013 13:32 UTC (Sat) by jezuch (subscriber, #52988) [Link]

> Of the major distros, Debian and Ubuntu have not been and will not be borged.

Debian provides systemd; I'm using it on my Debian box and I'm really happy with it. (Except that I have to press CTRL+D on each boot because of some weird interaction with my self-built kernel, but that's a minor annoyance.)

Poettering: The Biggest Myths

Posted Feb 2, 2013 16:19 UTC (Sat) by mgb (guest, #3226) [Link]

Making systemd available is a good thing. Using systemd to block progress or to force gratuitous interface churn is a bad thing.

Debian has not been borged. Debian provides systemd as an "extra" package - the lowest priority tier below the "optional" packages.

This ensures that systemd is unable to bottleneck Debian's progress.

Poettering: The Biggest Myths

Posted Feb 2, 2013 17:05 UTC (Sat) by anselm (subscriber, #2796) [Link]

> Debian provides systemd as an "extra" package - the lowest priority tier below the "optional" packages.

This is because it conflicts with the sysvinit package, which is "required", so Debian policy requires that the systemd package be "extra". It does not mean that the Debian project considers systemd unimportant.

It is not entirely unlikely that at some point these priorities will be reversed, at least as far as Debian GNU/Linux is concerned. There are many Debian developers who rather like systemd.

Poettering: The Biggest Myths

Posted Feb 2, 2013 17:51 UTC (Sat) by mgb (guest, #3226) [Link]

> Debian developers who rather like systemd

They are of course welcome to use it. They are not welcome to force Poettering's capricious interface churn on everybody else.

Poettering: The Biggest Myths

Posted Feb 2, 2013 20:29 UTC (Sat) by anselm (subscriber, #2796) [Link]

I don't think you get to make that sort of claim on behalf of the Debian project.

If a majority of Debian developers decides that it makes sense for Debian GNU/Linux to default to systemd, then it is a fairly straightforward change to make -- if not now then in a few years' time. The project has made that kind of decision before.

Poettering: The Biggest Myths

Posted Feb 3, 2013 3:44 UTC (Sun) by HelloWorld (guest, #56129) [Link]

There is no "interface churn" in systemd. Most sysvinit interfaces work just as before, i. e. init scripts still work, /dev/initctl (and therefore telinit etc.) still works, /etc/fstab is supported, and there are many other things that work just like they always did. As for systemd's new interfaces, they are covered by the Interface Stability Promise:
http://www.freedesktop.org/wiki/Software/systemd/Interfac...

Poettering: The Biggest Myths

Posted Feb 3, 2013 4:04 UTC (Sun) by mgb (guest, #3226) [Link]

The list of what Poettering says he won't break excludes everything of importance.

> ... will not accept patches for certain distribution-specific compatibility ...

So if you're not Fedora you're at Poettering's mercy. But whether you're Fedora or not you can't even write a script that configures the NICs in your servers.

> Previously it was practically guaranteed that hosts equipped with a single ethernet card only had a single "eth0" interface. With this new scheme in place, an administrator now has to check first what the local interface name is before he can invoke commands on it where previously he had a good chance that "eth0" was the right name.

wwp0s29u1u4i6 is so much easier to remember than eth0, don't you think?

Poettering: The Biggest Myths

Posted Feb 3, 2013 5:04 UTC (Sun) by rahulsundaram (subscriber, #21946) [Link]

You have no idea what you are talking about and are merely repeating a misinformed notion. There are zero Fedora-specific things in systemd. systemd upstream is entirely distribution-neutral.

Poettering: The Biggest Myths

Posted Feb 3, 2013 8:23 UTC (Sun) by smurf (subscriber, #17840) [Link]

> wwp0s29u1u4i6 is so much easier to remember than eth0, don't you think?

You don't remember it. You add it to your system configuration and then forget about it. This is important if you ever add a second interface. For all other uses, there's "ip intf ls" and copy/paste.

You profess to be a sysadmin. You should know that.

Besides, the new scheme can be turned off, so you get a choice.

Stop spreading FUD and stop selectively reading systemd documentation.

Poettering: The Biggest Myths

Posted Feb 3, 2013 15:47 UTC (Sun) by smurf (subscriber, #17840) [Link]

>> ip intf ls

"ip link ls". Duh.

-- me, trying to wean myself from ifconfig

Poettering: The Biggest Myths

Posted Feb 3, 2013 12:21 UTC (Sun) by HelloWorld (guest, #56129) [Link]

> So if you're not Fedora you're at Poettering's mercy.
Many of systemd's "new" configuration files came from Debian, not Fedora. And besides, having the same configuration files across most distros is a good thing in the long term.

As for the network interface name stuff, here's the page your quote is from:
http://www.freedesktop.org/wiki/Software/systemd/Predicta...
So you quoted the paragraph that informs about one trivial disadvantage (the need to type "ifconfig" once to find out the name of the interface) while ignoring all the important stuff, such as consistent interface naming across reboots. Seriously, whom are you trying to shit here?

I don't even know why I'm replying to crazy people like you any more...

Poettering: The Biggest Myths

Posted Feb 3, 2013 12:52 UTC (Sun) by andresfreund (subscriber, #69562) [Link]

> So you quoted the paragraph that informs about one trivial disadvantage (the need to type "ifconfig" once to find out the name of the interface) while ignoring all the important stuff, such as consistent interface naming across reboots. Seriously, whom are you trying to shit here?
Imo the problem is not having to type ifconfig once, but having to type it all the time. I am completely unashamed to admit that I cannot remember such generated names.
The earlier approach of generating stable ethX-style interface names imo worked well enough and produced easier-to-remember names.

I don't understand why you feel the need to react with that amount of hyperbole. It only reinforces stupid anti-systemd stereotypes.

Poettering: The Biggest Myths

Posted Feb 3, 2013 16:18 UTC (Sun) by smurf (subscriber, #17840) [Link]

>> the problem is not to have to type ifconfig once,
>> but having to type it all the time.

*Sigh*. Did anybody even read the PredictableNetworkInterfaceNames page?
Physical-device-location-based names aren't even the default – they're just a safe fallback, if your BIOS doesn't supply sane interface numbers.
Plus, how to go back to ethX is well-documented.

Further observations:

* If you have to type in the name more than once, even in an emergency situation where you'd have to set up a route by hand, you're doing something wrong. If necessary, I'd type x=ethxMACaddr once, then refer to the thing by $x, which is even less typing than eth0. Or I'd use the shell's command history; even the busybox shell has one, these days.

* What's worse – a bit of an inconvenience, or having your internal network suddenly exposed to the whole world, because a timing quirk re-ordered your interfaces? Give me (and Lennart) a break here.

I do not want an OS which defaults to doing unsafe, random, and/or race-condition-prone things when it boots. Neither WRT my disk drives (remember the switch to UUIDs?) nor my network interfaces.

Poettering: The Biggest Myths

Posted Feb 4, 2013 19:29 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

I use zsh and it tab-completes ifup/ifdown interfaces for me (I don't use NetworkManager even on laptops). A comparable completion function for bash shouldn't be too complicated.

Poettering: The Biggest Myths

Posted Feb 4, 2013 20:32 UTC (Mon) by mgb (guest, #3226) [Link]

When you're managing large numbers of systems in offices and data centers scattered around the globe you can't count on tab completion to ensure your systems reboot online. You need predictable interface names.

eth0 used to be the answer. It was great.

Then along came udev. In solving a rare problem (consistent interface naming in the presence of multiple NICs) it created a much more serious problem (interface names change whenever a broken NIC is replaced).

Sysadmins have mostly solved this by configuring both eth0 and eth1 even when eth1 doesn't exist yet. It's a PITA but we're ready when udev slams us.

But now with systemd we would have to get out the ouija board to figure out some kind of name like wwp0s29u1u4i6 that's going to take over when a broken NIC is replaced two years hence.

The better solution is to stay with an init system that works well and doesn't get in our way and doesn't cause random problems by starting services in a different order on every boot.

Poettering: The Biggest Myths

Posted Feb 4, 2013 21:03 UTC (Mon) by anselm (subscriber, #2796) [Link]

Read the effing documentation already. A convenient link has been provided to you earlier in this thread.

  • This is not a systemd issue in any way, shape, or form. It is a udev issue. Systemd and udev share maintainers but this is as far as it goes. There are people on System V init who have similar problems with naming their NICs, and they can solve them by installing the appropriate version of udev (out of the systemd tarball) without actually having to move to systemd.
  • Nobody prevents you from staying with eth0 etc. even under the new udev. How to achieve this is documented in excruciating detail in the documentation mentioned earlier. It is very easy.

We understand that you're not so hot on systemd. However your position would be that much more tenable if you had actual valid criticisms informed by facts. Judging from your last few postings this does not appear to be the case.

Poettering: The Biggest Myths

Posted Feb 4, 2013 21:10 UTC (Mon) by dlang (subscriber, #313) [Link]

You must have missed the memo where the maintainers merged the projects and made it so that you can't build udev independently of systemd.

Yes, the problems he is describing started off as udev problems, but they are now systemd problems

Poettering: The Biggest Myths

Posted Feb 4, 2013 21:21 UTC (Mon) by anselm (subscriber, #2796) [Link]

You don't need to install systemd in order to install udev. If you start from sources you have to build both but it is perfectly possible to install (or package for installation) udev without anything from systemd.

This has all been explained to death here already.

Poettering: The Biggest Myths

Posted Feb 4, 2013 21:23 UTC (Mon) by raven667 (subscriber, #5198) [Link]

They do share build infrastructure but you can run them independently so running udev doesn't imply running systemd. You're not going to get some kind of systemd cooties from building the shared components you don't end up needing if you just want udev. 8-)

Poettering: The Biggest Myths

Posted Feb 6, 2013 5:35 UTC (Wed) by spaetz (subscriber, #32870) [Link]

> This is not a systemd issue in any way, shape, or form. It is a udev issue. Systemd and udev share maintainers but this is as far as it goes.

Not leaning in on the pro/contra discussion, but this is an exaggeration by far. The announcement (http://lwn.net/Articles/490413/) makes it clear that they "merge the udev sources into the systemd source tree."

To me that indicates strongly that systemd and udev are more than two independent tarballs that can be version mix-and-matched and just happen to have the same maintainers.

Poettering: The Biggest Myths

Posted Feb 6, 2013 12:05 UTC (Wed) by HelloWorld (guest, #56129) [Link]

> Not leaning in on the pro/contra discussion, but this is an exaggeration by far.
No, it's not. udev does the interface naming stuff and systemd has nothing at all to do with it.

Poettering: The Biggest Myths

Posted Feb 4, 2013 21:09 UTC (Mon) by dlang (subscriber, #313) [Link]

> Then along came udev. In solving a rare problem (consistent interface naming in the presence of multiple NICs) it created a much more serious problem (interface names change whenever broken NIC replaced).

I delete the udev rules that implement this on all my server images; it's just not needed. If an interface fails badly enough that other interfaces get renamed to fill the gap, things don't work (it doesn't cause a security risk, as the IPs and routes won't be able to make connections any longer).

> The better solution is to stay with an init system that works well and doesn't get in our way and doesn't cause random problems by starting services in a different order on every boot.

On my laptop, I like a fast booting system, but I've been able to do that for a decade by stripping down the boot process to not try a bunch of stuff that I don't need.

On servers, predictability and stability are far more important than boot speed. Boot speed is limited by the hardware initialization anyway. I've got large systems that take so long to go through hardware initialization that I can hit power on one of the large systems at the same time as on a simple (but fast) system, boot the simple system off a CD, install the OS, and move the CD to the complex system before it gets around to reading the CD. I've set up demos of doing exactly this to impress manager types with how fast the install process is :-)

If you want a fast boot and don't have to dig into the boot process when something goes wrong, the newer init systems are nice.

But if boot speed is not that important to you and predictability is, then async device detection, parallelizing the boot, etc. add complexity and race conditions that cause more harm than the benefit they provide.

Poettering: The Biggest Myths

Posted Feb 4, 2013 22:26 UTC (Mon) by raven667 (subscriber, #5198) [Link]

> If an interface fails badly enough that other interfaces get renamed to fill the gap, things don't work

I think that's the opposite of what happens. udev and the practice of associating interface names with MAC addresses exist so that the interface names are stable across boots and don't get shuffled around in the manner you describe without the sysadmin taking action: 00:11:22:33:44:55 is always eth0 regardless of detection and module load order. If you put in a different NIC with a new MAC and want it to take over an address, you may want to edit the configs beforehand, or from the console if you have OOB access.
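(That mapping lives in a udev rule; a generated 70-persistent-net.rules entry looks roughly like this, simplified and with a placeholder MAC address:)

    # /etc/udev/rules.d/70-persistent-net.rules (one generated line, simplified)
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="eth0"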

> But if boot speed is not that important to you, but predictability is, then async device detection, parallelizing the boot, etc add complexity and race conditions that cause more harm than the benefit provided.

I think the point of socket activation is that it's fundamentally not race-condition prone, and where you want explicit dependencies you can set Before and After to order things. Not having dependencies and service detection at all is inherently racy. As stated, boot speed wasn't a particular design goal but something that fell out of the design, and it's worth mentioning because systemd isn't doing unnecessary work to get the same end result: spawning thousands of instances of cut and grep and awk to parse config files and whatnot is much more expensive than just using a normal systems programming language.
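(A minimal sketch of a socket-activated service, with hypothetical names: systemd owns the listening socket from early boot and starts the service on the first connection, handing the open socket over via sd_listen_fds().)

    # foo.socket (hypothetical)
    [Socket]
    ListenStream=8080

    [Install]
    WantedBy=sockets.target

    # foo.service (hypothetical) - started on the first connection to port 8080
    [Service]
    ExecStart=/usr/local/bin/food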

Poettering: The Biggest Myths

Posted Feb 4, 2013 22:40 UTC (Mon) by dlang (subscriber, #313) [Link]

I was talking about the problems you can have if you disable the udev interface naming stuff.

socket activation has problems with response time if it takes a noticeable amount of time for the service to start and get to steady state.

Yes, for some things it's great, but for the main service a system is providing, I would not want to use it.

These sorts of things are the difference between desktop use (where you have lots of stuff defined, but seldom use any of it) and servers (where you permanently disable or uninstall what you don't need, and what remains is likely to be hammered).

Poettering: The Biggest Myths

Posted Feb 4, 2013 23:07 UTC (Mon) by raven667 (subscriber, #5198) [Link]

> socket activation has problems with response time if it takes a noticable amount of time for the service to start and get to steady-state.

I don't think that's much of a problem in practice; you can start a service without waiting for a request to come in to activate it, say by having it start after networking, making it a requirement of the multi-user target, and having it manage its own sockets. Initial request latency on a service which is starting or has just started is a problem that can exist with or without systemd.

Poettering: The Biggest Myths

Posted Feb 4, 2013 23:46 UTC (Mon) by dlang (subscriber, #313) [Link]

if you have the service start itself, that's not socket activation.

I'm not saying that systemd can't support this either (before someone attacks me, saying that systemd can do this)

Poettering: The Biggest Myths

Posted Feb 5, 2013 0:23 UTC (Tue) by raven667 (subscriber, #5198) [Link]

Of course, but your service can depend on others which are socket activated.

Poettering: The Biggest Myths

Posted Feb 5, 2013 18:53 UTC (Tue) by khim (subscriber, #9252) [Link]

Sure, but you're missing the point: when you use systemd you can combine socket activation and other forms of activation.

That's pretty powerful stuff: your service will start when your system is started, and other services can use it even at early boot stages. If your service is somehow started early - not a problem, it'll be used as is; if it's not yet started - it'll be brought up via socket activation and other services will wait - all transparently and without any fuss or explicit dependency sorting.

Poettering: The Biggest Myths

Posted Feb 5, 2013 19:24 UTC (Tue) by dlang (subscriber, #313) [Link]

sigh, that's why I added the bit about how I wasn't saying that systemd couldn't do this.

I wasn't missing the point, I wasn't saying that systemd can't do this.

I was just saying that there are times when socket activation is not the best thing to be doing.

Poettering: The Biggest Myths

Posted Feb 5, 2013 19:58 UTC (Tue) by khim (subscriber, #9252) [Link]

> I was just saying that there are times when socket activation is not the best thing to be doing.

It's never a good idea to disable socket activation. Socket activation is your safety net. It guarantees that services will be started when they are needed. Everything else is optional.

This is yet another thing which systemd does correctly and which was traditionally managed in an ad-hoc, kinda-works-if-you-squint-just-right way.

Poettering: The Biggest Myths

Posted Feb 5, 2013 21:55 UTC (Tue) by dlang (subscriber, #313) [Link]

Having a service started multiple times is not always a safe thing to do. In theory the service properly checks and makes sure there's only one copy of it running; in practice you just don't do that, or it _will_ bite you down the road.

Poettering: The Biggest Myths

Posted Feb 5, 2013 22:12 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Uhm, systemd guarantees that a service will be started only once.

On the other hand, I did have problems when ejabberd started in parallel with the PostgreSQL database - there was no dependency in the LSB headers because I was using an optional psql auth module.

This worked just fine until the day it suddenly stopped working, because the ordering of services had changed and Postgres moved a little bit later into the boot process.

Poettering: The Biggest Myths

Posted Feb 5, 2013 22:16 UTC (Tue) by dlang (subscriber, #313) [Link]

If nothing can go wrong, then why would you want to have a service that you configure to start one way also configured for socket based startup?

One of the things you learn after several years of running large numbers of production systems is to not trust claims that a process will always work, whatever that process is.

Failures are generally not that common, but they do happen.

Poettering: The Biggest Myths

Posted Feb 5, 2013 22:24 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

I thought it was obvious - socket-based activation enforces the ordering of services, and explicit startup might be required because the service might need to do some background stuff.

Also, it might decrease the response time for the first client.

systemd nicely allows you to do both.

Poettering: The Biggest Myths

Posted Feb 6, 2013 8:14 UTC (Wed) by paulj (subscriber, #341) [Link]

The one downside to this is that not all dependencies are socket-based. Some dependencies are more complex, and need a more abstract protocol than "open a socket" to express.

Poettering: The Biggest Myths

Posted Feb 6, 2013 9:18 UTC (Wed) by anselm (subscriber, #2796) [Link]

So? Systemd can deal with that, too – at least as well as System V init does. For example, systemd lets you express explicit forward and backward dependencies between services and will automatically construct a starting order based on them. With System V init, you either get to work out any dependencies yourself and set the magic numbers correctly by hand, or you use something like SUSE's insserv based on LSB metadata in the init scripts, where reverse dependencies are not an official feature.
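(A minimal sketch of such explicit dependencies in a unit file - hypothetical names; the reverse direction would be expressed from the other side with Before= or RequiredBy=:)

    # webapp.service (hypothetical)
    [Unit]
    Description=Web application that needs its database first
    # pull the database in, and order this unit after it
    Requires=postgresql.service
    After=postgresql.service network.target

    [Service]
    ExecStart=/usr/local/bin/webapp

    [Install]
    WantedBy=multi-user.target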

Poettering: The Biggest Myths

Posted Feb 6, 2013 12:21 UTC (Wed) by HelloWorld (guest, #56129) [Link]

> The one downfall to this is that not all dependencies are socket-based.
Systemd offers a lot more than socket-based activation (which btw even supports inetd compatibility). There's also dbus-, device-, timer- and path-based activation and autofs support (which is arguably a service activation scheme as it can be used with FUSE file systems).
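(As one concrete example of the non-socket schemes, a path unit - hypothetical names - starts its matching service when files appear in a directory:)

    # scan-upload.path (hypothetical)
    [Path]
    DirectoryNotEmpty=/var/spool/uploads

    [Install]
    WantedBy=multi-user.target

    # scan-upload.service (hypothetical)
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/scan-uploads /var/spool/uploads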

Poettering: The Biggest Myths

Posted Feb 6, 2013 16:52 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

If you have some very exotic services that use smoke signals to communicate - go on and add explicit dependencies.

Poettering: The Biggest Myths

Posted Feb 7, 2013 4:34 UTC (Thu) by paulj (subscriber, #341) [Link]

How would you add a dependency on, say, a location? Assume there is some daemon running on the system that can make the current location available over a socket (e.g. GPS co-ordinates, or a more abstract specification). The dependency is not just on the socket, but also on the /content/ of the information sent over that socket.

Hardly smoke signals.

Poettering: The Biggest Myths

Posted Feb 7, 2013 5:38 UTC (Thu) by khim (subscriber, #9252) [Link]

> Hardly smoke signals.

It is a smoke signal - from GPS satellites. It can easily be converted to something systemd understands if you create a specialized daemon (in the "true UNIX fashion" people like to preach here so much) which converts it to signals systemd understands.

And you need such a daemon anyway, because you need to decide how often to poll, whether you use just GPS or want to use WiFi too, etc. It's not as if GPS satellites can pull some trigger on your PC or smartphone, which means you'll need some complex logic anyway.

Poettering: The Biggest Myths

Posted Feb 7, 2013 5:50 UTC (Thu) by dlang (subscriber, #313) [Link]

The GPS daemon does have a socket that systemd understands; what are you suggesting? Making a different socket for every possible location so that systemd can set dependencies on whether that socket responds???

Just because systemd doesn't handle something doesn't mean that the something is worthless or stupid.

Poettering: The Biggest Myths

Posted Feb 7, 2013 8:00 UTC (Thu) by rahulsundaram (subscriber, #21946) [Link]

If systemd cannot handle something, is there a bug report? If not, why not?

Poettering: The Biggest Myths

Posted Feb 7, 2013 8:39 UTC (Thu) by paulj (subscriber, #341) [Link]

There'll be no bug report on systemd because the applications involved are likely already using some other system, e.g. DBus IPC. DBus-daemon can also do "socket activation", and did before systemd existed, I think.

Poettering: The Biggest Myths

Posted Feb 7, 2013 12:33 UTC (Thu) by khim (subscriber, #9252) [Link]

> There'll be no bug report on systemd because the applications involved are likely already using some other system. E.g. DBus IPC.

If it uses DBus IPC then it can easily be used with systemd, thus a bug report will be entirely unnecessary. I said "convert it to signals systemd understands" and not "convert it to socket activation" exactly for this reason: systemd handles many different activation requests and makes sure they don't conflict. Socket activation is just one of them (even if one of the most important ones).

Poettering: The Biggest Myths

Posted Feb 7, 2013 17:02 UTC (Thu) by paulj (subscriber, #341) [Link]

Interesting, thanks. :)

Bit of a dance, to have the IPC nexus hand off this activation. It really feels like these should be part of one thing...

Poettering: The Biggest Myths

Posted Feb 7, 2013 8:25 UTC (Thu) by anselm (subscriber, #2796) [Link]

The GPS daemon provides the current location on a socket. It is probably unreasonable to expect systemd to be able to deal with that sort of information directly (systemd detractors would immediately jump on features such as these and call them out as »bloat«, with some justification).

Hence, a reasonable way of handling this would be to write a (not very complicated) subsidiary daemon that listened to the GPS daemon's output and triggered various systemd actions based on them. This might be a good idea in any case in order to provide »smoothing« of the location data or additional rules (»tell me about bars in the vicinity but only during happy hour«). This daemon itself would of course be managed by systemd.

This approach should also please the Unix traditionalists who insist that programs should »do one job and do it well«.

Poettering: The Biggest Myths

Posted Feb 7, 2013 8:35 UTC (Thu) by paulj (subscriber, #341) [Link]

So, go on, how do you expose that dependency in a way systemd can handle it. Tell me.

(FWIW, I have no opinion generally on systemd - I don't know enough about it. I'm just a tad sceptical of the wonder claims being made for socket activation based dependency resolution).

Poettering: The Biggest Myths

Posted Feb 7, 2013 8:59 UTC (Thu) by cortana (subscriber, #24596) [Link]

Perhaps you could create target units for each location of interest; other units could then be wanted by/conflict with each target in order to be started/stopped when the location is changed. You would need a glue daemon to look at the GPS data and decide when to start/stop the location targets.
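(A rough sketch of that idea, everything hypothetical: a per-location target plus a small glue script that flips targets as fixes arrive; "gps-subscribe" stands in for whatever command streams location data:)

    # location-office.target (hypothetical); units that should only run at the
    # office declare WantedBy=location-office.target (to start with it) and
    # PartOf=location-office.target (to stop with it)
    [Unit]
    Description=Units wanted while at the office

    #!/bin/sh
    # location-glue.sh (hypothetical): start/stop the target as fixes arrive
    gps-subscribe | while read location; do
        case "$location" in
            office) systemctl start location-office.target ;;
            *)      systemctl stop  location-office.target ;;
        esac
    done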

Poettering: The Biggest Myths

Posted Feb 7, 2013 12:41 UTC (Thu) by khim (subscriber, #9252) [Link]

> So, go on, how do you expose that dependency in a way systemd can handle it. Tell me.

Well, as you've suggested: DBus activation looks like a natural fit for such a use case - and since systemd handles it just fine... I don't see what your problem is.

> I'm just a tad sceptical of the wonder claims being made for socket activation based dependency resolution.

Socket activation covers 90% of use cases, but there are other ways to activate a service. And the important thing about systemd is that they can all be used simultaneously. You can start some daemon at a specific time (using time-based activation), and when you are leaving a specific area (D-Bus-based activation), and when some other service needs this particular daemon (socket-based activation). They don't conflict and are handled correctly in all cases.

Socket activation is there to track dependencies between services on your own system: it's the simplest one to use and the most robust one. But there are others to handle "smoke signals", too.

Poettering: The Biggest Myths

Posted Feb 7, 2013 12:19 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

Don't use socket-based activation and do your dependencies manually, just as in the SysV world. Simple.

Additionally, the designer of such a crazy interface should be shot immediately to stop the propagation of bogosity.

Poettering: The Biggest Myths

Posted Feb 7, 2013 15:59 UTC (Thu) by dlang (subscriber, #313) [Link]

> Don't use socket-based activation and do your dependencies manually, just as in the SysV world. Simple.

Then why is it that when people talk about doing exactly this, they get jumped on by systemd people saying things like "why didn't you submit a bug report to get that capability added to systemd" or "that's a stupid way to do things, you need to re-write your software to use systemd to do it"?

This subthread started with the simple statement that socket-based activation was not always appropriate, along with the acknowledgement that systemd could support this.

Poettering: The Biggest Myths

Posted Feb 7, 2013 16:03 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> Then why is it that when people talk about doing exactly this, they get jumped by systemd people saying things like "why didn't you submit a bug report to get that capibility added to systemd" or "that's a stupid way to do things, you need to re-write your software to use systemd to do it"
These pieces of advice are not mutually exclusive, you know.

If you have a compelling use case for some exotic activation system for which it makes sense to add core support, then filing a bug report might be a good idea.

And in other cases it might be a good idea to simply rewrite the offending code.

Poettering: The Biggest Myths

Posted Feb 5, 2013 22:43 UTC (Tue) by raven667 (subscriber, #5198) [Link]

That's not a problem that can happen with systemd though, since it keeps the services on a tight leash and knows what state (running or not) the service is in. Since that part must be self-contained in PID 1 it would seem there is little room for race conditions or errors to exist.

If we had the time we could skim through http://cgit.freedesktop.org/systemd/systemd/tree/src/core/ and see if we can identify where it keeps that state and how it is updated to see if there are any bugs.

Poettering: The Biggest Myths

Posted Feb 5, 2013 2:38 UTC (Tue) by smurf (subscriber, #17840) [Link]

> socket activation has problems with response time if it takes
> a noticable amount of time for the service to start and get to steady-state.

True. On a server, you probably don't want to use socket activation.

But socket activation is only an offshoot of systemd's "let me open all the sockets up front and hand them out to the daemons" idea. And that's useful on servers with their multiple daemons, because you now no longer need 90% of your boot dependencies, which is 90% less stuff to go wrong – esp. since some of these depend on the details in your daemons' config files.

Poettering: The Biggest Myths

Posted Feb 5, 2013 18:55 UTC (Tue) by dgm (subscriber, #49227) [Link]

> that's useful on servers with their multiple daemons, because you now no longer need 90% of your boot dependencies

I'm a bit divided about that, exactly because of this. If you don't need them, they should not be there. I fear that this will lead to distros enabling each and every service under the sun, just because they assume it will not get activated, which may or may not be true.

Poettering: The Biggest Myths

Posted Feb 6, 2013 1:37 UTC (Wed) by HelloWorld (guest, #56129) [Link]

> I fear that this will lead to distros enabling each and every service under the sun
You mean like Debian has been doing for ages?

Poettering: The Biggest Myths

Posted Feb 6, 2013 12:15 UTC (Wed) by smurf (subscriber, #17840) [Link]

You misunderstand.

What I mean is that if Apache needs Mysql, you no longer need to entomb that dependency in your startup scripts.

Poettering: The Biggest Myths

Posted Feb 5, 2013 18:58 UTC (Tue) by khim (subscriber, #9252) [Link]

> True. On a server, you probably don't want to use socket activation.

Yes, you absolutely do want to use socket activation on a server. It guarantees that a service will be brought up if needed. You may want to use other forms of activation, too - but these are optional; if you forgot to explicitly start some service which is needed by another service, socket activation is there to bail you out.

Even if you bring up all the services on server startup, you still want socket activation. Without socket activation you need to order them somehow, think about the dependencies between them, and so on, but with socket activation you just start them all - and that's it: socket activation will guarantee that nothing is lost.

Poettering: The Biggest Myths

Posted Feb 6, 2013 1:51 UTC (Wed) by HelloWorld (guest, #56129) [Link]

So you're saying that having multiple NICs is a rare thing when you're managing large numbers of systems in data centers?

And if that weren't enough, you readily admit that you're too incompetent to figure out that the mapping from MAC addresses to interface names is stored in /etc/udev/rules.d/70-persistent-net.rules and can be modified there?

You're a crazy person. Get help. Or don't, I don't care.

Poettering: The Biggest Myths

Posted Feb 3, 2013 22:41 UTC (Sun) by ovitters (subscriber, #27950) [Link]

Neither Lennart nor systemd forces things on others. The systemd developers had an upstream/downstream discussion at FOSDEM. It was announced on their mailing list, and it was eventually announced and arranged via Google+. People from various distributions participated, Debian as well.

The perception seems to be that systemd is "take it or leave it" and that Lennart somehow makes people do things against their will, while actually there are a lot of people who work within distributions who also do not see the need for so many differences between distributions.

systemd reduces those differences. The work is done by people who agree to that. Those people participate in all the ideas. Lennart & co go to loads of conferences to ensure they reach out to the people who actually make things happen.

It would be nice if you went to FOSDEM and just followed what goes on. Everything is in the open, etc.

Poettering: The Biggest Myths

Posted Feb 2, 2013 20:00 UTC (Sat) by smurf (subscriber, #17840) [Link]

*Yawn* There has been almost no progress, other than (upstart and) systemd, for the last 20 years.

Systemd cannot block progress because (a) nobody prevents you from writing and submitting as many enhancements to it as you like, and (b) a modular solution will run into multiple hard-to-solve problems which are far worse than some nebulous monolithism. See my other post, which you have conveniently chosen to ignore; I assume because the facts contained therein are incompatible with your world view.

The problem is, computers tend to be not at all interested in world views – unless you can back them up with actual working code.

Thus: put up or shut up.

Poettering: The Biggest Myths

Posted Feb 1, 2013 21:35 UTC (Fri) by smurf (subscriber, #17840) [Link]

OK, just for the sake of argument (and because I admit that I'm having some perverse sort of fun with this) let's treat this as a homework assignment, namely "explain why mgb's idea will crash rather spectacularly".

So let's assume your init will not directly start the daemon (or user session, or …), but spawn a setup-and-exec-daemon program.

And what happens afterwards? The setup-and-exec program then needs to stick around and syslog-and-remember the process output. Oops, can't do that, that'd be too monolithic, so you fork+exec a logger process instead, right?

Said setup-and-exec program also needs to tell init which cgroup the daemon has been spawned into – for the simple reason that if/when it dies, init needs to know how to start another cgroup-aware process which cleans up behind said daemon (i.e. kills any remaining processes, and then tells the logger to exit).

There are a whole lot of problems with this scenario.

* The additional overhead. Running three helpers and a whole lot of interprocess communication in order to start and stop one daemon is … um … rather sub-optimal.

* Either every daemon will have a separate logger running, or you have to do a complicated dance of passing file descriptors between processes. Neither of these looks like a good idea, for rather obvious reasons.

* All of this to-and-fro between processes requires rather tight agreement of what's going on. You can basically forget the idea of a separate repository for part of this song-and-dance. If you disagree, kindly point to a real-world example where this has worked.

* If init does not know about cgroups, it cannot tell the logger that a daemon's last process has died, which it needs to do because part of the interface is /dev/log – which is a datagram interface and thus doesn't get end-of-file notifications. However, there's no other way to discover this because, surprise, PID 1 is the one which gets the SIGCHLD signals.

* No, you cannot get by with a single global /dev/log, because you want the equivalent of "systemctl status X" to show the syslog output from this one daemon X.

* init would need to forward my systemctl request about information about daemon X to the appropriate logger. Yet more complexity.

* All this IPC stuff is at least as complex as using dbus, which is somewhat unfortunate because depending on dbus is evil and much too complex, if I remember previous arguments correctly. But maybe that wasn't yours.

This concludes my part of this homework assignment, using only two aspects of what systemd does, and without reading a single "rationale" document by systemd's authors.

Your part: Either tell me, in as much detail as I just did, how you'd solve all of these problems with the multi-program approach. Or admit that you've been wrong.

Poettering: The Biggest Myths

Posted Feb 1, 2013 21:54 UTC (Fri) by raven667 (subscriber, #5198) [Link]

There is an example of how this can work, although with fewer features and guarantees (no cgroups, no dependencies): daemontools. You run the svscan daemon out of /etc/inittab, which maintains a parent/child relationship and pipe with PID 1 so that it can be restarted should it fail, and svscan fires off a separate supervise process for each daemon you want to manage. supervise maintains a parent/child relationship and pipe with its child process and will restart the child should it go away. If there is a log startup script then that is run as well, and anything written to the child's stdout or stderr is forwarded to the stdin of the log daemon (usually multilog).

You can hack around many of the failings (the lack of restart throttling, for example, by using sleep), but it's not feasible to hack around the lack of service dependency resolution, or the fact that this is an add-on component and not part of the core OS, so it can't be relied upon to be there in most cases. There has been development and thought in the area of service management beyond SysV init in the last 15 years or so, but there are real reasons why systems like daemontools and runit haven't gained the traction that systemd has: they are not as comprehensive and are technically inferior.
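(For readers who never used daemontools, the per-service setup described above is just two tiny run scripts - the daemon name here is hypothetical:)

    # /service/mydaemon/run
    #!/bin/sh
    # send stderr to stdout so the log service captures everything
    exec 2>&1
    exec /usr/local/bin/mydaemon --foreground

    # /service/mydaemon/log/run
    #!/bin/sh
    # multilog timestamps each line and rotates the ./main log directory
    exec multilog t ./main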

Poettering: The Biggest Myths

Posted Feb 2, 2013 10:02 UTC (Sat) by jospoortvliet (subscriber, #33164) [Link]

Now that is a lame reply. "Yeah, there IS a way to do it, it just can't actually do it". Just have the balls to say "ok, you're right". There is nothing wrong with changing your opinion/position based on evidence and good arguments, heck, it's been said that "people who never change their opinion only show that they never learn".

Poettering: The Biggest Myths

Posted Feb 2, 2013 14:17 UTC (Sat) by smurf (subscriber, #17840) [Link]

>> less features and guarantees (no cgroups, no dependancies), with daemontools

Which is why daemontools did not catch on.

(To be fair, daemontools predates cgroups – but then, this list of deficiencies WRT systemd is hardly exhaustive.)

Poettering: The Biggest Myths

Posted Feb 1, 2013 16:44 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Are we sure that mgb is not just an Eliza-like script?

Poettering: The Biggest Myths

Posted Feb 1, 2013 14:29 UTC (Fri) by anselm (subscriber, #2796) [Link]

> Monoliths such as BusyBox and systemd can be useful tools but they are evolutionary leaf nodes. The distros which retain their ability to evolve are the future of FLOSS.

Funnily enough, System V init represents exactly this sort of evolutionary dead end. It has seen essentially no change for the last 25 years or so – in spite of its various glaring problems. There isn't really a lot you can do to System V init in an »evolutionary« way to get it near systemd feature parity. People have been piling stuff on top of it over the years (without doing a lot about important problems like the complete lack of service monitoring) but the basic approach, together with its deficiencies, remains the same. Systemd is a paradigm shift for init systems that relates to System V init like, e.g., Postfix relates to Sendmail.

In addition, the situation for improvements to System V init is now a lot worse than it was a few years ago because the state of the art for init-like systems is no longer defined by System V init itself but by systemd. Any new init system (including one based on evolutionary improvement of System V init) must now be at least as good as systemd – and likely considerably better – to entice the big distributions which are now pushing systemd to convert to the new system (again!). Opinions may differ as to whether systemd is »monolithic« or an »evolutionary leaf node«, but as long as nothing does a noticeably better job, systemd looks as if it is going to be the init system of choice for most major Linux distributions.

To see whether you are right we shall simply have to wait for a few years and then check again. I'll venture the prediction that, by that time, Linux distributions that do not support systemd will be few and far between – much like today we find very few (if any) Linux distributions that still come with XFree86 rather than X.org.

Poettering: The Biggest Myths

Posted Feb 1, 2013 2:47 UTC (Fri) by raven667 (subscriber, #5198) [Link]

Of course the HA manager needs to monitor the HA services. systemd doesn't provide a mechanism for running monitoring scripts, but the HA manager doesn't have to maintain a parent/child relationship with the services when sufficient tools are provided to reliably start and stop the daemon. That way you can take advantage of cgroups and all the kernel features that are trivially exposed by systemd (man systemd.exec) without duplicating all that code, probably less reliably, in your HA system.

If you aren't using clusters for HA with inter-server failover, then systemd does provide several critical features for making a local service highly available, such as automatic restarts (Restart=on-failure), hardware watchdog support, and a protocol that a daemon can use to be watchdogged by systemd (man systemd.service; I think you can just use systemd-notify --status="WATCHDOG=1"), should one wish to add support for it to their daemon.
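(A sketch of those knobs in a unit file - hypothetical names; WatchdogSec= expects the daemon to ping systemd via sd_notify("WATCHDOG=1") at least that often, which is the call systemd-notify wraps:)

    # payments.service (hypothetical)
    [Service]
    ExecStart=/usr/local/bin/paymentsd
    # restart automatically if the process dies or stops pinging the watchdog
    Restart=on-failure
    WatchdogSec=30
    # the daemon reports readiness and watchdog pings over the notify socket
    Type=notify
    NotifyAccess=main

    [Install]
    WantedBy=multi-user.target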

Poettering: The Biggest Myths

Posted Feb 1, 2013 3:36 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

>anyplace you start services, it would be useful to have this capability.
>this includes high availability and load balancing managers.
>Should systemd now take over those functions as well?
Sure. We actually rewrote our monitoring scripts to use systemd's DBUS interface to watch for live services. Systemd is used to detect service shutdowns and to do state control.

Works beautifully.
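(The state such scripts read over DBus - the ActiveState exposed by org.freedesktop.systemd1 - can also be approximated from plain shell; a rough sketch with a hypothetical unit name:)

    #!/bin/sh
    # Poll the unit's state; the DBus API exposes the same information and can
    # additionally push change notifications instead of requiring polling.
    unit=ejabberd.service
    if [ "$(systemctl is-active "$unit")" != "active" ]; then
        echo "$unit is not active" | mail -s "service alert" ops@example.com
        systemctl start "$unit"
    fi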

Poettering: The Biggest Myths

Posted Feb 1, 2013 11:10 UTC (Fri) by anselm (subscriber, #2796) [Link]

the "unix way" that you are so dismissive of would be to provide a tool to use when starting things, and then that tool could be used by lots of different users, including ones that you don't think of.

It turns out that systemd is quite useful in a HA/load balancing context – it offers a reliable way of starting and stopping services that can be controlled by an external monitoring infrastructure. It is also possible to run instances of systemd that are not PID 1.

If we stipulate that this is not what Lennart and Kay originally had in mind, then systemd is a good example of the »Unix way«, by your definition.

Poettering: The Biggest Myths

Posted Feb 1, 2013 8:13 UTC (Fri) by smurf (subscriber, #17840) [Link]

> why not just create a service launcher command that creates the cgroups as needed

The word "just" does not apply in this context.

The launcher would still need to tell systemd which cgroup and which main process has been created, so that systemd can take the correct action if/when the thing dies.

In fact, the launcher would have to be *started* from systemd, for the simple reason that you want your daemon to inherit a clean environment. Fixing it all up in a launcher or, worse, in each daemon you start (which is the current method) does not always work and requires additional privileges which you need to guard against being exploited.

Thus, there's no advantage to having a separate launcher, other than giving you a warm fuzzy feeling. Not sufficient to convince me, sorry.

Poettering: The Biggest Myths

Posted Feb 1, 2013 11:15 UTC (Fri) by anselm (subscriber, #2796) [Link]

Presumably, the »service launcher« isn't supposed to be used with systemd – the idea is to hang on to System V init a little longer by having a method to launch services with (some of) the features that systemd would otherwise offer, like per-service cgroups.

In this context the fact that the demise of a service cannot be detected does not matter since System V init can't do it, either, and the proponents of System V init apparently consider this an overrated feature.

