The road forward for systemd
Posted May 26, 2010 17:00 UTC (Wed) by jch (guest, #51929)
Parent article: The road forward for systemd
> Daemons must be patched to work optimally with systemd
Am I right in understanding that they don't actually /need/ to be patched, as long as they can be kept from forking? (In other words -- if it works with runit and upstart, it will work with systemd.)
--jch
Posted May 26, 2010 17:05 UTC (Wed) by corbet (editor, #1)
Posted May 26, 2010 17:37 UTC (Wed) by mezcalero (subscriber, #45103)
Posted May 26, 2010 18:39 UTC (Wed) by smurf (subscriber, #17840)
Also, equating "a daemon has definitely bitten the dust" with "its cgroup is now empty" might actually work for a nonzero subset of them, though not all.
In summary, IMHO some of systemd's ideas make sense and might actually fit into upstart's model of the world. Just a SMOP. :-P
Posted May 26, 2010 23:58 UTC (Wed) by Tobu (subscriber, #24111)
> Also, equating "a daemon has definitely bitten the dust" with "its cgroup is now empty" might actually work for a nonzero subset of them, though not all.
Indeed. A getty service, screen, and systemd do not mix (nabbed from irc).
getty is started, user logs in and starts screen, detaches it. Now the cgroup isn't empty and getty won't respawn.
Respawning getty with a new cgroup without waiting for the cgroup to empty would work better. A service offering screen sessions with new cgroups would also work, requiring the user to explicitly use those.
Posted May 27, 2010 6:45 UTC (Thu) by mezcalero (subscriber, #45103)
Not only every service but also every user gets its own cgroup in the systemd model. Monitoring services via cgroups doesn't mean you stop monitoring them via the traditional SIGCHLD mechanism as well.
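Purely as an illustration of how those two sources of information can be combined (this is not systemd's code, and the cgroup mount point and group path below are assumptions), a supervisor might reap children with SIGCHLD while also checking whether anything at all is left in the service's control group:

/* Sketch only: reap children via SIGCHLD and, independently, check
 * whether a service's cgroup still contains any task.  The cgroup
 * hierarchy location is an assumption; adjust to your mount point. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>

static void sigchld_handler(int sig)
{
        (void) sig;
        /* Reap every child that has exited so far. */
        while (waitpid(-1, NULL, WNOHANG) > 0)
                ;
}

/* Returns 1 if the cgroup's "tasks" file lists no PIDs, i.e. nothing is
 * left of the service, not even double-forked children; 0 if something
 * is still there; -1 on error. */
static int cgroup_is_empty(const char *cgroup_dir)
{
        char path[512], line[64];
        FILE *f;
        int empty;

        snprintf(path, sizeof(path), "%s/tasks", cgroup_dir);
        f = fopen(path, "r");
        if (!f)
                return -1;
        empty = (fgets(line, sizeof(line), f) == NULL);
        fclose(f);
        return empty;
}

int main(void)
{
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = sigchld_handler;
        sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
        sigaction(SIGCHLD, &sa, NULL);

        /* Hypothetical group path for a "foo" service. */
        if (cgroup_is_empty("/sys/fs/cgroup/systemd/system/foo.service") == 1)
                printf("service is really gone, safe to respawn\n");
        return 0;
}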
Posted May 27, 2010 11:04 UTC (Thu) by liljencrantz (guest, #28458)
* [x]inetd only allows you to load services either lazily (start service when first needed) or on demand (start service once per request). Systemd has a third option, to load a service at systemd startup, right after all sockets have been created. This third option is probably the one that will be used the most.
Posted May 31, 2010 15:29 UTC (Mon) by mezcalero (subscriber, #45103)
2. Yes, we currently handle socket-triggered, bus-triggered, file-triggered, mount-triggered, automount-triggered, device-triggered, swap-triggered, timer-triggered and service-triggered activation. And the socket-based activation not only covers AF_INET and AF_INET6, but also AF_UNIX and classic named file system fifos. And the sockets can do all of SOCK_STREAM, SOCK_DGRAM and SOCK_SEQPACKET.
3. Of course, systemd can restart daemons for you just fine. And it does so out-of-the-box for many of them (such as the gettys). Restarting a service is a core part of process babysitting. And systemd is really good at that.
4. There is dependency checking for the ordering. However, the shutdown order is always the reverse of the startup order, so you cannot configure startup and shutdown ordering independently. Of course, manual ordering is not necessary when socket/bus-based activation is used.
5. No, either way the patch is trivial; if launchd is already supported it is even a bit shorter. For the 10 daemons (or so) in the F13 default install, none took more than 10 or 20 lines of changes (see the sketch after this list).
6. Even if you define no native services and rely exclusively on SysV services, boot-up will be a bit faster than in classic F13 (as an example), since we make use of the parallelization information from the LSB headers (if they are available). Some other distributions (SUSE) use that information even with classic sysvinit. Ubuntu does not.
7. And no, I see no problem here. The DNS timeout is 30s or so on most systems. If a DNS server really needs that much time to start up then it is definitely broken. (A simple work-around is to edit the .service file of the client service and add an After= line for your super-slow service.) But anyway, this problem is hypothetical.
8. Well, if you want to connect to a socket service you need to know its address. This has always been like that, and is not changed in any way if systemd comes into play.
9. Sure, Upstart could do that. Not sure though how that fits into their current scheme. I mean, you can hack an emacs clone into the Linux kernel if you want to. There's nothing preventing you from doing that, either.
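As a rough illustration of the kind of patch point 5 refers to (a sketch under assumptions, not any particular daemon's real code): the daemon checks whether the manager handed it an already-bound listening socket, following the convention systemd documents of passing the descriptor count in $LISTEN_FDS and the intended recipient in $LISTEN_PID, with the first passed descriptor being 3, and otherwise creates its socket the traditional way. The port number is only an example.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Return a listening socket: either the one passed in by the service
 * manager (systemd/launchd style) or one we create ourselves. */
static int get_listen_socket(void)
{
        const char *pid_str = getenv("LISTEN_PID");
        const char *fds_str = getenv("LISTEN_FDS");

        /* Were we socket-activated, and was the fd really meant for us? */
        if (pid_str && fds_str &&
            (pid_t) atol(pid_str) == getpid() && atoi(fds_str) >= 1)
                return 3;       /* first passed file descriptor */

        /* Traditional path: create, bind and listen ourselves. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(12345);             /* example port */
        sa.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(fd, (struct sockaddr *) &sa, sizeof(sa));
        listen(fd, 16);
        return fd;
}

int main(void)
{
        int listen_fd = get_listen_socket();

        for (;;) {
                int conn = accept(listen_fd, NULL, NULL);
                if (conn < 0)
                        continue;
                /* ... serve the connection ... */
                close(conn);
        }
}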
Posted Jun 1, 2010 23:01 UTC (Tue) by liljencrantz (guest, #28458)
Posted May 27, 2010 11:17 UTC (Thu) by Tobu (subscriber, #24111)
Posted May 31, 2010 15:03 UTC (Mon) by mezcalero (subscriber, #45103)
Posted May 26, 2010 19:01 UTC (Wed) by ghigo (guest, #5297)
The part that I don't like is that this functionality is coupled to the init daemon: I would prefer a very simple init daemon which starts a classic sysv init script. The work of "wait for a connection, then start the daemon" could be left to the script itself.
To me it seems that this functionality is coupled to the init daemon only because the init daemon has certain capabilities by virtue of being pid 1 [*]. If this is true (maybe I am missing something), I would prefer to extend the capabilities of the pid=1 process to other processes (via a kernel change), and to separate the function of the init process (starting the base system) from that of a "systemd" process (starting the expensive daemons).
Finally, a simple comment: I understand the importance of a quick boot, but today (I have a Debian testing install on a not-so-old AMD system) the major part of the boot time is consumed by the BIOS. So who cares about a minimal increase in boot speed, with the risk of a problem in a really critical part of a Linux system? If init fails, the system is blocked!
[*] IIRC the pid=1 process has the capability to adopt any process that loses its parent.
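To make the footnote concrete, here is a purely illustrative sketch (not what any real init ships) of the pid 1 property in question: because orphans are re-parented to pid 1, an init process can wait() for, and therefore notice the death of, daemons it never forked itself.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        for (;;) {
                int status;
                /* Blocks until *any* child exits, including adopted
                 * orphans such as double-forking daemons. */
                pid_t pid = wait(&status);
                if (pid > 0)
                        printf("child %ld exited with status %d\n",
                               (long) pid, WEXITSTATUS(status));
                else
                        sleep(1);   /* no children at the moment */
        }
}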
Posted May 26, 2010 19:19 UTC (Wed) by mezcalero (subscriber, #45103)
I really wish people would read the blog story before commenting here. Many of the issues raised again and again are already explained there in detail.
Posted May 27, 2010 7:50 UTC (Thu) by ghigo (guest, #5297)
I wrote that systemd is more powerful than (x)inetd, but the concept is the same.
Posted May 27, 2010 8:04 UTC (Thu) by rahulsundaram (subscriber, #21946)
Posted May 31, 2010 15:34 UTC (Mon) by mezcalero (subscriber, #45103)
So, please, don't compare systemd with inetd too much, because that misses the core point.
If you don't see what the difference between systemd and inetd is, then please read the story again, particularly the part about "Parallelizing Socket Services".
Posted May 27, 2010 8:37 UTC (Thu) by marcH (subscriber, #57642)
Lucky you.
Posted May 27, 2010 9:04 UTC (Thu) by dgm (subscriber, #49227)
Posted May 27, 2010 14:41 UTC (Thu) by kronos (subscriber, #55879)
Posted Jun 3, 2010 7:33 UTC (Thu) by eduperez (guest, #11232)
Patching daemons
Daemons do not need to be patched; they don't even need to be kept from forking. The use of the word "optimally" was intentional; if you want to use the launchd-inspired mechanism, you need a patched daemon.
The road forward for systemd
* [x]inetd handles network sockets; systemd also handles Unix sockets and file systems in exactly the same way.
* Unlike upstart, systemd does not solve the problem of automatically restarting services if they go down.
* There is no dependency checking when stopping services, so I can't make it so that when I restart postgres, my web service will be automatically restarted too. Your solution is to fix all services that can't gracefully handle downtime in their dependent services. Your rationale is that they will run into that problem when the services are on different machines in the future anyway.
* Enabling systemd support in a daemon can be a fair bit of work since it requires the ability for the daemon to take over an already open socket instead of creating a new one, but many popular daemons already support this because it's what launchd does too. A service that supports launchd can be trivially ported to systemd.
* If one does not convert a server to systemd init and opts to use the sysvinit compatibility layer, boot performance will be unchanged, but one will still get the increased service control of cgroups.
* If service A takes a long time to start, and service B needs A and has a short time out, problems will ensue. This is likely to happen when A is a name server. There is currently no fix for this in systemd, service B needs to be patched to become more patient.
* Socket location configuration is usually moved from service config into the systemd init file for the service, thus in the common case there is no duplication of information. An exception to this is with services that have both a client and server part running on the same system, and the server part is started by systemd. In these cases, the client side of the service will need to duplicate the socket information somewhere.
* Theoretically, there is nothing preventing upstart from using the same socket trick that systemd and launchd use, but the existence of pre/post startup scripts would mean that some amount of serialisation would still happen (see the sketch below).
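For illustration, a minimal sketch of the manager-side half of that socket trick, with a hypothetical socket path and daemon name: the manager binds and listens first, then execs the daemon with the socket left open at a known descriptor, so clients can already connect and the kernel queues them while the daemon is still starting.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
        struct sockaddr_un sa;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        memset(&sa, 0, sizeof(sa));
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, "/run/example.socket", sizeof(sa.sun_path) - 1);
        unlink(sa.sun_path);                     /* hypothetical path */
        bind(fd, (struct sockaddr *) &sa, sizeof(sa));
        listen(fd, 16);

        if (fork() == 0) {
                /* Child: put the listening socket at fd 3 and tell the
                 * daemon about it, systemd/launchd style. */
                char pid_str[32];
                dup2(fd, 3);
                snprintf(pid_str, sizeof(pid_str), "%ld", (long) getpid());
                setenv("LISTEN_PID", pid_str, 1);
                setenv("LISTEN_FDS", "1", 1);
                execl("/usr/sbin/exampled", "exampled", (char *) NULL);
                _exit(1);
        }

        /* Parent: free to go on starting other services in parallel;
         * connections to the socket are queued in the meantime. */
        return 0;
}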
The road forward for systemd
Using SIGCHLD certainly works; I think it's a good idea to have systemd be flexible on that. Counting a cgroup as empty even when it has non-empty sub-cgroups (the tty group still having a tty+user subgroup) might also do, if that's what you were getting at.
The road forward for systemd
As someone has pointed out, this is already implemented in (x)inetd. Yes, systemd is more powerful, but the concept is the same as inetd's.
This will also simplify the adoption of systemd.
The road forward for systemd
> The idea is actually even older than launchd. Prior to launchd the venerable inetd worked much like this: sockets were centrally created in a daemon that would start the actual service daemons passing the socket file descriptors during exec(). However the focus of inetd certainly wasn't local services, but Internet services (although later reimplementations supported AF_UNIX sockets, too). It also wasn't a tool to parallelize boot-up or even useful for getting implicit dependencies right.
The road forward for systemd
With Asus ExpressGate enabled (the default) it takes 38 seconds from power-on to grub...
Without ExpressGate it takes "only" 26...