
The road forward for systemd


Posted May 26, 2010 17:00 UTC (Wed) by jch (guest, #51929)
Parent article: The road forward for systemd

I think there's a more fundamental difference between upstart and systemd, which the article hasn't mentioned. In upstart, the basic concept is the event; system states are encoded using events, such as "networking started" and "networking stopped". In systemd, states are a built-in notion, which complicates the systemd code but simplifies system management.

> Daemons must be patched to work optimally with systemd

Am I right in understanding that they don't actually /need/ to be patched, as long as they can be kept from forking? (In other words -- if it works with runit and upstart, it will work with systemd.)

--jch



Patching daemons

Posted May 26, 2010 17:05 UTC (Wed) by corbet (editor, #1) [Link] (1 response)

Daemons do not need to be patched; they don't even need to be kept from forking. The use of the word "optimally" was intentional; if you want to use the launchd-inspired mechanism, you need a patched daemon.

Patching daemons

Posted May 26, 2010 17:37 UTC (Wed) by mezcalero (subscriber, #45103) [Link]

btw, as a side note: I have now finished patching all daemons we start by default on F13, plus a few more, and the patches should be of upstreamable quality.
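For context, the patch in question teaches a daemon to adopt listening sockets passed in by the service manager instead of binding them itself. A minimal sketch of that protocol in Python (the environment-variable names follow systemd's socket-activation interface; the fallback logic and port are illustrative, not any particular daemon's code):

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first file descriptor passed by the manager


def listen_fds():
    """Return the number of sockets passed in by the service manager, or 0."""
    # The manager sets LISTEN_PID to the daemon's pid and LISTEN_FDS to the
    # number of descriptors it passed, numbered upwards from fd 3.
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return 0
    try:
        return int(os.environ.get("LISTEN_FDS", "0"))
    except ValueError:
        return 0


def get_server_socket(port):
    """Adopt a manager-passed socket if present, else bind one ourselves."""
    if listen_fds() >= 1:
        # Re-wrap the inherited descriptor instead of creating a new socket.
        return socket.socket(fileno=SD_LISTEN_FDS_START)
    # Fallback: classic standalone operation, no service manager involved.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen(5)
    return s
```

The point of the fallback branch is that such a patch costs the daemon nothing when it is started outside systemd, which is why the changes can stay small.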

The road forward for systemd

Posted May 26, 2010 18:39 UTC (Wed) by smurf (subscriber, #17840) [Link] (7 responses)

You can treat "somebody wants to connect to port 3306" as an event which should trigger starting mysqld. (Assuming that somebody is going to add appropriate code to upstart.)
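In systemd that trigger is expressed declaratively with a socket unit rather than an event. A hypothetical pair of units for the mysqld case might look like this (unit and binary names are illustrative, not any distribution's actual packaging):

```ini
# mysqld.socket -- systemd listens on 3306 and starts the service on demand
[Socket]
ListenStream=3306

[Install]
WantedBy=sockets.target

# mysqld.service -- started when the first connection arrives
[Service]
ExecStart=/usr/sbin/mysqld
```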

Also, equating "a daemon has definitely bitten the dust" with "its cgroup is now empty" might actually work for a nonzero subset of them, though not all.

In summary, IMHO some of systemd's ideas actually make sense and might actually fit into upstart's model of the world. Just a SMOP. :-P

The road forward for systemd

Posted May 26, 2010 23:58 UTC (Wed) by Tobu (subscriber, #24111) [Link] (5 responses)

> Also, equating "a daemon has definitely bitten the dust" with "its cgroup is now empty" might actually work for a nonzero subset of them, though not all.

Indeed. A getty service, screen, and systemd do not mix (nabbed from irc).

getty is started; the user logs in, starts screen, and detaches it. When the user logs out, the cgroup still isn't empty (screen is keeping it populated), so getty won't respawn.

Respawning getty with a new cgroup without waiting for the cgroup to empty would work better. A service offering screen sessions with new cgroups would also work, requiring the user to explicitly use those.

The road forward for systemd

Posted May 27, 2010 6:45 UTC (Thu) by mezcalero (subscriber, #45103) [Link] (4 responses)

well this is just nonsense. the screen/getty problem does not exist, scott is just a little confused about this.

not only every service, but also every user gets its own cgroup in the systemd model. if you monitor services via cgroups it doesn't mean you would stop monitoring them via the traditional sigchld stuff.

The road forward for systemd

Posted May 27, 2010 11:04 UTC (Thu) by liljencrantz (guest, #28458) [Link] (2 responses)

Lennart, I've read your blog post, this article and the comments on both. I think I've wrapped my head around most of it, could you just confirm that my understanding is correct on these points?

* [x]inetd only allows you to load services either lazily (start service when first needed) or on demand (start service once per request). Systemd has a third option, to load a service at systemd startup, right after all sockets have been created. This third option is probably the one that will be used the most.
* [x]inetd handles network sockets, systemd also handles unix sockets and file systems in exactly the same way.
* Unlike upstart, systemd does not solve the problem of automatically restarting services if they go down.
* There is no dependency checking when stopping services, so I can't make it so that when I restart postgres, my web service will be automatically restarted too. Your solution is to fix all services that can't gracefully handle downtime in their dependent services. Your rationale is that they will run into that problem when the services are on different machines in the future anyway.
* Enabling systemd support in a daemon can be a fair bit of work since it requires the ability for the daemon to take over an already open socket instead of creating a new one, but many popular daemons already support this because it's what launchd does too. A service that supports launchd can be trivially ported to systemd.
* If one does not convert a service to a native systemd unit and opts to use the sysvinit compatibility layer, boot performance will be unchanged, but one will still get the increased service control of cgroups.
* If service A takes a long time to start, and service B needs A and has a short time out, problems will ensue. This is likely to happen when A is a name server. There is currently no fix for this in systemd, service B needs to be patched to become more patient.
* Socket location configuration is usually moved from service config into the systemd init file for the service, thus in the common case there is no duplication of information. An exception to this is with services that have both a client and server part running on the same system, and the server part is started by systemd. In these cases, the client side of the service will need to duplicate the socket information somewhere.
* Theoretically, there is nothing preventing upstart from using the same socket trick that systemd and launchd use, but the existence of pre/post startup scripts would mean that some amount of serialisation would still happen.

The road forward for systemd

Posted May 31, 2010 15:29 UTC (Mon) by mezcalero (subscriber, #45103) [Link] (1 responses)

1. Well, I wouldn't use the terms "lazily" and "on demand" like this (they are completely synonymous in my understanding). But yes, inetd supports one-instance-per-connection and one-instance-for-all-connections modes. And so does systemd.

2. Yes, we currently handle socket-triggered, bus-triggered, file-triggered, mount-triggered, automount-triggered, device-triggered, swap-triggered, timer-triggered and service-triggered activation. And the socket-based activation not only covers AF_INET and AF_INET6, but also AF_UNIX and classic named file system fifos. And the sockets can do all of SOCK_STREAM, SOCK_DGRAM and SOCK_SEQPACKET.

3. Of course, systemd can restart daemons for you just fine. And it does so out-of-the-box for many of them (such as the gettys). Restarting a service is a core part of process babysitting. And systemd is really good at that.

4. There is dependency checking for the order. However, we enforce that the shutdown ordering is the reverse of the startup ordering; you may not configure startup and shutdown ordering independently. Of course, manual ordering is not necessary when socket/bus-based activation is used.

5. No, either way the patch is trivial; if launchd is already supported it is even a bit shorter. For the 10 daemons (or so) in the F13 default install, none took more than 10 or 20 lines of changes.

6. Even if you define no native services and rely exclusively on SysV services, boot-up will be a bit faster than in classic F13 (as an example), since we make use of the parallelization information from the LSB headers (if they are available). Some other distributions (Suse) use that information even on classic sysv. Ubuntu does not.
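The LSB headers in question are the comment blocks at the top of sysv init scripts, from which dependency and ordering information can be derived for parallelization. A typical (hypothetical) header:

```sh
### BEGIN INIT INFO
# Provides:          mydaemon
# Required-Start:    $network $syslog
# Required-Stop:     $network $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example daemon with LSB dependency information
### END INIT INFO
```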

7. And no, I see no problem here. The DNS timeout is 30s or so on most systems. If a DNS server really needs that much time to start up then it is definitely broken. (A simple work-around is to edit the .service file of the client service and add a After= line for your super-slow service.) But anyway, this problem is hypothetical.
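The work-around described here is just an extra ordering dependency in the client's unit file; something like the following, with the service name hypothetical:

```ini
# added to the impatient client's .service file
[Unit]
After=super-slow-dns.service
```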

8. Well, if you want to connect to a socket service you need to know its address. This has always been like that, and is not changed in any way if systemd comes into play.

9. Sure, Upstart could do that. Not sure though how that fits into their current scheme. I mean, you can hack an emacs clone into the Linux kernel if you want to. There's nothing preventing you from doing that, either.

The road forward for systemd

Posted Jun 1, 2010 23:01 UTC (Tue) by liljencrantz (guest, #28458) [Link]

Thanks for clarifying.

The road forward for systemd

Posted May 27, 2010 11:17 UTC (Thu) by Tobu (subscriber, #24111) [Link]

Using sigchld certainly works; I think it's a good idea to have systemd be flexible on that. Treating a cgroup as empty even when it still has non-empty sub-cgroups (the tty group still having a tty+user subgroup) might also do, if that's what you were getting at.

The road forward for systemd

Posted May 31, 2010 15:03 UTC (Mon) by mezcalero (subscriber, #45103) [Link]

Well, I don't think MySQL makes a good example, since it is mostly relevant for servers, where quick bootup is not essential. We should focus on speeding up startup of desktops/laptops, not so much servers.

The road forward for systemd

Posted May 26, 2010 19:01 UTC (Wed) by ghigo (guest, #5297) [Link] (8 responses)

I like the idea behind systemd: to start services only when (and if) they are needed. It is not only a speed gain, but also a resource gain: if a service is not required, it will not be started.

The part that I don't like is that this function is coupled to the init daemon: I would prefer a very simple init daemon, which starts classic sysv init scripts. The work of "wait for a connection, then start the daemon" could be left to the script itself.
As someone has pointed out, this is already implemented in (x)inetd. Yes, systemd is more powerful, but the concept is the same as inetd's.

To me it seems that this function is coupled to the init daemon only because the init daemon has certain capabilities by virtue of being pid 1 [*]. If this is true (maybe I am missing something), I would prefer to extend the capabilities of the pid 1 process to other processes (via a kernel change), and separate the function of the init process (starting the base system) from that of a "systemd" process (starting the expensive daemons).
This would also simplify the adoption of systemd.

Finally, a simple comment: I understand the importance of a quick boot, but today (I have Debian testing on a not-so-old AMD system) the major part of the boot time is consumed by the BIOS. So who cares about a minimal increase in boot speed, given the risk of a problem in a truly critical part of a Linux system? If init fails, the system is blocked!

[*] IIRC, the pid 1 process has the capability to adopt any process that has lost its parent.

The road forward for systemd

Posted May 26, 2010 19:19 UTC (Wed) by mezcalero (subscriber, #45103) [Link] (3 responses)

hey, you haven't read the original blog story. please do. it should tell you why xinetd is a very different beast from systemd.

i really wish people would read the blog story before commenting here. many of the issues raised again and again are already explained there in detail.

The road forward for systemd

Posted May 27, 2010 7:50 UTC (Thu) by ghigo (guest, #5297) [Link] (2 responses)

I read the blog. In fact the author says:
<cite>
The idea is actually even older than launchd. Prior to launchd the venerable inetd worked much like this: sockets were centrally created in a daemon that would start the actual service daemons passing the socket file descriptors during exec(). However the focus of inetd certainly wasn't local services, but Internet services (although later reimplementations supported AF_UNIX sockets, too). It also wasn't a tool to parallelize boot-up or even useful for getting implicit dependencies right.
</cite>

I wrote that systemd is more powerful than (x)inetd, but the concept is the same.

The road forward for systemd

Posted May 27, 2010 8:04 UTC (Thu) by rahulsundaram (subscriber, #21946) [Link]

You are replying to the same author of the blog post and developer of systemd, FYI.

The road forward for systemd

Posted May 31, 2010 15:34 UTC (Mon) by mezcalero (subscriber, #45103) [Link]

I am the author of the blog story. And there I try to make clear that systemd is substantially more than inetd. inetd is for lazy-loading services. systemd uses the same technique but does so to parallelize boot-up and make dependency information redundant.

So, please, don't compare systemd with inetd too much, because that misses the core point.

If you don't see what the difference between systemd and inetd is, then please read the story again, particularly the part about "Parallelizing Socket Services".

The road forward for systemd

Posted May 27, 2010 8:37 UTC (Thu) by marcH (subscriber, #57642) [Link] (3 responses)

> Finally a simple comment: I understand the importance of a quick boot, but today (I have a debian testing, with a no so old AMD system) the major part of the boot time is consumed by the bios booting.

Lucky you.

The road forward for systemd

Posted May 27, 2010 9:04 UTC (Thu) by dgm (subscriber, #49227) [Link] (2 responses)

Or extremely unlucky, if you think about it. My Lucid UNR takes about 30 seconds from grub to desktop. If his BIOS takes longer than that, there's something definitely wrong with his machine... or he's talking about one of those servers with RAID that take forever to boot.

The road forward for systemd

Posted May 27, 2010 14:41 UTC (Thu) by kronos (subscriber, #55879) [Link]

My Asus (M4A79) board has a Marvell controller with its own option BIOS, and I've added an extra PATA controller.
With Asus ExpressGate enabled (the default) it takes 38 seconds from power-on to grub...
Without EG it takes "only" 26...

The road forward for systemd

Posted Jun 3, 2010 7:33 UTC (Thu) by eduperez (guest, #11232) [Link]

Some RAID controllers are painfully slow, and I mean they take several minutes to initialize and start loading the boot manager.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds