LWN: Comments on "Changes coming for systemd and control groups" https://lwn.net/Articles/555920/ This is a special feed containing comments posted to the individual LWN article titled "Changes coming for systemd and control groups". en-us Sat, 01 Nov 2025 03:48:52 +0000 Sat, 01 Nov 2025 03:48:52 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Changes coming for systemd and control groups https://lwn.net/Articles/559574/ https://lwn.net/Articles/559574/ mathstuf <div class="FormattedComment"> Yeah, I can set envvars at the start of the session, but I can't modify/remove/add any afterwards. XMonad *can* do it, but again, that's not the place to do it (IMO). Things like restarting ssh-agent should automatically refresh the system's environment for new applications, which systemd --user can give me for free via a keychain@ssh.service, while I'd have to manually tell XMonad about it otherwise.<br> </div> Thu, 18 Jul 2013 21:38:24 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/559570/ https://lwn.net/Articles/559570/ nix <div class="FormattedComment"> I must say I've had no trouble injecting env vars into fvwm's parent via .Xinitrc for, well, ever. And I can't imagine it ever not working with any other window manager, either. (Of course, not all provide a nifty thing like FvwmCommand to adjust the wm's state, and thus its set of env vars, while it's running, but I find it hard to believe that XMonad can't do that.)<br> <p> </div> Thu, 18 Jul 2013 21:10:52 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/559548/ https://lwn.net/Articles/559548/ mathstuf <div class="FormattedComment"> My tmux and pulseaudio sessions survive a logout with systemd-logind just fine. However, when I move over to systemd --user, I'll need to find a different way to keep tmux alive since that *does* kill all child sessions by default.
It seems to involve tmux opening a PAM session, and upstream has already expressed that it's not something they're thrilled about, but I also haven't gotten it to fully work (screen would need to do a similar thing).<br> <p> Why would I want to move over? The biggest benefit I see is that I can set session-wide environment variables and have new programs inherit them. To do this with startx, I would need to either inject variables into XMonad's environment to modify new programs (which is not where that belongs), launch from a shell, or launch everything from some trampoline program.<br> </div> Thu, 18 Jul 2013 19:28:53 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/559496/ https://lwn.net/Articles/559496/ Cyberax <div class="FormattedComment"> So just use appropriate tools for this. What's the problem?<br> </div> Thu, 18 Jul 2013 15:07:54 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/559444/ https://lwn.net/Articles/559444/ nix <div class="FormattedComment"> Good grief! Agreed! (Heck, I *rely* on some things surviving logout. Looks like systemd is out, for me...)<br> </div> Thu, 18 Jul 2013 12:30:53 +0000 Confused as to the point of this. https://lwn.net/Articles/557950/ https://lwn.net/Articles/557950/ eternaleye <div class="FormattedComment"> Even worse: You only have that subtree mounted, so you can't see your peers' weights. So you are *actually incapable* of knowing what weights even *would* be 'stupidly high' and starve your peers (or stupidly *low* and starve yourself).<br> </div> Sat, 06 Jul 2013 22:27:45 +0000 Confused as to the point of this. https://lwn.net/Articles/557948/ https://lwn.net/Articles/557948/ eternaleye <div class="FormattedComment"> He links to <a rel="nofollow" href="http://thread.gmane.org/gmane.linux.kernel.cgroups/6638">http://thread.gmane.org/gmane.linux.kernel.cgroups/6638</a> which has some further info.
This may be the most damning for what you would like to do, dlang:<br> <p> * The configurations aren't independent. e.g. for weight-based<br> controllers, your weight is only meaningful in relation to other<br> weights at that level. Distributing configuration to whatever<br> entities which may write to cgroupfs simply cannot work. It's<br> fundamentally flawed.<br> <p> That means that anyone could set a stupidly high weight, and starve their peers. You could do double-nesting hacks to isolate that, sure, but that gets painful and stupid very quickly.<br> </div> Sat, 06 Jul 2013 22:25:59 +0000 Confused as to the point of this. https://lwn.net/Articles/557947/ https://lwn.net/Articles/557947/ eternaleye <div class="FormattedComment"> No. Tejun Heo, the cgroups maintainer, *explicitly* wants single-writer - because as he's said, "Cgroup doesn't and will not support delegation of subtrees to different security domains."[1]<br> <p> [1] <a rel="nofollow" href="http://permalink.gmane.org/gmane.linux.kernel.cgroups/6899">http://permalink.gmane.org/gmane.linux.kernel.cgroups/6899</a><br> </div> Sat, 06 Jul 2013 22:22:45 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557732/ https://lwn.net/Articles/557732/ Cyberax <div class="FormattedComment"> <font class="QuotedText">&gt; Wait, did we switch topics to the boot process now? (as opposed to shutdown/reboot) </font><br> It doesn't matter.<br> <p> <font class="QuotedText">&gt;If we are still talking about reboot, it sounds like your solution to the problem of having buggy software in a critical code path is to add even more software in the critical path to supervise the buggy software and contain the impact of known bugs without fixing them.</font><br> Yup. My solution is to put a fairly SMALL amount of carefully tested code that can cope with whatever crap is thrown at it.<br> <p> Your solution is to build a house of cards.
Carefully checking each card, because we know that all bugs are easy to spot and fix.<br> </div> Fri, 05 Jul 2013 03:30:07 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557731/ https://lwn.net/Articles/557731/ zblaxell Wait, did we switch topics to the boot process now? (as opposed to shutdown/reboot) <p>I have no complaints about the way systemd <em>starts</em> processes. The insanity starts when it's time to keep processes alive or reboot the system. <P>If we are still talking about reboot, it sounds like your solution to the problem of having buggy software in a critical code path is to <em>add even more software in the critical path</em> to supervise the buggy software and contain the impact of known bugs without fixing them. Presumably you also run this in some sort of nested container so that if systemd is buggy, some higher level of supervision (maybe another systemd?) can detect the problem and execute even more code in response. That layer could be buggy too, so it's nested supervisor software all the way down? Just the first level (the one with the initial bug) sounds insane to me, and every level of recursion squares the insanity. <P><em>"Yo dawg, I heard you like software, so I put some software on your software so you can run code to run your code..."</em> <P>My approach is to look at the unnecessary code, and realize that even if that code was utterly <em>perfect</em>, it would not do anything more useful than no code at all, but would use more time, space, and power to achieve the null result. The sane thing to do is identify such code and simply remove it. Fri, 05 Jul 2013 03:14:23 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557729/ https://lwn.net/Articles/557729/ Cyberax <div class="FormattedComment"> No, the problem of unreliable scripts in the boot process IS something that MUST be solved by systemd (or its equivalent).<br> <p> Bugs ALWAYS happen and they MUST be accounted for.
That's why we use OS with memory protection and separate address spaces.<br> <p> If I have to manually check all the scripts to make sure that an error in a BlinkKeyboardLightDaemon doesn't stop the entire boot process, then such a system has no place except in a trashcan.<br> </div> Fri, 05 Jul 2013 01:22:15 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557672/ https://lwn.net/Articles/557672/ zblaxell FWIW I modify the Debian init scripts too. <tt>rm -f /etc/rc?.d/K*</tt> is a pretty good start. The K* scripts are only needed when switching between runlevels, not when strictly booting up (when there is nothing to kill) and shutting down (when imminent termination is inevitable). <P>The problem in the BIND case isn't SysVInit or rc-style scripts, and it's not systemd's prerogative to solve. The problem is <em>someone put BIND code on the critical path for rebooting</em>. That is <em>the</em> mistake that needs to be corrected. Repeat for daemons we might find in a thousand other packages with code that is spuriously placed where it <em>does not belong</em>. <p>Server daemons that have special state-preserving needs can have scripts that try to bring them down with a non-blocking timeout (or systemd can do it itself). In practice, such servers don't get rebooted intentionally so the extra code executes only under unusual conditions where criteria for success are strict, or routine conditions (i.e. supervised upgrade of the software) where the criteria for success are greatly relaxed. That means the code doesn't get a lot of field testing, and its worst-case behavior only shows up in situations that are already full of unrelated surprises. <p>If I'm responsible for an <em>application</em>, then servers are just buildings for my application to live in. I rearrange the interior walls and fixtures of the building for the convenience of my application. 
If I need to reboot the server, it's because that building is on fire and I need a new one. I'll try to rescue my application state first--asynchronously, and with application-specific tools. When I have finished that, I'll tell the server to reboot. With that reboot request I implicitly guarantee there is no longer state on the server that I care about losing--my application is not running any more, or its state is so badly broken that I've given up. It would be convenient to umount filesystems and clean up state outside of my application if possible (in threads or processes separate from the rebooting thread due to the high risk of failure) but it is <em>never</em> necessary. The only <em>necessary</em> code in this situation is a hard reboot. Anything in the reboot critical path that isn't rebooting <em>is a bug</em>. <p>If I'm responsible for the <em>server</em>, then applications are cattle and I might want robot cowboys to organize them. This case is the same as the previous case, since my server would effectively be running a single customer-hosting <em>application</em>. systemd makes some sense as that application--although still not <em>necessarily</em> as PID 1, and certainly not as the sole owner of a variety of important kernel features. If a customer stopped paying for service or did something disruptive, I might intentionally destroy their process state with SIGKILL and cgroups. My customer agreement would have the phrase "terminated at <em>any</em> time" sprinkled liberally throughout the text so that nobody can claim to be surprised. 
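Several comments in this thread invoke the classic stop sequence that init scripts have used for decades: SIGTERM, a bounded wait, then SIGKILL. As a hedged illustration only (the two-second grace period and the `sleep` stand-in "daemon" are assumptions, not anything from the thread), the pattern can be sketched in Python:

```python
import signal
import subprocess

def stop_with_timeout(proc: subprocess.Popen, grace: float = 5.0) -> int:
    """Classic daemon stop sequence: SIGTERM, bounded wait, then SIGKILL."""
    proc.send_signal(signal.SIGTERM)      # polite request to exit
    try:
        return proc.wait(timeout=grace)   # bounded shutdown budget
    except subprocess.TimeoutExpired:
        proc.kill()                       # escalate: SIGKILL cannot be ignored
        return proc.wait()

# Demo with a stand-in "daemon" (a sleep) that exits promptly on SIGTERM.
p = subprocess.Popen(["sleep", "300"])
print(stop_with_timeout(p, grace=2.0))    # negative value: killed by a signal
```

On POSIX, `Popen.wait()` reports death-by-signal as the negated signal number, so this demo prints -15 (SIGTERM); a daemon that ignored SIGTERM would instead be reaped by the SIGKILL branch after the grace period.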
Thu, 04 Jul 2013 17:59:07 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557680/ https://lwn.net/Articles/557680/ Cyberax <div class="FormattedComment"> That was about 4 years ago, and at that time I had to jump through some hoops to get a separate protected management circuit from our datacenter for IPMI.<br> </div> Thu, 04 Jul 2013 17:17:40 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557624/ https://lwn.net/Articles/557624/ jubal <div class="FormattedComment"> …what kind of “server” doesn't offer remote powercycle these days?<br> </div> Thu, 04 Jul 2013 13:02:52 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557558/ https://lwn.net/Articles/557558/ Cyberax <div class="FormattedComment"> <font class="QuotedText">&gt;The more complicated the shutdown code is, the more likely it is to fail. If we try to stop mostly-stateless server daemons (like BIND) which are explicitly designed for and respond well to a famous widely-deployed 20-year-old system-wide SIGTERM/pause/SIGKILL sequence using anything with more failure modes than exactly that sequence, then embarrassing failure is simply inevitable. </font><br> I had to go to our datacenter to power-cycle our server exactly because BIND9 rc-scripts did not do the timeout correctly. All on a stock Debian installation, no cgroups or systemd in sight.<br> </div> Thu, 04 Jul 2013 04:13:54 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557437/ https://lwn.net/Articles/557437/ zblaxell <blockquote><blockquote>if you insist on not writing your own /sbin/init</blockquote> Only on LWN...</blockquote> What? Don't knock it until you've tried it. <tt>;)</tt> <p>I sometimes get projects with single-second boot time requirements.
I could waste a lot of time reading config files, forking processes, or resolving symbols from shared libraries...or I can write my own monolithic <tt>/sbin/init</tt>, and use the time saved to load a larger and more functional application from boot media, or to avoid having to resort to kernel XIP and other more painful hacks just to boot in time. <p>With a translator from shell-like syntax to C code it's not even particularly difficult to convert from legacy rc scripts (or, in theory, systemd configuration files). The diminishing returns kick in pretty quickly as you pile on more software, though, so it only tends to be useful in very simple cases. Wed, 03 Jul 2013 17:18:01 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557424/ https://lwn.net/Articles/557424/ zblaxell Yes, I've had to disable that systemd feature too; however, in this case it's because I usually deploy a much more application-specific watchdog process. systemd's built-in watchdog might be inadequate or non-portable, but it is not a radical departure from legacy behavior and it is better than not having any watchdog code at all. <P>Obviously every system based on rc-scripts has been able to run such a daemon for decades prior to systemd's existence. The page at the link you provided even has a link to such a daemon that is at least 14 years old. I recently had an unsolicited email conversation with one of that daemon's maintainers about their plans to extend the daemon to do more invasive application-specific aliveness checking (I thought the idea wasn't insane, but I probably wouldn't use it because watchdog daemons are trivial to implement while solutions to political problems arising from software integration in critical code paths are not). <P>The more complicated the shutdown code is, the more likely it is to fail. 
If we try to stop mostly-stateless server daemons (like BIND) which are <em>explicitly designed for</em> and <em>respond well to</em> a famous widely-deployed 20-year-old system-wide SIGTERM/pause/SIGKILL sequence using anything with more failure modes than exactly that sequence, then embarrassing failure is simply inevitable. <P>Less is better. systemd has lots of tactical cleverness in its implementation, but at the same time it gets the basic strategy wrong. <P>I have a test machine running systemd. On February 13, 2013 I executed the 'reboot' command and today I'm still waiting for it to finish. Interestingly, the rest of this particular system's function seems to be unimpaired--including systemctl and systemd services--and I still push software to it for testing regularly. I'm now tempted to take bets on how many years it will take for that machine to reboot. It happens to have two independent battery-backed power supplies... <TT>};-&gt;</TT> Wed, 03 Jul 2013 15:39:30 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557355/ https://lwn.net/Articles/557355/ zblaxell <div class="FormattedComment"> My point is that the ages-old implementors did have the option of doing drive-by fatal mass-signalling of processes that happened to be descendants (many generations removed) of server daemons or session processes spawned for TTY or network connections.<br> <p> They chose to do things differently from systemd when they had the opportunity and technical capacity to do a similar implementation, and they had sound reasons for making the choices they did.<br> </div> Wed, 03 Jul 2013 14:00:16 +0000 Evil https://lwn.net/Articles/557354/ https://lwn.net/Articles/557354/ zblaxell It wouldn't be the first time someone tried to import some insane bug from some other community to Debian in the name of cross-distro consistency. I can't see any reason why I wouldn't work to push it back out again just like others before. 
<P>I realize some people disagree with me on this point. Several of them have posted here, repeating the assertion that they are not wrong after I just finished pointing out why they are. ;) Forks happen if your code's behavior as distributed is insane. This is one reason why we have different distributions. <P>I would likely only advocate a change to the default behavior, since the feature does have sane special cases and there seems to be commitment from upstream to maintain both behavior modes (although if it has to be configured in hundreds of separate places then a new global configuration point might still be preferable). <P>Packages that explicitly depend on having their processes killed under a possibly unconstrained list of scenarios <em>defined by strangers from the future</em> can always explicitly request that in their systemd configuration glue to ensure consistent cross-distribution behavior. All other packages will be much happier (and safer!) if left alone. Wed, 03 Jul 2013 13:55:19 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557312/ https://lwn.net/Articles/557312/ Cyberax <div class="FormattedComment"> And what do you think, systemd HAS support for hardware and software watchdogs. It's right there in the documentation: <a rel="nofollow" href="http://0pointer.de/blog/projects/watchdog.html">http://0pointer.de/blog/projects/watchdog.html</a><br> <p> And you might note, that traditional rc-scripts have nothing of this kind. I once had a very nice 3am trip to our datacenter to power-cycle a server that was stuck at trying to stop BIND during a reboot.<br> </div> Tue, 02 Jul 2013 23:14:42 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557311/ https://lwn.net/Articles/557311/ Cyberax <div class="FormattedComment"> So disable them. 
What's the problem?<br> <p> The process group mess is a historic Unix f*up, along with the controlling terminals and the whole TTY system.<br> <p> And if you have a nohuped process that can lose the data if someone SIGKILLs it then you certainly deserve it, unless you use it for one-off activity.<br> </div> Tue, 02 Jul 2013 23:11:49 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557299/ https://lwn.net/Articles/557299/ zblaxell <div class="FormattedComment"> Actually the sync is often the step that causes a shutdown to hang if the "clean" shutdown procedure is not going well (e.g. due to flaky disks or kernel bugs). In that situation it is better to ask watchdog hardware to reboot, and failing that, tell the kernel to reboot itself without risking a failure in sync. Less is better.<br> </div> Tue, 02 Jul 2013 20:56:08 +0000 Systemd broke nohup? https://lwn.net/Articles/557298/ https://lwn.net/Articles/557298/ zblaxell <div class="FormattedComment"> This is one of those non-working workarounds I was alluding to. None of the PAM recipes I've seen works properly. Some of them mask the problem, but then fail under difficult-to-reproduce circumstances.<br> <p> Maybe in a few years when all the distros get it working, it could be useful, but for now it's just a support nightmare.<br> </div> Tue, 02 Jul 2013 20:55:08 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557296/ https://lwn.net/Articles/557296/ zblaxell <blockquote>In this case however the previous, ages-old default was more a limitation caused by the lack of cgroups than a conscious decision."</blockquote> Things like SIGHUP, session leaders, process groups and the nohup command existed long before systemd. If a process wanted to stick around after it has been asked to exit (perhaps disassociating itself from resources as it does so), it had standard mechanisms to do so until systemd came along. 
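The "standard mechanisms" that comment refers to — session leaders, process groups, and SIGHUP delivery at logout — all hinge on setsid(): a process that starts its own session is no longer in the session that gets the hangup signal. A minimal sketch, assuming a POSIX system and Python 3 (the sleeping child process is purely illustrative):

```python
import os
import subprocess
import sys

# start_new_session=True makes Popen call setsid() in the child, so the
# child becomes the leader of a fresh session with no controlling
# terminal. A SIGHUP sent to the old session at logout never reaches it.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    start_new_session=True,
)

# A session leader's session id equals its own pid...
assert os.getsid(child.pid) == child.pid
# ...and differs from the caller's session id.
assert os.getsid(child.pid) != os.getsid(0)

child.kill()   # clean up the demo process
child.wait()
```

This is the detachment that tools like nohup, screen, and daemon(3) build on; the point in the thread is that a cgroup-based kill sweeps up such processes regardless, because cgroup membership, unlike session membership, cannot be escaped from inside.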
Tue, 02 Jul 2013 20:55:01 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/557294/ https://lwn.net/Articles/557294/ zblaxell Traditional Unix shutdown scripts send these signals once, when the entire system is about to be (forcibly) rebooted, and it no longer matters what processes are still running because the kernel is going to stop soon anyway. <P>Individual init.d-style scripts might send signals to specific processes, but that's usually ad-hoc behavior that makes sense for each daemon (e.g. Apache or Postgres send signals to all their children, while sshd only signals the master process and leaves child sessions alone). While I appreciate that it might be useful to refactor to a single implementation, it makes no sense for that single implementation to do something new and surprising <em>by default</em>. <P>systemd sends SIGKILL in many new situations, such as when a login session ends, or a service is restarted, or a TCP connection is lost. None of these imply even SIGTERM, let alone SIGKILL, and certainly not for processes that are many generations removed from the initial service process. Tue, 02 Jul 2013 20:54:55 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/556918/ https://lwn.net/Articles/556918/ krakensden <div class="FormattedComment"> Sounds like a great idea for a beautiful FUSE hack<br> </div> Sat, 29 Jun 2013 19:54:02 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/556713/ https://lwn.net/Articles/556713/ dgm <div class="FormattedComment"> Almost. Reducing diversity takes you faster _somewhere_. It could be a local minimum if you are lucky (or live in a bumpy world), but it need not be.
If you happen to take the wrong path, then lack of diversity is the fastest way to catastrophe.<br> </div> Fri, 28 Jun 2013 11:23:58 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/556670/ https://lwn.net/Articles/556670/ jschrod <div class="FormattedComment"> <font class="QuotedText">&gt; If the old variation is fitter, the new one dies out/is ignored. If the</font><br> <font class="QuotedText">&gt; new one is fitter, the old one dies out/is discarded. If they are equally</font><br> <font class="QuotedText">&gt; fit, they both spread throughout the population.</font><br> <p> Arrggghhhh!<br> <p> That's not evolution, that's a social-Darwinist misinterpretation of evolution.<br> <p> In nature, something doesn't die out because there's another species that's "more fit". If that were the case, we wouldn't have so many species on earth. A species dies out if it is _not fit enough_ to adapt to changing environmental circumstances (which might be the pressure of other inhabitants of that environment, but that's seldom the case).<br> <p> Selection does *not* mean only the fittest survive. It means the unfit-to-adapt die.<br> </div> Fri, 28 Jun 2013 01:49:59 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/556598/ https://lwn.net/Articles/556598/ marcH <div class="FormattedComment"> I think the misunderstanding is in the definition of "diversity". Where does it start?<br> <p> </div> Thu, 27 Jun 2013 17:42:16 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/556454/ https://lwn.net/Articles/556454/ russell <div class="FormattedComment"> OK let's look at a more mathematical analogy. Lack of diversity is the fastest way to a local minimum, not a global minimum, and not an optimal solution.
<br> </div> Thu, 27 Jun 2013 02:43:05 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/556336/ https://lwn.net/Articles/556336/ Karellen <div class="FormattedComment"> Right, but then you have diversity between the original allele/factoring and the new one.<br> <p> If the old variation is fitter, the new one dies out/is ignored. If the new one is fitter, the old one dies out/is discarded. If they are equally fit, they both spread throughout the population.<br> <p> (Note, "fitness" is a wide-ranging property. With regards to code changes, if two variants are equally performant and equally simple, then one might be "more fit" simply because it is more widely adopted, and minimising the set of patches you have to others increases the "fitness" of your tree.)<br> <p> </div> Wed, 26 Jun 2013 15:44:43 +0000 Evil https://lwn.net/Articles/556322/ https://lwn.net/Articles/556322/ micka <div class="FormattedComment"> Is the default init system necessarily the same on all kernels ? I'm sure there are far more differences.<br> <p> And if those other kernels are a technical obstacle to some progress, maybe the alternate kernel subproject should be asked to fix the problem themselves ?<br> </div> Wed, 26 Jun 2013 13:10:34 +0000 Evil https://lwn.net/Articles/556320/ https://lwn.net/Articles/556320/ hummassa <div class="FormattedComment"> Debian is not only Linux.<br> <p> There are Debians with FreeBSD and NetBSD kernels, Solaris, and even Hurd. Systemd is (as it should be) available as an OPTION for Debian, but it should not be the default until it works at least as well as sysvinit in the other platforms.<br> </div> Wed, 26 Jun 2013 13:04:56 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/556269/ https://lwn.net/Articles/556269/ nix <div class="FormattedComment"> Quite. SIGTERM-wait-then-SIGKILL is completely standard in Unix shutdown scripts. 
I'm not sure I've ever seen one that doesn't do that, and systemd can too.<br> </div> Tue, 25 Jun 2013 19:43:14 +0000 Confused as to the point of this. https://lwn.net/Articles/556257/ https://lwn.net/Articles/556257/ SEJeff <div class="FormattedComment"> OK, that's fine. What is the answer for HPC users who know how to carve up their resources manually and apply things like CPU isolation or binding NUMA nodes to specific cpusets? The Linux kernel and systemd can guess right for the 90% use case, but for users who like systemd and still want to carve up the last 10% of their system, how will that be made possible?<br> <p> I'm just trying to understand what my future will entail as one of those said HPC peeps.<br> </div> Tue, 25 Jun 2013 18:05:20 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/556243/ https://lwn.net/Articles/556243/ nye <div class="FormattedComment"> <font class="QuotedText">&gt;immediately before turning off the power, systemd will do a "killing spree" loop, where it will try to kill possibly remaining processes, unmount remaining file systems, detach remaining DM, turn off remaining swaps, detach remaining loops, over and over again in a tight loop until during one iteration it couldn't do anything anymore.</font><br> <p> Thanks for the response. I like this part - seems like a sensible way to ensure that you wait for as long as there is some progress being made, without hanging around forever.<br> </div> Tue, 25 Jun 2013 15:59:39 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/556225/ https://lwn.net/Articles/556225/ epa <blockquote>if you insist on not writing your own /sbin/init</blockquote> Only on LWN... Tue, 25 Jun 2013 12:01:05 +0000 Confused as to the point of this.
https://lwn.net/Articles/556212/ https://lwn.net/Articles/556212/ dlang <div class="FormattedComment"> single hierarchy != single writer<br> <p> There's no reason to force a single writer just because they are eliminating the confusion of contradictory hierarchies.<br> </div> Tue, 25 Jun 2013 05:52:43 +0000 Confused as to the point of this. https://lwn.net/Articles/556211/ https://lwn.net/Articles/556211/ suckfish <div class="FormattedComment"> Ugghh. cgroups are a powerful tool for general administration and integration purposes. That's normally done by things like shell scripting, and the hierarchical model exposed &amp; manipulated via the file-system is pretty convenient to access via shell scripting.<br> <p> I wonder, if cgroups had been single-writer when systemd was conceived, would systemd have been written as the one-and-only single writer or would it have found a way to cooperate more democratically with other users?<br> </div> Tue, 25 Jun 2013 05:24:55 +0000 Changes coming for systemd and control groups https://lwn.net/Articles/556207/ https://lwn.net/Articles/556207/ drag <div class="FormattedComment"> <font class="QuotedText">&gt; That is a horrible idea, changes to the status quo happen because of interoperability of components due to standards or other things. </font><br> <p> The nice thing about standards is that there are so many to choose from.<br> <p> <p> Software is an evolving thing, right? So you want to take a kernel and after it becomes good enough then you can build an operating system on it. Then once the operating system is good enough you can build it into a server, desktop, or some other appliance. Once you get that built you can write software for managing hundreds of users, make application development easy, distribute software to millions of users effortlessly. (oh wait..
traditional Linux spent multiple decades and still hasn't gotten to that level, whereas Android did it within months)<br> <p> You see, each 'layer' performs a particular function, and then people encapsulate that functionality into bigger and more complex balls of functionality to solve bigger and harder problems. <br> <p> <p> Imagine now: a TCP/IP ethernet network. <br> <p> <p> Each layer does a job. The 'physical layer' is made up of wires and transistors and signals that run over those wires. The 'Transport' layer performs low-level MAC addressing features. Level 3 provides routing. Then you have mixed into that somewhere IP packets. Then TCP or UDP datagrams. Then on top of that you have the application specific protocols, then protocols built on other protocols, OS network stacks with varying abilities to peer into different layers that filter, send, and forward packets and their contents to other software and hardware. And then finally the applications that process the raw data presented to them by system libraries and kernels and turn it into something that is actually useful. <br> <p> It's all dirty, but effective, continuously evolving, with formal layers and rules that people must abide by. It doesn't follow the imagined purity of the ISO network stack model, but there are rules you can't break and people are free to do whatever in their particular product with the knowledge that they can test and produce practical products people can use.<br> <p> <p> Now imagine if the world ran ethernet networks like they run Linux distributions. <br> <p> Which ethernet distribution are you going to go with? They all use just about the same wires and the same protocols, but they decide to do things differently for arbitrary reasons that are mostly meaningless to other people. <br> <p> Maybe they like the way colors look in a different order in the RJ45 jacks. Maybe the TCP/IP stack uses some different ports.
Like Redhat ethernet prefers to have HTTP be port 9234.1G5, while Debian still uses the old AF port and Suse Linux is still stuck on port 80. <br> <p> Maybe one of the distributions decided that it preferred IPv6 so they made a dual stack IPv4/IPv6 a huge pain in the ass to do. Maybe another distribution decided that an IPv5.3 transition protocol would be terrific. Maybe Debian decided that it's going to continue to offer IPv4, IPv5.3, IPv6, and IPX/SPX just in case one or the other might be better. Meanwhile KDE brand routers decided it would be great if they would let people randomly choose whatever firmware version they wanted for their routers while Gnome people figured that the only thing that should work on their stuff is their version of IPv6.2 in order to make everything simple for novice network administrators. <br> <p> Now keep in mind that they are all using the same stuff. Using the same Ethernet hardware, same wires, same network stacks. Just all configured differently, and people have to design and support every potential combination if they want things to work on more than one ethernet distro. <br> <p> But this is all for the sake of potential innovation, is it not? <br> <p> How can you continue to improve networks without people just randomly being able to subtly change how all of it works? <br> <p> <p> For anybody running a large network this is just insane thinking.<br> <p> <p> With Linux distributions, however, there are no layers besides 2: There is 'Kernel' and then there is 'Everything else'. There are a few attempts at standards, but it seems all really very arbitrary.
Since 2000 my Xserver has been able to support 200+ different window managers, and I can always expect that my X clients would at least be mostly functional regardless of which one I choose; however, I still can't really expect that I can use any 'Linux' audio recorder program I want to record voice off of my headset.<br> <p> Change one thing, like needing to apply a security patch to a library, and it's like a volcano blowing: all the packages that depend on it are automatically regenerated and sent out to the mirrors. <br> <p> All I am imagining (I can't even say suggesting, because at this point it is just fantasy) is that Linux distributions decide to formalize more layers so that people are free to innovate within their layers without causing huge headaches for everybody else. Of course it will never be perfect, but nothing is.<br> </div> Tue, 25 Jun 2013 05:22:42 +0000