In Theory, Microkernels Are Good
Posted Jul 1, 2010 5:34 UTC (Thu) by ldo (guest, #40946)
Parent article: GNU HURD: Altered visions and lost promise (The H)
Posted Jul 1, 2010 13:44 UTC (Thu)
by markhb (guest, #1003)
[Link] (9 responses)
Posted Jul 1, 2010 14:31 UTC (Thu)
by neal (subscriber, #7439)
[Link] (8 responses)
http://walfield.org/papers/200707-walfield-critique-of-th...
A shorter answer is that there are a number of technical shortcomings with the design of the Hurd. Many of these can be categorized as either resource management issues or security & protection issues.
Regarding resource management: a problem that becomes particularly acute in highly decomposed systems (such as multi-server systems) is the difficulty of coordinating resource use. The underlying issue is that coordination must be formalized, and the agents involved may be suspicious of one another.
To understand the issue, consider how memory is managed on Linux. When there is memory pressure, Linux can reach into the various subsystems and ask them to free memory. For instance, the file system code might shrink the inode or dentry cache. That's cheap. If you start moving such components out of the kernel (as the Hurd does), you also need a mechanism to recover this ability, for instance an upcall asking applications to free memory. But this is hard: the kernel cannot trust applications to behave correctly. This is the thrust of the work on Viengoos.
Posted Jul 1, 2010 16:03 UTC (Thu)
by patrick_g (subscriber, #44470)
[Link] (7 responses)
Posted Jul 1, 2010 17:20 UTC (Thu)
by pboddie (guest, #50784)
[Link]
Here's an interesting attempt to overcome such problems: (From the Nemesis documentation.) That was ten years ago now, however, and I guess nobody picked it up when the funding ran out.
Posted Jul 2, 2010 8:17 UTC (Fri)
by mjthayer (guest, #39183)
[Link] (5 responses)
Compare, for interest, with the gist of The Art of Unix Programming (http://www.faqs.org/docs/artu/).
It is also interesting that many parts of a Linux system which perform a kernel-like function (X11, PulseAudio) run as user-space servers. Although if that proves anything, it is that there is no one-size-fits-all answer here.
Posted Jul 2, 2010 12:08 UTC (Fri)
by nix (subscriber, #2304)
[Link] (4 responses)
Posted Jul 2, 2010 18:26 UTC (Fri)
by mjthayer (guest, #39183)
[Link] (3 responses)
I must admit that I've always rather liked the micro-kernel idea. But yes, having minimal hardware drivers in the kernel, with enough extra logic for similar devices to present similar interfaces to user space, has also always seemed like a clean separation (although I'm not sure I would consider, say, a five-to-ten-year-old network card similar to a modern one in that respect). I do wonder how it would be if things like the TCP/IP stack, or at least its upper layers, were in user space. Not sure I could get many kernel people to wonder about that sort of thing, though...
Posted Jul 4, 2010 19:18 UTC (Sun)
by nix (subscriber, #2304)
[Link] (2 responses)
(not sure how you'd do advanced routing or firewalling that way, though.)
Posted Jul 5, 2010 21:30 UTC (Mon)
by mjthayer (guest, #39183)
[Link] (1 response)
Your mentioning that gave me an impulse to read over the channels stuff again, and it is pretty neat, even if it seems to have hit a few snags (any idea what happened to it? I presume that at least some of the ideas got taken up). I must admit that what I had in mind was just the idle thoughts of someone not particularly knowledgeable about networking, and I certainly hadn't given any thought to the details of CPU core caches and the like.
Just for the fun of it I will try to give those thoughts a shape though.
* an in-kernel network device driver (probably with a /dev entry) which can accept data to transmit and return received data, as naked Ethernet frames, raw TCP data, or whatever the card can handle in hardware.
I'm sure that there are millions of holes in that logic; just for a start, what happens when in-kernel clients need to send network data (if someone pushing a network stack into userspace really cares about in-kernel clients)? Feel free to poke a few more if you like...
Posted Jul 7, 2010 11:36 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted Jul 1, 2010 17:18 UTC (Thu)
by HelloWorld (guest, #56129)
[Link] (1 responses)
Posted Jul 1, 2010 19:22 UTC (Thu)
by emk (subscriber, #1128)
[Link]
Posted Jul 2, 2010 4:40 UTC (Fri)
by rsidd (subscriber, #2582)
[Link] (2 responses)
Posted Jul 2, 2010 16:31 UTC (Fri)
by coriordan (guest, #7544)
[Link] (1 responses)
The Hurd's speciality is its multi-server architecture, so the Mac OS X example doesn't prove much. (If my first sentence is correct.)
Posted Jul 2, 2010 17:22 UTC (Fri)
by gnb (subscriber, #5132)
[Link]
If Mach was the failure point, why?
> Regarding resource management, a problem which becomes particularly acute in highly-decomposed systems (such as multi-server systems), is the difficulty in coordinating resource use.
Funny. It was exactly the point raised by Linus in this mail.
Extract:
> Now, the real problem with split address spaces is not the performance issue (which does exist), but the much higher complexity issue. It's ludicrous how microkernel proponents claim that their system is "simpler" than a traditional kernel. It's not. It's much much more complicated, exactly because of the barriers that it has raised between data structures.
> Moreover, Nemesis has been designed such that these Quality of Service guarantees are meaningful: In a microkernel environment, an application is typically implemented by a number of processes, most of which are servers performing work on behalf of more than one client. This leads to enormous difficulty in accounting for resource usage. In a kernel-based system, multimedia applications spend most of their time in the kernel, leading to similar problems.
>(not sure how you'd do advanced routing or firewalling that way, though.)
* a userspace server to multiplex the driver, which can accept socket connections or whatever from clients, and can accept and return data as Ethernet frames, IP packets or whatever, but can tell the clients what it can handle most efficiently. It will do whatever checks are needed on outbound data to be sure that the client is allowed to send what it is sending, and whatever checks are needed on inbound data to know where to send it. That is likely to put an end to keeping things on a single core unless the checks can be offloaded to the hardware.
* hopefully some optimisation of the kernel's socket primitives, so that as much as possible of the data copying across sockets can be collapsed into single operations, preferably DMA between the card and the final client. That should be doable for e.g. outbound TCP data with clever cards, perhaps even for inbound data if the card can buffer it for a short time and give the server just enough information to know where to forward it.
> a userspace server to multiplex the driver which can accept socket connections or whatever from clients and can accept and return data as Ethernet frames, IP packets or whatever
I thought the whole point of VJ tunnels was to avoid copies? Possibly we want a scheme where the kernel hands you raw IP packets which may be zero-copied. If you have NAT or some kinds of firewalling in the way obviously they would not be, but sometimes they would, and when they were you could gain the speed benefits. This doesn't stop the kernel inspecting the raw IP packets and sending them to different clients or anything like that, so I don't think you need a userspace multiplexor at all.
Apple did manage to make Mach work for them (even if it's a hybrid with BSD). So in practice, millions of users are happy with a reliable desktop OS, even though they have no idea what's inside.
I think so. And I'm pretty sure OSF/1, err... Digital Unix... Tru64... whatever, did too. Basically they used Mach to provide the low-level primitives, in order to make getting a Unix kernel running, and subsequently porting it, easier.
