LCA: Andrew Tanenbaum on creating reliable systems
Posted Jan 18, 2007 10:34 UTC (Thu) by Nick
In reply to: LCA: Andrew Tanenbaum on creating reliable systems
Parent article: LCA: Andrew Tanenbaum on creating reliable systems
>I think some linux users are very sensitive when someone criticizes Linux,
That's obviously relative: microkernel advocates don't go away no matter how much people criticise them. :)
>but if you think about what the main reason to use microkernels is, you have to agree with Tanenbaum. Security, robustness, ..., yes, Linux is very stable and secure, but the cost of getting there is too high, and even minor changes in the kernel can crash the system. OK, microkernels have other drawbacks, but in my opinion operating systems will follow this approach sooner or later. Look at virtualization technology: it is not just about saving money, it is about security too. Some security protocols do not allow running critical applications alongside normal ones, but virtualization makes that possible on the same machine, just not in the same OS. Microkernels have the same goal, but here the execution domains are not fully isolated, though they are fully protected.
I didn't understand how Tanenbaum's design and robustness features were supposed to help so much.
What he actually showed was that he could kill a really simple IDE driver and have it recover, in his test environment (as opposed to recovering from a real bug).
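For what it's worth, the mechanism behind that demo is Minix 3's user-space "reincarnation server", which notices when a driver process dies and starts a fresh copy. Here's a toy sketch of that supervision pattern (not Minix's actual code; the driver is just a function, the crash is an injected exception, and all names are made up):

```python
class DriverCrash(Exception):
    """Stands in for a driver process dying (wild pointer, failed assert)."""
    pass

def make_driver(faulty):
    """Return a hypothetical driver; if faulty, it crashes on its first request."""
    calls = {"n": 0}
    def driver(request):
        calls["n"] += 1
        if faulty and calls["n"] == 1:
            raise DriverCrash("injected fault, as in the demo")
        return f"handled {request}"
    return driver

def reincarnation_server(requests):
    """Service requests; on a crash, start a fresh driver and retry transparently."""
    driver = make_driver(faulty=True)   # fault injected once, as in the talk
    restarts = 0
    results = []
    for req in requests:
        try:
            results.append(driver(req))
        except DriverCrash:
            driver = make_driver(faulty=False)  # "reincarnate" a clean copy
            restarts += 1
            results.append(driver(req))         # retry the failed request
    return restarts, results
```

Note the assumption baked into the retry: the replacement driver doesn't immediately hit the same bug. That holds for an injected one-off fault, which is exactly the point of the paragraph above.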
Minix apparently can't address issues where the hardware isn't being programmed in exactly the right way. I think this is where a lot of Linux driver bugs come from. These bugs are far worse than a simple driver crash, because your data can get trashed or lost.
It also doesn't address the issue of the wrong bit of data being written, or data being written in the wrong place. Andrew claimed these are usually caught very early in alpha testing. I'm sure this is true for a simple block device driver. I can't say the same would be true for something with one or two orders of magnitude more complexity, like an advanced filesystem.
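To make that concrete, here's a hypothetical off-by-one in a toy block driver (names and the bug are invented for illustration): address-space protection only catches writes *outside* the device, not "legal" writes to the wrong valid location.

```python
SECTOR_SIZE = 4

def write_sector(disk, sector, data):
    """Buggy hypothetical driver: off-by-one in the sector computation."""
    target = sector + 1  # BUG: should be `sector`
    if not 0 <= target < len(disk):
        # This is the class of error isolation *can* catch.
        raise MemoryError("caught: write outside the device")
    disk[target] = data  # in-bounds, so no fault -- data silently lands wrong

disk = [b"\x00" * SECTOR_SIZE] * 8
write_sector(disk, 2, b"DATA")
# No crash, nothing to restart: sector 2 still holds zeros and sector 3
# is corrupted. A reincarnation server never even hears about it.
```

The corruption only surfaces later, when something reads sector 2 back, which is why these bugs are worse than a clean driver crash.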
From what I see, most bugs in Linux are not caused by one kernel component stepping on another, so microkernel protection won't help much; nor by simple mistakes that are easy to detect and/or recover from. So what do Minix's tricks do? Hide the most mundane 10% of the bugs encountered, maybe. I can't see how it would reduce the number of bugs by even one order of magnitude. In fairness it is a work in progress, so let's see what happens with it.
Actually I'm sure that if such a design could be built that meets all the promises, then it would be widely used, and Linux would probably adopt ideas from it. It is also great to have smart people like Tanenbaum researching all these different ideas and making these interesting kernels.
But is Minix any less an unproven research toy now than it was during the Torvalds vs Tanenbaum debate?