
Making stable kernels more stable

Posted Oct 24, 2018 7:34 UTC (Wed) by mjthayer (guest, #39183)
Parent article: Making stable kernels more stable

> In the discussions prior to the summit, she had suggested that perhaps stable releases should sit in a release-candidate state for one week prior to release as a way of shaking out any bugs; that idea was not particularly well received.

People have different levels of tolerance for bugs. I use Ubuntu and usually upgrade during the beta period, because it is easier to get bugs fixed then and because it shows me what our users on Ubuntu will run into. If there is no release-candidate state, then people who are more sensitive should simply wait a while before using a release. Making use of these different tolerance levels (in fact the usual alpha-beta-release cycle) is still an effective way of keeping things stable. At a finer granularity, people with higher stability requirements could also hold back non-critical stable kernel updates until they have had more testing, while people with less critical needs could apply them right away.

On a different note, the sheer size of the kernel must be a big problem for stability. I always wonder whether people will try a more micro-kernel-like approach some day. I think that many of the problems of micro-kernels have been solved these days, but that the gains have not yet justified the pain of reworking what we have.



Making stable kernels more stable

Posted Oct 24, 2018 10:16 UTC (Wed) by dgm (subscriber, #49227) [Link] (4 responses)

> On a different note, the sheer size of the kernel must be a big problem for stability. I always wonder whether people will try a more micro-kernel-like approach some day.

This is a different kind of stability. Having subsystems run at lower privilege would not make them any less prone to regressions. It only gives the microkernel the opportunity to restart them without failing completely, and that is of limited use.

Take, for instance, the KVM subsystem or the graphics drivers that were mentioned in the article. Losing KVM would crash the running virtual machines, which we can assume are the key part of the system for the user, so no difference there. As for graphics, I'm not sure: are desktop environments prepared to cope with losing the graphics context?
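
To make the restart idea concrete, a supervisor that respawns a crashed component boils down to a loop like the toy user-space sketch below (the ./driver-service binary is hypothetical, and a real microkernel would use IPC and a reincarnation-style server rather than fork()/exec(), but the control flow is the same in miniature):

    /* Toy user-space sketch of the "restart a failed subsystem" idea:
     * a supervisor forks a (hypothetical) driver process and restarts
     * it whenever it dies abnormally. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
                return EXIT_FAILURE;
            }
            if (pid == 0) {
                /* Hypothetical user-space driver binary. */
                execl("./driver-service", "driver-service", (char *)NULL);
                perror("execl");
                _exit(127);
            }
            int status;
            if (waitpid(pid, &status, 0) < 0) {
                perror("waitpid");
                return EXIT_FAILURE;
            }
            if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
                break;                      /* clean shutdown */
            fprintf(stderr, "driver died (status 0x%x), restarting\n", status);
            sleep(1);                       /* crude back-off */
        }
        return EXIT_SUCCESS;
    }

The hard part, of course, is all the state the restarted component has lost, which is exactly the problem with KVM and graphics above.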

Making stable kernels more stable

Posted Oct 24, 2018 12:11 UTC (Wed) by mjthayer (guest, #39183) [Link] (1 responses)

Actually I was thinking less of the process separation and more of the conceptual separation, assuming that it is easier to keep several smaller code-bases stable than one big one. I might be wrong there of course, particularly if the added communication complexity outweighs the benefits.

Making stable kernels more stable

Posted Oct 24, 2018 20:55 UTC (Wed) by k8to (guest, #15413) [Link]

Communication is something you have to wrangle and it can cause problems, but I'd say the bigger issue is the binding between components, which can happen all too readily even over a well-ordered communication path.

I have definitely seen wins when a lot of strategies are employed together. There are systems built in Erlang where many benefits were reaped from state management, controlled communication, obliviousness by default about local versus remote communication, and many other things combining to make the process more manageable. It's harder to achieve those wins when the system doesn't give you the discipline and tooling to help ensure those things are done.

I suspect that an operating system may be too low-level for fancy tooling to get you all of those wins, though. I'm definitely convinced that a kernel implemented as complex piles of C talking over message buses does not give you a simple system or any easy wins.

Making stable kernels more stable

Posted Oct 31, 2018 16:16 UTC (Wed) by anton (subscriber, #25547) [Link] (1 responses)

> Are desktop environments prepared to cope with losing the graphics context?

X applications certainly know how to redraw a window when you uniconify it. twm has a "restart" action that redraws all windows (uniconifying and re-iconifying them if necessary). I use this when my Intel-based X terminal decides to make everything black.
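
As a rough illustration of that redraw path, here is a minimal plain-Xlib sketch (nothing twm-specific; the window contents are just a stand-in): the application repaints from its own state whenever the server delivers an Expose event, which is what makes uniconifying, or a window-manager restart, just work:

    /* Minimal Xlib sketch: redraw the window contents on every Expose
     * event, which covers deiconifying and window-manager restarts. */
    #include <X11/Xlib.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return EXIT_FAILURE;
        }
        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         0, 0, 200, 100, 1,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);

        GC gc = DefaultGC(dpy, screen);
        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == Expose && ev.xexpose.count == 0) {
                /* Repaint everything from client-side state. */
                XDrawString(dpy, win, gc, 20, 50, "hello again", 11);
            } else if (ev.type == KeyPress) {
                break;
            }
        }
        XCloseDisplay(dpy);
        return EXIT_SUCCESS;
    }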

My experience concerning Intel and AMD is that I have graphics problems on Intel (HD Graphics 520/500 on Skylake and Apollo Lake), while AMD (Juniper XT) works flawlessly. This may have to do with the age of the hardware (Juniper XT is from 2009, HD 5xx from 2015), but my experience is certainly the opposite of what others have stated.

Making stable kernels more stable

Posted Oct 31, 2018 17:29 UTC (Wed) by nybble41 (subscriber, #55106) [Link]

Losing the graphics context is not like unmapping a window. When a window is unmapped you still have all the resources (e.g. pixmaps) which were previously registered with the server. For that matter the window itself still exists and things like OpenGL contexts which were generated from it remain intact; all you need to do when the window is uniconified is redraw.

Losing the graphics context is more like losing the connection to the X server. Most applications aren't prepared to deal with that gracefully. There is also the extra complication that the context includes *hardware* resources which are no longer available, and which may have been mapped directly into the application's address space. The backing for XShm mappings doesn't suddenly cease to exist even if you do lose your connection to the server.
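
To make the contrast concrete, here is a minimal Xlib sketch of why a lost server connection is so hard to survive: the default I/O error handler simply exits, and even a custom handler installed with XSetIOErrorHandler is not allowed to return, so the best an application can do is save its state before dying; recreating pixmaps, GL contexts and shared mappings afterwards would be entirely up to the application:

    /* Sketch of handling a lost X server connection.  Xlib's default
     * I/O error handler calls exit(); a custom one may save state but
     * must not return. */
    #include <X11/Xlib.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int io_error(Display *dpy)
    {
        (void)dpy;
        fprintf(stderr, "lost connection to the X server; saving state and exiting\n");
        /* An I/O error handler must not return; Xlib would exit anyway. */
        exit(EXIT_FAILURE);
    }

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return EXIT_FAILURE;
        }
        XSetIOErrorHandler(io_error);

        /* ... the normal event loop would go here ... */
        XEvent ev;
        for (;;)
            XNextEvent(dpy, &ev);   /* io_error() fires if the server goes away */
    }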

