Making stable kernels more stable
Posted Oct 24, 2018 7:34 UTC (Wed) by mjthayer (guest, #39183)
Parent article: Making stable kernels more stable
People have different levels of tolerance for bugs.  I use Ubuntu and usually upgrade during the beta period, both because it is easier to get bugs fixed then and because it shows me what our users who run Ubuntu will encounter.  If there is no release-candidate stage, then people who are more sensitive should simply wait a while before adopting a release.  Making use of these different tolerance levels (in effect the usual alpha-beta-release cycle) is still an effective way of keeping things stable.  At a finer grain, people with higher stability requirements could also hold back non-critical stable kernel updates until they have had more testing, while people with less critical needs could apply them right away.

On a different note, the sheer size of the kernel must be a big problem for stability.  I always wonder whether people will try a more microkernel-like approach some day.  I think that many of the problems of microkernels have been solved by now, but that the gains have not yet justified the pain of reworking what we have.

Posted Oct 24, 2018 10:16 UTC (Wed) by dgm (subscriber, #49227) [Link] (4 responses)
This is a different kind of stability.  Having subsystems run at a lower privilege level would not make them less prone to regressions.  It only gives the microkernel the opportunity to restart them without failing completely, and that is only of limited use.

Take for instance the KVM subsystem or the graphics drivers that were mentioned in the article.  Losing KVM would crash the running virtual machines, which we can assume are the key part of the system for the user, so no difference there.  Also, I'm not sure about graphics cards: are desktop environments prepared to cope with losing the graphics context?
     
    

Posted Oct 24, 2018 12:11 UTC (Wed) by mjthayer (guest, #39183) [Link] (1 responses)

Posted Oct 24, 2018 20:55 UTC (Wed) by k8to (guest, #15413) [Link]
I have definitely seen wins when a lot of strategies are employed together.  There are some systems built in Erlang where many benefits were reaped from state management, controlled communication, obliviousness to local-versus-remote communication by default, and many other things combining to make the whole more manageable.  It is harder to achieve those wins when the system doesn't give you the discipline and tools to help ensure those things are done.

I'm suspicious that an operating system may be too low-level to use fancy tooling to get all those wins, though.  I'm definitely convinced that complex piles of C talking over message buses to implement a kernel do not give you a simple system or any easy wins.
     

Posted Oct 31, 2018 16:16 UTC (Wed) by anton (subscriber, #25547) [Link] (1 responses)
My experience concerning Intel and AMD is that I have graphics problems on Intel (HD Graphics 520/500 on Skylake and Apollo Lake), while AMD (Juniper XT) works flawlessly.  This may have to do with the age of the hardware (Juniper XT is from 2009, HD 5xx from 2015), but my experience is certainly the opposite of what others have stated.

Posted Oct 31, 2018 17:29 UTC (Wed) by nybble41 (subscriber, #55106) [Link]
Losing the graphics context is more like losing the connection to the X server. Most applications aren't prepared to deal with that gracefully. There is also the extra complication that the context includes *hardware* resources which are no longer available, and which may have been mapped directly into the application's address space. The backing for XShm mappings doesn't suddenly cease to exist even if you do lose your connection to the server. 
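
As a rough illustration of how little plumbing exists for this, here is a minimal, hypothetical Xlib sketch (not part of the original comment) of what a client would need just to notice that the server connection has gone away, using XSetIOErrorHandler().  Rebuilding windows and other server-side resources afterwards is entirely up to the application, while any SysV shared-memory segments behind XShm keep existing in system RAM.

/* Hypothetical illustration, not from the comment: detecting a lost X
 * connection with Xlib's XSetIOErrorHandler().  The longjmp() escape is
 * one common way out, because the I/O error handler must not return
 * (if it did, Xlib would exit the process). */
#include <stdio.h>
#include <setjmp.h>
#include <X11/Xlib.h>

static jmp_buf recover;

static int io_error_handler(Display *dpy)
{
    (void)dpy;                  /* the Display is unusable from here on */
    longjmp(recover, 1);
    return 0;                   /* never reached */
}

int main(void)
{
    XSetIOErrorHandler(io_error_handler);

    if (setjmp(recover)) {
        /* All server-side resources (windows, pixmaps, GCs and the GPU
         * state behind them) are gone.  SysV shared-memory segments used
         * for XShm still exist in system RAM, but the client would have
         * to rebuild everything else from scratch. */
        fprintf(stderr, "lost the X connection; a prepared client would reconnect here\n");
        return 1;
    }

    Display *dpy = XOpenDisplay(NULL);
    if (dpy == NULL) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 200, 100, 0,
                                     BlackPixel(dpy, DefaultScreen(dpy)),
                                     WhitePixel(dpy, DefaultScreen(dpy)));
    XMapWindow(dpy, win);
    XFlush(dpy);

    for (;;) {                  /* killing the X server triggers the handler */
        XEvent ev;
        XNextEvent(dpy, &ev);
    }
}

Killing the X server while the loop runs would invoke the handler; few real-world toolkits attempt the reconnection step.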
     
Making stable kernels more stable
> Are desktop environments prepared to cope with losing the graphics context?
X applications certainly know how to redraw a window when you uniconize it.  twm has a "restart" action that redraws all windows (with uniconizing and reiconizing if necessary).  I use this when my Intel-based X-Terminal decides to make everything black.
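
For comparison, a minimal, hypothetical sketch (again not part of the original comment) of the redraw-on-exposure pattern being described: the X server does not normally preserve window contents for the client, so applications repaint in response to Expose events, which is what makes redrawing after de-iconification (or a window-manager restart) work.

/* Hypothetical illustration, not from the comment: the classic Xlib
 * redraw-on-Expose loop.  The application keeps (or can regenerate) its
 * own drawing state and simply repaints whenever the server says part
 * of the window has become visible again. */
#include <stdio.h>
#include <string.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (dpy == NULL) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     0, 0, 300, 200, 0,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XSetForeground(dpy, gc, BlackPixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask);
    XMapWindow(dpy, win);

    const char *msg = "redrawn on Expose";
    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == Expose && ev.xexpose.count == 0) {
            /* Repaint from client-side state; de-iconifying, unobscuring
             * or a window-manager restart all end up here. */
            XClearWindow(dpy, win);
            XDrawString(dpy, win, gc, 20, 30, msg, (int)strlen(msg));
        }
    }
}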