Kernel quality control, or the lack thereof
Posted Dec 10, 2018 21:18 UTC (Mon) by NAR (subscriber, #1313)
In reply to: Kernel quality control, or the lack thereof by PaulMcKenney
Parent article: Kernel quality control, or the lack thereof
But anything less than 100% coverage guarantees that some part of the code is not tested...
Posted Dec 10, 2018 21:50 UTC (Mon) by PaulMcKenney (✭ supporter ✭, #9624)
For most types of software, at some point it becomes more important to test more races, more configurations, more input sequences, and more hardware configurations than to provide an epsilon increase in coverage by triggering that next assertion. After all, testing and coverage are about reducing risk given the time and resources at hand. Over-emphasizing one form of testing (such as coverage) will therefore actually increase overall risk, due to the consequent neglect of some other form of testing.
Of course, there are some types of software where 100% coverage is reasonable, for example, certain types of safety-critical software. But in this case, you will be living under extremely strict coding standards so as to (among a great many other things) make 100% coverage affordable.
Posted Dec 24, 2018 20:42 UTC (Mon) by anton (subscriber, #25547)
I would expect that defensive coding practices that lead to unreachable code (and thus <100% coverage) are particularly widespread in safety-critical software. I.e., you cannot trigger this particular safety-net code, and you are pretty sure that it cannot be triggered, but not absolutely sure; or even if you are absolutely sure, you foresee that the safety net might become triggerable after maintenance. Will you remove the safety net to increase your coverage metric?
OTOH, how do you test your safety net? Remember that Ariane 5 was exploded by a safety net that was supposed (and proven) to never trigger.
Posted Dec 25, 2018 0:05 UTC (Tue) by PaulMcKenney (✭ supporter ✭, #9624)
But yes, Murphy will always be with us. So even in safety critical code, at the end of the day it is about reducing risk rather than completely eliminating it.
And to your point about Ariane 5's failed proof of correctness... Same issue as the classic failed proof of correctness for the binary search algorithm! Sadly, a proof of correctness cannot prove the assumptions on which it is based. So Murphy will always find a way, but it is nevertheless our job to thwart him. :-)
Posted Dec 30, 2018 11:22 UTC (Sun) by GoodMirek (guest, #101902)
I saw that multiple times while working on embedded systems. E.g.:

    explosiveness = 255;
    if (explosiveness != 255)
        assert(0);   /* should be unreachable */

In theory, it should never assert. In reality, it is desirable to minimize the risk that the 'explosiveness' variable is stored in a failed memory cell before that cell is used for an indication of explosiveness of any kind.
Or this case:

    if (green)
        explosiveness = 0;
    else
        explosiveness = 255;
    if (explosiveness != 0 && explosiveness != 255)
        assert(0);   /* should be unreachable */

Such assertions trigger very rarely and are almost impossible to test, but when I saw them trigger in reality, even once in a lifetime, I appreciated their merit.
Posted Dec 30, 2018 15:44 UTC (Sun) by PaulMcKenney (✭ supporter ✭, #9624)
But if the point was in fact to warn about unreliable memory, mightn't this sort of fault injection nevertheless be quite useful?