SA_IMMUTABLE and the hazards of messing with signals
Posted Dec 22, 2021 2:16 UTC (Wed) by roc (subscriber, #30627)
In reply to: SA_IMMUTABLE and the hazards of messing with signals by walters
Parent article: SA_IMMUTABLE and the hazards of messing with signals
This is all true too.
Ideally there would be a standard test harness and repository for user-space tests of kernel APIs, and we would be able to merge tests from rr into that system. We wouldn't have to merge all of them, and we could run them hundreds of times pre-merge and reject any that failed even once. But as far as I know that harness and repository don't actually exist. Or am I wrong?
Kernel API test harnesses
Posted Dec 22, 2021 3:01 UTC (Wed) by corbet (editor, #1)

kselftest is probably the droid you are looking for here. I am sure they would welcome contributions!

Kernel API test harnesses
Posted Dec 22, 2021 9:41 UTC (Wed) by roc (subscriber, #30627)

Thinking about it some more, I guess I was wrong to suggest that merging rr tests into an existing test framework makes sense. Maybe 0-Day CI is the place for this after all.

Kernel API test harnesses
Posted Dec 22, 2021 14:09 UTC (Wed) by corbet (editor, #1)

My suggestion would be to submit tests that only run if rr is available. But I'm not the kselftest maintainer, so I can't guarantee they would accept such a thing; some of those folks have been known to read here, maybe they could speak up.

Kernel API test harnesses
Posted Dec 22, 2021 20:59 UTC (Wed) by roc (subscriber, #30627)

Most rr tests are small programs that exercise certain kernel functionality. Running a test means recording an execution of the small program, then replaying it and verifying that the replay worked correctly. Some of the kernel regressions rr caught could have been detected just by running the small programs normally, because the kernel functionality they use had regressed; those could be trivially added to kselftest. But many of the regressions rr caught were in ptrace, signals, and other functionality exercised by rr itself.

rr isn't huge (50K-ish lines), but it's written in C++ and I doubt it would be acceptable to paste it into the kernel tree.

I think 0-Day CI is really the right approach here. But do test failures there block kernel "stable" releases?

Kernel API test harnesses
Posted Dec 22, 2021 21:10 UTC (Wed) by corbet (editor, #1)

Getting tests merged into kselftest is a good way to get them run in a number of the current testing operations, including those that run for stable kernel candidates.

As for blocking stable releases: regression reports will delay them and cause the offending patches to be fixed or removed; see the discussion that follows any of the review postings to see how that works. You can see an example here. The problem is not (usually) responding to reports of regressions; it's knowing that the regressions exist in the first place.

As you have often (rightly) pointed out, more and better tests would help a lot in that regard.

Kernel API test harnesses
Posted Dec 23, 2021 20:59 UTC (Thu) by roc (subscriber, #30627)

The Linux Test Project might also be a good home.