So, there is now a vacancy for an unpaid kernel regression tracker, or maybe two or three unpaid kernel regression trackers. The kernel community doesn't know what to tell people who would like to report bugs. It might just be a personal impression, but this is all beginning to sound rather amateurish.
The problem of regression tracking is inseparable from the problem of regression testing. Testing requires (a) that someone is able, diligent, and motivated enough to test; (b) that they can easily re-run the tests with the latest kernel; (c) that the results of the two runs can be compared easily; (d) that regressions are routed to the current maintainer; (e) that the maintainer can fix the problem; (f) that the original user can get a timely update to their (probably distro-based stable) kernel release.
Fancy graphs can be plotted showing the average lifetime of regressions, how it changes over time, and so on. But the results are of questionable value unless you know there is reasonably comprehensive test coverage: without it, many regressions will pass unnoticed for long periods until somebody trips over them.
If you can't measure the quality of something, at least approximately, you can't control it either. For an active software project, this implies insidious and increasing rust.
Successful projects require (i) good people; (ii) good technology; (iii) good process. The Linux kernel has the first two in spades, but has gaps in the third. Maybe the time is coming for a few-month digression into testing and tracking infrastructure, like the earlier digression into version control that ended up giving us all the superb git tool.
Wouldn't it be amazing if one fine morning Linus said: "OK, you guys. Great work you're doing. But we do have a problem in actually seeing how great your work is, because it's so hard to test. Well, <joke>Lennart and I have been talking, and</joke> I've come up with this first version of a kernel unit-test plugin interface. All the unit-tests you write for your subsystem will be run at startup or module load, if you specify "test" on the kernel command line. My new test framework will log the results to syslog in this standard self-documenting text-processor-friendly format. If you care enough to add that cute and marvellous feature of yours, you care enough to write some unit tests for it too. It's great: they'll make your job easier. From now on, no changes without tests. Chop-chop."
Or something like that.
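To make the daydream concrete, here is a minimal sketch of what such a per-module test hook might look like. Everything in it is invented for illustration: the "test" parameter, the "TEST:" log format, and the toy test itself. Only module_param_named(), printk(), and the other module boilerplate are real kernel interfaces; no such framework exists in the mainline kernel.

    /*
     * Hypothetical sketch only.  For a built-in module, booting with
     * "mymod.test=1" on the kernel command line would run this module's
     * tests at load time and log the results in a fixed format.
     */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/errno.h>

    static bool test_enabled;       /* set with mymod.test=1 */
    module_param_named(test, test_enabled, bool, 0444);
    MODULE_PARM_DESC(test, "Run this module's unit tests at load time");

    /* One self-contained unit test: 0 on pass, -errno on failure.  A
     * real test would exercise the module's actual code paths. */
    static int __init test_widget_roundtrip(void)
    {
            int in = 42, out = in;  /* stand-in for a real round trip */

            return (out == in) ? 0 : -EINVAL;
    }

    static int __init mymod_init(void)
    {
            if (test_enabled) {
                    int ret = test_widget_roundtrip();

                    /* The imagined standard, grep-friendly result line:
                     * "TEST: <module> <test> <PASS|FAIL>". */
                    printk(KERN_INFO "TEST: mymod widget_roundtrip %s\n",
                           ret ? "FAIL" : "PASS");
            }
            return 0;
    }
    module_init(mymod_init);

    MODULE_LICENSE("GPL");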
A standard opt-in userspace tool could then munge the test results, collect the system configuration, and submit them anonymised to a new automated kernel-bugzilla gateway which would do all the tedious correlation and tracking.
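As a sketch of what the userspace side might look like (again purely hypothetical: the "TEST:" lines come from the invented format above, and the gateway does not exist), a first cut could be as simple as a filter run over the kernel log, say "dmesg | testreport":

    /* testreport.c: count the invented "TEST:" result lines and flag
     * failures.  A real tool would also gather uname/.config details
     * and submit an anonymised report to the (imaginary) automated
     * kernel-bugzilla gateway. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[512];
            int passed = 0, failed = 0;

            while (fgets(line, sizeof(line), stdin)) {
                    char module[64], test[64], result[16];
                    const char *p = strstr(line, "TEST:");

                    if (!p)
                            continue;
                    /* "TEST: <module> <test> <PASS|FAIL>" */
                    if (sscanf(p, "TEST: %63s %63s %15s",
                               module, test, result) != 3)
                            continue;
                    if (strcmp(result, "PASS") == 0)
                            passed++;
                    else {
                            failed++;
                            fprintf(stderr, "possible regression: %s/%s\n",
                                    module, test);
                    }
            }
            printf("%d passed, %d failed\n", passed, failed);
            return failed ? 1 : 0;
    }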
This approach would scale well, as it delegates the test writing to those who know most about the relevant code, and the test running to the users in the field who have all the weird hardware and workloads.