Yes, users do tend to only use releases, which is why Andrew Morton et al. are forever having problems getting anyone to test kernel patches before they land in the mainline kernel.
However, users are not the only test vehicle. They're the most important one, because frankly nobody cares what an automated test script thinks beyond whether the tests pass or not, but users should not be relied upon so heavily as the first line of defense.
The number of static and dynamic code checkers for Linux isn't huge, and they all have problems, but I'm sure projects aren't using them nearly as much as they could. Same goes for profilers, malloc debuggers, memory leak detectors, unit testers, integrated test harnesses, invariant detectors, and the like.
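To give a trivial flavour of the "unit testers" point, here's a made-up parse_size function plus the kind of automated test that catches a regression long before a user does. Everything here (the function, the test names) is invented for illustration; Python is used purely for brevity:

```python
import unittest

def parse_size(text):
    """Parse strings like '4K' or '2M' into a byte count (hypothetical example)."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if text and text[-1] in units:
        return int(text[:-1]) * units[text[-1]]
    return int(text)

class ParseSizeTest(unittest.TestCase):
    def test_plain_bytes(self):
        self.assertEqual(parse_size("512"), 512)

    def test_suffixes(self):
        self.assertEqual(parse_size("4K"), 4096)
        self.assertEqual(parse_size("2M"), 2 * 1024 ** 2)

    def test_rejects_garbage(self):
        # int("lots") raises ValueError, so garbage input fails loudly.
        with self.assertRaises(ValueError):
            parse_size("lots")

# Run the suite programmatically rather than relying on a test runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParseSizeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point isn't this toy function; it's that a script, not a user, is the first thing to exercise the code.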
This isn't just a developer issue. In fact, I'd argue it isn't even primarily a developer issue. Developers, as a rule, aren't the greatest QA people, and vice versa. There are some who work well in both camps, but they are a minority. That means QA is going to fall heavily on the shoulders of "mainstream" distributions.
It would probably be a good thing if such mainstream distributions got together with some of the truly critical app developers and thrashed out an accepted protocol for software validation. This could be something trivial, such as libraries supplying standardized hooks for test harnesses, or a nominal agreement that new interfaces will be written with such-and-such a piece of QA software in mind.
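To make the "standardized hooks" idea concrete, here's a rough sketch of what a harness-discoverable self-test convention could look like. The self_test and run_hooks names are pure invention on my part, not any existing protocol, and a real version would be a C ABI or package metadata rather than Python:

```python
import types

def run_hooks(modules):
    """Run each module's self_test() hook, if it supplies one.

    The convention (hypothetical): an opted-in package exposes a
    module-level self_test() returning a (passed, failed) pair, so any
    distribution's harness can validate thousands of packages uniformly.
    """
    results = {}
    for mod in modules:
        hook = getattr(mod, "self_test", None)
        if callable(hook):
            results[mod.__name__] = hook()
    return results

# A toy "library" opting in to the convention:
libfoo = types.ModuleType("libfoo")

def _libfoo_self_test():
    # A real library would exercise its own invariants here.
    passed = 1 if sorted([3, 1, 2]) == [1, 2, 3] else 0
    return (passed, 1 - passed)

libfoo.self_test = _libfoo_self_test

print(run_hooks([libfoo]))
```

The value is in the agreement, not the code: once every package answers the same question the same way, a distro can test all of them without knowing anything about any of them.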
This wouldn't require anyone to actually do the testing; it would merely ensure that code could eventually be efficiently tested when you have distributions of thousands - or tens of thousands - of packages. When that happens is less important than getting some sort of agreement that would allow even the possibility of it happening.
Once some standards are agreed on for the automatable parts of testing, testing ceases to be quite such a laborious, boring activity and becomes a project like any other.
To some extent, Linux has headed in that direction for some time. There are all kinds of projects for testing compliance with various standards (POSIX, IPv6 and the desktop are the main three I know of, and there are innumerable minor projects such as scanners for identifying common host and network vulnerabilities), and there are numerous projects which supply key capabilities for a comprehensive validation framework.
Even if QA-safe code is limited to new stuff, that might prevent a repeat of the KDE 4 fiasco, because it was the new stuff that had problems. If the developers had better information to work with before it went mainstream, I'm sure they'd have produced a better KDE. In that case, the only question worth asking is how to make sure developers get the very best information, short of shoving the new code down the userbase's throat.
I think that's going to involve some sort of agreement between distros and developers, so that there's a common idea of how to get from here to there. I don't see any realistic alternative, really.