Backports of "just bugfixes" also carry the risk of unintended regressions: side effects that are either also present in the tip, or that occur only in the backport because the changeset interacts with other patches that have since been applied upstream but not backported. Or even whole fixes that would be applicable to the user base but have not yet been identified, and thus not backported (yet). And while many, many users have run the kernels leading up to the current tip, the userbase testing the particular combination of patches in the backported environment is usually much smaller.
The whole notion that "backports are safer" is, well, a viable business model, but not necessarily sound engineering practice, at least if performed at any non-trivial scale.
Code changes to mainline carry a risk of introducing regressions or new bugs, sure. But so do backports. What I'm proposing is to strengthen the tip against regressions through improved QA and process.
Another fallacy is that, because upgrades are scary, you want to do them less often. But that doesn't work out: the delta *keeps* getting larger, and so does the amount of time during which you *didn't* pay attention. The cost does not go down; the effort to get everything working again actually *increases*, and has to be paid in much larger bills than if one had followed a reasonably fine-grained, continuous upgrade policy.
I already believe that, since code quality *is* getting better over time faster than it is getting worse, upgrading is generally the safer choice - at least if the regression tests pass. (I'm not saying that we're doing all that we can or should; sure, there are things that can be improved.) But people still cling to the enterprisey mindset.
Guys and gals, if the enterprisey mindset worked and was overall the better choice, we'd still be running Solaris, IRIX, UnixWare and the like.