Yet he's right. And I say this as an enthusiastic Gentoo user who's done some automation to find optimum configurations. :)
In reality, though, building a kernel with LTO isn't going to take as long as, say, building LibreOffice, Firefox or Chromium. Those take long enough that more than a few Gentoo users prefer the binary ebuilds over building them from source.
What's really needed here, though, is automated regression testing to build confidence that an LTO-built kernel isn't itself the source of problems. That's the kind of thing that would need massive corporate sponsorship to get off the ground, though; there's a *ton* of code in the kernel, and while it's reasonably well-organized, that's still a lot of tests to write.
It's also necessary to understand that you can't be 100% confident you've caught all possible compiler-introduced bugs. The best you might be able to do is identify the portions of the kernel which the LTO pass twiddled the most, and write additional tests to exercise that code from multiple angles.
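As a crude first approximation of that "hotspot" idea, you could diff per-symbol sizes between a non-LTO and an LTO build of the same kernel: functions whose size changed the most are the ones the LTO pass presumably rewrote most aggressively. A hypothetical sketch (the function names are mine, not an existing tool; the input format assumed is `nm --print-size` output, i.e. "address size type name" lines from the two vmlinux files):

```python
# Hypothetical sketch: rank symbols by size delta between a baseline
# (non-LTO) build and an LTO build, as a rough proxy for where LTO
# changed the generated code the most.

def parse_symbol_sizes(lines):
    """Map symbol name -> size in bytes from `nm --print-size` output."""
    sizes = {}
    for line in lines:
        parts = line.split()
        if len(parts) == 4:          # address size type name
            _, size, _, name = parts
            sizes[name] = int(size, 16)
    return sizes

def lto_hotspots(baseline_lines, lto_lines, top=10):
    """Symbols with the largest absolute size change between the builds."""
    base = parse_symbol_sizes(baseline_lines)
    lto = parse_symbol_sizes(lto_lines)
    deltas = {name: lto.get(name, 0) - size for name, size in base.items()}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top]
```

Size deltas are only a proxy, of course; inlining and dead-code elimination can shrink or delete a symbol without making it any more suspect than its neighbors. But the top of that list would at least tell you where to aim the extra tests first.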
I wonder if such an "LTO hotspot" analysis could be used for some high-level reasoning about the code. Being able to answer questions like "given that the regions LTO rewrites most heavily correlate with some property of the APIs in that area, is that property something we want more or less of in our APIs?" would be interesting; it could help inform tradeoffs in future API designs.