Most in-house stuff is crap. It was never designed to be looked at by the general public, still less criticised by dozens of expert kernel developers. It is very noticeable, for example, that an in-house program which I got cleared to be released as GPL is still sitting there waiting for me to take action, because I realised after getting the OK to release it that it needs at least a couple of days of extra work before outsiders will be able to use it sensibly. It mistakenly relies on an extension to a standard that it implements, and (a colleague pointed out when I enquired) nobody except us implements that extension. Whoops. From any outsider's point of view that program is currently crap, so I won't be releasing it until it's not crap.
Developing the drivers for yet another brand of 100Mbit network card is not most people's dream job, so it should be no surprise that the developers working for a lot of hardware outfits are not the best in the business. That doesn't mean that /they/ are crap, but it does mean that their code isn't necessarily going to be up to the standard people expect from Linux.
Old in-house drivers weren't any better: I've seen the code that made the Video Toaster go. It's awful; you'd never have guessed that the Video Toaster was a roaring success if you tried to judge it by the quality of the source code.
But today's drivers actually have even more opportunities to be terrible than those from the Video Toaster era did. You can screw up suspend and resume, you can fail when there are multiple CPU cores; there are countless novel ways to screw up. Microsoft has a QA process for new drivers, which must be passed before they'll certify a driver for use with ordinary Windows PCs. Perhaps their QA process doesn't use the word "crap", but the judgement is the same: "this driver isn't good enough, try harder next time".