Over the years, I've become a bit cynical about standards and documentation. When I rely on a standard, I find that nobody implements it correctly. (Witness, oh, every C++ compiler from 1992 to 2002.) When I rely on documentation, I find that it's a pack of useless lies. (The worst offenders: Hardware databooks, which often have no bearing on reality whatsoever.)
So now I take a different approach: I write lots of unit tests, and I set up buildbots for any platform I need to support.
This leads to lots of interesting discoveries. For example, under Windows Vista, pretty much any file system operation can fail for no apparent reason, and it may need to be retried until it works.
There are two exceptions to this rule: Security and data integrity. You can't ensure either with unit tests alone. You must also understand both the official documentation and the reality of common implementations. (My nastiest surprise so far: There are snprintf implementations on some legacy Unix platforms that ignore the buffer size parameter, exposing every caller of snprintf to an overflow attack. If you rely on snprintf on old Unix platforms, it might be worth writing a unit test that checks whether it actually works.)