It's thought to be too hard to "release specs". The algorithms for handling each generation (and type) of Intel flash differ substantially from one another. And that's just one manufacturer ... Linux would have a horrendous time trying to keep up with the dozens of flash manufacturers, each releasing a new generation of flash every 18 months, possibly in several flavours (1, 2, 3 and 4 bits per cell).
It's probably not even possible for Linux mainline to keep up with that pace, let alone the enterprise distros or the embedded distros (I was recently asked "So what changed in the USB system between 2.6.10 and 2.6.37?"). And then there's the question of what to do about other OSes.
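To give a feel for the scale of the problem (this is a made-up sketch, not anything like real MTD code; every name here is invented), an OS that tried to track it all would effectively need one set of algorithms per (vendor, generation, cell-type) combination:

/* Hypothetical sketch of the maintenance burden. Each combination of
 * vendor, generation and cell type needs its own programming and
 * wear-leveling algorithms, with real code behind every row. */
struct flash_algos {
    const char *vendor;
    int generation;
    int bits_per_cell;             /* 1 = SLC, 2 = MLC, 3 = TLC, ... */
    int (*program_page)(int page); /* generation-specific programming */
    int (*wear_level)(void);       /* generation-specific wear-leveling */
};

static int vendorA_gen5_slc_program(int page) { (void)page; return 0; }
static int vendorA_gen5_slc_wear(void)        { return 0; }
static int vendorA_gen6_mlc_program(int page) { (void)page; return 0; }
static int vendorA_gen6_mlc_wear(void)        { return 0; }

/* Dozens of vendors, a new generation every ~18 months, up to four
 * cell types: this table never stops growing, and old rows can't be
 * retired while the hardware is still in the field. */
static const struct flash_algos algos[] = {
    { "vendorA", 5, 1, vendorA_gen5_slc_program, vendorA_gen5_slc_wear },
    { "vendorA", 6, 2, vendorA_gen6_mlc_program, vendorA_gen6_mlc_wear },
    /* ... */
};

And every row brings its own validation burden on every kernel that ships it.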
It's not just a question of suboptimal performance if you use the wrong algorithms on a given piece of flash; there are real risks of data loss and of wearing the flash out. No flash manufacturer wants to be saddled with massive in-warranty returns because some random dude decided to change an '8' to a '16' in an OS that tens of millions of machines ended up running.
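Just to make the kind of change concrete (a purely hypothetical example, not taken from any real driver), imagine a retry limit the vendor has qualified the part for:

#include <stdbool.h>

/* Stub for the sketch; stands in for a hypothetical low-level erase. */
static bool do_erase(int block) { (void)block; return true; }

#define MAX_ERASE_RETRIES 8   /* hypothetical vendor-qualified limit */

static bool erase_block(int block)
{
    /* Each retry stresses the cells a little more. The '8' is, in
     * this made-up example, what the vendor qualified the part for;
     * "fixing" it to 16 might make a flaky board look healthier while
     * quietly wearing the flash out across millions of machines. */
    for (int i = 0; i < MAX_ERASE_RETRIES; i++) {
        if (do_erase(block))
            return true;
    }
    return false;   /* give up: mark the block bad */
}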
So yes, as Arnd says, the industry is looking to abstract away the differences between NAND chips and run the algorithms on the NAND controller itself. I'm doing my best to help in the NVMHCI working group ... see http://www.bswd.com/FMS10/FMS10-Huffman-Onufryk.pdf for a presentation given last August.
(I work for Intel, but these are just my opinions).