Yes, almost all recent speech recognition research has been on speaker-independent methods, because nearly all the uses are like that (phone systems, navigation systems, etc.). One area where speaker-specific research is going on is dysarthric speech, i.e. speech affected by a motor speech disorder, which listeners who aren't used to it find hard to understand. This was the SPECS project at Sheffield University, followed by VIVOCA2. The core code of the latter at least was released under a free software licence, I believe. That code is only useful for recognising a fairly small vocabulary, and of course requires training, but the underlying algorithms are good for speaker-dependent continuous recognition, which might be useful to somebody.
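To give a flavour of what trained, speaker-dependent, small-vocabulary recognition involves (this is just my own illustrative sketch using classic dynamic-time-warping template matching, not the SPECS/VIVOCA2 code, and it works on pre-extracted feature vectors rather than raw audio):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two feature sequences,
    each a 2-D array of shape (frames, feature_dims)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

class SmallVocabRecogniser:
    """Speaker-dependent recogniser: the user records one or more
    training templates per word; new utterances are classified by
    nearest DTW distance to any stored template."""
    def __init__(self):
        self.templates = {}  # word -> list of feature sequences

    def train(self, word, features):
        self.templates.setdefault(word, []).append(np.asarray(features, dtype=float))

    def recognise(self, features):
        features = np.asarray(features, dtype=float)
        best_word, best_dist = None, np.inf
        for word, temps in self.templates.items():
            for t in temps:
                d = dtw_distance(features, t)
                if d < best_dist:
                    best_word, best_dist = word, d
        return best_word
```

With synthetic "feature" sequences standing in for real acoustic features, a quick check that warping handles different utterance lengths:

```python
rec = SmallVocabRecogniser()
rec.train("up", np.linspace(0, 1, 20).reshape(-1, 1))
rec.train("down", np.linspace(1, 0, 20).reshape(-1, 1))
rec.recognise(np.linspace(0, 1, 25).reshape(-1, 1))  # → "up"
```

The appeal for dysarthric speakers is exactly that nothing here assumes typical pronunciation: the templates are the individual user's own utterances.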