Not addressed: deafness

Posted Oct 21, 2012 14:11 UTC (Sun) by Max.Hyre (guest, #1054)
Parent article: Accessibility and the open desktop: everyone gains by it

My late father was just shy of deaf for his last few years, and it was a great burden for the whole family. I spent some time trying to find some sort of speech-recognition software (preferably Free) which would allow us to talk to him and let him read our words on a display.

It's obviously a hard problem. Decent speech-recognition is geared to one speaker, typically someone dictating to a machine rather than carrying on a conversation. I couldn't find anything which could recognize multiple speakers, even when they spoke one at a time.

Has anything happened in the recent years to improve the situation?

Not addressed: deafness

Posted Oct 22, 2012 16:54 UTC (Mon) by smassy (guest, #87187) [Link]

Yes. Although desktop accessibility itself may not require as big a leap for deaf users, beyond visual alerts, the role of computing in social accessibility for deaf people is quite important.

There is some very usable speech recognition software out there, both for the desktop computer and mobile platforms. Speaker-independent speech-to-text seems to have improved greatly, especially on the latter, fueled in part by the need to provide a good hands-free experience for driving users.

On the FOSS side, though, things seem to have been moving fairly slowly and efforts seem somewhat fragmented. One of the biggest challenges, as I understand it, is the scarcity of free acoustic models needed to build good speech recognition engines. The VoxForge project was started as an attempt to remedy that lack.

In the meantime, there is the Ubuntu Speech Input app for Android, which basically turns an Android smartphone into a text input module for Linux. Does anybody know whether the speech recognition code in Android is open? If so, it would be a start of sorts.
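A setup like that, where the phone does the recognition and the desktop only displays the text, needs nothing more than a trivial receiver on the Linux side. Here is a minimal sketch (not the Ubuntu Speech Input protocol, which I haven't inspected; the port number and line-per-utterance framing are assumptions for illustration):

```python
# Minimal desktop-side receiver for a phone-based speech-to-text setup:
# the phone app does the recognition and sends each recognized utterance
# as one UTF-8 text line over TCP; this script shows the lines on screen.
import socketserver

PORT = 5050  # arbitrary choice for this sketch

received = []  # keep a transcript so earlier utterances can be re-read

class UtteranceHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One utterance per line; iterate until the phone disconnects.
        for raw in self.rfile:
            text = raw.decode("utf-8", errors="replace").strip()
            if text:
                received.append(text)
                print(text, flush=True)

if __name__ == "__main__":
    # Listen on all interfaces so the phone can reach us over Wi-Fi.
    with socketserver.TCPServer(("", PORT), UtteranceHandler) as srv:
        srv.serve_forever()
```

In a real deployment you would feed the lines to a large-font window rather than stdout, but the plumbing is the same.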

Not addressed: deafness

Posted Oct 27, 2012 23:27 UTC (Sat) by wookey (subscriber, #5501) [Link]

Yes, almost all recent speech recognition research has been on speaker-independent methods, because nearly all the uses are like that (phone systems, navigation systems, etc.). One area where speaker-specific research is going on is dysarthric speech (i.e. speech that people who aren't used to it find hard to understand). This was the SPECS project at Sheffield University, followed by VIVOCA2. The core code of the latter, at least, was done under a free software licence, I believe. This code is only useful for recognising a fairly small vocabulary, and of course requires training, but the underlying algorithms are good for speaker-dependent continuous recognition, which might be useful to somebody.
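Small-vocabulary, speaker-dependent recognition of the kind described here is classically done by template matching: record one reference utterance per word during training, then compare each incoming feature sequence against every template with dynamic time warping (DTW) and pick the closest. A toy sketch of that idea, with made-up 1-D "feature" sequences (real engines compare sequences of MFCC vectors; this is an illustration of the general technique, not the SPECS/VIVOCA code):

```python
# Toy speaker-dependent recognizer: dynamic time warping (DTW) template
# matching. DTW tolerates the speed variations between two renditions of
# the same word by allowing a non-linear alignment of the two sequences.

def dtw_distance(a, b):
    """Minimum cumulative distance aligning sequence a to sequence b."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best cost of aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # step both
    return cost[n][m]

def recognize(utterance, templates):
    """Return the vocabulary word whose template is DTW-closest."""
    return min(templates, key=lambda w: dtw_distance(utterance, templates[w]))

# "Training": one reference recording per vocabulary word.
templates = {
    "yes": [1.0, 3.0, 3.0, 1.0],
    "no":  [4.0, 2.0, 0.5],
}

# A time-stretched rendition of "yes" still matches its template.
print(recognize([1.0, 1.2, 3.1, 2.9, 3.0, 1.1], templates))
```

The training requirement mentioned above is exactly the template-recording step; the per-speaker templates are what make the approach speaker-dependent.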

Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds