Not my desktop.
Posted Dec 18, 2006 16:56 UTC (Mon) by drag (guest, #31333)In reply to: Not my desktop. by bronson
Parent article: Linux desktop architects map out plans for 2007 (Linux.com)
Well, I would prefer to have ALSA expose the functionality to user space so that at least it's available.
For the majority of users this doesn't matter much, but occasionally you want access to some of the more advanced features of sound cards for various reasons.
For example, on Windows you have the Win32 interface and Windows drivers, which abstract everything away and use KMixer to do software mixing. This is good for normal desktop use, but it is not good for any sort of professional audio use of the computer. It was bad enough that ISVs had to write their own driver model (ASIO) to expose functionality to userland so their applications could take advantage of it.
I don't see any reason why all this policy stuff should be shoved into the kernel...
What ALSA does to make things easier for userland is its plugin architecture and the asoundrc configuration file.
One approach would be to ship a sane asound.conf for each sound card, providing standard interfaces for audio I/O and mixing for desktop use.
This would require distros and end users to test their sound cards and produce per-card asoundrc files, which could then be incorporated back into the ALSA distribution and provided by default to end users.
Asoundrc files are very powerful. You can set up ctl.!default, for example, so that when you open alsamixer you don't see the hardware-based interfaces, but a software interface.
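As a rough sketch of what such a per-card file might contain (the card number, buffer sizes, and rate here are assumptions, not a tested config for any particular chip), a dmix setup gives every desktop app software mixing even on hardware that can't mix in hardware:

# Route the default PCM through dmix for software mixing.
# "hw:0,0" and the buffer parameters are assumptions; adjust per card.
pcm.!default {
    type plug
    slave.pcm "dmixed"
}

pcm.dmixed {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:0,0"
        period_size 1024
        buffer_size 8192
        rate 44100
    }
}

# Point the default control at the first card so alsamixer still works.
ctl.!default {
    type hw
    card 0
}

A distro could ship a variant of this tuned to each chip's known-good period and buffer sizes.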
Another approach would be to use a sound server, like PulseAudio, to provide an abstracted interface to the sound card.
A possible solution would be for ALSA to expose hints about sound card functionality through sysfs/HAL/D-Bus, indicating what the driver and card make available; PulseAudio would take that and provide standardized capabilities higher up the software stack.
PulseAudio already has HAL support: it automatically detects and sets up sound cards for its use (sample formats, frequencies, and stuff like that).
This means a higher level of software complexity and added latency, but you greatly simplify the user and application interface to the sound system, and you get features like network transparency, so it ties in nicely with the X architecture.
For instance, there is a PulseAudio module that checks $DISPLAY and then sends the sound to the PulseAudio daemon on that machine.
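To sketch the network side (the hostname and address range below are made-up examples; check the PulseAudio documentation for the exact module parameters), you load the TCP protocol module on the machine with the speakers and point clients at it:

# In /etc/pulse/default.pa on the machine with the sound card:
load-module module-native-protocol-tcp auth-ip-acl=192.168.1.0/24

# In /etc/pulse/client.conf on the machine running the applications:
default-server = desktop.example.org

After that, any PulseAudio-aware client started on the second machine plays out of the first machine's speakers.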
Getting ALSA-aware applications to use Pulse is easy: set up a ~/.asoundrc file (or /etc/asound.conf) with this in it:
pcm.!default {
type pulse
}
ctl.!default {
type pulse
}
Then your mixer settings and your sound output go through PulseAudio.
See http://pulseaudio.org/wiki/PerfectSetup
(Unfortunately, while I can get almost everything working fairly easily through PulseAudio, especially because of ALSA's plugin support, I can't get SDL games and VLC to work reliably over a network.)
The major problem we have right now is that we have all these different sound APIs:
OSS, ALSA, libao, SDL, GStreamer, aRts, EsounD, to name a few. There are others besides those.
Even if ALSA worked 100% of the time for everybody, when you go and try to use an OSS application like Skype (or SDL compiled with OSS support), it all turns to shit. Or aRts is running without being configured to use ALSA, and lots of other little things.
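One stopgap for the OSS-only apps, assuming the wrapper scripts are installed (padsp ships with PulseAudio, aoss with the alsa-oss package; the program names are just examples of OSS-only binaries):

padsp skype        # preloads an emulated /dev/dsp that feeds PulseAudio
aoss ./oss-game    # same trick, routing the OSS calls into ALSA instead

It's a hack, not a fix, but it at least keeps those apps from grabbing the hardware device directly.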