> Right, so you're saying it's latency. But why is this such a big deal for
> audio? The latency between a user's input and the response comes up
> everywhere in an interactive system (moving the mouse and seeing the arrow
> move, clicking to close a browser tab, pressing spacebar to pause your
> mplayer video, shooting a gun in a video game, etc) and yet you don't hear
> every other developer complaining about latency. Why is the latency
> between audio input -> audio output any more difficult than the latency
> between USB input -> video output?
That's a very good question. Part of the answer is that ordinary users DO care about latency. Part of what held back garbage-collected languages for so long was users' annoyance with the long and nondeterministic delays. For example, Java on the desktop in the nineties was just no fun at all.
But at the end of the day, for your average Joe, occasional latency spikes are not a big deal. You set your xmms buffer to a big size and get on with your life. If a latency spike happens when you're playing a game or clicking 'save' in your spreadsheet program, that's mildly annoying, but hardly a major problem.
For audio professionals, an occasional latency spike is a real problem. If you're in the middle of composing something on a MIDI keyboard and the computer randomly inserts a long delay, making the take sound wrong, you're going to be annoyed. You're going to lose work and time. People can notice audio delays of more than about 10 ms.
You have to realize that a lot of compositions are done in layers: you have something already playing, and you record another track on top of it. In those cases, it's critical that the time between pressing a key on the keyboard and hearing the note be kept to a minimum. Buffering will not help you there, because a bigger buffer only adds to that delay.
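The trade-off above is just arithmetic: a buffer of N frames at a given sample rate takes N / rate seconds to fill, and that fill time is a floor on your input-to-output delay. A minimal sketch (the specific buffer sizes are illustrative assumptions; real setups add driver and hardware overhead on top):

```python
def buffer_latency_ms(frames: int, sample_rate: int = 44100) -> float:
    """Time to fill one buffer of `frames` samples, in milliseconds.

    This is the minimum delay a buffer of that size imposes; actual
    end-to-end latency is higher once drivers and hardware are counted.
    """
    return frames / sample_rate * 1000.0

# A desktop player hiding spikes behind a big buffer:
print(round(buffer_latency_ms(8192)))      # ~186 ms: fine for passive playback

# A musician overdubbing a new layer needs to stay near the ~10 ms threshold:
print(round(buffer_latency_ms(256), 1))    # ~5.8 ms: small enough to play along
```

This is why the average Joe and the audio professional want opposite settings: the big buffer that makes playback spike-proof is exactly what makes live layering unplayable.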
This question is a little bit like asking "I pulled out MS NotePad and started typing some code. It seemed to work fine! Why do programmers need all these fancy editors with macros and whatnot?" Well, they have different needs.