The buffering model in PortAudio is too simple and too hardware-bound. A modern audio API should be a departure from hardware-specific, fragment-based buffer metrics. Instead, we need to allow applications to set values that are actually meaningful to them (such as latency) and default to a model of large playback buffers with the option to rewrite them on request. Why? Because this will save us power (in conjunction with 'glitch-free' PA, that is) and will greatly enhance network-transparent audio playback.
That is one of the more fundamental reasons why I don't think that PortAudio is the way to go. There are more, and it's a feeling shared by a couple of other audio-related people.
Fixing the fact that we have too many competing audio APIs on Linux by adding yet another one is of course paradoxical. But still, after discussing this back and forth, I believe this is the right way to go.