> For audio work it's really important that you keep feeding the sound card and other devices audio data at a predictable rate. The hardware is effectively a first-in, first-out buffer that will produce output regardless of whether or not you provide it data. Even if your system does not process the audio quickly enough, that will not stop your hardware from playing or recording. As a result, a system with poor realtime performance will create a large number of audio artifacts when loaded. You'll get echoes, pops, squeaks, pauses, out-of-sync audio, out-of-sync video, stuttering video, scratches, and all other types of really bad things in your recordings and performances.
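The quoted point can be made concrete with a toy simulation (all numbers are hypothetical, not from the post): the hardware drains its FIFO at a fixed rate no matter what, so any application stall longer than the buffered audio produces audible glitches.

```python
def count_underruns(buffer_frames, frames_per_ms, app_stalled_ms):
    """Toy model of a sound card FIFO drained at a fixed rate.

    app_stalled_ms is one bool per millisecond of wall time: True means the
    application missed its refill deadline that millisecond. The hardware
    consumes frames_per_ms regardless. Returns how many milliseconds the
    FIFO ran dry (i.e. glitched).
    """
    fill = buffer_frames              # start with a full buffer
    underrun_ms = 0
    for stalled in app_stalled_ms:
        if not stalled:
            fill = buffer_frames      # app tops the buffer back up
        fill -= frames_per_ms         # hardware drains no matter what
        if fill < 0:
            underrun_ms += 1          # FIFO empty: the card emits garbage
            fill = 0
    return underrun_ms

# A 5 ms scheduling stall at 48 frames/ms (48 kHz):
small = count_underruns(128, 48, [True] * 5)   # 128-frame buffer glitches
large = count_underruns(512, 48, [True] * 5)   # 512-frame buffer rides it out
```

With the small buffer the FIFO runs dry partway through the stall; the larger buffer absorbs the whole 5 ms, which is exactly the trade-off the replies below argue about.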
To solve the problem of buffer underruns, you need to increase the size of your buffers. If you cannot increase the size of your buffers due to latency concerns, or if you have hardware limitations, _then_ I agree you need realtime. But it seems to me that the vast majority of "I'm getting xruns with jack, I need realtime!!" postings that I see can be solved better by increasing software buffer size, because they are use cases that don't also require low latency (e.g. playing one track while recording another, or playing and live mixing multiple tracks at once, etc). Maybe my perception is just wrong, and low latency really is a requirement more often than I think. Or maybe JACK does need to better handle the case where latency is OK and expected, and account for it rather than trying to sweep it under the carpet.
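The latency cost of bigger buffers is simple arithmetic, which is why it only matters in the low-latency use cases: a JACK/ALSA-style ring of N periods of M frames adds at most N x M samples of delay. A sketch (the example buffer sizes are illustrative, not from the post):

```python
def buffer_latency_ms(frames_per_period, periods, sample_rate_hz):
    """Worst-case latency added by a ring buffer of `periods` periods,
    each `frames_per_period` frames long, at the given sample rate."""
    return frames_per_period * periods * 1000 / sample_rate_hz

# At 48 kHz:
tight = buffer_latency_ms(64, 2, 48000)    # ~2.7 ms: live monitoring territory
roomy = buffer_latency_ms(4096, 2, 48000)  # ~171 ms: fine for mixdown/playback
```

For playback or non-monitored recording, ~171 ms of pipeline delay is invisible, and that buffer rides out scheduling stalls that would cause xruns at the tight setting.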
In some of my research work, we deal with streaming data from various capture cards at relatively high rates (up to 2 Mbit/s). We write it directly to disk on a relatively busy system that can go out to lunch for 500 ms or more. The only time I ever had to invoke SCHED_FIFO was with some poorly designed USB hardware that only had a 384-byte buffer. Now we design our hardware with at least a 1 MB buffer, and the kernel scheduler is no longer a concern.
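The numbers above check out: how long a stall the hardware tolerates is just buffer size divided by data rate. A quick sketch (assuming 1 MB means 2^20 bytes; the 2 Mbit/s and buffer sizes are taken from the post):

```python
def stall_headroom_ms(buffer_bytes, rate_bits_per_s):
    """How long the consumer can stall before an on-board capture buffer
    filling at rate_bits_per_s overflows and data is lost."""
    return buffer_bytes * 8 * 1000 / rate_bits_per_s

tiny = stall_headroom_ms(384, 2_000_000)        # ~1.5 ms: needs SCHED_FIFO
big  = stall_headroom_ms(2**20, 2_000_000)      # ~4.2 s: shrugs off 500 ms stalls
```

The 384-byte buffer overflows after about 1.5 ms, far less than a 500 ms scheduling stall, so only SCHED_FIFO could save it; the 1 MB buffer gives roughly four seconds of headroom, which is why the scheduler stops mattering.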