The future of realtime Linux in doubt
Posted Jul 11, 2014 9:30 UTC (Fri) by dlang (guest, #313)
In reply to: The future of realtime Linux in doubt by marcH
Parent article: The future of realtime Linux in doubt
Remember that "hard real time" only means that you meet your stated target. Defining that you must get the result of 1+1 within 1 second could be a "hard real time" target.
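To make that concrete, here's a toy sketch (my own illustration, nothing -rt specific) that treats "1+1 within 1 second" as the stated target and simply checks whether the work finished before the deadline, using plain POSIX clock_gettime:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec start, end;
        const long deadline_ns = 1000000000L;   /* stated target: 1 second */

        clock_gettime(CLOCK_MONOTONIC, &start);
        volatile int result = 1 + 1;            /* the "work" */
        clock_gettime(CLOCK_MONOTONIC, &end);

        long elapsed_ns = (end.tv_sec - start.tv_sec) * 1000000000L
                        + (end.tv_nsec - start.tv_nsec);

        printf("result=%d, elapsed=%ld ns, deadline %s\n", result, elapsed_ns,
               elapsed_ns <= deadline_ns ? "met" : "missed");
        return 0;
    }

If that deadline is always met, the system is "hard real time" for this (trivial) task, no matter how slow or ordinary the kernel underneath is.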
More specifically, burning an optical disk is a 'hard real-time' task: if you let the buffer empty out, the resulting disk is trash. Even so, Linux has been successfully burning CDs since burners were first available. Back at the start it wasn't uncommon to have a buffer under-run, but it's become almost unheard of in recent years (unless you have a _really_ heavily loaded system).
That said, and answering what you were really asking :-)
Anything you do with Linux will involve memory management and "unpredictable" parts of the kernel.
The way that -rt addresses this is to work to "guarantee" that none of these parts will stop other things from progressing for "too long". Frequently this is done in ways that allow for more even progress between tasks, but at the cost of lower overall throughput.
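An application that wants to benefit from those guarantees has to do its own part too, mostly around memory management and scheduling. As a rough sketch (standard POSIX/Linux calls; the priority value 80 is just a placeholder), the usual setup looks like:

    #include <stdio.h>
    #include <sched.h>
    #include <sys/mman.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 80 };

        /* lock all current and future pages so page faults can't stall us */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        /* run under SCHED_FIFO so we preempt normal (SCHED_OTHER) tasks */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        /* ... time-critical work goes here ... */
        return 0;
    }

This works on a stock kernel as well; what -rt changes is how long the kernel itself can keep such a task from running.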
There isn't any academic measurement of the latency guarantees that Linux can provide (stock or with -rt); it all boils down to people doing extensive stress tests (frequently with a specific set of hardware) and determining if the result is fast enough for them.
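The usual tool for those stress tests is cyclictest from the rt-tests package. As a very rough sketch of what the measurement boils down to (a toy version of mine, not the real tool): wake up on a periodic timer under load and record the worst wakeup latency you see:

    #include <stdio.h>
    #include <time.h>

    #define NSEC_PER_SEC 1000000000L

    /* advance an absolute timespec by ns nanoseconds */
    static void ts_add_ns(struct timespec *ts, long ns)
    {
        ts->tv_nsec += ns;
        while (ts->tv_nsec >= NSEC_PER_SEC) {
            ts->tv_nsec -= NSEC_PER_SEC;
            ts->tv_sec++;
        }
    }

    int main(void)
    {
        struct timespec next, now;
        long max_lat_ns = 0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < 10000; i++) {
            ts_add_ns(&next, 1000000L);             /* 1 ms period */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            clock_gettime(CLOCK_MONOTONIC, &now);

            /* how late did we actually wake up? */
            long lat = (now.tv_sec - next.tv_sec) * NSEC_PER_SEC
                     + (now.tv_nsec - next.tv_nsec);
            if (lat > max_lat_ns)
                max_lat_ns = lat;
        }
        printf("worst-case wakeup latency: %ld ns\n", max_lat_ns);
        return 0;
    }

Run something like that for hours on the hardware you actually care about, under the worst load you can generate, and the maximum you observe is the only "guarantee" you really have.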
As noted elsewhere in this topic, stock Linux is good enough for many "hard real-time" tasks; the -rt patches further reduce the maximum latency, making the result usable for more tasks.
Many people use -rt because they think it's faster, even though system throughput is actually lower, but there are also people who use it for serious purposes.
The LinuxCNC project suggests using -rt, and when driving machinery many people report substantially better results with it.