It seems like there are two use cases: traditional real-time systems, more-or-less dedicated machines running custom programs, and the emerging desktop real-time stuff. A process will generally know which sort it is. It further seems to me that the former case wants weak, emergency-only throttling for all tasks as a development aid, while the latter wants much stronger, always-on throttling for unprivileged tasks in particular. (I don't see why unprivileged realtime tasks should have access to "most but not all" of the CPU; the ideal limit would be "you get exactly as much time as the fair staircase scheduler would give you, but since you flipped the realtime bit I will guarantee that you receive it exactly when you ask for it.")

But they've made the default limit ~10s, which seems way higher than desired for ordinary desktop use, *and* it annoys the hard-core real-time folks, who aren't bothered that they'll run uniformly 5% slower; they're scared that the 5% might come in the form of being abruptly descheduled just before a deadline. Still, it's hard to imagine a task actually running 10s straight without sleeping, which is why the discussion is all in terms of API guarantees and abstract junk like that.
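The throttling being discussed is controlled by the kernel.sched_rt_period_us and kernel.sched_rt_runtime_us sysctls: realtime tasks collectively get at most the runtime out of every period, and whatever is left over is guaranteed to ordinary tasks. A minimal sketch of that arithmetic, using illustrative values that match the "5%" figure above (a 1 s period with 95% available to realtime tasks), not an assertion about any particular kernel's defaults:

```python
# Sketch of the RT-throttling arithmetic: realtime tasks may consume at
# most `runtime_us` of CPU time in every `period_us` window; the
# remainder is reserved for non-realtime tasks. Values are illustrative.

def rt_share(runtime_us: int, period_us: int) -> float:
    """Fraction of each period that realtime tasks may consume."""
    if runtime_us < 0:  # -1 conventionally means "throttling disabled"
        return 1.0
    return min(runtime_us, period_us) / period_us

# With 950 ms of runtime per 1 s period, realtime tasks can use 95% of
# the CPU, and ordinary tasks are guaranteed the remaining 5%.
print(f"RT share: {rt_share(950_000, 1_000_000):.0%}")     # RT share: 95%
print(f"Reserved: {1 - rt_share(950_000, 1_000_000):.0%}")  # Reserved: 5%
```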
Of course, in practice what will probably happen is that we won't end up with anyone's idea of the Platonic real-time API, kernels for both sorts of systems will end up tweaked away from the default anyway, desktop systems will use more elaborate schemes with different users segregated into different control groups with different limits, etc., and things will work out.
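For the control-group scheme imagined above, kernels built with CONFIG_RT_GROUP_SCHED expose per-group cpu.rt_period_us and cpu.rt_runtime_us files through the cgroup cpu controller. A hypothetical config sketch (the group name, values, and PID are illustrative, and all of this requires root):

```shell
# Hypothetical sketch: give one group of users a small, hard RT budget.
# Requires root, the cgroup-v1 cpu controller mounted at the usual path,
# and a kernel built with CONFIG_RT_GROUP_SCHED.
mkdir /sys/fs/cgroup/cpu/rt-users                             # illustrative group
echo 1000000 > /sys/fs/cgroup/cpu/rt-users/cpu.rt_period_us   # 1 s period
echo 100000  > /sys/fs/cgroup/cpu/rt-users/cpu.rt_runtime_us  # 100 ms RT budget
echo "$SOME_PID" > /sys/fs/cgroup/cpu/rt-users/tasks          # move a task in
```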
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds.