
What do they mean by "Realtime"?

Posted Sep 29, 2009 18:20 UTC (Tue) by clameter (subscriber, #17005)
Parent article: The realtime preemption mini-summit

It seems that the realtime folks are fuzzy on what they are trying to accomplish. I thought realtime meant ensuring that the kernel always responds to an event within a minimum time interval, but I don't see any discussion of what that minimum time interval is.

From the article it seems that there are numerous features in the kernel that are currently not "Realtime". That presumably means their potential latencies cannot be bounded by any assumed time interval. This includes such basic things as locking.

What is meant by "Realtime" then? Which subset of kernel functionality can be used so that a response is guaranteed within the time interval?

What do they mean by "Realtime"?

Posted Sep 29, 2009 18:56 UTC (Tue) by dlang (subscriber, #313) [Link]

By realtime they don't mean responding in the minimum time.

They are looking for a response in a _predictable_max_ time.

How short that predictable time is determines how suitable the system is for a particular application, but the key thing is to make it predictable.

Right now Linux is not predictable; that is what they are working on fixing.
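The unpredictability described here is easy to observe from userspace. A minimal sketch (plain Python, no real-time configuration assumed; the function name and parameters are illustrative): repeatedly request a 1 ms sleep and record how late each wakeup actually is. On a stock kernel the worst-case overshoot is typically much larger than the average.

```python
import time

def measure_wakeup_latency(iterations=200, sleep_s=0.001):
    """Measure how late time.sleep() wakeups are, in seconds.

    Returns (average_overshoot, worst_overshoot), where the overshoot
    is the extra time beyond the requested sleep. On a non-realtime
    kernel the worst case usually dwarfs the average.
    """
    worst = 0.0
    total = 0.0
    for _ in range(iterations):
        start = time.monotonic()
        time.sleep(sleep_s)
        overshoot = (time.monotonic() - start) - sleep_s
        worst = max(worst, overshoot)
        total += overshoot
    return total / iterations, worst

avg, worst = measure_wakeup_latency()
print(f"average overshoot: {avg * 1e6:.0f} us, worst: {worst * 1e6:.0f} us")
```

The gap between the two numbers is exactly the "predictability" problem: a real-time system cares about bounding the second number, not shrinking the first.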

What do they mean by "Realtime"?

Posted Sep 29, 2009 20:15 UTC (Tue) by niv (guest, #8656) [Link]

Determinism is what's really important to real-time.

It's often confused with low latency, but the two are separate criteria and often conflicting goals that require a trade-off, complicated by the fact that most applications typically want BOTH: determinism AND low latency.

Determinism is most easily understood as the ability to say "this task will take AT MOST n ms". That is, a bounded maximum latency.

In the strictest case, this would mean the following:

when your application requires a maximum latency of 50us, it is preferable for all 5000 iterations of a task execution to take 49us (less than 50us) than it is for 4950 iterations to take 35us and 50 iterations to take 69us.
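To make the arithmetic in that example concrete, here is a small sketch using exactly the numbers from the comment (the variable names are mine): the jittery distribution wins on average latency but violates the 50us bound, while the deterministic one never does.

```python
# Two hypothetical latency distributions, in microseconds, from the
# example above: one deterministic, one with a better average but outliers.
deterministic = [49] * 5000
jittery = [35] * 4950 + [69] * 50

deadline_us = 50

def summarize(samples):
    """Return (average, maximum) latency for a list of samples."""
    return sum(samples) / len(samples), max(samples)

det_avg, det_max = summarize(deterministic)
jit_avg, jit_max = summarize(jittery)

print(f"deterministic: avg {det_avg:.2f} us, max {det_max} us")
print(f"jittery:       avg {jit_avg:.2f} us, max {jit_max} us")

# The jittery run has the lower average (35.34 us vs 49 us), yet it
# misses the 50 us deadline 50 times; for real-time work the
# deterministic run is the preferable one.
assert jit_avg < det_avg
assert det_max < deadline_us < jit_max
```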

For most enterprise applications, the maximum latency is not a MUST_FINISH_BY with severe consequences for failure, but a REALLY_GOOD_TO_FINISH_WITHIN, with low average latency also being important. Such applications can tolerate some outliers (where the maximum latency bound is exceeded) because what they usually need is good average latency as well.

Most OSs are optimized for throughput-driven applications (where average latency is minimized).

Real-time Linux is optimized to offer greater determinism than the stock kernel. Hence the need for greater preemption, including the ability to preempt critical kernel tasks should a higher-priority application become runnable.
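On Linux, the "higher-priority application" side of this is expressed through the POSIX real-time scheduling classes. A hedged sketch using Python's os.sched_setscheduler wrapper (the priority value 50 is an arbitrary illustrative choice; the call normally requires CAP_SYS_NICE or root, so the sketch treats PermissionError as an expected outcome, not a bug):

```python
import os

def try_make_fifo(priority=50):
    """Attempt to put the current process in SCHED_FIFO at `priority`.

    Returns True on success, False if we lack the privilege (the usual
    case for an unprivileged user on a stock kernel).
    """
    try:
        # pid 0 means "the calling process".
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        return False

if try_make_fifo():
    print("running under SCHED_FIFO; a runnable task at this priority")
    print("preempts lower-priority work, and with PREEMPT_RT it can")
    print("preempt much of the kernel itself")
else:
    print("insufficient privilege for SCHED_FIFO (expected without root)")
```

A SCHED_FIFO task runs until it blocks or a higher-priority task becomes runnable; the realtime preemption work is about making the kernel get out of its way quickly when that happens.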

And remember, you can only guarantee real-time requirements for as many threads as you can run concurrently on your system: on an N-core system, you can at most guarantee that N SCHED_FIFO tasks at the same highest priority P will meet their real-time guarantees (depending on a lot of things, handwave, handwave, but you get the general idea). So a lot depends on what the system is running, the overall application solution, and the top-down configuration of the entire system.
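That N-core rule of thumb can be stated as a tiny check. Purely illustrative: the function name and the simplification that only the tasks sharing the single highest priority compete for cores are my assumptions, not anything the scheduler itself enforces.

```python
import os

def top_priority_fits(n_cores, fifo_priorities):
    """Rule-of-thumb check from the comment above: the SCHED_FIFO tasks
    sharing the highest priority can all run concurrently only if there
    are no more of them than there are cores."""
    if not fifo_priorities:
        return True
    top = max(fifo_priorities)
    return fifo_priorities.count(top) <= n_cores

# Four equal-top-priority tasks fit on four cores, but three do not
# fit on two.
print(top_priority_fits(4, [99, 99, 99, 99]))
print(top_priority_fits(2, [99, 99, 99]))
print(f"this machine reports {os.cpu_count() or 1} core(s)")
```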

Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds