
Adding periods to SCHED_DEADLINE

Posted Jul 21, 2010 17:02 UTC (Wed) by iabervon (subscriber, #722)
In reply to: Adding periods to SCHED_DEADLINE by farnz
Parent article: Adding periods to SCHED_DEADLINE

Actually, it should be trivial to tell that, before the first deadline, we already have 4ms of work and can't allocate another 4ms of work. I think the real issue is that, if you've got two tasks with slightly different periods, they may be fine for the first deadlines, but slowly drift into phase, at which point the processor is oversubscribed until they drift back out of phase.

Say you're doing video processing on a stream at 60 Hz and one at 50 Hz. If you've got tight windows between when the input is available and when the output is required (say 2ms of work out of 3ms of real time), you may be fine for the first 100ms and then have to do both frames at the same time. Or the streams might be staggered such that they never conflict.
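To make the drift concrete, here is a small sketch (my own, not from the thread) that searches for the first pair of frame releases whose work windows overlap too much to fit both jobs; the 3 ms window, 2 ms of work, and 10 ms stagger are illustrative assumptions:

```python
def releases(period, offset, horizon):
    """Release times of a periodic stream within [0, horizon) ms."""
    t = offset
    while t < horizon:
        yield t
        t += period

def first_conflict(p1, p2, offset2, window=3.0, work=2.0, horizon=1000.0):
    """First pair of releases whose windows overlap so much that the
    combined 2*work of CPU time no longer fits in the combined span."""
    r2 = list(releases(p2, offset2, horizon))
    for a in releases(p1, 0.0, horizon):
        for b in r2:
            overlap = min(a + window, b + window) - max(a, b)
            # Demand is 2*work; the available span is 2*window - overlap,
            # so the CPU is oversubscribed once overlap > 2*(window - work).
            if overlap > 2 * (window - work):
                return a, b
    return None

# 60 Hz and 50 Hz streams, with the second stream staggered by 10 ms:
conflict = first_conflict(1000 / 60, 1000 / 50, offset2=10.0)
```

Even though the streams start staggered, the search finds a conflict as soon as the two release patterns drift back into phase, which is exactly the effect described above.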


Posted Jul 21, 2010 20:34 UTC (Wed) by blitzkrieg3 (subscriber, #57873) [Link]

I agree wholeheartedly. The example provided was pretty bad. Clearly, what the scheduler does in the case above is just stagger the CPU time. I don't think any realtime guys consider it unreasonable for a process to have a start-up cost.

The example you provided was exactly what I thought of when I started reading, sort of like how blinkers will phase in and out of sync while sitting in the turning lane.


Posted Jul 21, 2010 20:43 UTC (Wed) by corbet (editor, #1) [Link]

I'll confess to being totally confused by this comment. The example was (deliberately) simple, but I think it's clear and valid: the scheduler simply cannot meet the deadlines in that case. I don't see how that can be fixed by "staggering the CPU time." The scheduler can't change the deadline... What am I missing?


Posted Jul 21, 2010 21:58 UTC (Wed) by Shewmaker (guest, #1126) [Link]

I think their confusion centers around the implications of allowing separate deadline and period values. If the commenters had read Dario's email, they would have seen his reference to the "sporadic task model," which might have helped.

Perhaps another way to describe your example is that it is specifically constructed to show that when a scheduler tries to allow admission of a task at any time, and not just on period boundaries, the problem gets harder. Yes, the scheduler could delay the admission of the new task until the deadline of the existing task, but then you have a periodic scheduler, not one that handles sporadic job arrivals.

I hope that helps.


Posted Jul 21, 2010 22:37 UTC (Wed) by blitzkrieg3 (subscriber, #57873) [Link]

I think their confusion centers around the implications of allowing separate deadline and period values. If the commenters had read Dario's email, they would have seen his reference to the "sporadic task model," which might have helped.

Yeah exactly. I went back and read the email after seeing the response, and the key point is:

one of the tasks can miss its deadline by 3 time units in the worst case (if, and if yes which one, depends on the order with which they arrive).

Obviously, if we have the good fortune of having P2 arrive after 5 time units, we're golden. Before reading the email, I would have assumed that if you're just starting an executable, rather than acting on an event such as a UDP packet or something, then you can sacrifice 5 time units.

I've illustrated exactly what I mean.

For the purposes of explaining how this works and what these values mean, it might make more sense to use a simple example that isn't schedulable no matter what order the tasks arrive in, like:

T_1 = (2, 4, 3)
T_2 = (2, 5, 3)

Even if T_2 isn't scheduled until 2 time units in, using the naive algorithm that I illustrated in the picture, it's clear that they're both going to wake up at the same time after 10 units, at which point you have 3 units in which to schedule 4 units of CPU time.
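Since this arithmetic is easy to get wrong, here is a toy tick-by-tick EDF simulation of those two tasks (my own sketch, not SCHED_DEADLINE's code), reading each tuple as (runtime, period, deadline) and releasing T_2 two time units late, as the comment assumes:

```python
def edf_misses(tasks, horizon):
    """tasks: list of (runtime, period, deadline, first_release).
    Simulate EDF in unit ticks; return times at which deadlines are missed."""
    jobs, misses = [], []
    for t in range(horizon):
        # Release a new job for every task whose period boundary this is.
        for c, p, d, start in tasks:
            if t >= start and (t - start) % p == 0:
                jobs.append([t + d, c])           # [absolute deadline, work left]
        # Any job whose deadline has passed with work remaining is a miss.
        expired = [j for j in jobs if j[0] <= t]
        misses += [t for _ in expired]
        jobs = [j for j in jobs if j[0] > t]
        if jobs:
            job = min(jobs)                        # EDF: earliest deadline first
            job[1] -= 1
            if job[1] == 0:
                jobs = [j for j in jobs if j is not job]
    return misses

misses = edf_misses([(2, 4, 3, 0), (2, 5, 3, 2)], horizon=20)
```

Running this reports a miss at time 15: both tasks wake at t = 12 with 4 units of work between them and only 3 units before the shared deadline, just as described.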


Posted Jul 23, 2010 22:33 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

I think where people really get lost in these explanations is that they appear to describe a system where the scheduler gets to decide exactly whom to give CPU time to and when, or where a process requests a precise CPU schedule for itself for the rest of time.

But in reality, the scheduler must wait for a process to request each chunk of time, and even the process can't know exactly when it will do that.

I presume the parameters actually say, "I am going to be requesting CPU time in the future. There will be at least 10 ms between requests. Each one will be for less than 4 ms, but whatever it is, I'll need it all within 5 ms of when I ask for it."

Based on that, the scheduler is supposed to say, "As long as you stick to those constraints, I guarantee I will fill all your requests" or "I can't guarantee that, so don't bother trying to run."
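That contract maps onto a classic admission test. One simple, pessimistic check for EDF on a single CPU is the density test; this is a sketch of the idea, not the kernel's actual admission-control code:

```python
def can_admit(tasks):
    """Density test: sum of runtime / min(deadline, period) <= 1 is a
    sufficient (but not necessary, hence pessimistic) condition for EDF
    on one CPU.  tasks: iterable of (runtime, deadline, period), in ms."""
    return sum(c / min(d, p) for c, d, p in tasks) <= 1.0

# The contract above: at most 4 ms of work, needed within 5 ms of the
# request, with at least 10 ms between requests.
print(can_admit([(4, 5, 10)]))      # one such task: density 0.8, admit
print(can_admit([(4, 5, 10)] * 2))  # two of them: density 1.6, reject
```

A test like this lets the scheduler give exactly the yes-or-no answer described: guarantee every request, or refuse admission up front.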

I really don't get the one statement about it being OK to miss the deadline by a bounded amount. If the process is OK with getting its 4ms within 10 ms, why did it state a deadline of 5 ms?


Posted Jul 22, 2010 19:57 UTC (Thu) by droundy (subscriber, #4559) [Link]

I think the problem with the example was just that in the case where all the periods are the same (as in the example), the kernel need only look at the first period to figure out if it can accept the request, which wouldn't be particularly hard to figure out. But when there are different periods involved, it could be much harder...
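This point can be made concrete: when every task's deadline equals its period, EDF admission on one CPU reduces to the exact Liu and Layland utilization check, while shorter deadlines make utilization alone insufficient. A sketch of my own, not the patch's code:

```python
def admit_implicit(tasks):
    """Exact EDF admission on one CPU when every task's deadline equals
    its period: total utilization must not exceed 1 (Liu & Layland).
    tasks: iterable of (runtime, period)."""
    return sum(c / p for c, p in tasks) <= 1.0

# With implicit deadlines, these two tasks pass (utilization 0.9)...
print(admit_implicit([(2, 4), (2, 5)]))
# ...but give each a deadline of 3, as in the (2, 4, 3) / (2, 5, 3)
# example earlier in the thread, and they can still miss deadlines:
# the one-line utilization test is no longer sufficient.
```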

Copyright © 2018, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds