The CHOKe packet scheduler
Posted Feb 28, 2011 3:05 UTC (Mon) by jthill (subscriber, #56558)
In reply to: The CHOKe packet scheduler by gmaxwell
Parent article: The CHOKe packet scheduler
"not hardware friendly"

Router hardware is so taut that lighting a don't-transmit-me flag in the packet header is excessive?
Posted Feb 28, 2011 3:19 UTC (Mon) by gmaxwell (guest, #30048)
At best it would halve the worst-case throughput of a memory-bandwidth-bottlenecked system (e.g. 10G+ packet forwarding engines).
Compared to RED (which just needs an RNG and a fullness counter) and SFB (whose state can be made "arbitrarily" small without compromising the queue capacity, and so can be kept on chip), this sounds pretty unfriendly.
Of course, this is not an issue on some small software-driven router handling only a few tens of megabits, but most of those devices could also do real flow-aware queuing, which would be far more fine-grained than CHOKe.
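For concreteness, here is a minimal sketch of the enqueue-time check that makes CHOKe memory-hungry, assuming a simple array of descriptor pointers. All names here are hypothetical; this illustrates the published algorithm, not the kernel's sch_choke code.

    #include <stdbool.h>
    #include <stdlib.h>

    struct pkt {
        unsigned int flow_hash;   /* precomputed hash of the flow 5-tuple */
        bool dropped;             /* "don't transmit me" flag */
    };

    struct queue {
        struct pkt *slot[1024];   /* queued packet descriptors */
        unsigned int len;         /* current occupancy */
        double drop_prob;         /* RED-style drop probability */
    };

    /* Returns true if the arriving packet should be dropped. */
    static bool choke_enqueue_check(struct queue *q, const struct pkt *arriving)
    {
        if (q->len == 0)
            return false;

        /* The step at issue: read one randomly chosen queued
         * descriptor.  This is an extra random-access memory read
         * per arriving packet, which RED never needs. */
        unsigned int i = (unsigned int)rand() % q->len;

        if (q->slot[i]->flow_hash == arriving->flow_hash) {
            /* Same flow: penalize it by dropping the queued victim
             * too.  The paper unlinks it; flagging it for the
             * dequeue path to skip, as suggested above, is the
             * cheaper variant.  Either way it is a second random
             * access, this time a write. */
            q->slot[i]->dropped = true;
            return true;
        }

        /* Different flow: fall back to a RED-style coin flip. */
        return (double)rand() / RAND_MAX < q->drop_prob;
    }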
Posted Feb 28, 2011 3:24 UTC (Mon) by dlang (guest, #313)
Allow this to be configured on a per-interface basis, and you can disable it on links where the hardware can barely keep up while still having it in place where you are going to be buffering the data that you send.
Posted Feb 28, 2011 3:41 UTC (Mon) by gmaxwell (guest, #30048)
As I mentioned, in the sort of place where you're using a purely software forwarding path, and where the design of CHOKe wouldn't be a performance impediment, you could also do per-flow queuing, which would be considerably more fair than CHOKe (and potentially much better for loss-sensitive flows), perhaps falling back to CHOKe/SFB if the flow lookups become too expensive.
AFAIK, Linux doesn't even have a true per-flow qdisc, though there have been patches, and SFQ approximates one with acceptable performance. Can you suggest a case where CHOKe would be needed but the SFQ qdisc in Linux would be inappropriate?
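To make the contrast concrete, per-flow queuing in the SFQ style amounts to hashing each flow into its own bucket and serving the buckets round-robin, so fairness comes from the queue structure rather than from probabilistic drops. A minimal sketch with hypothetical types; the real sch_sfq adds hash perturbation, per-bucket limits, and more:

    #include <stddef.h>

    #define SFQ_BUCKETS 128

    struct pkt {
        unsigned int flow_hash;
        struct pkt *next;
    };

    struct bucket {
        struct pkt *head, *tail;
    };

    struct sfq {
        struct bucket b[SFQ_BUCKETS];
        unsigned int rr;          /* round-robin cursor */
    };

    static void sfq_enqueue(struct sfq *q, struct pkt *p)
    {
        /* Each flow hashes to one bucket; a heavy flow can only
         * fill its own bucket, never the others. */
        struct bucket *bk = &q->b[p->flow_hash % SFQ_BUCKETS];
        p->next = NULL;
        if (bk->tail)
            bk->tail->next = p;
        else
            bk->head = p;
        bk->tail = p;
    }

    static struct pkt *sfq_dequeue(struct sfq *q)
    {
        /* Serve non-empty buckets in turn, so every active flow
         * gets a share of the link regardless of how many packets
         * any one flow has queued. */
        for (unsigned int n = 0; n < SFQ_BUCKETS; n++) {
            struct bucket *bk = &q->b[q->rr];
            q->rr = (q->rr + 1) % SFQ_BUCKETS;
            if (bk->head) {
                struct pkt *p = bk->head;
                bk->head = p->next;
                if (!bk->head)
                    bk->tail = NULL;
                return p;
            }
        }
        return NULL;
    }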
Posted Feb 28, 2011 6:19 UTC (Mon) by jthill (subscriber, #56558)
"At best it would halve the worst-case throughput of a memory-bandwidth-bottlenecked system (e.g. 10G+ packet forwarding engines)."

That implies that internal bandwidth to packet metadata can be more of a bottleneck than transmission bandwidth to the actual packet data, and that one read and one write to a random packet envelope cost roughly as much as allocating and queuing it did in the first place.
So far as the envelope is concerned, that would mean each envelope is written as (some equivalent of) a single cache line. Hauling the envelope back and rewriting a bit would then actually cost the same bandwidth. That seems sensible enough.
I don't think a full interlock on the update is necessary; that's what LL/SC is for, yes? Just ignore the result from the conditional store: if it works, great; if not, so what? Just as for a duplicate tag, the inbound packet is still dropped, as it should be, and the goal is to approximate, to "differentially penalize", which this still achieves.
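Something like the following, using a C11 compare-and-swap as a stand-in for LL/SC; the descriptor layout and flag bit are hypothetical:

    #include <stdatomic.h>
    #include <stdint.h>

    #define PKT_FLAG_DROPPED (UINT32_C(1) << 0)

    struct pkt_envelope {
        _Atomic uint32_t flags;
        /* ... rest of the descriptor ... */
    };

    /* One LL/SC-style attempt to light the drop flag.  If another
     * CPU changed the word in the meantime, the store fails and we
     * just move on: the arriving packet is dropped either way, and
     * CHOKe only needs to penalize heavy flows statistically, not
     * exactly. */
    static void mark_dropped_best_effort(struct pkt_envelope *e)
    {
        uint32_t old = atomic_load_explicit(&e->flags, memory_order_relaxed);

        (void)atomic_compare_exchange_weak_explicit(
                &e->flags, &old, old | PKT_FLAG_DROPPED,
                memory_order_relaxed, memory_order_relaxed);
    }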
It's still really hard for me to imagine metadata bandwidth being more of a bottleneck than actual data bandwidth, though.
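For scale, a rough back-of-envelope, assuming 64-byte descriptors and minimum-size Ethernet frames at 10 Gb/s (both numbers are my assumptions, not from this thread):

    minimum frame on the wire: 64 B + 20 B preamble/gap = 84 B
    packet rate:  (10 Gb/s / 8) / 84 B            ≈ 14.9 Mpps
    enqueue:      one 64 B descriptor write       ≈ 0.95 GB/s
    CHOKe check:  one extra random 64 B read plus
                  a possible write-back           ≈ 0.95 to 1.9 GB/s more

If descriptor bandwidth really is the binding constraint, that extra random access roughly doubles the traffic, which is consistent with the "halve the worst-case throughput" estimate above.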