
The CHOKe packet scheduler

The CHOKe packet scheduler

Posted Feb 28, 2011 3:19 UTC (Mon) by gmaxwell (guest, #30048)
In reply to: The CHOKe packet scheduler by jthill
Parent article: The CHOKe packet scheduler

An additional read-modify-write cycle out to whatever memory is storing the enqueued descriptors, potentially for every packet ingress? And what happens if you hit a "don't transmit" packet on your next sample? Stall the pipeline and try again?
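
For concreteness, a minimal sketch of the comparison step in question (illustrative structures only, not the kernel's sch_choke) -- on every enqueue, sample one random queued packet and drop both on a flow match:

    #include <stdbool.h>
    #include <stdlib.h>

    /* Hypothetical descriptor layout, just to make the cost visible. */
    struct pkt {
        unsigned int flow_hash;    /* hash of the 5-tuple */
    };

    struct queue {
        struct pkt *slot[1024];
        unsigned int len;          /* occupied slots */
    };

    /* Returns true if the arriving packet must be dropped.  Note the
     * random read (and possible write) of a queued descriptor on every
     * single ingress packet -- the extra memory traffic objected to
     * above. */
    static bool choke_enqueue_test(struct queue *q, const struct pkt *arriving)
    {
        if (q->len == 0)
            return false;

        unsigned int i = (unsigned int)rand() % q->len;
        struct pkt *victim = q->slot[i];

        if (victim && victim->flow_hash == arriving->flow_hash) {
            /* Drop the sampled packet too, here by leaving a hole that
             * the dequeue path must skip: the "don't transmit" packet
             * asked about above. */
            q->slot[i] = NULL;
            return true;
        }
        return false;
    }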

At best it would halve the worst-case throughput of a memory-bandwidth-bottlenecked system (e.g. 10G+ packet forwarding engines).

Compared to RED (which needs just an RNG and a fullness counter) and SFB (whose state can be made 'arbitrarily' small without compromising the queue capacity, and so can be kept on chip), this sounds pretty unfriendly.

Of course, this is not an issue on a small software-driven router handling only a few tens of megabits, but most of those devices could also do real flow-aware queuing, which would be far more fine-grained than CHOKe.



The CHOKe packet scheduler

Posted Feb 28, 2011 3:24 UTC (Mon) by dlang (subscriber, #313)

If you are pumping data out a 10Gb pipe, you are unlikely to need this sort of thing. This is needed when the link you are sending out of is the bottleneck of the communications path.

Allow this to be configured on a per-interface basis and you can disable it on links where the hardware can barely keep up, but still have it in place where you are going to be buffering the data that you send.

The CHOKe packet scheduler

Posted Feb 28, 2011 3:41 UTC (Mon) by gmaxwell (guest, #30048)

I wonder what bit of misinformation makes people think that simply because a link is fast it won't be congested. This isn't the case: fast links tend to be major traffic aggregation points where fairness is a bigger issue, and where insufficient buffering can result in very costly under-utilization.

As I mentioned, in the sort of place where you're using a purely software forwarding path, and where the design of CHOKe wouldn't be a performance impediment, you could also do per-flow queuing, which would be considerably fairer than CHOKe (and potentially much better for loss-sensitive flows), perhaps falling back to CHOKe/SFB if the flow lookups become too expensive.

AFAIK, Linux doesn't even have a true per-flow qdisc, though there have been patches, and SFQ approximates one with acceptable performance. Can you suggest a case where CHOKe would be needed but the SFQ qdisc in Linux would be inappropriate?
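
For reference, a rough sketch of why SFQ only approximates per-flow queuing (illustrative fields, not the actual sch_sfq layout): flows are hashed into a fixed set of buckets, so distinct flows can collide and share a FIFO until the hash is re-keyed.

    #define SFQ_BUCKETS 128

    struct sfq_bucket {
        unsigned int backlog;     /* FIFO shared by all flows hashing here */
    };

    struct sfq {
        struct sfq_bucket bucket[SFQ_BUCKETS];
        unsigned int rr_cursor;   /* round-robin over non-empty buckets */
        unsigned int perturb;     /* hash salt, re-keyed periodically */
    };

    static unsigned int sfq_classify(const struct sfq *s, unsigned int flow_hash)
    {
        /* Two flows whose salted hashes collide share a bucket (and a
         * fate) until the next perturbation; a true per-flow qdisc
         * would give every flow its own queue unconditionally. */
        return (flow_hash ^ s->perturb) % SFQ_BUCKETS;
    }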

The CHOKe packet scheduler

Posted Feb 28, 2011 6:19 UTC (Mon) by jthill (subscriber, #56558)

"At best it would halve the worst-case throughput of a memory-bandwidth-bottlenecked system (e.g. 10G+ packet forwarding engines)."
That implies that internal bandwidth to packet metadata can be more of a bottleneck than transmission bandwidth to the actual packet data, and that one read and one write to a random packet envelope costs roughly as much as allocating and queuing it did in the first place.

So far as the envelope is concerned, that would mean each envelope is written as (some equivalent of) a single cache line. Hauling the envelope back and rewriting a bit would then actually cost the same bandwidth. That seems sensible enough.

I don't think a full interlock on the update is necessary -- that's what LL/SC is for, yes? Just ignore the result from the conditional store: if it works, great; if not, so what? Just as for a duplicate tag, the inbound packet is still dropped, as it should be, and the goal is to approximate, to "differentially penalize", which this still achieves.
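
A sketch of that best-effort update in C11 atomics (the names here are made up): one weak compare-and-swap with the failure deliberately ignored, which is roughly what a single LL/SC pair without a retry loop gives you.

    #include <stdatomic.h>

    struct envelope {
        atomic_uint flags;        /* bit 0: don't-transmit */
    };
    #define ENV_DROP 0x1u

    static void mark_drop_best_effort(struct envelope *e)
    {
        unsigned int old = atomic_load_explicit(&e->flags,
                                                memory_order_relaxed);
        /* One attempt, no retry loop: if another CPU races us the store
         * fails and we just move on -- the arriving packet is dropped
         * either way, so a lost race only costs one missed penalty. */
        atomic_compare_exchange_weak_explicit(&e->flags, &old,
                                              old | ENV_DROP,
                                              memory_order_relaxed,
                                              memory_order_relaxed);
    }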

It's still really hard for me to imagine metadata bandwidth being more of a bottleneck than actual data bandwidth, though.


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds