
Eliminating rwlocks and IRQF_DISABLED

Posted Dec 7, 2009 11:26 UTC (Mon) by xav (subscriber, #18536)
In reply to: Eliminating rwlocks and IRQF_DISABLED by kleptog
Parent article: Eliminating rwlocks and IRQF_DISABLED

I don't think it's such a good solution. If all the interrupts are related (say, you're receiving data from the network and must process it in an application), distributing the interrupts from CPU to CPU means the application must follow, and its cache with it, so you'll get lots of cache-line bouncing, which is expensive.


Round-robin IRQs

Posted Dec 7, 2009 11:56 UTC (Mon) by kleptog (subscriber, #1183) [Link]

Since you cannot run the app on the same CPU as the one receiving the interrupts, you're going to get a cache bounce *anyway*, right?

But it's worse than that: it's not *an* app. There are several apps that all need to see the same data, and since they run on different CPUs you're going to get a cache bounce for each one anyway.

What you're basically saying is: round-robin IRQ handling is bad because you'll sometimes get six cache bounces per packet instead of five. BFD. Without round-robin IRQs, if the amount of traffic doubles, we have to tell the client we can't handle it.

The irony is that if you buy a more expensive network card you get MSI-X, which lets the card itself distribute the interrupts. You then get the same number of cache bounces as if we had programmed the IO-APIC to do round-robin, but the load is spread more evenly. So we've worked around a software limitation by changing the hardware!

I'm mostly irked that a built-in feature of the IO-APIC is summarily disabled on machines with eight or more CPUs, with the comment "but you don't really want that", when I believe I should at least be given the choice.
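For anyone who does want the choice today: a rough approximation of round-robin distribution can be set up manually by writing per-IRQ CPU bitmasks to /proc/irq/N/smp_affinity (a real interface, though whether the hardware honors it depends on the platform). The sketch below only computes and prints the masks; the IRQ numbers are hypothetical, and the actual write is left commented out since it requires root and real IRQs.

```shell
#!/bin/sh
# Sketch: assign hypothetical IRQs to CPUs round-robin by computing the
# hex bitmask format that /proc/irq/N/smp_affinity expects (bit N = CPU N).
NCPUS=8
irqs="16 17 18 19"   # hypothetical example IRQ numbers

i=0
for irq in $irqs; do
    cpu=$(( i % NCPUS ))                 # next CPU in round-robin order
    mask=$(printf '%x' $(( 1 << cpu )))  # single-CPU bitmask, in hex
    echo "IRQ $irq -> CPU $cpu (mask $mask)"
    # echo "$mask" > /proc/irq/$irq/smp_affinity   # needs root on a real system
    i=$(( i + 1 ))
done
```

This is static pinning rather than true per-interrupt rotation, but it spreads distinct IRQ sources across CPUs, which is the effect MSI-X buys you in hardware.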

Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds