Posted Mar 3, 2012 1:39 UTC (Sat) by pflugstad (subscriber, #224)
Parent article: Speeding up D-Bus
Caveat: I honestly don't know much about what DBus does or how it's used, so I'm conjecturing here.
Why does it feel like they're going at this problem the wrong way?
Is the problem really the amount of time it takes to (essentially) ping-pong a message between processes? That already seems to be down at the microsecond level (10,000 ping-pong messages over 3.8 seconds in one of the linked articles, or about 380 microseconds per round trip), far beyond the point where humans would notice even an order-of-magnitude speed-up. And Linux context switching is also ridiculously fast and well optimized.
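To put a number on that "microsecond level" claim: the linked article's benchmark setup isn't shown, but a minimal sketch of what such a ping-pong measurement might look like (over a plain Unix socketpair, not D-Bus itself, so the D-Bus marshalling and daemon hop are deliberately missing) is something like:

```python
import os, socket, time

def pingpong_roundtrips(n=10_000, payload=b"x" * 64):
    """Mean round-trip time, in seconds, for n ping-pongs between two processes."""
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    pid = os.fork()          # Unix-only; the child just echoes bytes back
    if pid == 0:
        a.close()
        while True:
            data = b.recv(4096)
            if not data:
                os._exit(0)  # parent closed its end; we're done
            b.sendall(data)
    b.close()
    start = time.monotonic()
    for _ in range(n):
        a.sendall(payload)
        a.recv(4096)         # wait for the echo before sending the next ping
    elapsed = time.monotonic() - start
    a.close()
    os.waitpid(pid, 0)
    return elapsed / n

# The article's figure works out to 3.8 s / 10,000 = 380 microseconds per round trip.
```

Each round trip here costs two context switches plus two trips through the kernel socket layer, which is the floor any D-Bus transport optimization is competing against.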
Maybe they need to really look at the interactions that seem to take a long time? Or maybe they need to look at the content of the messages: instead of sending five messages to update five semi-related things, it might be better to send one message that updates them all.
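The batching idea is the usual "amortize the per-message cost" trick. A hypothetical sketch (the message shapes and the `send` callback are made up for illustration, not a real D-Bus API):

```python
def send_updates_batched(send, updates: dict):
    # One message carrying all the changed properties: one wakeup, one
    # round of marshalling, regardless of how many things changed.
    send({"type": "PropertiesChanged", "changed": updates})

def send_updates_unbatched(send, updates: dict):
    # One message (and potentially one receiver wakeup) per property.
    for key, value in updates.items():
        send({"type": "PropertyChanged", "key": key, "value": value})
```

With five semi-related updates, the unbatched version pays the fixed per-message overhead five times; the batched one pays it once.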
Or maybe the problem is something else entirely, like the way applications listen for and respond to messages. Are applications polling the sockets instead of blocking on them in some way?
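For what I mean by blocking rather than polling: a client that spins in a non-blocking recv() loop burns CPU and thrashes the scheduler, while one that sleeps in the kernel until data actually arrives costs nothing while idle. A minimal sketch of the blocking style, using select() on a plain socket (again, not D-Bus itself):

```python
import select, socket

def wait_for_message(sock, timeout=1.0):
    # Sleep in the kernel until the descriptor is readable (or the
    # timeout expires) instead of spinning in a recv() loop.
    ready, _, _ = select.select([sock], [], [], timeout)
    return sock.recv(4096) if ready else b""
```

Real event loops would use poll() or epoll and handle partial reads, but the point is the same: the process is descheduled until there is actually something to do.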
Seriously: what kind of interactions are going on over D-Bus where speeding them up by even 1.8x (from the same linked article) is really going to matter?
One thing I do have experience with: in almost all optimization problems, changing the fundamental algorithm (say, going from O(n^3) to O(n)) will do far more than any point optimization.
And given how fast IPC and context switching in Linux is already, this whole discussion feels like a point optimization and they're not really getting at the root of the problems.