A huge library size for simple functionality is a clear sign of badly written or designed code, with all the downsides that come with that: inefficient, unnecessarily complex code that is hard to debug and hard to optimise properly.
Or perhaps it's just the code needed for the task at hand. How can you distinguish these two cases?
Sending a short message to multiple processes should be very fast, and we agree that isn't what makes dbus so slow. What makes it slow is all the other things it does for no good reason; what exactly all of that is, I don't know.
Let's summarize the discussion:
1. You have no idea about the dbus design.
2. You have no idea about the task dbus is trying to solve.
3. Yet “you know for sure” it's a bloated pig that thrashes caches and that this is why it's slow.
Real feat of solid engineering thought! Not.
Compare libpthread from GLibC and bionic, for example. GLibC's version is about three to four times larger, yet in a lot of cases it's 10-100 times faster (I'm not joking).
Sometimes you need a lot of code because the task you are trying to solve requires a lot of code. Sometimes it's just legacy. To say that bloat indeed affects performance you need benchmarks, not handwaving.
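To make the "benchmarks, not handwaving" point concrete, here is a minimal sketch of the kind of baseline measurement such a claim needs: the raw cost of ping-ponging a short message over a Unix socket pair between two processes. A dbus-daemon round trip could then be compared against this number. This is an illustrative microbenchmark only (no warmup, no CPU pinning, no statistics); the function name and iteration count are my own choices, not anything from dbus.

```python
# Baseline IPC microbenchmark: average round-trip time for a short
# message over an AF_UNIX socket pair between parent and child.
# Illustrative sketch only -- a real benchmark needs warmup, pinning,
# and proper statistics.
import os
import socket
import time

def round_trip_ns(iterations=10_000, payload=b"ping"):
    a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    pid = os.fork()
    if pid == 0:
        # Child: echo every message straight back.
        a.close()
        for _ in range(iterations):
            b.sendall(b.recv(64))
        os._exit(0)
    b.close()
    start = time.perf_counter_ns()
    for _ in range(iterations):
        a.sendall(payload)
        assert a.recv(64) == payload
    elapsed = time.perf_counter_ns() - start
    os.waitpid(pid, 0)
    a.close()
    return elapsed // iterations  # average nanoseconds per round trip

if __name__ == "__main__":
    print(f"avg round trip: {round_trip_ns()} ns")
```

On typical Linux desktops this lands in the low microseconds per round trip, which is the floor any daemon-mediated message bus necessarily sits above.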
If dbus-daemon were lean and mean, this extra ping-ponging wouldn't be very noticeable and wouldn't happen as much.
And this is what I'm talking about: why are you so sure dbus-daemon thrashes everything in the CPU cache if its size is much smaller than the CPU cache? Do you have any evidence that your outrageous theories have any relation to what happens in practice? Or are you just writing rubbish because you can?
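The "smaller than the CPU cache" claim is easy to sanity-check rather than argue about. Here is a rough sketch that reads the per-level cache sizes Linux exposes in sysfs and compares them against a binary's on-disk size. The sysfs layout is Linux-specific, and `/usr/bin/dbus-daemon` is an assumed install path that varies by distro; on-disk size also only approximates the resident text/data footprint, so treat this as a first-order check, not a proof.

```python
# Rough sanity check: does a given binary even fit in the last-level
# cache? Linux-specific (sysfs); /usr/bin/dbus-daemon is an assumed
# path and may differ on your distro.
import glob
import os

def cache_sizes_bytes():
    """Return {cache level: largest size in bytes} from sysfs."""
    sizes = {}
    for path in glob.glob("/sys/devices/system/cpu/cpu0/cache/index*"):
        with open(os.path.join(path, "level")) as f:
            level = int(f.read())
        with open(os.path.join(path, "size")) as f:
            text = f.read().strip()  # e.g. "32K" or "8M"
        mult = 1024 * 1024 if text.endswith("M") else 1024
        sizes[level] = max(sizes.get(level, 0), int(text.rstrip("KM")) * mult)
    return sizes

def fits_in_llc(binary="/usr/bin/dbus-daemon"):
    """True/False if comparable, None if this machine can't tell us."""
    sizes = cache_sizes_bytes()
    if not sizes or not os.path.exists(binary):
        return None
    return os.path.getsize(binary) <= sizes[max(sizes)]
```

Note that even when the daemon itself fits comfortably in cache, context switches between clients and the daemon can still evict each other's working sets, which is a different (and measurable) effect from "the binary is too big".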
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds