
Imaginary losses

Posted Mar 27, 2008 20:40 UTC (Thu) by pphaneuf (subscriber, #23480)
In reply to: Imaginary losses by alankila
Parent article: Striking gold in binutils

You get a once-per-function setup and teardown that registers destructors (so the cost is amortized better and better the longer the function). If there are no destructors, it can simply be left out. Even a "just crash" approach involves one test and branch per possible failure point. Roughly, the contrast looks like this (an illustrative sketch, not code from the article; the file-reading functions are made up for the example):
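
    #include <cstdio>
    #include <fstream>
    #include <string>

    // Return-code style: one test and branch per possible failure point,
    // and the cleanup is duplicated on every error path.
    int process_file(const char *path) {
        FILE *f = std::fopen(path, "r");
        if (!f)
            return -1;                     // branch 1
        char buf[256];
        if (!std::fgets(buf, sizeof buf, f)) {
            std::fclose(f);                // cleanup repeated here
            return -1;                     // branch 2
        }
        std::fclose(f);
        return 0;
    }

    // Exception style: the destructor is registered once for the whole
    // function, and the success path contains no tests at all.
    void process_file(const std::string &path) {
        std::ifstream f(path);
        f.exceptions(std::ios::failbit | std::ios::badbit);
        std::string line;
        std::getline(f, line);             // throws on failure; ~ifstream cleans up
    }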

On modern processors, branches are expensive because of mis-predictions. I suspect that's one of the reasons profile-driven optimizers can be so good: they can figure out which side of a branch is more likely to be taken. In the case of error handling, which branch is more likely would be readily obvious to a human, but is harder for a compiler to deduce (see the __builtin_expect annotation available in GCC, used by the Linux kernel's likely()/unlikely() macros). The kernel's wrappers look like this (the surrounding functions are invented for the example):
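
    // GCC's branch-prediction hint, wrapped the way the Linux kernel does it.
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    int read_config(const char *path);     // hypothetical helper

    int init(const char *path) {
        int err = read_config(path);
        if (unlikely(err))                 // tell the compiler the error path
            return err;                    // is cold, so the hot path becomes
        return 0;                          // the straight-line fall-through
    }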

Code size increases with error-handling code, often with "dead spots" that get jumped over when there are no errors, which on today's faster and faster machines means increased instruction cache pressure, less locality, and so on. One illustration of the effect (a sketch assuming GCC; the functions are made up): marking the error path cold lets the compiler move it out of the hot instruction stream entirely, which is exactly the locality problem those dead spots create.
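
    // The cold error path is hoisted out of line (GCC places it in
    // .text.unlikely), so the hot loop stays dense in the i-cache.
    __attribute__((cold, noinline))
    static void report_failure(int code) {
        (void)code;                        // logging, cleanup, abort...
    }

    int sum_all(const int *vals, int n, int *out) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            if (__builtin_expect(vals[i] < 0, 0)) {   // the "dead spot"
                report_failure(vals[i]);
                return -1;
            }
            total += vals[i];
        }
        *out = (int)total;
        return 0;
    }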

I don't doubt that C++ exceptions might be more expensive when they do happen, but the thing with exceptions is that they don't happen often, so the common, non-exceptional path is the most interesting case.


