"does well to be conservative about adopting a solution"?
Posted Apr 6, 2012 20:15 UTC (Fri) by khim
In reply to: "does well to be conservative about adopting a solution"?
Parent article: A turning point for GNU libc
that's a valid argument if the change improves something, but if (as in this case) the change doesn't give any performance improvement, the only 'advantage' of the change is that it breaks existing programs.
Wow. Just… wow. I guess it's time to ask the usual question: are you an idiot or do you just play one on TV? You were given a link; now, please go and read it.
People tend to see the message from Linus (which says "I bothered to _measure_ the speed, and according to my measurements, glibc wasn't any faster than my trivial version and was likely slower") and immediately switch to a "Linus is god, GLibC developers are stupid" mindset. Which is not justified at all.
Because of course the very next sentence in the same paragraph ("but I only tested two cases") flies right over their heads, and the detailed explanations ("At last on Core2 we gain 1.83x speedup compared with original instruction sequence" and "Based on our micro-benchmark small bytes from 1 to 127 bytes, we got up to 2X improvement, and up to 1.5X improvement for 1024 bytes on Corei7") are totally ignored or, at best, hand-waved away with a "hopefully Linus has answered this one" appeal to authority.
This is the same never-ending fight between pragmatists and standards hairsplitters. Linus, ever the pragmatist, never rebuffed the speedup claim when it was pointed out that he was incorrect (good for him: the speedup is very much hardware-dependent and was simply unobservable on the hardware he used, but it's quite real and measurable on different hardware). Instead he said a quite sensible thing from the pragmatist's POV: the new version of memcpy may be more efficient, but it's more complex as well, so the usual excuse ("we have a trivial and fast memcpy and a slower, but more robust memmove") no longer applies. But the GLibC and GCC developers, ever the nitpickers, say that this makes no sense: the spec most definitely says that "if copying takes place between objects that overlap, the behaviour is undefined" (and even that other OS agrees), so why should they add any such checks to code which works fine in the standard-mandated case?
Note: the GLibC guys rolled back the change for old binaries pretty quickly when it was found that their improvement broke real programs. After that point it was no longer about “stable ABI” and “backward compatibility”, but about “doing the right thing”.
I think the end result (old programs get the old behavior, and new ones should finally fix their bugs and behave according to the documentation) makes sense in this context. You can say that this is what should have happened from the beginning, but it was not at all obvious that so many programs actually depend on the broken behavior of the old memcpy.