
Shrinking the kernel with gcc


Posted Jan 23, 2004 10:33 UTC (Fri) by hingo (guest, #14792)
In reply to: Shrinking the kernel with gcc by dw
Parent article: Shrinking the kernel with gcc

This is the kind of thing geeks masturbate over all night, but is there anything wrong with that?

As I see it, it works like this:
1) Some geek decides, just for fun, to see how much horsepower he can still squeeze out of the kernel. Note that this has nothing to do with Red Hat, the commercial IT world, or anything like that; it is just curiosity.
2) Someone at Red Hat reads a web page or discussion where he learns that, by turning on three switches in gcc, he can produce a kernel that performs 3% faster (or whatever; I realize the gain in this article is size, not speed). He also learns that there will be a side effect with respect to binary-only modules. So it's a simple yes/no decision for him, involving no masturbation at all!

Let's continue: since we know that Red Hat is not interested in the desktop anyway (they recommend using Windows for that), it really looks to me like he should go ahead with it. And let's face it, he might have some personal and political preferences weighing in too.

As a more general question, I think we all agree that, whether nVidia agrees or not, from an idealistic and purely engineering point of view the optimal situation would be for all source code to be open. And mind you, this is not a political statement! It is a (technical) fact that this guy has just proven: if you have access to all the source code, you are able to do things that will make the kernel better, whereas if you don't have access to all the source, the same tricks will only break things.

The question then facing us is: should we strive for the technically optimal solution, or strive to maximize compatibility in a world where not everyone is (for whatever reason) willing to open up their sources? Not surprisingly, most kernel developers are more interested in going for the technically optimal solution than in settling for a policy that has other benefits but might never get them there. There are at least the following considerations supporting that:
- For people like Linus, the technically optimal solution is what they are interested in in the first place. (Linus never wanted to make a kernel that nVidia could do something with; he just wanted to make a kernel.)
- Some people might think that Linux and FOSS currently have more power/momentum/whatever than nVidia and all the other binary-only vendors combined, so there is really no reason to compromise. Just stick to the strategy, and the others will have to surrender. It might hurt at first, but in the end it will lead to the optimal solution you are striving for.
- For Red Hat (et al.) it could also be a wise strategy to avoid becoming too dependent on any specific hardware manufacturer. If people get too comfortable with nVidia's binary-only drivers, we could one day live in a world where we are locked not into MS software, and not into RH software (because it's open source), but into nVidia's drivers, their release schedules, and so on. Red Hat probably doesn't want to be in a situation where it has to ask nVidia (and a dozen others) for permission before making a simple technical decision.



Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds