
Shrinking the kernel with gcc

Posted Jan 22, 2004 19:32 UTC (Thu) by dw (subscriber, #12017)
In reply to: Shrinking the kernel with gcc by rknop
Parent article: Shrinking the kernel with gcc

Hey there!

I'm by no means a kernel developer either, but I do believe the effects of such a fundamental change to the kernel are very important. I am a developer by day, and by night I also spend a hell of a lot of time supporting very cutting-edge uses of Linux in IT.

This is the sort of crap that gives Linux a bad name.

Why do we need this change? I mean, really. I can see uses for it in the embedded sector, but in the real world (read: the world where the average server comes equipped with 2GB of RAM) it has very little benefit.

I am fully aware of the potential correlation between size and performance, but I remain unconvinced that more than 0.001% of real-world Linux users would find this change enabling, or even useful. I think it will find its most enthusiastic acceptance among nerds who spend day and night trying to squeeze out that last 1KB of free memory.

This change is the sort of thing geeks masturbate over at night, but it has very little real-world gain. Nice idea, guys [Red Hat], but why the hell break binary modules? 2004 is meant to be the year of the Linux desktop -- yeah, right.

If Red Hat goes ahead and releases broken-without-a-cause kernels on the world, we can be sure that the average Linux beginner is more likely to throw his hands up in horror when he discovers he has to build and replace a core operating system component just to get OpenGL to work.

"Linux compatible" peripherals will become splintered yet more. Now not only must you match up your kernel major version (yes, there are still 2.2-only devices), but now you have to match up procedure calling strategies too. Do you think the average desktop user cares about this?

The reactions to the proposed patch sicken me slightly too; I see that some people still have trouble accepting that Linux is indeed used commercially, and in environments where diehard open source fundamentalism does not go down well.

If it weren't for the fact that I'm earning money from Linux, I'd have probably found an alternative with a more realistic community. Maybe even a commercial operating system.

As ever, my opinions are my own, of limited scope, and potentially (probably) naive.


David.
Life? Don't talk to me about life.



Shrinking the kernel with gcc

Posted Jan 22, 2004 20:46 UTC (Thu) by oak (guest, #2786) [Link]

In earlier decades, when code was small and fast, it was pretty common to pass arguments in registers instead of on the stack. Besides saving memory, it should also be faster, which may make people more interested in it.

The register passing is what makes things binary-incompatible; the other changes should be usable without problems.
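
For the curious, here is a minimal sketch of the kind of calling-convention change being discussed, assuming gcc's regparm mechanism on i386 (the -mregparm=3 switch, or the equivalent attribute below). The macro and function names are only illustrative, not taken from the kernel source:

    /* With register passing enabled, the first three integer arguments
     * travel in %eax, %edx and %ecx instead of on the stack, so call
     * sites are smaller and a little faster.  A pre-built binary module
     * that still expects stack-passed arguments would call such a
     * function with garbage, which is the binary incompatibility
     * under discussion. */
    #define regargs __attribute__((regparm(3)))

    static regargs int add3(int a, int b, int c)
    {
            return a + b + c;
    }

    int caller(void)
    {
            /* Built with register passing, nothing is pushed on the
             * stack for this call; built without it, all three
             * arguments are pushed.  Mixing the two conventions in one
             * kernel image is what breaks binary-only modules. */
            return add3(1, 2, 3);
    }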

Shrinking the kernel with gcc

Posted Jan 23, 2004 2:01 UTC (Fri) by rknop (guest, #66) [Link]

> The reactions to the proposed patch sicken me slightly too; I see that some people still have trouble accepting that Linux is indeed used commercially, and in environments where diehard open source fundamentalism does not go down well.

You say that as if there were something wrong with it.

As the person you seem to deride as a diehard open source fundamentalist, let me put it this way. I have absolutely nothing against Linux being used commercially and being used easily. However, I would be MUCH happier if Linux could remain free while doing so. If it gets to the point that we Linux geeks can't use Linux any more without resorting to binary-only drivers, well, then, in a sense Linux will have been "taken away" by the commercial interests who don't give a rat's ass about free software.

And I think that a commercial interest that says "we just want it to work, we don't care about free vs. binary-only drivers" is very short-sighted. Corporate America is waking up to the many advantages of free software right now -- among them, avoiding vendor lock-in. But if in their adoption they water down too much of the philosophy that keeps free software free, many of those advantages may go away.

It is no accident that the first operating system to create credible competition for Microsoft is free. The fact that it is free is probably Linux's greatest strength, more so than its technical strengths. If that goes away, much of the advantage and attractiveness of Linux will go away too, although it may take time for everybody to realize it.

So go on feeling grouchy that you're in a community of hippie commies, and wishing that you weren't stuck making money from Linux so that you could find a closed-source community more to your tastes. In the meantime, those of us who really want to keep open source free will by and large recognize that it is for practical reasons that we want to do so.

-Rob

Shrinking the kernel with gcc

Posted Jan 23, 2004 8:34 UTC (Fri) by Duncan (guest, #6647) [Link]

> Now not only must you match up your kernel major version,
> but now you have to match up procedure calling strategies too.
> Do you think the average desktop user cares about this?

The practical effect on the NVidias of the world will be small, in any case. I recently swapped out an NVidia card for an ATI, because the libre ATI driver does multiple video outputs per card while the libre NVidia (nv) driver does not, and because I was tired of the hassle of recompiling their driver every time I recompiled the kernel. I was also tired of warnings about using an incompatible GCC when I had compiled both with the SAME GCC -- all because of their stupid binary-only blob, built with some GCC lagging behind the bleeding edge I'm accustomed to running. I've vowed never to go back to hardware requiring binary-only driver solutions.

Anyway, having run NVidia's proprietary-ware solution for some time, I know they release binary editions for each of the release versions of the major distributions, often several for each -- one for each kernel of each release. This wouldn't change. They'd still end up having to make a different binary kernel driver available for each kernel of each release, or force users to do their own compiling. Nothing different there. The only difference would be one more thing they'd have to check when doing their own compiling behind the publicly available drivers, and many of the kernel developers appear to be with me in not caring about that. They are building their own boat; let them deal with its seaworthiness!

I like the stricter controls in the 2.6 kernel on the calls available to GPLed vs. non-GPLed modules as well, and am looking forward to 2.7 and beyond making proprietary-ware driver suppliers' lives even harder. By the time 2.7 goes stable as 2.8 or 3.0, ideally, Linux will be used by a large enough percentage of the computing world that hardware manufacturers will have to think twice about passing up the Linux segment, and open source will be driving the bargain on open source terms. That day is coming. Hardware suppliers have been used to marching to the tune of MS; I can't believe they'll find it any MORE difficult marching to the libre tune!

Duncan

(Not a kernel hacker either, just a user who does his own kernel compiling, and who got tired of having to do a separate compile for all the hardware that wants to force me to... there's other hardware out there, and that's what I'll buy with **MY** few $$!)

Shrinking the kernel with gcc

Posted Jan 23, 2004 10:33 UTC (Fri) by hingo (guest, #14792) [Link]

This is the sort of thing geeks masturbate over at night -- but is there anything wrong with that?

As I see it, it works like this:
1) Some geek decides, just for fun, to see how much horsepower he can still manage to squeeze out of the kernel. Note that this has nothing to do with Red Hat, the commercial IT world or anything like that -- just curiosity.
2) Someone at Red Hat reads a web page/discussion where he learns that by turning on three switches in gcc he will have produced a kernel that performs 3% better (or whatever -- I realize the 3% in this article is size, not speed). He also learns that there will be a side effect wrt binary-only modules. So it's a simple yes/no decision for him, involving no masturbation at all!

Let's continue: since we know that Red Hat is not interested in the desktop anyway (they recommend using Windows for that), it really looks to me like he should go ahead with it. And let's face it, he might have some personal and political preferences weighing in too.

As a more general question, I think we all agree that, whether nVidia agrees or not, from an idealistic and purely engineering point of view the optimum situation would be for all source code to be open. And mind you, this is not a political statement! It's a (technical) fact that this guy has just proven: if you have access to all the source code, you are able to do things that make the kernel better, whereas if you don't have access to all the source, the same tricks will only break things.

The question then facing us is: should we strive for the technically optimum solution, or strive to maximize compatibility in a world where not everyone is (for whatever reason) willing to open up their sources? Not surprisingly, most kernel developers are more interested in going for the technically optimum solution than in settling for a policy that has other benefits but might never get you there. There are at least the following considerations supporting that:
- For people like Linus, that is what they're interested in in the first place. (He never wanted to make a kernel that nVidia could do something with; he just wanted to make a kernel.)
- Some people might think that Linux and FOSS currently have more power/momentum/whatever than nVidia and all the other binary people combined, so there is really no reason to compromise. Just stick to the strategy, and the others will have to surrender. It might hurt at first, but in the end it will lead to the optimum solution you are striving for.
- For Red Hat (et al.) it could also be a wise strategy to avoid being too dependent on some specific hardware manufacturer. If people get too comfortable with nVidia's binary-only drivers, we could one day live in a world where we are not locked into MS software, and not into RH software (because it's Open Source), but locked into nVidia's drivers and their release schedules etc. RH probably doesn't want to be in a situation where it has to ask nVidia (and a dozen others) for permission before making a simple technical decision.

henrik

Shrinking the kernel with gcc

Posted Jan 24, 2004 7:02 UTC (Sat) by bronson (subscriber, #4806) [Link]

Are you "making money" using a kernel that runs closed-source drivers? If so, then why??? Just go buy a video card that has open-source drivers and sleep better at night. Problem solved.

Back when Linux had fairly poor hardware support (1996), I was very much in favor of closed source drivers. I remember wanting Linus to freeze an ABI so drivers wouldn't break with every kernel upgrade. I was perfectly happy running random code.

Times have changed. Hardware support in Linux is _excellent_. And just imagine how bloated and incompetent the kernel would be if a driver ABI had been established. RCU, the elevator, preempt, etc. -- all these changes would have been effectively impossible (not without _massive_ amounts of cruft, anyway).

Watching Linux over the past few years has convinced me: binary-only drivers seriously impede development. They're also a security risk (heck, I might even consider them a downright threat). New hardware is cheap. There's just no need to put up with them anymore.

Shrinking the kernel with gcc

Posted Jan 25, 2004 13:29 UTC (Sun) by rknop (guest, #66) [Link]

The problem with this is in two areas: 3D video cards and wireless cards. In both cases, it's getting hard to buy a new card that has open-source support. ATI used to be the company of choice, since you could get open-source DRI drivers for its cards. However, the last one (I believe) with open-source 3D support is the Radeon 9200.

The binary-only drivers that many companies put out for these cards do work well enough for most users. This means that the companies are seen as "supporting Linux", and even many Linux users don't think there is a point in pushing them to allow open-source drivers.

(Plus, the legal environment has changed. It's more dangerous now to reverse engineer than it used to be, *and* Linux is high-profile enough that you're probably more likely to get sued for doing it, and that suit is more likely to succeed.)

-Rob

Shrinking the kernel with gcc

Posted Jan 29, 2004 20:33 UTC (Thu) by khim (subscriber, #9252) [Link]

Hmm. To me that sounds like more reason to make the use of closed-source drivers difficult, not less.

Shrinking the kernel with gcc

Posted Jan 24, 2004 15:42 UTC (Sat) by dion (guest, #2764) [Link]

> Why do we need this change?

Didn't you read the article?

Smaller code means faster code and thus lower latencies; that is something everybody wants.

There is no reason not to do this if it's technically sound; those few idiotic hardware developers who insist that shouldering all the maintenance of their buggy binary-only driver is better than simply playing by the rules and letting people help themselves can bloody well stew.

I have an nVidia card, and since I started using it their driver has caused more crashes than I had ever seen before. Fortunately it only crashes when X is shutting down, so I haven't lost any data, but you can bet I will not be buying any more binary-only crap when it's time to upgrade.

In any case, not taking a step forward because someone might need to update their driver is a silly idea; the kernel developers should do whatever it takes to make the kernel better, even if some things need to change as a result.

