Well I dunno, but it seems that the Linux kernel does a better job at it. Existing functionality is largely kept, yet somehow it manages to change >1 million lines every release.
Firefox 3.0.10 released
Posted Apr 28, 2009 22:31 UTC (Tue) by njs (guest, #40338)
Why? Well, re-read the first half of the sentence you quoted... the kernel has, overall, extraordinarily clean code as its starting point. Plus, the kernel has fewer users, more developers, less coupling (most kernel code is standalone device drivers!), and vastly simpler exposed interfaces.
None of which is to say that Firefox is anywhere close to perfect, and it may well yet be overtaken by Webkit etc. But many of the people slagging on Firefox seem to be patting themselves on the back over how incredibly clever they are to have misdiagnosed Firefox's failure to live up to their misguided expectations.
Browser is the new X server
Posted Apr 28, 2009 23:19 UTC (Tue) by tialaramex (subscriber, #21167)
X was cursed because it's this huge ball of stuff, too complicated to audit with confidence, yet not readily divided into smaller independent pieces. Everything interacts with it, it has remarkable privileges (and that's even without the Linux / PC situation where it was running as root) and so it's a huge bullseye for any black hats.
We've almost dragged PC systems back to the status quo where X isn't root, but it still controls the input and output devices (and so compromising it means you get to see everything the user does, and control what they see). Awful, but maybe not so bad you can't sleep at night.
But what's this - a new even more enormous piece of software, even more complicated, even less possible to audit, responsible for enforcing security policies, managing a half dozen virtual machines and language runtimes, interpreting keyboard input, accessing untrusted remote servers over the network and so much more. That's the web browser, no-one with even a weekend spent reading books on security would have come up with such a thing, but we're stuck with it.
Posted Apr 29, 2009 5:18 UTC (Wed) by drag (subscriber, #31333)
The Web is the only platform that is allowed to penetrate their hegemony. Even then it's still crippled by the reality that 90+% of your target audience is going to be running Internet Explorer.
From a computer science perspective the web platform has a fundamental advantage over X in terms of trying to do world-wide network-based applications... The simple fact that the bulk of web applications use client-side scripting is a significant advantage. This means that it can scale quite well, since the more clients use the web platform, the more processing power it has, more or less. With X it would be quite daunting to try to build a server cluster that could make OpenOffice.org scale up to handling 10,000s of users.
Of course the best path to massive scalability is the approach where each user has their own local copy of an application.
It also has very good security benefits... for sensitive purposes a user can simply disconnect their machine from the network, and even maliciously created applications have no chance of subverting the user's security.
I am holding out hope that someday IPv6 will take hold and we can go a long, long way toward eliminating the dependence on the client-server model that has come to dominate the internet lately. Many things would obviously be better handled in a pure p2p mode: file sharing, hosting personal profiles, voice and video communication over IP, etc. Those things tend to work best when you eliminate the middle man. Then the only real reliance on a central server would come in the form of simple notifications, like whether or not a user is accepting connections and where to find them on the internet.
The only possible real future for X (as in the networking) is if people learn that they can make their personal computers cheaper and disposable by making them little more than terminals to a central household computer, or something like that. Cheaper is always good in most people's eyes.
However I am not going to hold my breath.
Posted Apr 28, 2009 23:42 UTC (Tue) by nix (subscriber, #2304)
Posted Apr 29, 2009 1:52 UTC (Wed) by njs (guest, #40338)
When it's obvious that what the kernel needs is a Parrot interpreter?
Posted Apr 29, 2009 7:06 UTC (Wed) by nix (subscriber, #2304)
I'm being snarky. The kernel's desynchronization from GCC, its much faster release cycle, and its insistence on working even with relatively old compilers mean that you need to wait about five years between hitting a GCC bug and being able to assume that it's fixed: and shipping the compiler and kernel together *would* solve this. Now you'd only have the problem of bugs in the compiler. (Unless the in-kernel-tree compiler did a three-stage bootstrap+compare I'm not sure I could trust it much, either, and that would make kernel compilation times balloon unless it was *much* faster than GCC. Mind you, that's not unlikely either.)

The only other project of similar size that I've seen run into similar numbers of compiler bugs is clamav. God knows what they do to trigger that many. MPlayer is also known for this, but that really *does* play at evil low levels, using inline asm with borderline constraints for a tiny extra flicker of speed on some obscure platform, and that sort of thing.
And kernels *are* low-level and special: you really *don't* care about a lot of the optimizations that really do speed up real code. All the work recently going into improved loop detection, currently used mostly to improve autoparallelization, is useless for OS kernels, which can't allow the compiler to go throwing things into other threads on the fly; a couple of optimizations speed up giant switch statements entangled with computed goto by turning them into jump tables and the like, which is useful for threaded interpreters, but the kernel doesn't have any (the ACPI interpreter uses an opcode array instead).
The kernel people like to assume that these only speed up SPEC, but y'know, it just isn't true. I'm hard put to think of an on-by-default optimization in GCC that doesn't speed up code I see on a nearly daily basis at least a bit. (I benchmarked this three years or so back, diking optimizations out of -O2 one by one and benchmarking work flow rates through a pile of very large financial apps, some CPU-bound, some RAM-bound, in a desperate and failed attempt to see which of them was pessimizing it: it would be interesting to try this sort of thing with the kernel, only it's a bit harder to detect pessimization automatically.) Some of it isn't going to speed up high-quality code, but you just can't assume that all code you see is going to be high-quality.
Some optimizations do seem to have minimal effect, until you try turning them off and realise that later optimizations rely on them (these days, many of these dependencies are even documented or asserted in the pass manager: radical stuff for those of us used to the old grime-everywhere GCC :) )
A few have minimal effect on one CPU but a lot on another: sometimes these are turned on on both of them, which probably *is* a waste of time and should be fixed. None of these seemed to be especially expensive optimizations in my testing. (I should dig up the results and forward them to the GCC list, really. But they're quite old now...)
Posted Apr 29, 2009 7:50 UTC (Wed) by epa (subscriber, #39769)
So in terms of the codebase that is used in a typical installation, it may be that Firefox is bigger and more complex than Linux.
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds