Firefox 3.0.10 released
Posted Apr 28, 2009 22:31 UTC (Tue) by njs (subscriber, #40338)
In reply to: Firefox 3.0.10 released by jengelh
Parent article: Firefox 3.0.10 released
Why? Well, re-read the first half of the sentence you quoted... the kernel has, overall, extraordinarily clean code as its starting point. Plus, the kernel has fewer users, more developers, less coupling (most kernel code is standalone device drivers!), and vastly simpler exposed interfaces.
If you're skeptical that kernel code is so easy, then ask yourself which you'd rather be responsible for: writing a secure TCP/IP stack that will be exposed to admins and programmers, or writing a secure HTTP/HTML3/4/5/XHTML/Javascript/CSS/SVG/OMG/BBQ renderer that will be exposed to Uncle Joe (whoever *he* is...)? Browsers are #@$@ing hard.
None of which is to say that Firefox is anywhere close to perfect, and it may well yet be overtaken by Webkit etc. But many of the people slagging on Firefox seem to be patting themselves on the back over how incredibly clever they are to have misdiagnosed Firefox's failure to live up to their misguided expectations.
Browser is the new X server
Posted Apr 28, 2009 23:19 UTC (Tue) by tialaramex (subscriber, #21167)
X was cursed because it's this huge ball of stuff, too complicated to audit with confidence, yet not readily divided into smaller independent pieces. Everything interacts with it, it has remarkable privileges (and that's even without the Linux / PC situation where it was running as root) and so it's a huge bullseye for any black hats.
We've almost dragged PC systems back to the status quo where X isn't root, but it still controls the input and output devices (and so compromising it means you get to see everything the user does, and control what they see). Awful, but maybe not so bad you can't sleep at night.
But what's this - a new, even more enormous piece of software, even more complicated, even less possible to audit, responsible for enforcing security policies, managing a half dozen virtual machines and language runtimes, interpreting keyboard input, accessing untrusted remote servers over the network and so much more. That's the web browser: no-one with even a weekend spent reading books on security would have come up with such a thing, but we're stuck with it.
Browser is the new X server
Posted Apr 29, 2009 5:18 UTC (Wed) by drag (guest, #31333)
The Web is the only platform that is allowed to penetrate their hegemony. Even then it's still crippled by the reality that 90+% of your target audience is going to be running Internet Explorer.
It's unfortunate... people can add all the 3D canvases and video and improve Javascript all they want, but until Microsoft integrates that into IE, nobody can use it for anything serious.
--------------------------
From a computer science perspective the web platform has a fundamental advantage over X when it comes to world-wide network-based applications: the bulk of the work in web applications is done by client-side scripting. That means it can scale quite well, since, more or less, the more clients that use the web platform, the more processing power it has. With X it would be quite daunting to try to build a server cluster that could make OpenOffice.org scale up to handling tens of thousands of users.
------------------------------
Of course, the best path to massive scalability is the approach where each user has their own local copy of an application.
It also has very good security benefits... for sensitive purposes a user can simply disconnect their machine from the network, and even maliciously created applications have no chance of subverting the user's security.
I am holding out hope that someday IPv6 will take hold and we can go a long, long way toward eliminating the dependence on the client-server model that has come to dominate the internet lately. Many things would obviously be better handled in a pure p2p mode: file sharing, hosting personal profiles, voice and video communication over IP, etc. Those things tend to work best when you eliminate the middle man. Then the only real reliance on a central server would come in the form of simple notifications... like whether or not a user is accepting connections and where to find them on the internet.
----------------------
The only possible real future for X (as in the networking) is if people realize that they can make their personal computers cheaper and more disposable by turning them into little more than terminals to a central household computer, or something like that. Cheaper is always good in most people's eyes.
However, I am not going to hold my breath.
Posted Apr 28, 2009 23:42 UTC (Tue) by nix (subscriber, #2304)
[The kernel people seem sure that they hit so many GCC bugs, and have to wait so long for] changes to percolate down, that writing their own C compiler is obviously better. Oh, and 'every other project of any size' runs into huge numbers of GCC bugs too, so this is not just them, only it *is* just them because kernels are low-level and special.
[Then there's the notion that 'we could get a compiler good enough for the] kernel by just souping up sparse'. People who actually know sparse and C compilers, like viro and davem, have been jumping hard on this silly idea. Even if you ignored *all* optimizers, you'd have a big pile of backends to implement, and ignoring all optimizers is probably a bad idea: the kernel surely needs at least decent submodel-specific scheduling (to minimize stalls and keep pipelines full: not all hardware does it for you like x86) and a decent register-pressure-sensitive reload, which means live range analysis, which means... it's a good bit bigger job than they've been assuming.
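(To make the live-range point concrete, here's a toy C function, nothing kernel-specific and nothing to do with how GCC's reload is actually written: a live range is just the span over which a value is still needed, and wherever more ranges overlap than there are registers, something has to spill to memory; deciding what to spill well is the part that needs real analysis.)

  /* Toy illustration of live ranges and register pressure. */
  #include <stdio.h>

  int dot3(const int *a, const int *b)
  {
      int a0 = a[0], a1 = a[1], a2 = a[2];  /* a0..a2 become live here */
      int b0 = b[0], b1 = b[1], b2 = b[2];  /* six values now live at once */
      int s = a0 * b0;                      /* a0 and b0 die after this use */
      s += a1 * b1;                         /* a1 and b1 die here */
      s += a2 * b2;                         /* a2 and b2 die here */
      return s;                             /* only s is still live */
  }

  int main(void)
  {
      int a[3] = { 1, 2, 3 }, b[3] = { 4, 5, 6 };
      printf("%d\n", dot3(a, b));           /* prints 32 */
      return 0;
  }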
Posted Apr 29, 2009 1:52 UTC (Wed) by njs (subscriber, #40338)
When it's obvious that what the kernel needs is a Parrot interpreter?
Posted Apr 29, 2009 7:06 UTC (Wed) by nix (subscriber, #2304)
I'm being snarky. The kernel's desynchronization from GCC, and its much faster release cycles and insistence on working even with relatively old compilers, means that you need to wait about five years between hitting a GCC bug and being able to assume that it's fixed: and shipping the compiler and kernel together *would* solve this. Now you'd only have the problem of bugs in the compiler. (Unless the in-kernel-tree compiler did a three-stage bootstrap+compare I'm not sure I could trust it much, either, and that would make kernel compilation times balloon unless it was *much* faster than GCC. Mind you, that's not unlikely either.)
The only other project of similar size that I've seen run into similar numbers of compiler bugs is clamav. God knows what they do to trigger that many. MPlayer is also known for this, but that really *does* play at evil low levels, using inline asm with borderline constraints for a tiny extra flicker of speed on some obscure platform and that sort of thing.
And kernels *are* low-level and special: you really *don't* care about a lot of the optimizations that really do speed up real code. All the work recently going into improved loop detection, currently used mostly to improve autoparallelization, is useless for OS kernels, which can't allow the compiler to go throwing things into other threads on the fly; a couple of optimizations speed up giant switch statements entangled with computed goto by turning them into jump tables and stuff like that, which is useful for threaded interpreters, and the kernel doesn't have any (the ACPI interpreter uses an opcode array instead).
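(In case "computed goto" is unfamiliar, here's a toy bytecode interpreter of my own, nothing to do with ACPI or any real interpreter, using GCC's labels-as-values extension; this dispatch pattern, and its giant-switch cousin, is the kind of code those jump-table optimizations are aimed at, and the kind the kernel itself barely contains.)

  /* Toy threaded interpreter using GCC's computed goto (labels as values). */
  #include <stdio.h>

  enum { OP_PUSH1, OP_ADD, OP_PRINT, OP_HALT };

  static void run(const unsigned char *code)
  {
      /* One label per opcode; GCC's && operator takes a label's address. */
      static const void *dispatch[] = {
          [OP_PUSH1] = &&push1, [OP_ADD] = &&add,
          [OP_PRINT] = &&print, [OP_HALT] = &&halt,
      };
      int stack[16], sp = 0;

      goto *dispatch[*code++];                  /* jump to the first opcode */
  push1: stack[sp++] = 1;                       goto *dispatch[*code++];
  add:   sp--; stack[sp - 1] += stack[sp];      goto *dispatch[*code++];
  print: printf("%d\n", stack[sp - 1]);         goto *dispatch[*code++];
  halt:  return;
  }

  int main(void)
  {
      const unsigned char prog[] = { OP_PUSH1, OP_PUSH1, OP_ADD, OP_PRINT, OP_HALT };
      run(prog);                                /* prints 2 */
      return 0;
  }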
The kernel people like to assume that these only speed up SPEC, but y'know, it just isn't true. I'm hard put to think of an on-by-default optimization in GCC that doesn't speed up code I see on a nearly daily basis at least a bit (I benchmarked this three years or so back, diking optimizations out of -O2 one by one and benchmarking work flow rates through a pile of very large financial apps, some CPU-bound, some RAM-bound, in a desperate and failed attempt to see which of them was pessimizing it: it would be interesting to try this sort of thing with the kernel, only it's a bit harder to detect pessimization automatically)... some of it isn't going to speed up high-quality code, but you just can't assume that all code you see is going to be high-quality.
Some optimizations do seem to have minimal effect, until you try turning them off and realise that later optimizations rely on them (these days, many of these dependencies are even documented or asserted in the pass manager: radical stuff for those of us used to the old grime-everywhere GCC :) )
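(A trivial example of that kind of dependency, mine rather than anything from GCC's actual pass pipeline: constant-propagating verbose below looks like a pointless micro-optimization on its own, but it is what lets dead-code elimination delete the branch and the call behind it.)

  /* Toy illustration of one optimization enabling another. */
  #include <stdio.h>

  static void log_stuff(int x)
  {
      fprintf(stderr, "x = %d\n", x);
  }

  static int work(int x)
  {
      const int verbose = 0;
      if (verbose)          /* constant propagation turns this into if (0)... */
          log_stuff(x);     /* ...so dead-code elimination can drop the call  */
      return x * 2;
  }

  int main(void)
  {
      printf("%d\n", work(21));  /* prints 42 */
      return 0;
  }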
A few have minimal effect on one CPU but a lot on another: sometimes these are turned on on both of them, which probably *is* a waste of time and should be fixed. None of these seemed to be especially expensive optimizations in my testing. (I should dig up the results and forward them to the GCC list, really. But they're quite old now...)