Firefox 3.0.10 released
From: Samuel Sidler <ss-AT-mozilla.com>
To: announce-AT-lists.mozilla.org
Subject: Firefox 3.0.10 available for download
Date: Mon, 27 Apr 2009 19:47:52 -0700
Message-ID: <A6AF4D81-DB36-469E-991A-0669B45300D2@mozilla.com>
Cc: "dev. planning" <dev-planning-AT-lists.mozilla.org>
As part of Mozilla Corporation's ongoing security and stability update process, Firefox 3.0.10 is now available for Windows, Mac, and Linux for free download from http://getfirefox.com/. We strongly recommend that all Firefox users upgrade to this latest release. If you already have Firefox 3.0.x, you will receive an automated update notification within 24 to 48 hours. This update can also be applied manually by selecting "Check for Updates…" from the Help menu.

For a list of changes and more information, please review the Firefox 3.0.10 release notes at: http://www.mozilla.com/firefox/3.0.10/releasenotes/

Please note: If you're still using Firefox 2.0.0.x, this version is no longer supported and contains known security vulnerabilities. Please upgrade to Firefox 3 by downloading Firefox 3.0.10 from http://getfirefox.com/.

(follow-up: mozilla.dev.planning / dev-planning@lists.mozilla.org)

_______________________________________________
announce mailing list
announce@lists.mozilla.org
https://lists.mozilla.org/listinfo/announce
Posted Apr 28, 2009 14:17 UTC (Tue)
by danielpf (guest, #4723)
Posted Apr 28, 2009 14:21 UTC (Tue)
by malefic (guest, #37306)
Posted Apr 28, 2009 14:34 UTC (Tue)
by mattdm (subscriber, #18)
Posted Apr 28, 2009 14:55 UTC (Tue)
by danielpf (guest, #4723)
"One of the security fixes in Firefox 3.0.9 introduced a regression that caused some users to experience frequent crashes. Users of the HTML Validator add-on were particularly affected, but other users also experienced this crash in some situations. In analyzing this crash we discovered that it was due to memory corruption similar to cases that have been identified as security vulnerabilities in the past."
Posted Apr 28, 2009 17:12 UTC (Tue)
by stumbles (guest, #8796)
Posted Apr 28, 2009 18:18 UTC (Tue)
by man_ls (guest, #15091)
I guess what I'm saying can be summed up as: without more information it is hard to know how bad it is.
Posted Apr 28, 2009 15:17 UTC (Tue)
by sbergman27 (guest, #10767)
Posted Apr 28, 2009 15:27 UTC (Tue)
by rfunk (subscriber, #4054)
Posted Apr 28, 2009 15:34 UTC (Tue)
by pr1268 (guest, #24648)
Which may have been a bad idea to begin with.
Posted Apr 28, 2009 22:14 UTC (Tue)
by jordanb (guest, #45668)
Netscape was losing market share like crazy when the code was released; it had little to do with the long release time for Mozilla 1.0. Also, the code that Netscape released was unbuildable. They basically ripped out anything that had a copyright question on it, so all you got was an incoherent blob of C code that nobody understood. The long release time was mostly a result of trying to figure out what they had and how they'd go about turning it into a functioning computer program.
2) I've poked around in the Mozilla codebase and there is a TON of "mcom" stuff in there. So if there was a serious effort to rewrite it (I don't think there was) they sure did leave a lot of truly ancient code.
Posted Apr 29, 2009 7:59 UTC (Wed)
by sdalley (subscriber, #18550)
Posted Apr 28, 2009 19:37 UTC (Tue)
by khim (subscriber, #9252)
WebKit is somewhat better, but only marginally. You don't hear about it because most WebKit-based browsers just silently upgrade without even offering you an opt-out choice! And the ones without such "service" are considered unsupported, so you'll never know if you have any security issues until a rootkit is installed on your system...
Posted Apr 28, 2009 20:03 UTC (Tue)
by sbergman27 (guest, #10767)
Of course, I'm assuming, just for the sake of argument, that what you claim about WebKit-based browsers pushing out security updates against the users' will was true.
The lengths to which some die-hard Firefox fans will go... the logical contortions they are willing to accept... to "prove" that the endless stream of security vulnerabilities in Firefox is really a good thing is beyond just worrisome. It's out and out scary.
I doubt that there is a transgression that Mozilla Corp could commit, short of maybe dissing the GPL, that would cause some of the more ardent fans to even think critically about the situation.
Posted Apr 28, 2009 22:39 UTC (Tue)
by njs (subscriber, #40338)
If that's "logically contorted", then so be it...
Posted Apr 29, 2009 6:29 UTC (Wed)
by khim (subscriber, #9252)
I'm not convinced of that - only Microsoft practices this hiding approach. Apple and Google do publish internally-discovered vulnerabilities. And there are fewer of them than in Firefox (8 vs 38 in 2009 so far), but is this difference enough to claim that Firefox is a disaster while WebKit is ideal? The statistics for all of 2008 are 45 for Safari vs 102 for Firefox. Safari still wins, but the difference is moderate if you recall that Firefox has more subsystems - Safari doesn't support Firefox-like extensions, and all this flexibility does not come free.
Posted Apr 29, 2009 6:16 UTC (Wed)
by khim (subscriber, #9252)
No. Security sites don't ignore them, only news sites do. If you visit a security database you'll find out that WebKit is vulnerable, Safari is too, and Chrome is far from ideal - but they don't issue numbered releases to be downloaded from a site, so LWN does not publish articles on the subject either. Yes, 1/3 of the bugs (159 for Safari vs 455 for Firefox) is a good achievement, but is it enough to say "WebKit doesn't seem to have this chronic problem"? You cannot do an apples-to-apples comparison between Gecko and WebKit: for Gecko there are just 5 CVEs and for WebKit 27, but I find it hard to believe that out of Firefox's 455 vulnerabilities only 5 affect Gecko and some 450 are in different subsystems... Yes, it's really scary. Only Firefox-haters are worse...
Posted Apr 29, 2009 7:46 UTC (Wed)
by epa (subscriber, #39769)
A large number of security fixes being published is neither a 'good thing' nor a 'bad thing' in itself.
Posted Apr 29, 2009 8:28 UTC (Wed)
by khim (subscriber, #9252)
Worthy of a Firefox fanboy. It's a known fact that Firefox has more vulnerabilities than WebKit-based browsers. Maybe they are less severe, maybe not. That is not the point. The point is: the numbers of vulnerabilities in Firefox and in WebKit-based browsers are of the same order. It's not like an OpenBSD vs Linux comparison, where one side has hundreds of potential vulnerabilities and the other just a handful ("ten over the last ten years" or something like that). Here both sides have a sizable number, and these vulnerabilities have been exploited in the wild and will surely be exploited in the future. No reason for WebKit developers to feel smug, and no reason for Firefox developers to fret over the statistics.
Posted Apr 28, 2009 17:12 UTC (Tue)
by njs (subscriber, #40338)
A huge amount of that complexity is inherited; the trick is to go around and remove the giant piles of cruft underneath that were written on the "we're late, throw more developers at it" model, and do that while keeping up with the market's demand for new features and without destabilizing the existing functionality -- which is itself incredibly complex.
I dunno whether they'll pull it off (and to some extent it doesn't matter whether the current Mozilla codebase survives or not), but I've been pleasantly surprised at how well they've managed things so far. I wouldn't write them off yet.
Posted Apr 28, 2009 21:50 UTC (Tue)
by jengelh (guest, #33263)
Well, I dunno, but it seems that the Linux kernel does a better job of it. Existing functionality is largely kept, yet somehow they manage to change >1 million lines every release.
Posted Apr 28, 2009 22:31 UTC (Tue)
by njs (subscriber, #40338)
Why? Well, re-read the first half of the sentence you quoted... the kernel has, overall, extraordinarily clean code as its starting point. Plus, the kernel has fewer users, more developers, less coupling (most kernel code is standalone device drivers!), and vastly simpler exposed interfaces.
If you're skeptical kernel code is so easy, then ask yourself which would you rather be responsible for: writing a secure TCP/IP stack that will be exposed to admins and programmers, or writing a secure HTTP/HTML3/4/5/XHTML/Javascript/CSS/SVG/OMG/BBQ renderer that will be exposed to Uncle Joe (whoever *he* is...)? Browsers are #@$@ing hard.
None of which is to say that Firefox is anywhere close to perfect, and it may well yet be overtaken by Webkit etc. But many of the people slagging on Firefox seem to be patting themselves on the back over how incredibly clever they are to have misdiagnosed Firefox's failure to live up to their misguided expectations.
Posted Apr 28, 2009 23:19 UTC (Tue)
by tialaramex (subscriber, #21167)
X was cursed because it's this huge ball of stuff, too complicated to audit with confidence, yet not readily divided into smaller independent pieces. Everything interacts with it, it has remarkable privileges (and that's even without the Linux / PC situation where it was running as root) and so it's a huge bullseye for any black hats.
We've almost dragged PC systems back to the status quo where X isn't root, but it still controls the input and output devices (and so compromising it means you get to see everything the user does, and control what they see). Awful, but maybe not so bad you can't sleep at night.
But what's this - a new even more enormous piece of software, even more complicated, even less possible to audit, responsible for enforcing security policies, managing a half dozen virtual machines and language runtimes, interpreting keyboard input, accessing untrusted remote servers over the network and so much more. That's the web browser, no-one with even a weekend spent reading books on security would have come up with such a thing, but we're stuck with it.
Posted Apr 29, 2009 5:18 UTC (Wed)
by drag (guest, #31333)
The Web is the only platform that is allowed to penetrate their hegemony. Even then it's still crippled by the reality that 90+% of your target audience is going to be running Internet Explorer.
It's unfortunate... people can add all the 3D canvases and video and improve JavaScript all they want, but until Microsoft integrates that into IE, nobody can use it for anything serious.
--------------------------
From a computer science perspective the web platform has a fundamental advantage over X in terms of trying to do world-wide network-based applications... The simple fact that the bulk of web applications use client-side scripting is a significant advantage. This means that it can scale quite well since the more clients that use the web platform the more processing power it has, more or less. With X it would be quite daunting to try to build a server cluster that could make OpenOffice.org scale up to handling 10,000s of users.
------------------------------
Of course, the best path to massive scalability is the approach where each user has their own local copy of an application.
It also has very good security benefits... for sensitive purposes a user can simply disconnect their machine from the network, and even maliciously created applications have no chance of subverting the user's security.
I am holding out hope that someday IPv6 will take hold and we can go a long way toward eliminating the dependence on the client-server model that has come to dominate the internet lately. Many things would obviously be better handled in a pure p2p mode... file sharing, hosting personal profiles, voice and video communication over IP, etc. Those things tend to work best when you eliminate the middle man. Then the only real reliance on a central server would come in the form of simple notifications... like whether or not a user is accepting connections and where to find them on the internet.
----------------------
The only possible real future for X (as in the networking) is if people learn that they can make their personal computers cheaper and disposable if they were little more than terminals to a central household computer or something like that. Cheaper is always good in most people's eyes.
However I am not going to hold my breath.
Posted Apr 28, 2009 23:42 UTC (Tue)
by nix (subscriber, #2304)
Posted Apr 29, 2009 1:52 UTC (Wed)
by njs (subscriber, #40338)
When it's obvious that what the kernel needs is a Parrot interpreter?
Posted Apr 29, 2009 7:06 UTC (Wed)
by nix (subscriber, #2304)
I'm being snarky. The kernel's desynchronization from GCC, and its much
The only other project of similar size that I've seen run into similar
And kernels *are* low-level and special: you really *don't* care about a
The kernel people like to assume that these only speed up SPEC, but y'know
Some optimizations do seem to have minimal effect, until you try turning
A few have minimal effect on one CPU but a lot on another: sometimes these
Posted Apr 29, 2009 7:50 UTC (Wed)
by epa (subscriber, #39769)
By contrast, a typical Firefox user exercises most of the code, unless they visit a very restricted set of sites or have an odd setup such as turning off Javascript. The only big chunk of code they do not run is platform-specific code for other platforms.
So in terms of the codebase that is used in a typical installation, it may be that Firefox is bigger and more complex than Linux.
Posted Apr 28, 2009 17:06 UTC (Tue)
by pranith (subscriber, #53092)
Posted Apr 28, 2009 17:10 UTC (Tue)
by corbet (editor, #1)
Except, of course, that Fedora has to release several dozen security updates every time Firefox changes, so the advantages of deltarpms are kind of overwhelmed in this case...
Posted Apr 28, 2009 19:26 UTC (Tue)
by MattPerry (guest, #46341)
It already does. That feature was added in Firefox 1.5. Updates to Firefox are very small.
Posted Apr 28, 2009 20:56 UTC (Tue)
by nix (subscriber, #2304)
Posted Apr 29, 2009 19:22 UTC (Wed)
by MattPerry (guest, #46341)
Posted Apr 29, 2009 20:13 UTC (Wed)
by nix (subscriber, #2304)
It *is* much better now than it was in the FF1 days. The real problem is
Posted Apr 29, 2009 6:35 UTC (Wed)
by pranith (subscriber, #53092)
Posted Apr 29, 2009 19:19 UTC (Wed)
by MattPerry (guest, #46341)
No, 1.5 MB is small. That's the size of the update file from 3.0.9 to 3.0.10 on Windows. For Linux it's even smaller, only 199 KB. That even meets your definition of small. If you're downloading 10 MB of data for a Firefox update, you're likely downloading the full installer rather than the update. I'm not sure why it would do that.
Posted Apr 29, 2009 11:05 UTC (Wed)
by MortenSickel (subscriber, #3238)
The latest advisory description, especially the last sentence, gives the impression that Firefox's code is not under tight control:
Frankly I think you are being nit-picky, and it doesn't in my view suggest they have lost, or are losing, control.
Bad fixes are a routine cause of bugs. In business development, on average you can expect that 5-10% of your bug fixes will generate new bugs. Security fixes should pass a more intense inspection and test cycle, but still... Say that one of the last 100 security fixes has resulted in another bug; a 1% rate of bad fixes would look like a desirable target even for security fixes, even if this fix, in isolation, leaves a bad impression.
One bad fix?
inherited?
and rewriting it.
Is this a joke?
Odd that WebKit doesn't seem to have this chronic problem.
Actually that's not 100% true.
Therefore, using reported vulnerabilities as an estimate of relative exposure is systematically biased against Firefox.
Yup.
If a project pushes out updates automatically, then all the security sites ignore any security advisories regarding that software, and news sites like LWN.net decline to report on them. Is that *really* the case that you want to argue?
That's hairsplitting...
Mozilla probably has the most disciplined project management of any free software project. Whether it's sufficient to the complexity of the project is harder to say.
Browser is the new X server
kernel by just souping up sparse'. People who actually know sparse and C compilers, like viro and davem, have been jumping hard on this silly idea. Even if you ignored *all* optimizers, you'd have a big pile of backends to implement, and ignoring all optimizers is probably a bad idea: the kernel surely needs at least decent submodel-specific scheduling (to minimize stalls and keep pipelines full: not all hardware does it for you like x86) and a decent register-pressure-sensitive reload, which means live range analysis, which means... it's a good bit bigger job than they've been assuming.
changes to percolate down that writing their own C compiler is obviously better. Oh, and 'every other project of any size' runs into huge numbers of GCC bugs too, so this is not just them, only it *is* just them because kernels are low-level and special.

faster release cycles and insistence on working even with relatively old compilers means that you need to wait about five years between hitting a GCC bug and being able to assume that it's fixed: and shipping the compiler and kernel together *would* solve this. Now you'd only have the problem of bugs in the compiler. (Unless the in-kernel-tree compiler did a three-stage bootstrap+compare I'm not sure I could trust it much, either, and that would make kernel compilation times balloon unless it was *much* faster than GCC. Mind you that's not unlikely either.)
numbers of compiler bugs is clamav. God knows what they do to trigger that many. MPlayer is also known for this, but that really *does* play at evil low levels, using inline asm with borderline constraints for a tiny extra flicker of speed on some obscure platform, and that sort of thing.
lot of the optimizations that really do speed up real code. All the work recently going into improved loop detection, currently used mostly to improve autoparallelization, is useless for OS kernels, which can't allow the compiler to go throwing things into other threads on the fly; a couple of optimizations speed up giant switch statements entangled with computed goto by turning them into jump tables and stuff like that, which is useful for threaded interpreters, and the kernel doesn't have any (the ACPI interpreter uses an opcode array instead).
it just isn't true. I'm hard put to think of an on-by-default optimization in GCC that doesn't speed up code I see on a nearly daily basis at least a bit (I benchmarked this three years or so back, diking optimizations out of -O2 one by one and benchmarking work flow rates through a pile of very large financial apps, some CPU-bound, some RAM-bound, in a desperate and failed attempt to see which of them was pessimizing it: it would be interesting to try this sort of thing with the kernel, only it's a bit harder to detect pessimization automatically)... some of it isn't going to speed up high-quality code, but you just can't assume that all code you see is going to be high-quality.
them off and realise that later optimizations rely on them (these days, many of these dependencies are even documented or asserted in the pass manager: radical stuff for those of us used to the old grime-everywhere GCC :) )

are turned on on both of them, which probably *is* a waste of time and should be fixed. None of these seemed to be especially expensive optimizations in my testing. (I should dig up the results and forward them to the GCC list, really. But they're quite old now...)
Mechanisms like yum-presto are your friend in situations like this.
Big downloads
the beholder :( it often takes twenty minutes or more and sometimes eats an hour. A costly hour.
something else saturating the line, but Windows being Windows it can be hard to tell what: probably a billion contending worms or something like that).
that they're stuck on narrowband...