Mozilla: Improving Security for Bugzilla
The Mozilla blog has disclosed that the official Mozilla instance of Bugzilla was recently compromised by an attacker who stole "security-sensitive information" related to unannounced vulnerabilities in Firefox—in particular, the PDF Viewer exploit discovered on August 5. The blog post explains that Mozilla has now taken several steps to reduce the risk of future attacks using Bugzilla as a stepping stone. "As an immediate first step, all users with access to security-sensitive information have been required to change their passwords and use two-factor authentication. We are reducing the number of users with privileged access and limiting what each privileged user can do. In other words, we are making it harder for an attacker to break in, providing fewer opportunities to break in, and reducing the amount of information an attacker can get by breaking in."
Posted Sep 4, 2015 23:29 UTC (Fri)
by smoogen (subscriber, #97)
[Link] (1 responses)
== The sentence for the computer buzzword robots to find. [This probably includes journalists of certain stripes.]
In other words, we are making it harder for an attacker to break in, providing fewer opportunities to break in, and reducing the amount of information an attacker can get by breaking in.
== The sentence for humans.
Posted Sep 6, 2015 18:59 UTC (Sun)
by xtifr (guest, #143)
[Link]
...Or maybe I'm a robot. In which case, I hope that you, for one, will welcome your new robotic overlords. :)
Posted Sep 4, 2015 23:37 UTC (Fri)
by pboddie (guest, #50784)
[Link] (3 responses)
A few things (maybe found in the FAQ) are still left lingering. For example, is this a general Bugzilla problem or something specific to Mozilla's instance (which is unhelpfully referred to as just "Bugzilla")? Free Software really needs robust bug trackers and tools, despite what the GitHub crowd may say, and projects need the confidence to keep using them. If I were still running Bugzilla instances, I'd really want to know a bit more about what to do with this news.
(I remember dealing with MediaWiki and actually getting some pretty good security-related news from that project, even though it wasn't my software of choice, and even though I later ended up just using the Red Hat packages, anyway.)
Posted Sep 4, 2015 23:56 UTC (Fri)
by tialaramex (subscriber, #21167)
[Link] (1 responses)
Posted Sep 5, 2015 12:49 UTC (Sat)
by pboddie (guest, #50784)
[Link]
Posted Sep 17, 2015 9:07 UTC (Thu)
by ssokolow (guest, #94568)
[Link]
Posted Sep 5, 2015 10:02 UTC (Sat)
by warrax (subscriber, #103205)
[Link] (2 responses)
(Not that I think proprietary or Chrome/Chromium code is necessarily any better, but damn... I guess this might be some of the reasoning behind Rust, but I doubt that those bugs are simple memory safety issues -- which are *usually* pretty easy to fix.)
Posted Sep 5, 2015 12:10 UTC (Sat)
by roc (subscriber, #30627)
[Link] (1 responses)
Posted Sep 5, 2015 17:04 UTC (Sat)
by warrax (subscriber, #103205)
[Link]
Posted Sep 5, 2015 16:37 UTC (Sat)
by drag (guest, #31333)
[Link] (8 responses)
What I am curious about is what form the 'two-factor' authentication takes.
Admins having their personal passwords stolen so that attackers can gain access to a project's servers has been a perennial problem for open-source projects: Debian, Fedora, etc. If, for example, the second factor comes from a separate device, like a cell phone with a number generator on it, then that would probably be extremely helpful.
Large-scale distributed projects all seem very vulnerable to these sorts of issues. If Mozilla can find a good solution, then I am very interested in it.
Posted Sep 5, 2015 21:43 UTC (Sat)
by roc (subscriber, #30627)
[Link] (7 responses)
Posted Sep 6, 2015 0:43 UTC (Sun)
by drag (guest, #31333)
[Link] (6 responses)
Posted Sep 6, 2015 7:16 UTC (Sun)
by roc (subscriber, #30627)
[Link] (4 responses)
On the client side, off-the-shelf TOTP apps work, like Google Authenticator on Android.
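For reference, a TOTP client of the kind Google Authenticator implements is quite small. Here is a minimal sketch of RFC 6238 (HMAC-SHA1 over a 30-second time counter) in Python; this is illustrative, not Mozilla's actual implementation.

```python
# Minimal TOTP sketch (RFC 6238): HMAC-SHA1 over a 30-second time counter.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One HOTP value (RFC 4226) for a shared secret and counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """TOTP = HOTP over the current (or given) Unix time, in 30 s steps."""
    t = int(time.time()) if at is None else at
    return hotp(secret, t // step)
```

The server stores the same secret and accepts codes for the current time step (usually plus or minus one step, to allow for clock skew).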
Posted Sep 6, 2015 13:14 UTC (Sun)
by jhoblitt (subscriber, #77733)
[Link] (1 responses)
Posted Sep 6, 2015 21:11 UTC (Sun)
by iarenaza (subscriber, #4812)
[Link]
Posted Sep 11, 2015 16:12 UTC (Fri)
by hkario (subscriber, #94864)
[Link] (1 responses)
hardly a good pick to store your keys to the castle
Posted Sep 11, 2015 16:20 UTC (Fri)
by pizza (subscriber, #46)
[Link]
Posted Sep 9, 2015 8:39 UTC (Wed)
by ovitters (guest, #27950)
[Link]
If you browse around the repository, they have all kinds of nice things in extensions, e.g. something which uses an HTTP DNS blacklist to automatically deny account creation based on IP address, etc.
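For illustration, the usual DNSBL convention is to reverse the IPv4 octets and look them up under the blacklist's zone; an answer means the address is listed. The zone name below is hypothetical, not the one the Bugzilla extension actually queries.

```python
# Sketch of building a DNSBL query name (zone name is hypothetical).
def dnsbl_query_name(ip: str, zone: str = "dnsbl.example.org") -> str:
    # DNS blacklists are queried by reversing the IPv4 octets and
    # appending the blacklist zone: 203.0.113.7 becomes
    # 7.113.0.203.dnsbl.example.org. A successful A-record lookup
    # on that name means the address is listed.
    return ".".join(reversed(ip.split("."))) + "." + zone
```

A registration handler would resolve this name and reject the signup if the lookup succeeds.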
Posted Sep 5, 2015 19:29 UTC (Sat)
by ibukanov (subscriber, #3942)
[Link] (6 responses)
Posted Sep 7, 2015 3:26 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
Posted Sep 7, 2015 8:47 UTC (Mon)
by epa (subscriber, #39769)
[Link] (3 responses)
Posted Sep 7, 2015 17:06 UTC (Mon)
by ewan (guest, #5533)
[Link] (2 responses)
Posted Sep 8, 2015 13:39 UTC (Tue)
by cortana (subscriber, #24596)
[Link] (1 responses)
Posted Sep 10, 2015 16:58 UTC (Thu)
by kvaml (guest, #61841)
[Link]
Posted Sep 14, 2015 3:14 UTC (Mon)
by gutschke (subscriber, #27910)
[Link]
Posted Sep 5, 2015 21:14 UTC (Sat)
by rgmoore (✭ supporter ✭, #75)
[Link]
It seems to me that a critical point here is that the attackers were after information about known security bugs, presumably so they could exploit them before they were closed. I sincerely doubt this is a one-time thing. Every project that deals with security bugs needs to be looking at how it tracks them and whether they also have exploitable problems that could give an attacker access to all their known security bugs.
Posted Sep 7, 2015 5:07 UTC (Mon)
by xophos (subscriber, #75267)
[Link] (22 responses)
Posted Sep 7, 2015 7:19 UTC (Mon)
by roc (subscriber, #30627)
[Link] (21 responses)
The important question is whether it's being exploited or not. Vulnerabilities are often warehoused. If someone finds it but we roll out a fix before they can use it, or sell it to someone who uses it, no harm done. It certainly would be interesting to know what that number is though.
> So any feature development, update cycles, deadlines etc. become irrelevant until this vulnerability is fixed.
Bad guys have vulnerabilities that aren't reported to us, too. Should we stop everything and look for them? Your logic seems to suggest we should, but that doesn't make sense.
> So first warn your users instantly and do a fix within 2-3 days (What do you have auto update for anyway?).
We do try to get a fix out within a few days, especially if the bug is present in a shipping release. (AFAICT these days most security bugs are found in code that hasn't reached release yet, even with our short release cycle.) However, doing a browser update every few days or even once a week would be a ton of work and a significant burden on our users, so if there's no sign of the bug being known in the wild, we'll bundle it into the next update, which is at most six weeks away (three, on average). This is standard practice.
> If you can't do that (because the fix is complicated) disable the vulnerable functionality until you have a real fix.
Deliberately breaking random Web sites or apps every week (or six) is not a realistic option.
> If you can't do that, tell your users how to work around the vulnerability.
This would be ineffective for most users.
> If you can't do that, advise your users not to use your software for X days, where X is the number of days after which you are sure to have a fix
If every vendor followed your advice, there wouldn't be much software for people to use. Heartbleed -> turn off the Internet.
Your suggestions make some sense in the worst cases, e.g. when a public exploitation tool gets an exploit for some critical vulnerability. But when we have no reason to believe a vulnerability is circulating externally, extreme measures are a disservice to users.
Posted Sep 7, 2015 11:08 UTC (Mon)
by xophos (subscriber, #75267)
[Link] (20 responses)
Posted Sep 7, 2015 11:31 UTC (Mon)
by roc (subscriber, #30627)
[Link] (2 responses)
Posted Sep 7, 2015 14:55 UTC (Mon)
by xophos (subscriber, #75267)
[Link] (1 responses)
Posted Sep 7, 2015 23:08 UTC (Mon)
by roc (subscriber, #30627)
[Link]
Posted Sep 7, 2015 20:01 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (15 responses)
In which case, again, we might as well shut down the Internet. To quote Knuth, "Beware of bugs in the above code; I have only proved it correct, not tried it," and Einstein, "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."
There are all sorts of ways of minimising bugs, but at the end of the day, there is NO WAY of being confident that any program is going to run as intended. As soon as hardware of any sort (electrical, human, mechanical) gets involved, THERE ARE NO GUARANTEES.
Cheers,
Wol
Posted Sep 8, 2015 5:02 UTC (Tue)
by xophos (subscriber, #75267)
[Link] (14 responses)
Posted Sep 8, 2015 8:06 UTC (Tue)
by ovitters (guest, #27950)
[Link]
PS: No need to press enter all the time. Paragraphs are cool! :-P
Posted Sep 8, 2015 17:05 UTC (Tue)
by zlynx (guest, #2285)
[Link]
Posted Sep 14, 2015 19:52 UTC (Mon)
by ballombe (subscriber, #9523)
[Link] (11 responses)
There is complex software with almost no bugs: TeX, IJG libjpeg, qmail. But there is little interest in bug-free software.
Posted Sep 14, 2015 23:20 UTC (Mon)
by anselm (subscriber, #2796)
[Link] (10 responses)
The problem is that these programs weren't cleverly developed to be bug-free from the start – they started out just as buggy as all other programs, but were beaten on long enough for virtually all their bugs to be fixed (in the case of TeX, over 30 years by now). This approach is not very helpful if, like most people, you want to write your software and use (or sell) it right away.
Posted Sep 15, 2015 10:39 UTC (Tue)
by jezuch (subscriber, #52988)
[Link] (9 responses)
...and no new features added. That's the only way to bug-free software: 30 years of bug fixing + no new functionality, ever :)
Posted Sep 15, 2015 11:06 UTC (Tue)
by ibukanov (subscriber, #3942)
[Link] (8 responses)
No, you can also use theorem provers. For example, there are already a bug-free C compiler (http://compcert.inria.fr/compcert-C.html) and a verified microkernel. The real problem is that the perceived cost of using formal verification is considered way too high even for mission-critical software. I wonder what kind of hacking attack it would take to change that balance.
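CompCert itself is written and verified in Coq, but as a toy illustration of what "machine-checked" means, here is a short induction proof in Lean 4 that zero is a left identity for addition on the naturals; the theorem name is ours, and the point is only that the kernel checks every step mechanically.

```lean
-- Toy machine-checked proof (Lean 4). `n + 0 = n` holds by definition,
-- but `0 + n = n` needs induction, which the kernel verifies step by step.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

Verifying a compiler is this same activity scaled up enormously: the theorem is "the generated assembly has the same observable behavior as the source program".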
Posted Sep 15, 2015 11:16 UTC (Tue)
by anselm (subscriber, #2796)
[Link] (3 responses)
Of course the presumption there is that your formal specifications are bug-free. A verified implementation of a buggy specification isn't much better than a buggy implementation as far as the end result is concerned. C, for example, is probably still simple enough to lend itself to that sort of thing (although notably the authors of this compiler had to make do with a subset of the language). With something like TeX, things might look a bit different. Interactive programs, which are less easily described in terms of translating a well-specified language X into another well-specified language Y, are probably even more difficult.
Or, as Donald E. Knuth said, “Beware of bugs in the above code – I didn't try it, I only proved it correct” (as the author of TeX, he probably knows what he's talking about).
Posted Sep 15, 2015 11:52 UTC (Tue)
by ibukanov (subscriber, #3942)
[Link] (2 responses)
One does not need a detailed and inevitably complex formal specification to have a useful statement about the program. For example, proving that a C program does not have undefined behavior or that a user input is always properly escaped would be immediately useful even if that does not prevent bugs that could lead to denial-of-service. That is, the goal should not be to specify the detailed behavior, but rather the bounds of acceptable errors and then prove those bounds.
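A rough, non-formal illustration of "prove the bounds, not the detailed behavior": state the invariant "escaped output contains no raw metacharacters" and check it by brute force over short inputs. This is a hypothetical sketch; a theorem prover would establish the property for all inputs rather than a finite sample.

```python
# Hypothetical sketch: checking the invariant "escaped output contains no
# raw HTML metacharacters" exhaustively over short inputs. A theorem
# prover would prove this for every input, not just a sample.
from itertools import product

def escape_html(s: str) -> str:
    return (s.replace("&", "&amp;")   # must run first, or we re-escape
             .replace("<", "&lt;")
             .replace(">", "&gt;"))

def invariant_holds(s: str) -> bool:
    out = escape_html(s)
    # No raw '<' or '>' may survive, and every '&' must start an entity.
    return ("<" not in out and ">" not in out and
            all(out[i:].startswith(("&amp;", "&lt;", "&gt;"))
                for i, c in enumerate(out) if c == "&"))

# Brute-force check over all strings of length < 4 from a small alphabet.
assert all(invariant_holds("".join(p))
           for n in range(4)
           for p in product("a<>&;", repeat=n))
```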
Posted Sep 16, 2015 6:10 UTC (Wed)
by jezuch (subscriber, #52988)
[Link] (1 responses)
Posted Sep 16, 2015 6:14 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
For example, for a train-controller system, it's easy to come up with an invariant like "no two trains on the same railway section", and even more elaborate ones like "a train must not approach a busy section at a speed of more than 15 km/h".
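Those invariants can be written down directly. A hypothetical runtime check (much weaker than proving them, but expressing the same statements) might look like:

```python
# Hypothetical toy train controller: check the two invariants from the
# comment at runtime. A proof would show no reachable state violates them.
SPEED_LIMIT_NEAR_BUSY = 15  # km/h, per the example invariant

def check_invariants(trains):
    """trains: list of dicts with 'section', 'speed', 'next_section'."""
    occupied = [t["section"] for t in trains]
    # Invariant 1: no two trains on the same railway section.
    assert len(occupied) == len(set(occupied)), "section conflict"
    # Invariant 2: no train approaches a busy section above 15 km/h.
    for t in trains:
        if t["next_section"] in set(occupied) - {t["section"]}:
            assert t["speed"] <= SPEED_LIMIT_NEAR_BUSY, "approach too fast"
```

The formal-methods version proves that the controller's transition function can never produce a state where either assertion would fire.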
Posted Sep 15, 2015 15:49 UTC (Tue)
by NAR (subscriber, #1313)
[Link] (3 responses)
"(transformation of C source text to CompCert C abstract syntax trees) is not formally verified"
So I think this is not a formally verified program that converts C source code into an executable. Not surprisingly, the hard part (handling user input) is missing. That's where most bugs are.
I remember a CS class where I furiously tried to verify a practically "Hello World"-level concurrent program, and it took the better part of the 90-minute exam to achieve grade 2 (5 is best, 1 is failing). If I remember correctly, the average grade for the class on this particular exam was below 2, not because we were such lazy bastards who didn't study, but because it was hard and complicated. I knew the theory well enough to pass the oral exam with grade 5, but computing the verification was really complicated even for a trivial task.
Posted Sep 16, 2015 8:16 UTC (Wed)
by cebewee (guest, #94775)
[Link] (2 responses)
Posted Sep 18, 2015 22:36 UTC (Fri)
by Wol (subscriber, #4433)
[Link] (1 responses)
Try to parse the following (legal!!!) statement in databasic:
REM: REM = REM(10, 4); REM this calculates the remainder from dividing 10 by 4
That's a label, a variable, a function and a statement, all different, and all using the same identifier. (Incidentally, I don't know of a single compiler that correctly identifies all four - different implementations permit different syntaxes, but they are ALL valid.)
Now try providing a formal proof for a system that can cope with that ... :-)
Cheers,
Wol
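The ambiguity can be illustrated (not resolved!) with a toy classifier that looks only at the characters around each REM. This is an invented sketch, not a real databasic lexer, and it ignores the per-implementation syntax differences mentioned above.

```python
# Toy sketch (invented, not a real databasic parser): classify each
# occurrence of REM in one statement purely by its local context.
import re

def classify_rem(line: str):
    roles = []
    for m in re.finditer(r"\bREM\b", line):
        before = line[:m.start()].rstrip()
        after = line[m.end():].lstrip()
        if before == "" and after.startswith(":"):
            roles.append("label")      # REM:       -- a label at line start
        elif after.startswith("="):
            roles.append("variable")   # REM = ...  -- an assignment target
        elif after.startswith("("):
            roles.append("function")   # REM(10, 4) -- the remainder function
        else:
            roles.append("comment")    # bare REM   -- starts a remark
    return roles
```

On the statement above this yields label, variable, function, comment in turn; the hard part for a real (let alone verified) parser is that such context-sensitivity pervades the whole grammar, not just one keyword.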
Posted Sep 19, 2015 13:01 UTC (Sat)
by cebewee (guest, #94775)
[Link]
Posted Sep 8, 2015 7:02 UTC (Tue)
by oldtomas (guest, #72579)
[Link]
This sounds plausible if you assume that the users are the browser's primary customers. This is changing right in front of our eyes.
Chrome leads the pack, but it'd be naïve to assume the others won't follow (and you can see signs of it scattered across the last five to seven years or so).
Look out, other bug trackers
Mozilla: Improving Security for Bugzilla is the wrong angle
When an exploitable security problem is found by some good guy who reports it to Mozilla, chances are about 50% (higher, if you are more cynical than me) that some bad guy had it first. So any feature development, update cycles, deadlines, etc. become irrelevant until this vulnerability is fixed.
Hello Mozilla people, if you find a vulnerability:
First, warn your users instantly and do a fix within 2-3 days (what do you have auto-update for, anyway?).
If you can't do that (because the fix is complicated), disable the vulnerable functionality until you have a real fix.
If you can't do that, tell your users how to work around the vulnerability.
If you can't do that, advise your users not to use your software for X days, where X is the number of days after which you are sure to have a fix ready and deployed.
But it seems that the whole industry has its priorities backwards.
I know it makes economic sense: users see new features, but they don't notice when their computers are not owned by some stealthy malware.
Firefox crashes. It does this at least twice a week for me, without using Flash or other horrible plugins. It does it even on Android, where there are none of those. I know that not all crashes are exploitable bugs, but more often than not they are.
Firefox doesn't need any more features. It doesn't need a new look. Heck, it doesn't even need to go any faster. The only thing that could make it better would be fewer bugs: as few bugs in Firefox as possible, as many unit tests as possible, as much static checking of invariants as possible.
Then maybe HTML 6 or the next generation of ECMAScript will be out, and feature development will be needed for a short while.