
A new Mindcraft moment?

By Jonathan Corbet
November 6, 2015
It is not often that Linux kernel development attracts the attention of a mainstream newspaper like The Washington Post; lengthy features on the kernel community's approach to security are even more uncommon. So when just such a feature hit the net, it attracted a lot of attention. This article has gotten mixed reactions, with many seeing it as a direct attack on Linux. The motivations behind the article are hard to know, but history suggests that we may look back on it as having given us a much-needed push in a direction we should have been going for some time.

Think back, a moment, to the dim and distant past — April 1999, to be specific. An analyst company named Mindcraft issued a report showing that Windows NT greatly outperformed Red Hat Linux 5.2 and Apache for web-server workloads. The outcry from the Linux community, including from a very young LWN, was swift and strong. The report was a piece of Microsoft-funded FUD trying to cut off an emerging threat to its world-domination plans. The Linux system had been deliberately configured for poor performance. The hardware chosen was not well supported by Linux at the time. And so on.

Once people calmed down a bit, though, one other fact came clear: the Mindcraft folks, whatever their motivations, had a point. Linux did, indeed, have performance problems that were reasonably well understood even at the time. The community then did what it does best: we sat down and fixed the problems. The scheduler got exclusive wakeups, for example, to put an end to the thundering-herd problem in the acceptance of connection requests. Numerous other little problems were fixed. Within a year or so, the kernel's performance on this kind of workload had improved considerably.
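
As a refresher on what that fix addressed: in the classic pre-forked server design, every worker blocks in accept() on the same listening socket, and, before exclusive wakeups, each incoming connection woke every sleeping worker, all but one of which went straight back to sleep. A minimal sketch of the pattern (not the historical kernel or Apache code, just the shape of the workload; the port number is arbitrary) might look like:

    /* Pre-forked server sketch: NWORKERS processes all block in accept()
     * on one shared listening socket.  On pre-fix kernels, one incoming
     * connection woke *all* of them (the thundering herd); with
     * exclusive wakeups the kernel wakes exactly one. */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define NWORKERS 8

    int main(void)
    {
        struct sockaddr_in addr;
        int lfd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        if (lfd < 0 || bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(lfd, 128) < 0) {
            perror("socket/bind/listen");
            return 1;
        }

        for (int i = 0; i < NWORKERS; i++) {
            if (fork() == 0) {                      /* worker process */
                for (;;) {
                    int cfd = accept(lfd, NULL, NULL);
                    if (cfd < 0)
                        continue;
                    /* ... serve the connection ... */
                    close(cfd);
                }
            }
        }
        for (;;)
            pause();                                /* parent idles */
    }

The fix was entirely on the kernel side (cf. the exclusive wait-queue primitives such as prepare_to_wait_exclusive() in today's kernels): sleepers queued exclusively are woken one at a time, so code written in this style simply stopped herding without any application changes.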

The Mindcraft report, in other words, was a much-needed kick in the rear that got the community to deal with issues that had been neglected until then.

The Washington Post article seems clearly slanted toward a negative view of the Linux kernel and its contributors. It freely mixes kernel problems with other issues (the AshleyMadison.com break-in, for example) that were not kernel vulnerabilities at all. The fact that vendors seem to have little interest in getting security fixes to their customers is danced around like a huge elephant in the room. There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true, but it should not be allowed to overshadow the simple fact that the article has a valid point.

We do a reasonable job of finding and fixing bugs. Problems, whether they are security-related or not, are patched quickly, and the stable-update mechanism makes those patches available to kernel users. Compared to a lot of programs out there (free and proprietary alike), the kernel is quite well supported. But pointing at our ability to fix bugs is missing a crucial point: fixing security bugs is, in the end, a game of whack-a-mole. There will always be more moles, some of which we will not know about (and will thus be unable to whack) for a long time after they are discovered and exploited by attackers. These bugs leave our users vulnerable, even if the commercial side of Linux did a perfect job of getting fixes to users — which it decidedly does not.

The point that developers concerned about security have been trying to make for a while is that fixing bugs is not enough. We must instead realize that we will never fix them all and focus on making bugs harder to exploit. That means restricting access to information about the kernel, making it impossible for the kernel to execute code in user-space memory, instrumenting the kernel to detect integer overflows, and all the other things laid out in Kees Cook's Kernel Summit talk at the end of October. Many of these techniques are well understood and have been adopted by other operating systems; others will require innovation on our part. But, if we want to adequately defend our users from attackers, these changes need to be made.
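
To make one of those items concrete: integer-overflow instrumentation boils down to replacing unchecked arithmetic with checked forms, which a compiler plugin can do automatically across the whole tree. A hand-written sketch of the same idea in plain C, using the __builtin_mul_overflow primitive available in GCC 5+ and Clang (alloc_array() is a made-up helper for illustration, not a kernel API):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Checked allocation-size arithmetic: refuse, rather than wrap,
     * when nmemb * size exceeds SIZE_MAX. */
    static void *alloc_array(size_t nmemb, size_t size)
    {
        size_t bytes;

        if (__builtin_mul_overflow(nmemb, size, &bytes))
            return NULL;                 /* would have wrapped around */
        return malloc(bytes);
    }

    int main(void)
    {
        void *p = alloc_array(SIZE_MAX / 2, 4);   /* wraps if unchecked */
        printf("allocation %s\n", p ? "succeeded" : "rejected (overflow)");
        free(p);
        return 0;
    }

An attacker-controlled count that silently wraps the multiplication is a classic route to an undersized buffer and a heap overflow; instrumentation turns that silent wrap into a detectable failure.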

Why hasn't the kernel adopted these technologies already? The Washington Post article puts the blame firmly on the development community, and on Linus Torvalds in particular. The culture of the kernel community, the article says, prioritizes performance and functionality over security and is unwilling to accept compromises, even when they are needed to improve the security of the kernel. There is some truth to this claim; the good news is that attitudes appear to be shifting as the scope of the problem becomes clear. Kees's talk was well received, and it clearly got developers thinking and talking about the issues.

The point that has been missed is that we do not just have a case of Linus fending off useful security patches. There simply are not many such patches circulating in the kernel community. In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream. Getting any large, intrusive patch set merged requires working with the kernel community, making the case for the changes, splitting the changes into reviewable pieces, dealing with review comments, and so on. It can be tiresome and frustrating, but it's how the kernel works, and it clearly results in a more generally useful, more maintainable kernel in the long run.

Almost nobody is doing that work to get new security technologies into the kernel. One might cite a "chilling effect" from the hostile reaction such patches can receive, but that is an inadequate answer: developers have managed to merge many changes over the years despite a difficult initial reaction. Few security developers are even trying.

Why aren't they trying? One fairly obvious answer is that almost nobody is being paid to try. Almost all of the work going into the kernel is done by paid developers and has been for many years. The areas that companies see fit to support get a lot of work and are well advanced in the kernel. The areas that companies think are not their problem are rather less so. The difficulties in getting support for realtime development are a clear case in point. Other areas, such as documentation, tend to languish as well. Security is clearly one of those areas. There are a lot of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.

There are signs that things might be changing a bit. More developers are showing interest in security-related issues, though commercial support for their work is still less than it should be. The reaction against security-related changes might be less knee-jerk negative than it used to be. Efforts like the Kernel Self Protection Project are starting to work on integrating existing security technologies into the kernel.

We have a long way to go, but, with some support and the right mindset, a lot of progress can be made in a short time. The kernel community can do amazing things when it sets its mind to it. With luck, the Washington Post article will help to provide the needed impetus for that sort of setting of mind. History suggests that we will eventually see this moment as a turning point, when we were finally embarrassed into doing work that has clearly needed doing for a while. Linux should not have a substandard security story for much longer.



A new Mindcraft moment?

Posted Nov 6, 2015 20:50 UTC (Fri) by PaXTeam (guest, #24616) [Link] (55 responses)

Jon, i'm not sure if you're just having bad days or whether your recent security-related articles are actually hitting rock bottom in quality (reminds me to respond to that utter nonsense you posted about us and the recent KS discussion). some points for you and other conspiracy-loving readers to consider:

1. this WP article was the 5th in a series of articles following the security of the internet from its beginnings to relevant topics of today. discussing the security of linux (or lack thereof) fits nicely in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim yourself for your recent pieces on the subject. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. however, silly comparisons to old crap like the Mindcraft studies and fueling conspiracies don't exactly help your case.

2. "We do a reasonable job of finding and fixing bugs."

let's start here. is this statement based on wishful thinking or cold hard facts you're going to share in your response? according to Kees, the lifetime of security bugs is measured in years. that's more than the lifetime of many devices people buy and use and ditch in that period.

3. "Problems, whether they are security-related or not, are patched quickly,"

some are, some aren't: let's not forget the recent NMI fixes that took over 2 months to trickle down to stable kernels, and we also have a user who has been waiting for over 2 weeks now: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 (FYI, the overflow plugin is the first one Kees is trying to upstream, imagine the shitstorm if bug reports are treated with this attitude, let's hope the btrfs guys are an exception, not the rule). anyway, two examples are not statistics, so once again, do you have numbers or is it all wishful thinking? (it's partly a trick question because you'll also have to explain how something gets to be determined to be security related, which as we all know is a messy business in the linux world)

4. "and the stable-update mechanism makes those patches available to kernel users."

except when it does not. and yes, i have numbers: grsec carries 200+ backported patches in our 3.14 stable tree.

5. "In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream."

you don't need to be shy about naming us, after all you did so elsewhere already. and we also explained the reasons why we have not pursued upstreaming our code: https://lwn.net/Articles/538600/ . since i don't expect you and your readers to read any of it, here's the tl;dr: if you want us to spend thousands of hours of our time to upstream our code, you will have to pay for it. no ifs no buts, that's how the world works, that's how >90% of linux code gets in too. i personally find it pretty hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free. and before someone brings up the CII, go check their mail archives, after some initial exploratory discussions i explicitly asked them about supporting this long drawn out upstreaming work and got no answers.

A new Mindcraft moment?

Posted Nov 6, 2015 21:39 UTC (Fri) by patrick_g (subscriber, #44470) [Link] (22 responses)

I'm not sure why you always feel the need to be aggressive and confrontational in your posts... but you're perfectly right about the fact that this upstreaming work will have to be paid for.
That's exactly the point of bojan's answer to your 2013 post: https://lwn.net/Articles/538658/

Money (aha) quote :

> I propose you spend none of your free time on this. Zero. I propose you get paid to do this. And well.

Nobody expects you to serve your code on a silver platter for free. The Linux Foundation and big companies using Linux (Google, Red Hat, Oracle, Samsung, etc.) should pay security specialists like you to upstream your patches.

A new Mindcraft moment?

Posted Nov 6, 2015 21:57 UTC (Fri) by nirbheek (subscriber, #54111) [Link] (7 responses)

> I'm not sure why you always feel the need to be aggressive and confrontational in your posts.

I would just like to point out that the way you phrased this makes your comment a tone argument[1][2]; you've (probably unintentionally) dismissed all of the parent's arguments by pointing at its presentation. The tone of PAXTeam's comment displays the frustration built up over the years with the way things work which I think should be taken at face value, empathized with, and understood rather than simply dismissed.

1. http://rationalwiki.org/wiki/Tone_argument
2. http://geekfeminism.wikia.com/wiki/Tone_argument

Cheers,

A new Mindcraft moment?

Posted Nov 7, 2015 0:55 UTC (Sat) by josh (subscriber, #17465) [Link] (2 responses)

It's also one reason, however, why most of that work has never quite managed to find its way into the upstream Linux kernel for most users to run, and why presentations like those at Kernel Summit have had such a hard time getting traction in the past. When the presentation, pitch, and general ability to collaborate are part of the standard process for getting code into the upstream kernel, commenting on it is not unreasonable, any more than it's unreasonable to comment on upstream's own failings at basic civility and decency.

A new Mindcraft moment?

Posted Nov 7, 2015 1:21 UTC (Sat) by PaXTeam (guest, #24616) [Link] (1 responses)

> [...]commenting on it is not unreasonable, any more than it's unreasonable to comment on upstream's own failings at basic civility and decency.

why, is upstream known for its basic civility and decency? have you even read the WP post under discussion, never mind past lkml traffic?

A new Mindcraft moment?

Posted Nov 7, 2015 5:37 UTC (Sat) by josh (subscriber, #17465) [Link]

I think you read my comment as the inverse of what I said: I was explicitly stating that upstream is *not* known for its civility or decency, and that it's not unreasonable to comment on that either.

No Argument

Posted Nov 7, 2015 5:34 UTC (Sat) by gmatht (guest, #58961) [Link] (3 responses)

To be a tone argument Patrick_G would have to be arguing. We seem to be all in agreement that more funding is needed for security. Unless someone is a funding manager here, all we can really do is argue over whether PaXTeam's negative tone or Corbet's positive tone is more appropriate. Arguably PaXTeam should keep the confrontational tone on LKML where it belongs...

No Argument

Posted Nov 7, 2015 6:09 UTC (Sat) by josh (subscriber, #17465) [Link] (2 responses)

> Arguably PaXTeam should keep the confrontational tone on LKML where it belongs...

Please don't; it doesn't belong there either, and it especially doesn't need a cheering section as the tech press (LWN generally excepted) tends to provide.

OK, but I was thinking of Linus Torvalds

Posted Nov 8, 2015 8:36 UTC (Sun) by gmatht (guest, #58961) [Link] (1 responses)

I don't disagree, but I imagine Linus Torvalds has done more to set the tone on LKML than anyone from the "Tech Press".

OK, but I was thinking of Linus Torvalds

Posted Nov 8, 2015 16:11 UTC (Sun) by pbonzini (subscriber, #60935) [Link]

I'm not sure how you can set the tone of LKML, considering that no one is probably reading LKML, and people as well as practices vary wildly across the various subsystem mailing lists.

A new Mindcraft moment?

Posted Nov 6, 2015 22:43 UTC (Fri) by PaXTeam (guest, #24616) [Link] (13 responses)

nice way to derail a discussion but last i checked, you don't have exclusive rights to be aggressive and confrontational (you know, heed your own advice and all that :). as for your actual complaint, yes, i'm getting fed up with people writing about our work without doing any fact checking (look at how many articles here on lwn are grsec related vs. how many times Jon has ever asked us anything, it'd be a divide-by-zero error if you get what i mean). then we have a *rare* piece of actual journalism where the author went out of his way to do said fact checking (and not just one side, imagine that) and what does he get here from the faithful followers? someone has to stop the nonsense, and the faithful will have to face reality: this is 2015AD already and linux *still* has a security problem and no, it's not going to fix itself unless actual money (and not empty words) is thrown at it. nothing whatsoever happened since 2013 and i have no high hopes this time either.

A new Mindcraft moment?

Posted Nov 6, 2015 23:00 UTC (Fri) by pr1268 (guest, #24648) [Link] (3 responses)

> it's not going to fix itself unless actual money (and not empty words) is thrown at it.

Why must you assume only money will fix this problem? Yes, I agree more resources should be spent on fixing Linux kernel security issues, but don't assume someone giving an organization (ahem, PaXTeam) money is the only solution. (Not meant to impugn PaXTeam's security efforts.)

The Linux development community may have had the wool pulled over its collective eyes with respect to security issues (either real or perceived), but simply throwing money at the problem won't fix this.

And yes, I do realize the commercial Linux distros do lots (most?) of the kernel development these days, and that implies indirect monetary transactions, but it's a lot more involved than just that.

A new Mindcraft moment?

Posted Nov 7, 2015 0:36 UTC (Sat) by PaXTeam (guest, #24616) [Link]

throwing money at the problem is a figure of speech, it means to actually pay the right people who know what they are doing to enable them to do what they're doing best. whether it's me personally or not is not the relevant point, but i happen to have the experience of the past 15 years to know how much time it takes to do research and develop these kinds of defense mechanisms and i'm telling you that it's no longer your weekend project that i'm sure many people would otherwise be glad to spend on this for free (as i have done so myself for all this time). the thousands of hours mentioned above was not a figure of speech however, that's what it would (and will, if this new kernel self protection thing is a serious effort) take to bring grsec features into mainline and that's not counting my most recent defense mechanism (RAP) that i developed over the past 4 years (which alone was a few thousand unpaid hours). so yes, no money - no security, this has not been a hobbyist's world for a long time now and i guess most people didn't notice only because of the likes of us who had stretched themselves out to unimaginable proportions but all that is coming to an end now. interesting times ahead for sure.

A new Mindcraft moment?

Posted Nov 7, 2015 7:34 UTC (Sat) by nix (subscriber, #2304) [Link] (1 responses)

Throwing money at PaXTeam is pointless -- they won't merge changes from someone who refuses to divulge their identity to anyone.

A new Mindcraft moment?

Posted Nov 7, 2015 9:49 UTC (Sat) by PaXTeam (guest, #24616) [Link]

2def2ef2ae5f3990aabdbe8a755911902707d268 says otherwise ;). also the person doing the work doesn't have to be the one who signs off on it, this already happens with company contributions in fact (and happened to bits and pieces of PaX if you check the git logs).
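
(For readers unfamiliar with the mechanics being referred to here: the kernel's commit trailers already separate who wrote a change from who carried it upstream. A purely illustrative example, with hypothetical names and subject:

    Author: A. Contributor <a.contributor@example.org>

        fix signedness bug in frobnicate()

        Signed-off-by: A. Contributor <a.contributor@example.org>
        Signed-off-by: Some Maintainer <maintainer@example.org>

The Author field and the first Signed-off-by credit the person who wrote the change; each later Signed-off-by records someone in the submission chain, which is how work can land upstream without its author doing the submitting.)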

A new Mindcraft moment?

Posted Nov 6, 2015 23:13 UTC (Fri) by dowdle (subscriber, #659) [Link] (1 responses)

I guess because they talked to you that makes it well researched? And if Jon had only talked to you, his would have been too? I've interacted with journalists (mostly at the local level) a few times over the years... and they very rarely have a clue about what they are writing about. That is understandable because they can't be experts on the wide range of topics they cover. They should be experts on writing, but other than that, they get a lot of things wrong. Anyone who has been part of an article getting written is aware that while the bulk of it might be correct, there will always be some mistakes... or at least in any article about a complex subject... unless of course the author happens to be an expert on the subject themselves. That goes for this article. While you might be happy with the fact that they talked to you and got "both sides" when a lot of the time that doesn't happen... it also contained quite a few groan-worthy statements.

I think you definitely agree with the gist of Jon's argument... not enough focus has been given to security in the Linux kernel... the article gets that part right... money hasn't been going towards security... and now it needs to. Aren't you glad?

A new Mindcraft moment?

Posted Nov 7, 2015 1:37 UTC (Sat) by PaXTeam (guest, #24616) [Link]

> I guess because they talked to you that makes it well researched?

they talked to spender, not me personally, but yes, this side of the coin is well represented by us and others who were interviewed. the same way Linus is a good representative of, well, his own pet project called linux.

> And if Jon had only talked to you, his would have been too.

given that i'm the author of PaX (part of grsec) yes, talking to me about grsec matters makes it one of the best ways to research it. but if you know of someone else, be my guest and name them, i'm pretty sure the recently formed kernel self-protection folks would be dying to engage them (or not, i don't think there's a sucker out there with thousands of hours of free time on their hands).

> [...]it also contained quite a few of groan-worthy statements.

nothing is perfect but considering the audience of the WP, this is one of the better journalistic pieces on the topic, regardless of how you and others don't like the sorry state of linux security exposed in there. if you want to discuss more technical details, nothing stops you from talking to us ;).

speaking of your complaints about journalistic qualities, since a previous LWN article saw fit to include several typical dismissive claims by Linus about the quality of unspecified grsec features with no evidence of what experience he had with the code and how recent it was, how come we didn't see you or anyone else complaining about the quality of that article?

> Aren't you glad?

no, or not yet anyway. i've heard lots of empty phrases over the years and nothing ever manifested, or worse, all the money has gone to the pointless exercise of fixing individual bugs and the related circus (that Linus rightfully despises FWIW).

A new Mindcraft moment?

Posted Nov 7, 2015 0:18 UTC (Sat) by bojan (subscriber, #14302) [Link] (4 responses)

Let me ask the obvious question here. You have been trying to sell your security solutions for the Linux kernel to the big players (Red Hat, Oracle, Google, etc.) since 2013, but they all turned you down, correct?

A new Mindcraft moment?

Posted Nov 8, 2015 13:06 UTC (Sun) by k3ninho (subscriber, #50375) [Link] (3 responses)

I suspect that the rotten thing in Denmark is more rotten than that. Kees' point is that security attitudes have to change. I keep looking at the business I'm in and think that 'good enough' isn't defensive enough. I'd prefer that reliable & repeatable testing you can evaluate for yourself (let's call that 'science') be used to show that a patch taking the code from one version to the next keeps old stuff working and improves on it with new features. There's not really enough of that in contemporary software production habits, because developers scratch only the itches that they have.

Right now we've got developers from big names saying that doing all that the Linux ecosystem does *safely* is an itch that they have. Unfortunately, the surrounding cultural attitude of developers is to hit functional goals, and occasionally performance goals. Security goals are often overlooked. Ideally, the culture would shift so that we make it difficult to follow insecure habits, patterns or paradigms -- that is a task that will take a sustained effort, not merely the upstreaming of patches.

Regardless of the culture, these patches will go upstream eventually anyway because the ideas that they embody are now timely. I can see a way to make it happen: Linus will accept them when a big end-user (say, Intel, Google, Facebook or Amazon) delivers stuff with notes like 'here's a set of improvements, we're already using them to solve this kind of problem, here's how everything will remain working because $evidence, note carefully that you're staring down the barrels of a fork because your tree is now evolutionarily disadvantaged'. It's a game and can be gamed; I'd prefer that the community shepherds users to follow the pattern of declaring problem + solution + functional test evidence + performance test evidence + security test evidence.

K3n.

A new Mindcraft moment?

Posted Nov 9, 2015 6:49 UTC (Mon) by jospoortvliet (guest, #33164) [Link] (2 responses)

I seriously doubt it will happen that way. I bet it will go exactly as Linus wants: some money will be put on the table and one or more people will work on getting the improvements from Pax and friends in, piece by piece, fixing performance issues, skipping those Linus won't like due to their impact (or rewriting them to lower that impact) and all that in the usual, incremental way. Why? Because decades of development have shown it is the best way of doing it.

And about that fork barrel: I'd argue it is the other way around. Google forked and lost already.

A new Mindcraft moment?

Posted Nov 12, 2015 6:25 UTC (Thu) by Garak (guest, #99377) [Link] (1 responses)

> And about that fork barrel: I'd argue it is the other way around. Google forked and lost already.

What did they lose exactly? They made a lot of money in the business of forking. The Ponzi scheme may be about to collapse as the elephant dance nears conclusion, but somehow I don't expect they'll lose any significant fraction of all the money they made getting to this point with the NSA's help.

> The fact that vendors seem to have little interest in getting security fixes to their customers is danced around like a huge elephant in the room.

A new Mindcraft moment?

Posted Nov 23, 2015 6:33 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

I was arguing quite simply: Google lost the connection with upstream before, had to work hard to get it back, and now has lost it again. It will cost time and money to get it back again. Yes, they still do just fine overall; I wouldn't worry about their stockholders.

A new Mindcraft moment?

Posted Nov 7, 2015 3:20 UTC (Sat) by corbet (editor, #1) [Link] (1 responses)

"this is 2015AD already and linux *still* has a security problem and no, it's not going to fix itself unless actual money (and not empty words) is thrown at it."

So I must confess to a certain amount of confusion. I could swear that the article I wrote said exactly that, but you've put a fair amount of effort into flaming it...?

A new Mindcraft moment?

Posted Nov 8, 2015 1:34 UTC (Sun) by PaXTeam (guest, #24616) [Link]

then i am just as confused as you are as that quote was in response to someone else, not your article ;). speaking of which we now have a blog about this whole WP post/kernel security/etc topic at https://forums.grsecurity.net/viewtopic.php?f=7&t=4309 .

A new Mindcraft moment?

Posted Nov 6, 2015 22:52 UTC (Fri) by flussence (guest, #85566) [Link]

> i personally find it pretty hypocritical that well-paid kernel developers are bitching about our unwillingness and inability to serve them our code on a silver platter for free.

I personally think you and Nick Krause share opposite sides of the same coin. Programming ability and basic civility.

A new Mindcraft moment?

Posted Nov 6, 2015 22:59 UTC (Fri) by dowdle (subscriber, #659) [Link] (1 responses)

As they say... accept yes for an answer... and quit complaining about it. You won.

A new Mindcraft moment?

Posted Nov 7, 2015 0:16 UTC (Sat) by rahvin (guest, #16953) [Link]

I simply don't get the tone of the response. The article is clear that more money needs to go to security and that it needs to be a priority. But one of the ways you can get paid for doing work like this in the community is to establish that you are capable of doing all the work of getting patches submitted by doing just that.

I hope I'm wrong, but a hostile attitude isn't going to help anyone get paid. A time like this, when there is demand for something you seem to be an "expert" at, is exactly when you display cooperation and willingness to participate, because it's an opportunity. I'm relatively shocked that someone doesn't get that, but I'm older and have seen a few of these opportunities in my career and exploited the hell out of them. You only get a few of these in the average career, and a handful at the most.

Sometimes you have to invest in proving your skills, and this is one of those moments. It appears the kernel community may finally take this security lesson to heart and embrace it as, in the words of the article, a "Mindcraft moment". This is an opportunity for developers that may want to work on Linux security. Some will exploit the opportunity and others will thumb their noses at it. In the end those developers that exploit the opportunity will prosper from it.

I feel old even having to write that.

A new Mindcraft moment?

Posted Nov 7, 2015 1:00 UTC (Sat) by josh (subscriber, #17465) [Link] (18 responses)

> after some initial exploratory discussions i explicitly asked them about supporting this long drawn out upstreaming work and got no answers.

Perhaps there's a chicken and egg problem here, but when seeking out and funding people to get code upstream, it helps to select people and groups with a history of being able to get code upstream.

It's perfectly reasonable to prefer working out of tree, providing the ability to develop impressive and critical security advances unconstrained by upstream requirements. That's work someone might also wish to fund, if that meets their needs.

A new Mindcraft moment?

Posted Nov 7, 2015 1:28 UTC (Sat) by PaXTeam (guest, #24616) [Link] (17 responses)

it's probably worth reading those mail archives instead of speculating about them ;). case in point, it was *them* who suggested that they would not fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence. obviously i won't spend time to write up a begging proposal just to be told that 'no sorry, we do not fund multi-year projects at all'. that's something that one should be told in advance (or heck, be part of some public rules so that others will know the rules too). as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?

A new Mindcraft moment?

Posted Nov 7, 2015 19:12 UTC (Sat) by jejb (subscriber, #6654) [Link] (16 responses)

> it's probably worth reading those mail archives instead of speculating about them ;)

You make this argument (implying you do research and Josh doesn't) and then fail to support it with any citation. It would be much more convincing if you give up on the Onus probandi rhetorical fallacy and actually cite facts.

> case in point, it was *them* who suggested that they would not fund out-of-tree work but would consider funding upstreaming work, except when pressed for the details, all i got was silence.

For those following along at home, this is the relevant set of threads:

http://lists.coreinfrastructure.org/pipermail/cii-discuss...

A quick precis is that they told you your project was unhealthy because the code was never going upstream. You told them it was because of kernel developers' attitudes, so they should fund you anyway. They told you to submit a grant proposal; you whined more about the kernel attitudes, and eventually even your apologist told you that submitting a proposal might be the best thing to do. At that point you went silent, not vice versa as you imply above.

> obviously i won't spend time to write up a begging proposal just to be told that 'no sorry, we do not fund multi-year projects at all'. that's something that one should be told in advance (or heck, be part of some public rules so that others will know the rules too).

You appear to have a fatally flawed grasp of how public funding works. If you don't tell people why you want the money and how you'll spend it, they're unlikely to disburse it. Saying "I'm brilliant and I know the problem, now hand over the cash" doesn't even work for most academics who have a solid reputation in the field, which is why most of them spend >30% of their time writing grant proposals.

> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?

jejb@jarvis> git log|grep -i 'Author: pax.*team'|wc -l
1

Stellar, I must say. And before you light off on those who have misappropriated your credit, please remember that getting code upstream on behalf of reluctant or incapable actors is a hugely valuable and time consuming skill and one of the reasons groups like Linaro exist and are well funded. If more of your stuff does go upstream, it will be because of the not inconsiderable efforts of other people in this area.

You now have a business model selling non-upstream security patches to customers. There's nothing wrong with that, it's a fairly usual first stage business model, but it does rather depend on patches not being upstream in the first place, calling into question the earnestness of your attempt to put them there.

Now here's some free advice in my field, which is assisting companies align their businesses in open source: selling out-of-tree patches is always an eventual failure, particularly with the kernel, because if the functionality is that useful, it gets upstreamed or reinvented in your despite, leaving you with nothing to sell. If your business plan B is selling expertise, you have to bear in mind that it's going to be a hard sell when you've no out-of-tree differentiator left and git history denies that you had anything to do with the in-tree patches. In fact "crazy security person" will become a self-fulfilling prophecy. The advice? It was obvious to everyone else who read this, but for you, it's: do the upstreaming yourself before it gets done for you. That way you have a legitimate historical claim to Plan B, and you might even have a Plan A selling a rollup of upstream-track patches integrated and delivered before the distributions get around to it. Even your application to the CII couldn't then be dismissed on the grounds that your work wasn't going anywhere. Your alternative is to continue playing the role of Cassandra and probably suffer her eventual fate.

A new Mindcraft moment?

Posted Nov 7, 2015 23:20 UTC (Sat) by PaXTeam (guest, #24616) [Link] (14 responses)

i specifically meant this post: http://lists.coreinfrastructure.org/pipermail/cii-discuss... and my explicit question:

> Second, for the potentially viable pieces this would be a multi-year
> full time job. Is the CII willing to fund projects at that level? If not
> we all would end up with lots of unfinished and partially broken features.

please show me the answer to that question. without a definitive 'yes' there is no point in submitting a proposal because this is the time frame that in my opinion the job will take and any proposal with that requirement would be shot down immediately and be a waste of my time. and i stand by my claim that such simple basic requirements should be public information.

> Stellar, I must say.

"Lies, damned lies, and statistics". you realize there's more than one way to get code into the kernel? how about you use your git-fu to find all the bugreports/suggested fixes that went in due to us? as for specifically me, Greg explicitly banned me from future contributions via af45f32d25cc1 so it's no wonder i don't send patches directly in (and that one commit you found that went in despite said ban is actually a very bad example because it is also the one that Linus censored for no good reason and made me decide to never send security fixes upstream until that practice changes).

> You now have a business model selling non-upstream security patches to customers.

now? we've had paid sponsorship for our various stable kernel series for 7 years. i would not call it a business model though as it hasn't paid anyone's bills.

> [...]calling into question the earnestness of your attempt to put them there.

i must be missing something here but what attempt? i've never in my life tried to submit PaX upstream (for all the reasons discussed already). the CII mails were exploratory to see how serious that whole organization is about actually securing core infrastructure. in a sense i've got my answers, there's nothing more to the story.

as for your free advice, let me reciprocate: complex problems don't solve themselves. code solving complex problems doesn't write itself. people writing code solving complex problems are few and far between, as you will find out in short order. such people (domain experts) don't work for free, with few exceptions like ourselves. biting the hand that feeds you will only end you up in hunger.

PS: since you're so sure about kernel developers' ability to reimplement our code, maybe look at what parallel features i still maintain in PaX despite vanilla having a 'totally-not-reinvented-here' implementation and try to understand the reason. or just look at all the CVEs that affected say vanilla's ASLR but did not affect mine.

PPS: Cassandra never wrote code, i do. criticizing the sorry state of kernel security is a side project when i'm bored or just waiting for the next kernel to compile (i wish LTO was more efficient).

A new Mindcraft moment?

Posted Nov 8, 2015 2:28 UTC (Sun) by jejb (subscriber, #6654) [Link] (11 responses)

>i specifically meant this post: http://lists.coreinfrastructure.org/pipermail/cii-discuss... and my explicit question:

In other words, you tried to define their process for them ... I can't think why that wouldn't work.

> "Lies, damned lies, and statistics".

The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument. I posted a one line command anyone could run to get the number of patches you've authored in the kernel. Why don't you post an equivalent that gives figures you like more?

> i've never in my life tried to submit PaX upstream (for all the reasons discussed already).

So the master plan is to demonstrate your expertise by the number of patches you haven't submitted? great plan, world domination beckons, sorry that one got away from you, but I'm sure you won't let it happen again.

A new Mindcraft moment?

Posted Nov 8, 2015 2:56 UTC (Sun) by PaXTeam (guest, #24616) [Link] (9 responses)

> In other words, you tried to define their process for them

what? since when does asking a question define anything? isn't that how we find out what someone else thinks? isn't that what *they* have that webform (never mind the mailing lists) for as well? in other words you admit that my question was not actually answered.

> The problem with ad hominem attacks is that they're singularly ineffective against a transparently factual argument.

you didn't have an argument to begin with, that's what i explained in the part you carefully chose not to quote. i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove, as they say even in kernel circles, code speaks, bullshit walks. you can look at mine and decide what i can or cannot do (not that you have the knowledge to understand most of it, mind you). that said, there're clearly other more capable people who have done so and decided that my/our work was worth something, else nobody would have been feeding off of it for the past 15 years and still counting. and as unimaginable as it may appear to you, life doesn't revolve around the vanilla kernel, not everyone's dying to get their code in there especially when it means to put up with such silly hostility on lkml that you now also demonstrated here (it's ironic how you came to the defense of josh who specifically asked people not to bring that infamous lkml style here. nice job there James.). as for world domination, there're many ways to achieve it and something tells me that you're clearly out of your league here since PaX has already achieved that. you're running such code that implements PaX features as we speak.

A new Mindcraft moment?

Posted Nov 8, 2015 16:52 UTC (Sun) by jejb (subscriber, #6654) [Link] (8 responses)

> i'm not here to defend myself against your clearly idiotic attempts at proving whatever you're trying to prove, as they say even in kernel circles, code speaks, bullshit walks.

I posted the one-line git script giving your authored patches in response to this original request by you (this one, just in case you've forgotten: http://lwn.net/Articles/663591/):

> as for getting code upstream, how about you check the kernel git logs (minus the stuff that was not properly credited)?

I take it, by the way you've shifted ground in the previous threads, that you wish to withdraw that request?

A new Mindcraft moment?

Posted Nov 8, 2015 19:31 UTC (Sun) by PaXTeam (guest, #24616) [Link] (7 responses)

your script is wrong because writing a patch != submitting a patch. as a kernel maintainer you should know better. really, if you're still frustrated and are just trolling here because not that long ago spender exposed your lack of expertise in matters you claim to be a CTO of, then how about you spend your time on educating yourself instead? it will come in handy in the near future as more and more code demolishing the added value of your employer's out-of-tree code enters the vanilla tree. as a self-confessed person "assisting companies align their businesses in open source" you might as well be looking for a new job soon.

A new Mindcraft moment?

Posted Nov 8, 2015 22:31 UTC (Sun) by pizza (subscriber, #46) [Link] (6 responses)

His script is wrong? Fine -- Put up, or shut up.

Please provide one that's not wrong, or less wrong. It will take less time than you've already wasted here.

A new Mindcraft moment?

Posted Nov 8, 2015 22:49 UTC (Sun) by PaXTeam (guest, #24616) [Link] (5 responses)

heh, what's up with James, is he running out of steam that he needs help now? :)

anyway, since it's you guys who have a bee in your bonnet, let's test your level of intelligence too. first figure out my email address and project name then try to find the commits that say they come from there (it brought back some memories from 2004 already, how time flies! i'm surprised i actually managed to accomplish this much with explicitly not trying, imagine if i did :). it's an incredibly complex task so by accomplishing it you'll prove yourself to be the top dog here on lwn, whatever that's worth ;).

A new Mindcraft moment?

Posted Nov 8, 2015 23:25 UTC (Sun) by pizza (subscriber, #46) [Link]

Again, put up, or shut up.

*shrug* Or don't; you're only sullying your own reputation.

A new Mindcraft moment?

Posted Nov 9, 2015 7:08 UTC (Mon) by jospoortvliet (guest, #33164) [Link]

Honestly, you make yourself look amazingly immature by just attacking people rather than responding reasonably. I guess it explains why you can't get code into Linux - it requires social skills and finesse that outweigh one's ego.

A new Mindcraft moment?

Posted Nov 9, 2015 11:38 UTC (Mon) by hkario (subscriber, #94864) [Link]

congratulations, I'm now sure that you are one of those "crazy security people" and I'm not surprised that Linus doesn't want to work with you

I wouldn't either

A new Mindcraft moment?

Posted Nov 12, 2015 2:09 UTC (Thu) by jschrod (subscriber, #1646) [Link]

congrats; your posts made it completely clear why you're part of the problem, and not part of the solution.

A new Mindcraft moment?

Posted Nov 12, 2015 8:50 UTC (Thu) by nwmcsween (guest, #62367) [Link]

I held the grsec/pax team in high regard due to the technical work they did, but after seeing this I understand why grsec/pax isn't currently part of linux in some form. Realistically you can have the work you've done reimplemented by someone else, or you can stop complaining and put some effort into trying to secure funding to do it yourself.

A new Mindcraft moment?

Posted Nov 8, 2015 3:38 UTC (Sun) by PaXTeam (guest, #24616) [Link]

by the way James, just as a matter of fact checking, your diatribe above has surely nothing to do with the fact that you're the CTO of a certain company with a vested interest (and out-of-tree kernel code just for kicks but we now all know where your business advice comes from ;) in providing secure hosting which also as a matter of fact would not consider our work to be in direct competition with your products? as they say, when it rains, it pours.

A new Mindcraft moment?

Posted Nov 12, 2015 13:47 UTC (Thu) by nix (subscriber, #2304) [Link] (1 responses)

> Greg explicitly banned me from future contributions via af45f32d25cc1

Ah. I thought my memory wasn't failing me. Compare to PaXTeam's response to <http://lwn.net/Articles/663612/>.

PaXTeam is not averse to outright lying if it means he gets to appear right, I see. Maybe PaXTeam's memory is failing, and this apparent contradiction is not a brazen lie, but given that the two posts were made within a day of each other I doubt it. (PaXTeam's total unwillingness to assume good faith in others deserves some reflection. Yes, I *do* think he's lying by implication here, and doing so when there's almost nothing at stake. God alone knows what he's willing to stoop to when something *is* at stake. Gosh I wonder why his fixes aren't going upstream very fast.)

A new Mindcraft moment?

Posted Nov 12, 2015 14:11 UTC (Thu) by PaXTeam (guest, #24616) [Link]

if you'd extended your attention span towards the remaining part of the quoted sentence, you'd have found:

> and that one commit you found that went in despite said ban

also, someone's ban doesn't mean it'll translate into someone else's execution of that ban, as is clear from the commit in question. it's somewhat sad that it takes a security fix to expose the fallacy of this policy though. the rest of your pithy ad hominem speaks for itself better than i ever could ;).

A new Mindcraft moment?

Posted Nov 12, 2015 15:58 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

just wanted to say that the comment was really well written, thank you.

A new Mindcraft moment?

Posted Nov 7, 2015 19:01 UTC (Sat) by cwillu (guest, #67268) [Link]

re: http://thread.gmane.org/gmane.comp.file-systems.btrfs/49500 ("overflow in inode.c, file.c"):

I don't see this message in my mailbox, so presumably it got swallowed.

A new Mindcraft moment?

Posted Nov 7, 2015 22:33 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

How can the article be well-researched and fair when not one of the security incidents listed as examples has anything relevant to do with the Linux kernel?

You are aware that it's entirely possible that everyone is wrong here, right?

That the kernel maintainers need to focus more on security, that the article was biased, that you're irresponsible to decry the state of security and do nothing to help, and that your patchsets wouldn't help that much and are the wrong direction for the kernel? That just because the kernel maintainers aren't 100% right it doesn't mean you are?

A new Mindcraft moment?

Posted Nov 9, 2015 9:50 UTC (Mon) by njd27 (subscriber, #5770) [Link] (7 responses)

> 1. this WP article was the 5th in a series of articles following the security of the internet from its beginnings to relevant topics of today. discussing the security of linux (or lack thereof) fits nicely in there. it was also a well-researched article with over two months of research and interviews, something you can't quite claim yourself for your recent pieces on the subject. you don't like the facts? then say so. or even better, do something constructive about them like Kees and others have been trying. however, silly comparisons to old crap like the Mindcraft studies and fueling conspiracies don't exactly help your case.

I think you have him backwards there. Jon is comparing this to Mindcraft because he thinks that despite being unpalatable to a lot of the community, the article might in fact contain a lot of truth.

A new Mindcraft moment?

Posted Nov 9, 2015 14:03 UTC (Mon) by corbet (editor, #1) [Link] (6 responses)

That is indeed the point I was trying to make. A few people seem to have missed that; I guess I can only conclude that I wrote it poorly (rather than, say, write it off as a deliberate misreading) and apologize for the confusion.

A new Mindcraft moment?

Posted Nov 9, 2015 15:13 UTC (Mon) by spender (guest, #23067) [Link]

"The motivations behind the article are hard to know"

"There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true"

Just as you criticized the article for mentioning Ashley Madison even though in the very first sentence of the following paragraph it mentions it didn't involve the Linux kernel, you can't give credence to conspiracy theories without incurring the same criticism (in other words, you can't play the Glenn Beck "I'm just asking the questions here!" whose "questions" fuel the conspiracy theories of others). Much like mentioning Ashley Madison as an example for non-technical readers about the prevalence of Linux in the world, if you're criticizing the mention then should not likening a non-FUD article to a FUD article also deserve criticism, especially given the rosy, self-congratulatory picture you painted of upstream Linux security?

As the PaX Team pointed out in the initial post, the motivations aren't hard to know -- you made no mention at all about it being the 5th in a long-running series following a pretty predictable time trajectory.

No, we didn't miss the overall analogy you were trying to make, we just don't think you can have your cake and eat it too.

-Brad

A new Mindcraft moment?

Posted Nov 9, 2015 15:18 UTC (Mon) by karath (subscriber, #19025) [Link]

I believe that I understood it as you meant it. I planned to write a comment in support and then saw the steaming pile that made up a large portion of the comments and thought again...

A new Mindcraft moment?

Posted Nov 9, 2015 17:06 UTC (Mon) by k3ninho (subscriber, #50375) [Link]

>That is indeed the point I was trying to make. A few people seem to have missed that; I guess I can only conclude that I wrote it poorly...

It's gracious of you not to blame your readers. I figure they're a fair target: there's that line about those ignorant of history being condemned to re-implement Unix -- as your readers are! :-)

K3n.

A new Mindcraft moment?

Posted Nov 9, 2015 18:43 UTC (Mon) by bojan (subscriber, #14302) [Link] (2 responses)

I totally understood your point. And I agree with it.

Unfortunately, I understand neither the "security" folks (PaXTeam/spender) nor the mainstream kernel folks in terms of their attitude. I confess I have no technical capability at all on any of these topics, but if they all decided to work together, instead of having endless and pointless flame wars and blame-game exchanges, a lot of the stuff would have been done already. And all the while everyone involved might have made another big pile of money on the stuff. They all seem to want a better Linux kernel, so I've got no idea what the problem is. It seems that nobody is willing to yield any of their positions even a little bit. Instead, both sides appear to be bent on trying to insult their way into forcing the other side to give up. Which, of course, never works - it just causes more pushback.

Perplexing stuff...

A new Mindcraft moment?

Posted Nov 9, 2015 19:00 UTC (Mon) by sfeam (subscriber, #2841) [Link] (1 responses)

Interpersonal communication is certainly a major issue here, but even setting that aside there is a problem of comparing apples and oranges. The case for inclusion in the kernel rests on a cost/benefit argument. The cost is in part quantifiable in terms of performance (%decrease in throughput, %increase in memory usage, etc). But how do you quantify the benefit on a comparable scale? If you place an extremely high value on security, you come at this thinking that any improvement is worth the cost. If you place a less extreme value on security then the argument runs aground as just seen earlier in this thread: is an XX% slowdown a small price to pay or is it an unacceptable regression?
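
(One way to force the two onto a single scale, assuming for the sake of argument that the quantities could be estimated at all, is an expected-loss comparison: adopt a hardening feature when (reduction in probability of compromise) x (cost of a compromise) exceeds its performance and maintenance cost. The deadlock described above is then easy to state: the two camps plug wildly different numbers into the first two factors, so the same XX% slowdown comes out as either cheap insurance or an unacceptable regression.)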

A new Mindcraft moment?

Posted Nov 9, 2015 19:44 UTC (Mon) by bojan (subscriber, #14302) [Link]

I don't think there are absolute answers to these questions.

Take a scientific computational cluster with an "air gap", for instance. You'd probably want most of the security stuff turned off on it to gain maximum performance, because you can trust all users. Now take a few billion mobile phones that may be difficult or slow to patch. You'd probably want to kill many of the exploit classes there, if those devices can still run reasonably well with most security features turned on.

So, it's not either/or. It's probably "it depends". But, if the stuff isn't there for everyone to compile/use in the vanilla kernel, it will be more difficult to make it part of everyday choices for distributors and users.

A new Mindcraft moment?

Posted Nov 6, 2015 22:20 UTC (Fri) by artem (subscriber, #51262) [Link] (6 responses)

> We must instead realize that we will never fix them all and focus on making bugs harder to exploit.

How sad. This Dijkstra quote comes to mind immediately:

Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."

A new Mindcraft moment?

Posted Nov 7, 2015 0:35 UTC (Sat) by roc (subscriber, #30627) [Link] (4 responses)

Some software systems are necessarily so complicated that *no-one* is skilled enough to just "get it right".

I guess that fact was too unpleasant to fit into Dijkstra's world view.

A new Mindcraft moment?

Posted Nov 7, 2015 10:52 UTC (Sat) by ms (subscriber, #41272) [Link] (3 responses)

> Some software systems are necessarily so complicated that *no-one* is skilled enough to just "get it right".

Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum and really proofs are the only way forwards. I'm no security expert; my field is all distributed systems. I understand and have implemented Paxos and I believe I can explain how and why it works to anyone. But I'm currently working on some algorithms combining Paxos with a bunch of variations on VectorClocks and reasoning about causality and consensus. No test is sufficient because there are infinite interleavings of events, and my head just couldn't cope with working on this either at the computer or on paper - I found I could not intuitively reason about this stuff at all. So I started defining the properties I wanted and, step by step, proving why each of them holds. Without my notes and proofs I can't even explain to myself, let alone anyone else, why this thing works. I find it both completely obvious that this can happen and utterly terrifying - the maintenance cost of these algorithms is now an order of magnitude higher.
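
(For concreteness, since the structure matters to the argument: a vector clock is one logical counter per node, and the causality in question is a partial order over those vectors; two events are either ordered or concurrent, and it is the concurrent case that makes the interleavings explode beyond what tests can enumerate. A minimal sketch in C, for a hypothetical three-node system with all the Paxos machinery left out:

    #include <stdio.h>

    #define NODES 3

    typedef struct { unsigned c[NODES]; } vclock;

    /* Local event on `node`: advance its own counter. */
    static void tick(vclock *v, int node) { v->c[node]++; }

    /* Message receipt at `node`: pointwise max, then a local tick. */
    static void merge(vclock *dst, const vclock *msg, int node)
    {
        for (int i = 0; i < NODES; i++)
            if (msg->c[i] > dst->c[i])
                dst->c[i] = msg->c[i];
        tick(dst, node);
    }

    /* -1: a happened-before b; 1: b happened-before a; 0: equal or
     * concurrent.  Concurrency is what defeats exhaustive testing. */
    static int compare(const vclock *a, const vclock *b)
    {
        int le = 1, ge = 1;
        for (int i = 0; i < NODES; i++) {
            if (a->c[i] > b->c[i]) le = 0;
            if (a->c[i] < b->c[i]) ge = 0;
        }
        return (le && !ge) ? -1 : (ge && !le) ? 1 : 0;
    }

    int main(void)
    {
        vclock n0 = {{0}}, n1 = {{0}};
        tick(&n0, 0);                        /* event on node 0 */
        tick(&n1, 1);                        /* independent event on node 1 */
        printf("%d\n", compare(&n0, &n1));   /* 0: concurrent */
        merge(&n1, &n0, 1);                  /* node 1 receives node 0's clock */
        printf("%d\n", compare(&n0, &n1));   /* -1: now causally ordered */
        return 0;
    }

The properties one ends up proving are statements like "compare() is consistent with message delivery order", which hold over all interleavings at once, whereas any test suite samples only finitely many of them.)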

A new Mindcraft moment?

Posted Nov 19, 2015 12:24 UTC (Thu) by Wol (subscriber, #4433) [Link] (2 responses)

> > Some software systems are necessarily so complicated that *no-one* is skilled enough to just "get it right".

> Indeed. And the interesting thing to me is that once I reach that point, tests are not sufficient - model checking at a minimum and really proofs are the only way forwards.

Or are you just using the wrong maths? Hobbyhorse time again :-) but to quote a fellow Pick developer ... "I often walk into a SQL development shop and see that wall - you know, the one with the huge SQL schema that no-one fully understands on it - and wonder how I can easily hold the entire schema for a Pick database of the same or greater complexity in my head".

But it's easy - by education I'm a Chemist, by interest a Physical Chemist (and by profession an unemployed programmer :-). And when I'm thinking about chemistry, I can ask myself "what is an atom made of" and think about things like the strong nuclear force. Next level up, how do atoms stick together and make molecules, and think about the electroweak force and electron orbitals, and how do chemical reactions occur. Then I think about how molecules stick together to make materials, and think about metals, and/or van der Waals, and stuff.

Point is, you need to *layer* stuff, and look at things, and say "how can I split parts off into 'black boxes' so at any one level I can assume the other levels 'just work'". For example, with Pick a FILE (table to you) stores a class - a collection of identical objects. One object per RECORD (row). And, same as relational, one attribute per FIELD (column). Can you map your relational tables to reality so easily? :-)

Going back THIRTY years, I remember a story about a guy who built little computer crabs that could quite happily scuttle around in the surf zone - because he didn't try to work out how to solve all the problems at once. Each of his (incredibly puny by today's standards - this is the 8080/Z80 era!) processors was set to process just a little bit of the problem, and there was no central "brain". But it worked... Maybe you should just write a bunch of small modules to solve each individual problem, and let the ultimate answer "just happen".

Cheers,
Wol

A new Mindcraft moment?

Posted Nov 19, 2015 19:28 UTC (Thu) by ksandstr (guest, #60862) [Link] (1 responses)

>Point is, you need to *layer* stuff, and look at things, and say "how can I split parts off into 'black boxes' so at any one level I can assume the other levels 'just work'".

To my understanding, this is exactly what a mathematical abstraction does. For example, in Z notation we'd construct schemas for the various modifying ("delta") operations on the base schema, and then argue about preservation of formal invariants, properties of the outcome, and transitivity of the operation when chained with itself or with the preceding aggregate schema composed of schemas A through O (for which these properties have already been argued).

The outcome is a set of operations that, executed in arbitrary order, result in a set of properties holding for the result and outputs. This proves the formal design correct (with caveats concerning scope, correspondence with the implementation [though that can be proven as well], and read-only ["xi"] operations).
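
A toy example of the shape of it (assuming the zed-csp LaTeX package; the Counter schemas are invented for illustration, not from any real spec):

    \documentclass{article}
    \usepackage{zed-csp}   % Z notation macros (assumes zed-csp is installed)

    \begin{document}

    % State schema: a bounded counter and its invariant.
    \begin{schema}{Counter}
      n : \nat
    \where
      n \leq 10
    \end{schema}

    % Delta operation: modifies the state; one then argues that it
    % preserves the invariant n <= 10 above.
    \begin{schema}{Increment}
      \Delta Counter
    \where
      n < 10 \\
      n' = n + 1
    \end{schema}

    % Xi operation: read-only; the state is unchanged by definition.
    \begin{schema}{Read}
      \Xi Counter \\
      out! : \nat
    \where
      out! = n
    \end{schema}

    \end{document}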

A new Mindcraft moment?

Posted Nov 20, 2015 11:23 UTC (Fri) by Wol (subscriber, #4433) [Link]

Except we have this amazing desire to explain/understand everything in terms of the lowest common denominator. Completely at odds with the *need* to abstract in order to understand.

Looking through the history of computing (and probably plenty of other fields too), you'll probably find that people "can't see the wood for the trees" more often that not. They dive into the detail and completely miss the big picture.

(Medicine, and interest of mine, suffers from that too - I remember somebody talking about the consultant wanting to amputate a gangrenous leg to save someone's life - oblivious to the fact that the patient was dying of cancer.)

Cheers,
Wol

A new Mindcraft moment?

Posted Nov 7, 2015 6:35 UTC (Sat) by dgc (subscriber, #6611) [Link]

My response to anyone quoting Dijkstra in contexts like this:

https://www.youtube.com/watch?v=VpuVDfSXs-g

(LCA 2015 - "Programming Considered Harmful")

FWIW, I think that this talk is very relevant to why writing secure software is so hard.

-Dave.

A new Mindcraft moment?

Posted Nov 7, 2015 5:49 UTC (Sat) by kunitz (subscriber, #3965) [Link] (3 responses)

I believe I'm qualified to comment here. I worked for some years on kernel code for the zd1211rw WLAN driver, and my day job is working on security at a multinational financial institution.

While we are spending millions on a multitude of security problems, kernel issues are not on our top-priority list. Honestly, I remember discussing a kernel vulnerability only once. The result of the analysis was that all our systems were running kernels older than the kernel that had the vulnerability.

But "patch management" is a real issue for us. Software must continue to work if we install security patches or update to new releases because of the end-of-life policy of a vendor. The revenue of the company is depending on the IT systems running. So "not breaking user space" is a security feature for us, because a breakage of one component of our several ten thousands of Linux systems will stop the roll-out of the security update.

Another problem is embedded software and firmware. These days almost all hardware systems include an operating system, often some Linux version, providing a full embedded network stack to support remote management. Those systems regularly fail our obligatory security scan, because vendors still haven't updated the embedded OpenSSL.
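
(The finding is usually trivial: the version string frozen into the vendor's library at build time is exactly what a banner scan sees years later. A minimal sketch - this just prints whatever version the build headers declare:)

    #include <stdio.h>
    #include <openssl/opensslv.h>   /* version macros only, no linking needed */

    int main(void)
    {
        /* On stale firmware this is something like "OpenSSL 0.9.8e ...",
           and that banner alone is enough to fail the scan. */
        puts(OPENSSL_VERSION_TEXT);
        return 0;
    }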

The real challenge is to provide a software stack that can be operated in the hostile environment of the Internet, maintaining full system integrity, for ten years or even longer, without any customer maintenance. Given the current state of software engineering, that will require support for an automated update process, but vendors must understand that their business model has to finance the resources that provide the updates.

Overall I'm optimistic; networked software is not the first technology used by mankind whose problems were only addressed later. Steam engines could cause boiler explosions, but the "engineers" were able to reduce this risk significantly over a few decades.

A new Mindcraft moment?

Posted Nov 7, 2015 10:29 UTC (Sat) by ms (subscriber, #41272) [Link]

I think this emphasises some of the points in Jon's original article, namely: what is money spent on, and why? Essentially, you're saying that the bottom line of your employer is driven mainly by the reliability of the system, and to a much lesser extent the security, particularly where security impacts reliability.

The following is all guesswork; I'd be keen to know if others have evidence one way or another on this: the people who learn how to hack into these systems through kernel vulnerabilities know that the skills they've learnt have a market. Thus they don't tend to hack in order to wreak havoc - indeed, on the whole, where data has been stolen in order to release and embarrass people, it _seems_ as though those hacks are through much simpler vectors. I.e. less skilled hackers find there is a whole load of low-hanging fruit which they can get at. They're not being paid ahead of time for the data, so they turn to extortion instead. They don't cover their tracks, and they can often be found and charged with criminal offences.

So if your security meets a certain basic level of proficiency and/or your company isn't doing anything that puts it near the top of "companies we'd like to embarrass" (I suspect the latter is much more effective at keeping systems "safe" than the former), then the hackers that get into your system are likely to be skilled, paid, and probably not going to do much damage - they're stealing data for a competitor / state. So that doesn't bother your bottom line - at least not in a way which your shareholders will be aware of. So why fund security?

A new Mindcraft moment?

Posted Nov 7, 2015 17:02 UTC (Sat) by citypw (guest, #82661) [Link] (1 responses)

Many of my customers have had "breach" experiences, and they do agree that the kernel is still a big issue inside enterprise data centres. The most common attack path in the wild looks like: the attacker finds some vulns in a web system (a router's management UI or a regular website) --> exploits them to get a web shell --> escalates privilege through the Linux kernel --> root (pwned). In the real world, more than 60% of attacks come from inside. A hardened kernel may be your last line of defence. So the kernel (and other core infrastructure, including the compiler (DDC issues) and firmware) is still something you should be concerned about.

On the other hand, some effective mitigations at the kernel level would be very helpful for crushing the attempts of cybercriminals and skiddies. Suppose one of your customers runs a futures trading platform that exposes an open API to its clients, and the server has memory corruption bugs that can be exploited remotely. Then you know there are known attack methods (such as offset2lib) that make building a weaponized exploit so much easier for the attacker. Will you explain the failosophy "a bug is a bug" to your customer and tell them it'd be OK? Btw, offset2lib is useless against PaX/Grsecurity's ASLR implementation.
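
To see the idea behind offset2lib, a toy program (mine, purely illustrative - build it as a PIE and run it a few times):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* one address inside the (PIE) executable, one inside libc */
        uintptr_t exe  = (uintptr_t)&main;
        uintptr_t libc = (uintptr_t)&printf;

        printf("main:   %#lx\n", (unsigned long)exe);
        printf("printf: %#lx\n", (unsigned long)libc);
        printf("delta:  %#lx\n", (unsigned long)(libc - exe));

        /* If the delta is the same on every run, one leaked code
           pointer de-randomizes everything else - the offset2lib
           observation. Independently randomizing the regions (as
           PaX ASLR does) makes the delta change per run. */
        return 0;
    }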

For most commercial uses, more security mitigation within the software won't cost you more budget: you'll still have to do the regression testing for each upgrade anyway.

A new Mindcraft moment?

Posted Nov 12, 2015 16:14 UTC (Thu) by andreashappe (subscriber, #4810) [Link]

I do pen-tests, and while it is nice if I get root-level access, data theft from databases is more "costly" for the companies. Getting access with web-server or database credentials opens up lots of interesting attack vectors (be it client-side attacks or data extraction) -- and the Linux kernel is not involved in any of them.

Keep in mind that I specialize in external web-based penetration-tests and that in-house tests (local LAN) will likely yield different results.

A new Mindcraft moment?

Posted Nov 7, 2015 20:33 UTC (Sat) by mattdm (subscriber, #18) [Link] (2 responses)

I keep reading this headline as "a new Minecraft moment", and thinking that maybe they've decided to follow up the .NET thing by open-sourcing Minecraft.

Oh well. I mean, security is good too, I guess.

A new Mindcraft moment?

Posted Nov 7, 2015 22:24 UTC (Sat) by ssmith32 (subscriber, #72404) [Link]

Lol. Part of me thought it might be a typo too... but since I know of no "Minecraft moment", I figured it was likely my own ignorance, and I was correct.

A new Mindcraft moment?

Posted Nov 12, 2015 17:29 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link]

I knew I was not alone in this.

A new Mindcraft moment?

Posted Nov 8, 2015 10:34 UTC (Sun) by jcm (subscriber, #18262) [Link] (2 responses)

I actually quite enjoyed reading the article. I thought it was reasonably technically informed for a mainstream press piece. Sure, it mischaracterized a number of unrelated topics (Ashley Madison wasn't the only example of such) as being related to Linux's perceived poor security, but they can be forgiven for looking for the dramatic angle for their readership. All of that aside, a number of reasonable points were made. It's too easy to respond to such a piece with an attack on the Washington Post, or with views that disparage other operating systems, or with any number of counter-points that ignore the Linux-specific angle.

A new Mindcraft moment?

Posted Nov 9, 2015 7:15 UTC (Mon) by jospoortvliet (guest, #33164) [Link] (1 responses)

Same here. Yes, some things might have been dramatized to the point of being incorrect, but as Jon pointed out, the gist of it wasn't wrong. And it was a good read :-)

A new Mindcraft moment?

Posted Nov 9, 2015 15:53 UTC (Mon) by nelljerram (subscriber, #12005) [Link]

I agree. I don't know what a reader unfamiliar with Linux would have thought; but for me (being generally on Linus's side of the argument) it clearly portrayed the upside that, by not tying itself in knots of paranoid security thinking, Linux now powers - well - pretty much everything. Imagine how much good in the world that has enabled.

(Oh, and I was also still wondering how Minecraft had taught us about Linux performance - so thanks to the other comment thread that pointed out the 'd', not 'e'.)

A new Mindcraft moment?

Posted Nov 9, 2015 11:31 UTC (Mon) by ortalo (guest, #4654) [Link] (5 responses)

Note: I don't want to enter the (always heated) debate in the previous comment thread with PaXTeam & co.

I'd just like to add that, in my opinion, there is a general problem with the economics of computer security, which is especially visible currently. Maybe even two problems.

First, the money spent on computer security is often diverted towards the so-called security "circus": fast, easy solutions selected primarily just to "do something" and get better press. It took me a long time - maybe decades - to arrive at the claim that no security mechanism at all is better than a bad mechanism. But now I firmly believe in this attitude, and would rather take the risk knowingly (provided that I can save the money/resources for myself) than take a bad approach to solving it (and have no money/resources left when I realize I should have done something else). And I find there are many bad or incomplete approaches currently available in the computer security field.
Those spilling our scarce money/resources on ready-made useless tools should get the bad press they deserve. And we certainly need to enlighten the press on that, because it is not so easy to appreciate the efficiency of protection mechanisms (which, by definition, should prevent things from happening).

Second - and this may be more recent and more worrying - the flow of money/resources is oriented much more towards attack tools and vulnerability discovery than towards new protection mechanisms.
This is especially worrying as cyber "defense" initiatives look more and more like the usual industrial projects aimed at producing weapons or intelligence systems. Furthermore, they are bad, useless weapons, because they only work against our very vulnerable current systems; and bad intelligence systems, since even basic school-level encryption renders them useless.
However, all the resources go to these adult teenagers playing white-hat hacker with not-so-difficult programming tricks, network monitoring, or WWI-level cryptanalysis. And now also to the cyberwarriors and cyberspies who have yet to prove their usefulness entirely (especially for peace protection...).

Personally, I'd happily leave them all the hype; but I'll forcefully claim that they have no right whatsoever to any of the budget-allocation decisions. Only those working on protection should. And yep, it means we should decide where to put those resources. We have to claim the exclusive lock for ourselves this time. (And I guess the PaX Team could be among the first to benefit from such a change.)

While thinking about it, I would not even leave the white-hats or cyber-guys any hype in the end. That's more publicity than they deserve.
I crave the day I will read in the newspaper: "Another of these ill-advised debutant programmer hooligans who pretend to be cyber-pirates/warriors modified some well-known virus code, exploiting a programmer's mistake, and managed to bring one of those unfinished, bad-quality programs we are all obliged to use, X, to its knees, annoying millions of regular users with his unfortunate cyber-vandalism. All the protection experts unanimously recommend that, once again, the budget of the cyber-command be retargeted, or at least leveled off, in order to create more security-engineer positions in academia and civilian industry. And that X's producer, XY Inc., be held liable for the potential losses if proved to have been unprofessional in this affair."

Hmmm - cyber-hooligans - I like the label. Though it does not apply well to the battlefield-oriented variant.

A new Mindcraft moment?

Posted Nov 9, 2015 14:28 UTC (Mon) by drag (guest, #31333) [Link] (1 responses)

> First, the money spent on computer security is often diverted towards the so-called security "circus":

The state of the 'software security industry' is a f-ng disaster. Failure of the highest order. There are massive amounts of money going into 'cyber security', but it's usually spent on government compliance and audit efforts. This means that instead of actually putting effort into correcting issues and mitigating future problems, the majority of the effort goes into taking existing applications and making them conform to committee-driven guidelines with the minimal amount of effort and changes.

Some level of regulation and standardization is absolutely needed, but lay people are clueless and completely unable to discern the difference between somebody with valuable experience and some company that has spent millions on slick marketing and 'native advertising' on large websites and in computer magazines. The people with the money unfortunately have only their own judgment to rely on when buying into 'cyber security'.

> Those spilling our rare money/resources on ready-made useless tools should get the bad press they deserve.

There is no such thing as 'our rare money/resources'. You have your money, I have mine. Money being spent by some corporation like Redhat is their money. Money being spent by governments is the government's money. (You, literally, have far more control over how Walmart spends its money than over what your government does with theirs.)

> This is especially worrying as cyber "defense" initiatives look more and more like the usual idustrial projects aimed at producing weapons or intelligence systems. Furthermore, bad useless weapons, because they are only working against our very vulnerable current systems; and bad intelligence systems as even basic school-level encryption scares them down to useless.

Having secure software with strong encryption mechanisms in the hands of the public runs counter to the interests of most major governments. Governments, like any other for-profit organization, are primarily interested in self-preservation. Money spent on drone initiatives or banking auditing/oversight regulation compliance is FAR more valuable to them than trying to help the public have a secure mechanism for making phone calls. Especially when those secure mechanisms interfere with data-collection efforts.

Unfortunately you/I/us cannot depend on some magical benefactor with deep pockets to sweep in and make Linux better. It's just not going to happen.

Corporations like Redhat have been massively beneficial in spending resources to make the Linux kernel more capable... however, they are driven by the need to turn a profit, which means they need to cater directly to the sort of requirements established by their customer base. Customers for EL tend to be much more focused on reducing costs associated with administration and software development than on security at the low-level OS.

Enterprise Linux customers tend to rely on physical, human-policy, and network security to protect their 'soft' interiors from being exposed to external threats... assuming (rightly) that there is very little they can do to actually harden their systems. In fact, when the choice comes down to security versus convenience, I am sure that most customers will happily defeat or strip out any security mechanisms introduced into Linux.

On top of that, most enterprise software is extremely bad. So much so that, for most businesses, 10 hours spent improving a web front-end will yield more real-world security benefit than 1000 hours spent on Linux kernel bugs.

Even for 'normal' Linux users, a security bug in their Firefox NPAPI Flash plugin is far more devastating, and poses a massively higher risk, than an obscure Linux kernel buffer overflow. Attackers don't really need 'root' to get at the important information... generally all of it is contained in a single user account.

Ultimately it's up to individuals like you and myself to put the effort and money into improving Linux security. For both ourselves and other people.

A new Mindcraft moment?

Posted Nov 10, 2015 11:05 UTC (Tue) by ortalo (guest, #4654) [Link]

I am not at all waiting for a "magical benefactor", though I readily admit that it may look like that.
My key point is actually to show that money/resources are currently being wasted in this field, due to the lack of maturity or - much more worryingly - the bad faith of some of the actors [1]. Especially among some of the big organizations that, logically, have the most resources to waste.

Waste has always existed, but now, to me, in computer security most of the money seems wasted in bad faith. And this is mostly your money or mine: either tax-funded governmental resources, or corporate costs that are passed directly into the prices of the goods/software we are told we are *obliged* to buy. (Look at the marketing discourse around corporate firewalls, home alarms, or antivirus software.)

I think it is time to point out that there are several "malicious malefactors" around, and that there is a real need to identify and sanction them and confiscate the resources they have somehow managed to monopolize. And I do *not* think Linus is among such culprits, by the way. But I think he may be among the ones hiding their heads in the sand about the aforementioned evil actors, while he probably has more leverage than many of us to counteract them or oblige them to reveal themselves.
I find that to be of brown-paper-bag level (though head-in-the-sand is somehow a new interpretation).

In the end, I think you are right to say that currently it's only up to us individuals to try honestly to do something to improve Linux or computer security. But I still think that I am right to say that this is not normal; especially while some very serious people get very serious salaries to distribute, more or less randomly, some difficult-to-evaluate budgets.

[1] A paradoxical situation when you think about it: in a domain where you are first and foremost preoccupied with malicious individuals, everyone should have factual, transparent, and honest behavior as their first priority.

A new Mindcraft moment?

Posted Nov 9, 2015 15:47 UTC (Mon) by MarcB (subscriber, #101804) [Link] (2 responses)

One of my all-time favourite readings sums the situation up very well and raises all of the issues you do (and some more): http://www.ranum.com/security/computer_security/editorial...

It even has a nice seven-line BASIC pseudo-code snippet that describes the current situation and clearly shows that we are caught in an endless loop. It does not answer the big question, though: how to write better software.

The sad thing is that this is from 2005, and all the things that were obviously stupid ideas 10 years ago have proliferated even more.

A new Mindcraft moment?

Posted Nov 10, 2015 11:20 UTC (Tue) by ortalo (guest, #4654) [Link]

Thanks for the link! Very nice page indeed and I did not know it.

Note: IMHO, we should investigate further why these dumb things proliferate and get so much support.
If it's only human psychology, well, let's fight it: e.g. Mozilla has shown us that wonderful things can be done given the right message.
If we are facing active people exploiting public credulity: let's identify and fight them.

But, more importantly, let's capitalize on this knowledge and secure *our* systems, to show off at a minimum (and more later on of course).

The conclusion of your reference is especially nice to me - "challenge [...] the conventional wisdom and the status quo": that's a job I would happily accept.

A new Mindcraft moment?

Posted Nov 30, 2015 9:39 UTC (Mon) by paulj (subscriber, #341) [Link]

I gave up reading that. If "default permit" and "enumerating badness" are the top things on that list, it's not a very good list. Their converses are no better: "default deny" is a disease of the nuttier "security at all costs (usefulness? what's that?)" types, and "enumerating goodness" is as unscalable and unrealistic as "enumerating badness" - and again on the nutty "security over usefulness" side of security.

That rant is itself a bunch of "empty calories". The converses of the items it rants about - which it is suggesting at some level - would be as bad or worse, and indicative of the worst kind of security thinking, the kind that has put a lot of people off. Alternatively, it is just a rant that offers little of value.

Personally, I think there's no magic bullet. Security is, and always has been in human history, an arms race between defenders and attackers, and one that is inherently a trade-off between usability, risks, and costs. If there are mistakes being made, it is that we should probably spend more resources on defences that could block entire classes of attacks. E.g., why is the GRSec kernel-hardening stuff so hard to apply to regular distros (there's no reliable source of a GRSec kernel for Fedora or RHEL, is there)? Why does the entire Linux kernel run in one security context? Why are we still writing lots of software in C/C++, often without any basic security-checking abstractions (basic bounds-checking layers between I/O and parsing layers, say - see the sketch below)? Can hardware do more to provide security with speed?
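
For instance, a minimal sketch of such a layer (names mine, not from any particular library):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* The parser may only consume input through a cursor that checks
       the remaining length, so an over-read fails loudly instead of
       walking off the end of the buffer. */
    struct cursor {
        const uint8_t *p;
        size_t remaining;
    };

    static bool cur_read(struct cursor *c, void *out, size_t n)
    {
        if (n > c->remaining)
            return false;            /* refuse, rather than over-read */
        memcpy(out, c->p, n);
        c->p += n;
        c->remaining -= n;
        return true;
    }

    /* a length-prefixed field: one length byte, then that many bytes */
    static bool parse_field(struct cursor *c, uint8_t *buf, size_t bufsz)
    {
        uint8_t len;
        if (!cur_read(c, &len, 1) || len > bufsz)
            return false;
        return cur_read(c, buf, len);
    }

    int main(void)
    {
        /* claims 200 payload bytes but only 3 follow: must fail cleanly */
        const uint8_t wire[] = { 200, 'a', 'b', 'c' };
        struct cursor c = { wire, sizeof(wire) };
        uint8_t field[255];

        printf("parse ok: %s\n", parse_field(&c, field, sizeof(field)) ? "yes" : "no");
        return 0;
    }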

No doubt there are plenty of people working on "block classes of attacks" stuff, the question is, why aren't there more resources directed there?

A new Mindcraft moment?

Posted Nov 10, 2015 2:06 UTC (Tue) by timrichardson (subscriber, #72836) [Link] (3 responses)

For as long as I can remember, a core commercial appeal of Linux has been stability and security. Apparently this statement is true:

>There are a lot of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.

This seems like a reason which is really worth exploring. Why is it so?

I think it is not obvious why this doesn't get more attention. Is it possible that the people with the money are right not to prioritise this more highly? After all, what interest do they have in an insecure, exploitable kernel? Where there is common cause, Linux development gets resourced. It's been this way for many years. If filesystems qualify as a common interest, surely security does. So there doesn't seem to be any obvious reason why this issue does not get more mainstream attention, except that it actually already gets enough. You may say that disaster has not struck yet, that the iceberg has not been hit. But it seems to me that the Linux development process is not overly reactive elsewhere.

A new Mindcraft moment?

Posted Nov 10, 2015 15:53 UTC (Tue) by raven667 (subscriber, #5198) [Link] (1 responses)

> Is it possible that the people with the money are right not to more highly prioritise this?

That is an interesting question; certainly that is what they actually believe, regardless of what they publicly say about their commitment to security technologies. What is the actual, demonstrated downside for kernel developers and the organizations that pay them? As far as I can tell, there is not sufficient consequence for the lack of security to drive more funding, so we are left begging and cajoling unconvincingly.

A new Mindcraft moment?

Posted Nov 12, 2015 14:37 UTC (Thu) by ortalo (guest, #4654) [Link]

Unfortunately, relying only on the evaluation of potential consequences is inadequate for strategic management in the security field.

The key issue with this domain is that it relates to malicious faults. So, when consequences manifest themselves, it is too late to act. And if the current commitment to a lack of deliberate strategy persists, we are going to oscillate between phases of relaxed complacency and anxious paranoia.

Admittedly, kernel developers seem pretty resistant to paranoia. That is a good thing. But I am waiting for the day when armed land-drones patrol US streets in the vicinity of their children's schools for them to discover the feeling. The days when innocent lives will unknowingly rely on the security of (Linux-based) computer systems are not so distant; underwater, that's already the case if I remember my last dive correctly, as well as in several recent cars, according to some reports.

A new Mindcraft moment?

Posted Nov 12, 2015 14:32 UTC (Thu) by MarcB (subscriber, #101804) [Link]

My guess is that there is a growing disconnect between actual users of and contributors to the kernel.

Classic hosting companies that use Linux as an exposed front-end system are retreating from development, while HPC, mobile, and "generic enterprise", i.e. RHEL/SLES, are pushing the kernel in their directions.

This is really not that surprising: For hosting needs the kernel has been "finished" for quite some time now. Besides support for current hardware there is not much use for newer kernels. Linux 3.2, or even older, works just fine.

Hosting does not need scalability to hundreds or thousands of CPU cores (one uses commodity hardware), complex instrumentation like perf or tracing (systems are locked down as much as possible) or advanced power-management (if the system does not have constant high load, it is not making enough money). So why should hosting companies still make strong investments in kernel development? Even if they had something to contribute, the hurdles for contribution have become higher and higher.

For their security needs, hosting companies already use Grsecurity. I have no numbers, but some experience suggests that Grsecurity is basically a fixed requirement for shared hosting.

On the other hand, kernel security is almost irrelevant on nodes of a super computer or on a system running large business databases that are wrapped in layers of middle-ware. And mobile vendors simply do not care.

A new Mindcraft moment?

Posted Nov 10, 2015 4:18 UTC (Tue) by bronson (subscriber, #4806) [Link] (1 responses)

Was it intentional not to link to the Washington Post article in question?

Linking

Posted Nov 10, 2015 13:15 UTC (Tue) by corbet (editor, #1) [Link]

I took the link-to-the-link approach in the first paragraph to pull in the conversation that had happened previously.

How about the long overdue autopsy on the August 2011 kernel.org compromise?

Posted Nov 11, 2015 22:38 UTC (Wed) by rickmoen (subscriber, #6943) [Link] (6 responses)

All of this reminds me of something tangential but, I think, very relevant indeed.

The assembled doubtless recall that in August 2011, kernel.org was root compromised. I'm sure the system's hard drives were sent off for forensic examination, and we've all been waiting patiently for the answer to the most important question: What was the compromise vector?

From shortly after the compromise was discovered on August 28, 2011, right through April 1st, 2013, kernel.org included this note at the top of the Site News: 'Thanks to all for your patience and understanding during our outage and please bear with us as we bring up the different kernel.org systems over the next few weeks. We will be writing up a report on the incident in the future.' (Emphasis added.) That comment was removed (along with the rest of the Site News) during a May 2013 edit, and there hasn't been -- to my knowledge -- a peep about any report on the incident since then.

This has been disappointing. When the Debian Project discovered sudden compromise of several of its servers in 2007, Wichert Akkerman wrote and posted an excellent public report on exactly what happened. Likewise, the Apache Foundation did the right thing with good public autopsies of the 2010 Web site breaches.

Ars Technica's Dan Goodin was still trying to follow up on the lack of an autopsy on the kernel.org meltdown -- in 2013. Two years ago. He wrote:

Linux developer and maintainer Greg Kroah-Hartman told Ars that the investigation has yet to be completed and gave no timetable for when a report might be released. [...] Kroah-Hartman also told Ars kernel.org systems were rebuilt from scratch following the attack. Officials have developed new tools and procedures since then, but he declined to say what they are. "There will be a report later this year about site [sic] has been engineered, but don't quote me on when it will be released as I am not responsible for it," he wrote.

Who's responsible, then? Is anyone? Anyone? Bueller? Or is it a state secret, or what? Two years since Greg K-H said there would be a report 'later this year', and four years since the meltdown, nothing yet. How about some information?

Rick Moen
rick@linuxmafia.com

How about the long overdue autopsy on the August 2011 kernel.org compromise?

Posted Nov 12, 2015 14:19 UTC (Thu) by ortalo (guest, #4654) [Link]

Thank you for the reminder. Unfortunately, I am only a potential reader of the document, but obviously I second your request. I'd even volunteer; maybe they need some help with the post-mortem analysis.

Less seriously, note that if even the Linux mafia does not know, it must be the Venusians; they are notoriously stealthy in their invasions.

How about the long overdue autopsy on the August 2011 kernel.org compromise?

Posted Nov 14, 2015 12:46 UTC (Sat) by error27 (subscriber, #8346) [Link] (4 responses)

The compromise vector was made public at the time. http://www.theregister.co.uk/2011/08/31/linux_kernel_secu... Hackers stole an admin's ssh key.

I know the kernel.org admins have given talks about some of the new protections that have been put into place. There are no more shell logins; instead everything uses gitolite. The different services are on different hosts. There are more kernel.org staff now. People are using two-factor authentication. Some other stuff. Do a search for Konstantin Ryabitsev.

How about the long overdue autopsy on the August 2011 kernel.org compromise?

Posted Nov 14, 2015 15:58 UTC (Sat) by rickmoen (subscriber, #6943) [Link] (3 responses)

error27 wrote:
The compromise vector was made public at the time. http://www.theregister.co.uk/2011/08/31/linux_kernel_security_breach/ Hackers stole an admin's ssh key.

I beg your pardon if I was somehow unclear: that was said to have been the path of entry to the machine (and I can readily believe that, as it was also the exact path of entry into shells.sourceforge.net, many years prior, around 2002, and into many other shared Internet hosts for many years). But that is not what is of primary interest, and it is not what the long-promised forensic study would primarily concern: how did intruders escalate to root? To quote the kernel.org administrator in the August 2011 Dan Goodin article you cited: 'How they managed to exploit that to root access is currently unknown and is being investigated'.

OK, folks, you've now had four years of investigation. What was the path of escalation to root? (Also, other details that would logically be covered by a forensic study, such as: whose key was stolen? Who stole the key?) This is the sort of autopsy that was promised prominently on the front page of kernel.org, to reporters, and elsewhere for a long time (and then summarily removed as a promise from the front page of kernel.org, without comment, along with the rest of the Site News section, and apparently dropped). It still would be appropriate to know and share that knowledge. Especially the datum of whether the path to root privilege was or was not a kernel bug (and, if not, what it was).

Rick Moen
rick@linuxmafia.com

How about the long overdue autopsy on the August 2011 kernel.org compromise?

Posted Nov 22, 2015 12:42 UTC (Sun) by rickmoen (subscriber, #6943) [Link] (2 responses)

I've done a closer review of revelations that came out soon after the break-in, and think I've found the answer, via a leaked copy of kernel.org chief sysadmin John H. 'Warthog9' Hawley's Aug. 29, 2011 e-mail to shell users (two days before the public was informed), plus Aug. 31st comments to The Register's Dan Goodin by 'two security researchers who were briefed on the breach':

Root escalation was via exploit of a Linux kernel security hole: Per the two security researchers, it was one both extremely embarrassing (wide-open access to /dev/mem contents including the running kernel's image in RAM, in 2.6 kernels of that day) and known-exploitable for the prior six years by canned 'sploits, one of which (Phalanx) was run by some script kiddie after entry using stolen dev credentials. Other tidbits:

  • Site admins left the root-compromised Internet servers running with all services still lit up, for multiple days.
  • Site admins and Linux Foundation sat on the knowledge and failed to inform the public for those same multiple days.
  • Site admins and Linux Foundation have never revealed whether trojaned Linux source tarballs were posted in the http/ftp tree for the 19+ days before they took the site down. (Yes, git checkout was fine, but what about the thousands of tarball downloads?)
  • After promising a report for several years and then quietly removing that promise from the front page of kernel.org, Linux Foundation now stonewalls press queries.

I posted my best attempt at reconstructing the story, absent a real report from insiders, to SVLUG's main mailing list yesterday. (Necessarily, there are surmises. If the people with the facts were more forthcoming, we'd know what happened for certain.)

I do have to wonder: If there's another embarrassing screwup, will we even be told about it at all?

Rick Moen
rick@linuxmafia.com

How about the long overdue autopsy on the August 2011 kernel.org compromise?

Posted Nov 22, 2015 14:25 UTC (Sun) by spender (guest, #23067) [Link] (1 responses)

Phalanx is a rootkit, not an exploit. It uses (or could use) /dev/mem to load itself into the kernel. I think you may have misinterpreted the readme -- it's not saying that /dev/mem was readable/writable by everyone; it's saying that users with proper permission to access it can read/modify any physical memory. Phalanx itself therefore wouldn't have been the privilege-escalation vector. Also, it's unclear from the email you posted, but the "/dev/mem error message" they're talking about was probably the logging from STRICT_DEVMEM. If that's the case, then the rootkit wasn't successful (the Phalanx readme might be referring to older RH/Fedora restrictions on /dev/kmem, not the "newer" STRICT_DEVMEM).
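
The distinction is easy to demonstrate (a minimal sketch; assumes x86, a kernel built with CONFIG_STRICT_DEVMEM, and that the chosen offset lands in RAM):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char byte;
        int fd = open("/dev/mem", O_RDONLY);

        if (fd < 0) {
            perror("open /dev/mem");      /* needs root to begin with */
            return 1;
        }
        /* try to read one byte of RAM well above the low-1MiB window */
        if (pread(fd, &byte, 1, 0x10000000UL) == 1)
            printf("read ok: kernel memory is wide open\n");
        else
            perror("pread");              /* EPERM under STRICT_DEVMEM */
        close(fd);
        return 0;
    }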

Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.

-Brad

How about the long overdue autopsy on the August 2011 kernel.org compromise?

Posted Nov 22, 2015 16:28 UTC (Sun) by rickmoen (subscriber, #6943) [Link]

Thanks for your comments, Brad.

I'd been relying on Dan Goodin's claim of Phalanx being what was used to gain root, in the bit where he cited 'two security researchers who were briefed on the breach' to that effect. Goodin also elaborated: 'Fellow security researcher Dan Rosenberg said he was also briefed that the attackers used Phalanx to compromise the kernel.org machines.' This was the first time I've heard of a rootkit being claimed to be bundled with an attack tool, and I noted that oddity in my posting to SVLUG.

That having been said, yeah, the Phalanx README doesn't specifically claim this, so maybe Goodin and his several 'security researcher' sources blew that detail, and nobody but kernel.org insiders yet knows the escalation path used to gain root.

> Also, it's preferable to use live memory acquisition prior to powering off the system, otherwise you lose out on memory-resident artifacts that you can perform forensics on.

Arguable, but a tradeoff; you can poke the compromised live system for state data, but with the drawback of leaving your system running under hostile control. I was always taught that, on balance, it's better to pull power to end the intrusion.

Rick Moen
rick@linuxmafia.com

A new Mindcraft moment?

Posted Nov 20, 2015 8:23 UTC (Fri) by toyotabedzrock (guest, #88005) [Link] (2 responses)

The other problem is how the kernel makes it hard for drivers that are not open source to continue working. Something is making what should be simple updates prohibitively expensive for consumer product companies.

A new Mindcraft moment?

Posted Nov 20, 2015 9:31 UTC (Fri) by gioele (subscriber, #61675) [Link] (1 responses)

> The other problem is how the kernel makes it hard for drivers that are not open source to continue working.
> Something is making what should be simple updates prohibitively expensive for consumer product companies.

With "something" you mean those who produce those closed source drivers, right?

If the "consumer product companies" just stuck to using parts with mainlined open source drivers, then updating their products would be much easier.

A new Mindcraft moment?

Posted Nov 20, 2015 11:29 UTC (Fri) by Wol (subscriber, #4433) [Link]

Bear in mind that closed source drivers are probably the BIGGEST single security hole in the kernel, anyway!

They have ring 0 privilege, can access protected memory directly, and cannot be audited. Trick a kernel into running a compromised module and it's game over.

Even tickle a bug in a "good" module, and it's probably game over - in this case quite literally as such modules tend to be video drivers optimised for games ... :-)

Cheers,
Wol


Copyright © 2015, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds