Don't let the bastards wear you down. The LWN kernel coverage benefits your subscribers and the larger community of readers. Publication of incessant sniping by PaXTeam does not. The better choice is to continue the former and block the latter. There is no shame in choosing to killfile sources of predictable aggravation. I know there is always a temptation to peek at messages in the spam bin, but resist it.
That'd be sad - at least I appreciate them. And from my POV it seems that playing down mainline security work is pretty much PaXTeam's goal. Reducing positive-ish coverage of such efforts seems like part of that.
If necessary I'd rather see you disable comments on these articles, or just you putting PaXTeam on ignore.
So, PaX...is $OTHER_PUBLICATION paying you nicely for repeatedly trashing our writing? :)
The patch set is described as coming from Kees Cook, based on work in PaX/grsecurity. That is objectively true. More to the point, that patch set contains a great deal of work to separate out the changes, make them acceptable to mainline, document them, run benchmarks, etc. All part of the normal work required to get a change upstream. You have, as is your right, declined to do that work. But when somebody else does it for you, the result is their work as much as yours.
I write pretty routinely about patch sets here. They often contain work from multiple people, often in ways that are hard to disentangle. If I were to credit everybody who somehow influenced a patch set, the articles would be long indeed. The current patch set originated in your work — which was noted in the article — but also contains contributions from Kees, Josh Poimboeuf, Ingo Molnar, Li Kun, and probably others, most of whom were not credited in the article. But only you, who were actually credited, chose to publicly question my integrity (again).
The point is coming where I'm just not going to write about this work anymore; it's simply not worth the shitstorm that results every time. There is a lot of other kernel work going on — work that your patch sets depend on — that I can write about without wishing I'd chosen a different line of work.
And this is not the first time that I've been disturbed by this kind of imprecision in LWN...
So does zombocom, and so do several other interesting historical Flash animations.
Not to forget that JS can create canvas elements at will, so you really need to intercept this at some level other than the DOM, unless you want to slow down every page that ever touches the DOM in any way.
QWOP! Which is probably the most significant cultural artifact made this century!
And let's not forget WebUSB, which for some reason is a thing.
And while we're at it, WebRTC leaking IPs from behind a VPN isn't nice either.
You can statically lay out memory, so that for each supported framebuffer size you have a fixed memory layout, including a fixed-size pool of buffers for networking. You can do double and triple buffering without an OS backing you (at heart it's just buffer swapping in an interrupt handler).
I've worked on MPEG-2 decoders that did exactly that: triple-buffered display at any of the standard resolutions for HD-SDI, ASI reception with a buffer pool for inbound packets, and a hardware decoder that needed software to depacketise in the right format, all managed by a linker script in a single bare-metal application. It even supported picture-in-picture and both Teletext font-based and DVB bitmap subtitles.
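For what it's worth, the heart of it is small enough to sketch. Below is a hypothetical C skeleton of that interrupt-driven buffer flip, not the actual decoder code; display_set_base(), the section name, and the sizes all stand in for whatever the real hardware and linker script provide:

    #include <stdint.h>

    #define FB_WIDTH  1920
    #define FB_HEIGHT 1080
    #define NUM_BUFS  3                 /* triple buffering */

    /* Hypothetical hook for programming the scan-out hardware. */
    extern void display_set_base(const uint32_t *base);

    /* Placed at a fixed address by the linker script; no malloc anywhere. */
    static uint32_t framebufs[NUM_BUFS][FB_WIDTH * FB_HEIGHT]
            __attribute__((section(".framebufs")));

    static volatile int displayed = 0;  /* currently being scanned out */
    static volatile int pending = -1;   /* finished frame waiting to flip */
    static int drawing = 1;             /* the decoder writes into this one */

    /* Vertical-blank interrupt: flip to the pending buffer, if any. */
    void vblank_isr(void)
    {
        if (pending >= 0) {
            displayed = pending;
            pending = -1;
            display_set_base(framebufs[displayed]);
        }
    }

    /* Called (with the vblank interrupt masked) when a frame is complete:
     * hand the drawn buffer to the ISR and pick the one that is free. */
    void frame_done(void)
    {
        pending = drawing;
        for (int i = 0; i < NUM_BUFS; i++)
            if (i != displayed && i != pending)
                drawing = i;
    }

All of the memory is known at link time, so there is nothing to allocate and nothing to fragment.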
I think it's just been less of an issue on ARM because, until recently, 1) most ARM chips were single-core, or at least TrustZone code was often pinned to a single core, and 2) people weren't doing pre-emptive scheduling in the TrustZone. In that case it was easier to ensure that sensitive cryptographic operations consistently ran to completion. But that's rapidly changing, because reasons (performance, etc.). In x86 land I think we'll soon see the ability for user processes to reliably detect context switches. That way you can flush the cache, cryptographically sign some data, and flush again, then retry if a context switch occurred in between, so the run that counts happened without interruption. (I don't know if that's sufficient on its own, just that it should make the requisite measures more feasible to implement.)
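To illustrate, here is a rough C sketch of that flush/sign/flush pattern, retrying whenever a context switch is detected partway through. Every function named here is a hypothetical stand-in rather than an existing API, and as said, this alone may not be sufficient:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical stand-ins, not real APIs: a per-process context-switch
     * counter readable from userspace, a cache-flush primitive, and some
     * MAC implementation. */
    extern uint64_t read_ctxswitch_count(void);
    extern void flush_data_cache(void);
    extern void hmac_sign(const uint8_t *key, size_t keylen,
                          const uint8_t *msg, size_t msglen,
                          uint8_t mac[32]);

    void sign_uninterrupted(const uint8_t *key, size_t keylen,
                            const uint8_t *msg, size_t msglen,
                            uint8_t mac[32])
    {
        uint64_t before;

        do {
            before = read_ctxswitch_count();
            flush_data_cache();    /* start from a known cache state */
            hmac_sign(key, keylen, msg, msglen, mac);
            flush_data_cache();    /* leave no key-dependent lines behind */
        } while (read_ctxswitch_count() != before);  /* switched? redo it */
    }

Any run that makes it out of the loop completed without a context switch, so nothing else got to observe the cache state mid-signature.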
Why do I say this?
Because the appropriate solution isn't to simply disappear the proprietary Flash plugin, but rather to open it up and release it as free software.
I am not saying that to encourage its continued use; I am happy to see it die for new material. Rather, freeing Flash is necessary primarily to help the data archivists who, a decade or a century from now, will otherwise have no good way to understand the content being archived today. Whatever cultural or scientific studies may wish to take a look at our age, a significant part of its content, for better or worse, is in Flash and will not be migrated.
A generation of net users was caught in the "Flash trap", and it would be a pity to have a black hole in the archives of our culture because of one company's past predatory behavior and continued contempt.
I still remember this being one of the most noticeable issues when I first switched to Linux and stuck entirely with FOSS from Debian main.
That got *much* easier when scripts like youtube-dl (and its various predecessors) came around, and easier still when major sites started switching to HTML5.
These days, sites actually using Flash without any HTML5 alternative have become vanishingly rare, and simply choosing to ignore them and move on doesn't cause me any significant problems.
They're not required for an OS. Yes, using static memory means you have to think a lot harder about what you're doing, and you have to actively manage it yourself rather than letting the computer sort it out at run time, but I used to work with an OS written in FORTRAN ...
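To make "actively manage it" concrete, here is a minimal C sketch of one common approach: a fixed pool of equal-sized blocks whose count and size are decided at compile time (both numbers hypothetical). It assumes a single thread of control or masked interrupts:

    #include <stddef.h>
    #include <stdint.h>

    #define BLOCK_SIZE 256   /* hypothetical sizes */
    #define NUM_BLOCKS 64

    static uint8_t pool[NUM_BLOCKS][BLOCK_SIZE];  /* laid out at link time */
    static uint64_t free_map = ~0ULL;             /* one bit per block, 1 = free */

    void *pool_alloc(void)
    {
        for (int i = 0; i < NUM_BLOCKS; i++) {
            if (free_map & (1ULL << i)) {
                free_map &= ~(1ULL << i);
                return pool[i];
            }
        }
        return NULL;  /* pool exhausted: the caller must plan for this */
    }

    void pool_free(void *p)
    {
        size_t i = ((uint8_t *)p - &pool[0][0]) / BLOCK_SIZE;
        free_map |= 1ULL << i;
    }

The hard thinking happens up front, in choosing NUM_BLOCKS so the pool can never run dry, rather than at run time.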
Those who really need every cycle can then disable it, and sane distros can ship their kernels with it enabled, maybe even shipping two versions.