This is partially true...
Posted Nov 18, 2011 21:17 UTC (Fri) by khim
In reply to: This is partially true...
Parent article: Interview with Andrew Tanenbaum (LinuxFr.org)
And speaking of military-grade security, LISP and Java are used by NASA on satellites.
On LISP and Java machines? Care to share a link? I have nothing against LISP, Haskell, Java, and other such languages. If they are used where it makes sense... why not? But this hysteria where LISP (long ago) and Java (in the last 10-15 years) were supposed to replace everything down to the OS and hardware - that is just crazy.
We've failed spectacularly at this regard, I don't think one can find half a year when Linux had not been vulnerable to local security exploits.
Sure. But the same can be said about the JVM and the CLR - and they don't even try to offer the full capabilities of a full-blown OS!
And no, memory protection has failed miserably as witnessed by the current torrent of vulnerabilities.
This, again, is a case where theory and practice are not the same. In theory we have these nice articles which claim that managed code should rule them all. And we have these scary intrusion reports. But we also have facts: the most common vehicle for intrusions is the JVM - and the CLR is only behind because Silverlight is less pervasive than the Java plugin.
As the saying goes: memory protection is the worst form of security... except all the others that have been tried.
Azul hardware is geared towards realtime (trading is fast!) and data-intensive tasks which traditional supercomputers suck at.
Sorry, but no cigar. Traditional supercomputers excel at these tasks. The only problem: these beasts were so expensive that most researchers dropped them and learned to use a much "worse" architecture to do real work. Thus the Top500 is now dominated by "commodity hardware with a small twist" systems, while traditional supercomputers are displayed in museums. Azul tried to compete with commodity hardware for a time, but... now it, too, offers its goodies in a Linux-on-x86 version. This is the beginning of the end for that saga, I believe.
So? Your argument boils down to "we've tried this 30 years ago and it didn't work then, so it will never work at all!"
Yes. This rule is not 100% bullet-proof, but it's a pretty good rule of thumb. One example where this rule led me astray: for that very reason I was convinced for a time that people would never manage to write a compiler which could eliminate all that bloat in the STL - but compiler developers managed to do it... well, almost. It took over 10 years, but they did it. This is a great achievement and a notable counter-example. Notable exactly because it's so extremely rare.
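To illustrate what I mean (a minimal sketch of my own, not anything from a real test suite): with a modern optimizing compiler - say, g++ -O2 - the STL version below typically compiles down to essentially the same machine code as the hand-written loop; all the iterator and template machinery evaporates.

    // Hypothetical example: two ways to sum an array of ints.
    #include <cstddef>
    #include <numeric>
    #include <vector>

    // STL version: iterators, templates, layers of abstraction...
    int sum_stl(const std::vector<int>& v) {
        return std::accumulate(v.begin(), v.end(), 0);
    }

    // Hand-written version: what we'd hope the compiler emits anyway.
    int sum_raw(const int* p, std::size_t n) {
        int total = 0;
        for (std::size_t i = 0; i < n; ++i)
            total += p[i];
        return total;
    }

Compare the generated assembly for the two on a recent compiler and it's essentially identical - something that was emphatically not true of the compilers of the mid-90s.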
There were graphics accelerators back then for performing specialized rendering (like sprite animation), and by the '90s they had all but disappeared from most hardware since software rendering became fast enough.
Sorry, but you, again, are rewriting history. The PC had no graphics acceleration initially. It got it in 1991 (with the S3 86C911) and has kept it ever since. Sure, today we don't have 2D acceleration in hardware - but that's because 3D units can emulate it quite efficiently.
LISP machines, on the other hand, disappeared completely. Azul understands this much better than you - that's why it's switching to x86, Linux, and commodity hardware. I'm pretty sure it'll sell its expensive hardware packages as long as people are willing to pay for them, but there is no doubt that eventually it'll be a pure software company.
Does it mean that we don't need GPUs since "they've been tried and failed"?
On the contrary: they were tried and survived when their host platforms (workstations and LISP machines) folded; they just migrated to the PC when the PC killed those hosts - which means they are here to stay.
So is the current generation of GCs - Azul has a _pauseless_ GC (and it really is pauseless), something which had never been achieved in practice before.
Sure, but what does this have to do with OS design? You don't need a GC at all, so the fact that you can create a cool GC does not really change anything. It's still a complicated piece of software which should not be in a (micro?)kernel if you want to create a secure OS.
Hello? Have you heard about SQL Slammer? Or maybe the original Morris worm? Or about those terrorists insisting on bringing down the Western world?
Yeah, I've seen and heard the tales. Sure, some hiccups happened, but somehow the only country where cyberwar managed to destroy significant value is Iran - which kinda makes it all less scary than you want to imply.
I believe that Linux right now has at least several undiscovered remotely-exploitable vulnerabilities. I don't doubt that they can be used to bring down the Internet and cause massive havoc.
Probably - but where is the profit in that? Why would anyone pay for such work? That's the question.
Actually, what you are preaching has already happened in a sense - and nobody noticed. The funny thing is that while, yes, Linux routers were infected (among other computers), the attackers had no need for kernel vulnerabilities. Not at all. They just used well-known usernames and passwords (admin/admin and the like) - and it was enough.
With this state of security on the Internet, talk about microkernels and Singularity is... well, premature - don't you think?