
Multics security, thirty years later

Worth a read: Paul Karger and Roger Schell have released a new paper (available in PDF format) entitled "Thirty Years Later: Lessons from the Multics Security Evaluation." It includes their 1974 analysis of the security of the Multics operating system, along with a new foreword describing how things have changed in the meantime. Their assessment of the current state of computer security is harsh:

The unpleasant conclusion is that although few, if any, fundamentally new vulnerabilities are evident today, today's products generally do not even include many of the Multics security techniques, let alone the enhancement identified as essential.

That essential enhancement is the creation of a verifiable "security kernel" around which the rest of the system can be built. In 2002, very few systems built around such kernels exist, and the authors are not very enthusiastic about those which do:

...the ring 0 supervisor of Multics of 1973 occupied about 628K bytes of executable code and read-only data. This was considered to be a very large system. By comparison, the size of the SELinux module with the example policy code and read-only data has been estimated to be 1767K bytes. This means that just the example security policy of SELinux is more than 2.5 times bigger than the entire 1973 Multics kernel and that doesn't count the size of the Linux kernel itself. Given that complexity is the biggest single enemy of security, this suggests that the complexity of SELinux needs to be seriously examined.

Or, to put things in more general terms:

Given the understanding of system vulnerabilities that existed nearly thirty years ago, today's "security enhanced" or "trusted" systems would not be considered suitable for processing even in the benign closed environment.

So how do we make things better? The paper does not provide many new suggestions. The authors do discuss the tools that are used; for example, Multics was mostly free of buffer overflow vulnerabilities thanks to the use of PL/I as its implementation language: PL/I requires an explicit declaration of the length of every string.

The net result is that a PL/I programmer would have to work very hard to program a buffer overflow error, while a C programmer has to work very hard to avoid programming a buffer overflow error.
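That contrast can be sketched in a few lines of C++ (hypothetical code, not from the paper; `bounded_copy` is an illustrative helper imitating PL/I's declared string lengths):

```cpp
#include <cstddef>
#include <cstring>
#include <string>

// C-style: the buffer's length lives only in the programmer's head.
// strcpy() will happily write past the end of an 8-byte array:
//
//     char buf[8];
//     strcpy(buf, "this string is far too long");   // overflow!
//
// (Left commented out, since it is undefined behavior.)

// PL/I-style: the maximum length travels with the string, so an
// oversized copy is truncated instead of smashing adjacent memory.
std::string bounded_copy(const std::string& src, std::size_t max_len) {
    return src.substr(0, max_len);  // never exceeds the declared length
}
```

The point is not the specific helper but where the length check lives: in PL/I the language enforces it on every string operation, while in C the programmer must remember it at every call site.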

Beyond that, one gets the sense that the authors feel they said what needed to be said thirty years ago, and they are still waiting for the message to get across. Their prediction:

It is unthinkable that another thirty years will go by without one of two occurrences: either there will be horrific cyber disasters that will deprive society of much of the value computers can provide, or the available technology will be delivered, and hopefully enhanced, in products that provide effective security.

The authors hope for the latter scenario; so do we.



16bit versus 32bit?

Posted Sep 12, 2002 3:34 UTC (Thu) by smoogen (subscriber, #97)

I was wondering about one line quoted above:

...the ring 0 supervisor of Multics of 1973 occupied about 628K bytes of executable code and read-only data. This was considered to be a very large system. By comparison, the size of the SELinux module with the example policy code and read-only data has been estimated to be 1767K bytes.

I am not sure if this is an apples-and-oranges comparison. First, the two architectures used different instruction widths. Second, it isn't clear that the numbers of instructions are comparable. If one could build the Multics code for an Intel processor and get the same size, then I think the comparison would be valid.

No, that's *36* bits young man.

Posted Sep 12, 2002 8:07 UTC (Thu) by oldtoss (guest, #3648)

Not that bitness should have a great deal of influence on code size, but FYI Multics used 36-bit hardware (4 9-bit bytes). It isn't clear whether the authors allowed for the difference in byte width, so let's call it 706.5K (628K x 9/8) to err on the safe side. Multics still comes out vastly less bloated. Remember also that the Multics hardcore needed much *more* memory management code (relative to i386) to work with a much more primitive MMU. So, now you all know the facts, please don't perpetuate the old FUD about bloat.
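The arithmetic behind that adjustment, as a quick sanity check (hypothetical helper name):

```cpp
// Rescale a size measured in 9-bit Multics bytes into the
// equivalent number of 8-bit bytes, so the two kernels can be
// compared in the same units.
double to_8bit_kbytes(double kbytes_9bit) {
    return kbytes_9bit * 9.0 / 8.0;
}

// 628K 9-bit bytes -> 706.5K 8-bit bytes, so SELinux's 1767K is
// still roughly 2.5 times the size of the whole 1973 Multics
// ring 0 supervisor.
```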

No, that's *36* bits young man.

Posted Sep 12, 2002 9:37 UTC (Thu) by beejaybee (guest, #1581)

Yeah, well, when I started programming seriously (about 1975) I was told in no uncertain terms that if it wouldn't run in 16K (24-bit words - NOT on multics h/w), it wasn't worth writing.

Things have moved on a bit since then, but "small is beautiful" is obviously still a worthwhile concept. Plain straightforward common sense suggests that the bigger a chunk of code is, the more likely it is to contain bugs - and security holes.

PL/I vs C++

Posted Sep 12, 2002 15:42 UTC (Thu) by Greyfox (guest, #1604)

While it's pretty easy to code a buffer overflow in C, it is much harder to do it in C++ as long as you don't treat it as just a better C. Use the C++ features now available and it is very easy to avoid the buffer overflow issue completely.

Of course, C++ programs can easily explode in size and template programming has some very esoteric aspects. I would like to see the usual set of servers coded in C++, though. I'd be far more inclined to trust them over the current batch of C servers.
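For illustration, a minimal sketch (hypothetical code, not from any existing server) of the standard-library features that sidestep fixed-buffer overflows:

```cpp
#include <string>
#include <vector>

// std::string grows as needed, so concatenation cannot overrun a
// fixed buffer the way sprintf() into a char array can.
std::string greet(const std::string& name) {
    std::string out = "Hello, ";
    out += name;          // storage is resized automatically
    return out;
}

// std::vector::at() throws std::out_of_range instead of silently
// reading or writing past the end of the array.
int checked_get(const std::vector<int>& v, std::size_t i) {
    return v.at(i);       // bounds-checked access
}
```

Neither facility costs anything to use correctly; the overflow-prone alternatives (`strcpy`, raw `operator[]` on a raw array) are simply opt-in rather than the default idiom.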

As far as "Small is Beautiful" goes, if my memory serves me correctly there is a discipline of Japanese painting in which the artist expresses the nature of the object in as few brush strokes as possible. Programming tends to be as artistic an exercise as it is a scientific one, and would seem to follow this form of art.

PL/I vs C++

Posted Sep 15, 2002 2:25 UTC (Sun) by Baylink (guest, #755)

And, of course, there are some things that are next to impossible to achieve easily or cleanly in a language where you cannot reasonably perform real-time memory allocation and string sizing...

Multics security, thirty years later

Posted Sep 13, 2002 10:37 UTC (Fri) by AnswerGuy (guest, #1256)

I think the real advances in OS security have been KeyKOS and EROS. KeyKOS is dead (as far as I know). However, EROS is under active development and more can be learned about it at: http://www.eros-os.org/

Multics security, thirty years later

Posted Sep 15, 2002 3:09 UTC (Sun) by Baylink (guest, #755)

Well, I will be dipped in shit.

The Thompson hack (placing a backdoor in the *compiler*) related in "Reflections on Trusting Trust" actually came directly from the Multics project.

Who knew?

Multics security, thirty years later

Posted Nov 12, 2002 20:07 UTC (Tue) by chorning (guest, #7686)

I'd like to say this entire article is FUD. Having worked on Multics in the early '80s at MIT, as a hardcore "hacker" (on the Multics core), I can say Multics is a _lot_ larger than a Linux kernel. The ring 0 controller is about 700k in memory, but the ring 1 and 2 controllers are huge. The Honeywell project and Multics were started to solve every security problem the engineers could think of. It ran slowly because all users were allotted processor time and memory whether they were logged in or not, since resource variations could otherwise be used as a covert channel.

Just to give you an idea of the size of Multics, in 1982 MIT spent over 7 million dollars on storage alone for the machine the famous Rochlis tapes were archived from (last known good copy of the Multics source) and it had a little over 4 gigabytes of storage.

3 things that Linux can learn from Multics:
1) Security costs performance
2) Security is inversely proportional to usability
3) Operating systems are not political statements; they are for running programs

-C. Hornig, hardcore hacker '79-84


Copyright © 2002, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds