
Reinventing Plan 9... badly.

Posted Sep 26, 2014 8:03 UTC (Fri) by roskegg (subscriber, #105)
Parent article: Mahinovs: Distributing third party applications via Docker?

Those who don't learn from Plan 9 are doomed to perpetually reinvent Plan 9... badly.

http://www.kix.in/2008/06/19/an-alternative-to-shared-lib...
http://harmful.cat-v.org/software/dynamic-linking/

A lot of corrupt stuff in Unix is trying to recover from the decision to go with shared libraries instead of the 9P approach. If only the D-Bus guys had paid more attention to 9P. 9P was even ported to Linux, why reinvent it badly?
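For concreteness, the client side of that approach is just file I/O rather than linking. A minimal sketch (the /mnt/hash/sha1 path and the service behind it are made up for illustration):

    /* Sketch of the "library as a file service" idea from those links.
     * The path /mnt/hash/sha1 is hypothetical; a real 9P service would
     * export whatever namespace its author chose. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello, world";
        char digest[128];
        ssize_t n;

        /* Instead of linking a hashing library, open the file the
         * service exports over 9P... */
        int fd = open("/mnt/hash/sha1", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* ...write the input to it... */
        write(fd, msg, sizeof msg - 1);
        /* ...and read the result back.  The caller never links the
         * implementation, so it can be swapped by remounting. */
        n = read(fd, digest, sizeof digest - 1);
        if (n > 0) {
            digest[n] = '\0';
            printf("%s\n", digest);
        }
        close(fd);
        return 0;
    }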


Reinventing Plan 9... badly.

Posted Sep 26, 2014 8:46 UTC (Fri) by niner (subscriber, #26151) [Link] (6 responses)

While elaborate, this doesn't actually seem to solve the problem. Let's restate the problem with shared libraries: incompatible changes in newer versions.

Now if instead of symbols in a shared library we use files in a virtual file system, how exactly does this prevent incompatible changes? Instead of renamed symbols we'd have to deal with renamed files. The example provided in the article is an extremely simple one: one input stream, one output stream. That's rarely a problem in shared library code. Problems start when parameters get added or removed or when structs get changed. I really don't see how a file system can help here. You'd still have to deal with changes to data structures written to virtual files.

And indeed the article seems to acknowledge this by suggesting a 'version' file in the virtual file system. Same as people add version fields to structs or functions. So the providers of a library still have to be careful and provide mechanisms for dealing with incompatible changes. And that they just don't do this is the whole root of the problem. So nothing changed.
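To make that concrete, here is a small C sketch (all names hypothetical) of why the provider ends up branching on a version field either way, whether the record arrives as a function argument from a shared library or as bytes written to a virtual file:

    /* Illustration of the point above: a layout change is the same
     * compatibility break for a linked call and for a virtual file. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Version 1 of a request record. */
    struct req_v1 {
        uint32_t version;   /* callers set 1 */
        uint32_t flags;
    };

    /* Version 2 adds a field; old callers still send the v1 layout. */
    struct req_v2 {
        uint32_t version;   /* callers set 2 */
        uint32_t flags;
        uint64_t timeout_ms;
    };

    /* Whether this buffer came in as a function argument (shared
     * library) or as bytes written to a hypothetical /mnt/svc/ctl
     * (9P service), the provider still has to check the version
     * field to stay compatible. */
    static void handle(const void *buf, size_t len)
    {
        uint32_t version;
        memcpy(&version, buf, sizeof version);
        if (version == 1 && len >= sizeof(struct req_v1))
            puts("v1 request");
        else if (version == 2 && len >= sizeof(struct req_v2))
            puts("v2 request");
        else
            puts("unknown version");
    }

    int main(void)
    {
        struct req_v1 a = { 1, 0 };
        struct req_v2 b = { 2, 0, 5000 };
        handle(&a, sizeof a);
        handle(&b, sizeof b);
        return 0;
    }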

Reinventing Plan 9... badly.

Posted Sep 26, 2014 8:52 UTC (Fri) by roskegg (subscriber, #105) [Link] (4 responses)

I think I understand what you are saying. The BSDs are fairly good about bumping version numbers every time an API changes, so "people don't do that" isn't a good reason against it.

But the point those links were making is that you turn each library into a 9P service. The way Plan 9 works, it is easy to substitute one service for another; if the new one breaks, you can keep using the old one without relinking.
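For anyone who hasn't used Plan 9: the substitution happens at the namespace level. A minimal sketch, assuming a Plan 9 environment (the directory names are made up), of pointing a mount point back at an older server's tree:

    /* Plan 9 C sketch: swap the service behind /mnt/crypto for an
     * older, known-good one by rebinding the namespace.  Paths are
     * hypothetical; no client is recompiled or relinked. */
    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        /* Replace whatever currently serves /mnt/crypto with the
         * tree exported by the old implementation. */
        if(bind("/n/oldcrypto", "/mnt/crypto", MREPL) < 0)
            sysfatal("bind: %r");
        /* Every program that opens files under /mnt/crypto now
         * talks to the old service. */
        exits(nil);
    }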

Reinventing Plan 9... badly.

Posted Sep 26, 2014 13:31 UTC (Fri) by wahern (subscriber, #37304) [Link] (2 responses)

One of those links uses a crypto library as an example. So on a hunch I decided to figure out how Plan 9 implements SSL.

As expected, there's an SSL device which you access with 9P. But how does this device handle encryption? Does it also use 9P to access a crypto device? That would be ungodly slow.

And it turns out that, no, it doesn't. It just links in libsec, which is their equivalent of OpenSSL.

I don't think the argument is that libraries suck compared to IPC. My educated guess is that their argument is that stable interfaces (like a collection of block ciphers) shouldn't change often if written well, while more dynamic interfaces are better accessed through a more flexible abstraction than an increasingly convoluted toolchain for dynamic linking. The distaste for dynamic libraries has more to do with the complexity of the implementations and how it influences the way developers model their applications.

The preferences undergirding Unix and Plan 9 are rather accurately described as "Worse is Better":

http://en.wikipedia.org/wiki/Worse_is_better

And that characterization seems to be apposite here, too. I don't think you can successfully argue that IPC provides an _easier_ or more _convenient_ method for versioning the typical C interface. It almost certainly makes it more difficult. But you'll model your application differently, perhaps in a more elegant way. It's a kind of environmental forcing--you make better choices because the bad ones have been foreclosed.

Reinventing Plan 9... badly.

Posted Sep 26, 2014 19:16 UTC (Fri) by roskegg (subscriber, #105) [Link] (1 responses)

Nice write-up. To clarify, I was talking about shared, dynamically loaded libraries. Libraries per se are good, and used quite a bit in Plan 9. The ssl service in Plan 9 links the ssl library statically, then exports the API over the 9P protocol.

Reinventing Plan 9... badly.

Posted Sep 26, 2014 21:29 UTC (Fri) by foom (subscriber, #14868) [Link]

So if there's a vulnerability in the crypto library, how do you fix it? Is the SSL service the *only* thing that statically links the crypto library, so you just recompile that one binary? (answer almost certain to be no...)

Having an IPC-based mechanism seems orthogonal to having shared libraries. Unless you dispense with libraries entirely and do ALL sharing of code via IPC calls, I'd argue that you still want to be using shared libraries, not static linking, in all circumstances where you have decided to use a library rather than an IPC call.

Reinventing Plan 9... badly.

Posted Sep 28, 2014 1:46 UTC (Sun) by vonbrand (subscriber, #4458) [Link]

Small wonder. Each BSD is somewhat like a Linux distribution (one coherent set of packages, managed by a coordinated group), and they are equally incompatible among themselves.

Reinventing Plan 9... badly.

Posted Sep 26, 2014 9:02 UTC (Fri) by roskegg (subscriber, #105) [Link]

I looked at those links again; they are good, but not as complete as the one I thought I was linking to. A few weeks ago I stumbled across a really good one outlining not only the problems with shared libraries but also the graceful way Plan 9 handles the issues, including the things you mentioned. I'll keep looking for it; I thought for sure it was the second link. It seems only about a quarter of the content is still there.

