
Bazaar on the slow track -- Monotone gets too little attention

Posted Sep 17, 2012 15:16 UTC (Mon) by zooko (guest, #2589)
In reply to: Bazaar on the slow track -- Monotone gets too little attention by graydon
Parent article: Bazaar on the slow track

If I recall correctly, on the day (weekend?) that Linus tried monotone, the then-current release of monotone had some diagnostic/debugging/profiling code compiled in, which caused it to have superlinear runtime for some computation or other. Correct me if I'm wrong, Graydon; I think what I'm recalling is from something you wrote shortly thereafter.

It's one of those "for want of a nail the horseshoe was lost" kinds of moments in history -- if monotone had been fast enough for Linus to use at that time then presumably he never would have invented git.

And while *most* of the good stuff that the world has learned from git is stuff that git learned from monotone, I do feel a bit of relief that we have git's current branch naming scheme. Git's approach is basically not to try to solve naming at all, and to make it Someone Else's Problem. That sucks, it leads to ad-hoc reliance on DNS/PKI, and it probably contributes to centralization (e.g. GitHub), but at least there is an obvious spot where something better could be plugged in as a replacement. If we had monotone's deeper integration with DNS/PKI (http://www.monotone.ca/docs/Branches.html), it might be harder for people to understand what the problem is and how to change it.



Bazaar on the slow track -- Monotone gets too little attention

Posted Sep 18, 2012 15:25 UTC (Tue) by graydon (guest, #5009)

I don't think it was just a matter of a missing nail in a horseshoe. If I _had_ to point to a single matter, it would center on identity-tracking (that is, not "just dumb content tracking"). We initially did content-tracking alone -- which was orders of magnitude faster -- and found that we were stapling together too many ad-hoc merge "algorithms" to reconstruct events like file and directory renames, and Java users were complaining about the inaccuracy of those, so we wound up building a large (and, as it turns out, computationally expensive) secondary layer of logic concerned with file and directory object lifecycles. That's probably the source of the lion's share of the costs; but even if we hadn't done that, I'm sure the amount of in-memory transformation and verification, data re-parsing, crypto, and simple buffer-copying / sqlite I/O would have doomed us going up against kernel engineers. They think in a mode much closer to "zero copies and never calculate anything twice". Very hard to compete with, given my background and coding style. I'm happy to concede defeat on implementation here; git _flies_. Very impressive implementation (though I do wish it'd integrate rolling-checksum fragment-consolidation in its packfiles, a la bup).

All that's a distraction, though, at this stage. Git won; but there's more to do. I agree with you that the residual/next/larger issue is PKI and naming. Or rather, getting _rid_ of PKI-as-we-have-tried-it and deploying something pragmatic, decentralized and scalable in its place for managing names-and-trust. The current system of expressing trust through X.509 PKI is a joke in poor taste, and git (rightly) rejects most of that in favour of three weaker, more functional models: the "DNS and soon-to-be-PKI DNSSEC+DANE" model of global-name disambiguation, the "manual ssh key-exchange with sticky key fingerprints" model of endpoint transport security, and the (imo strictly _worse_) "GPG web of trust" model for long-lived audit trails. These three systems serve as modest backstops to one another, but I still feel there's productive work to do exploring the socio-technical nexus of trust-and-naming at a more integrated, simplified, decentralized, less random and more holistic level (RFCs 2693 and 4255 aside). There are still too many orthogonal failure modes, discontinuities and security skeuomorphisms; the experience of naming things, and trusting the names you exchange, at a global scale, still retains far too much of the sensation of pulling teeth. We wind up on IRC with old friends pasting SHA-256 fingerprints of things back and forth and saying "this one? no? maybe this one?" far too often.

Bazaar on the slow track -- Monotone gets too little attention

Posted Sep 18, 2012 18:59 UTC (Tue) by jackb (guest, #41909)

> All that's a distraction, though, at this stage. Git won; but there's more to do. I agree with you that the residual/next/larger issue is PKI and naming. Or rather, getting _rid_ of PKI-as-we-have-tried-it and deploying something pragmatic, decentralized and scalable in its place for managing names-and-trust. The current system of expressing trust through X.509 PKI is a joke in poor taste, and git (rightly) rejects most of that in favour of three weaker, more functional models: the "DNS and soon-to-be-PKI DNSSEC+DANE" model of global-name disambiguation, the "manual ssh key-exchange with sticky key fingerprints" model of endpoint transport security, and the (imo strictly _worse_) "GPG web of trust" model for long-lived audit trails. These three systems serve as modest backstops to one another, but I still feel there's productive work to do exploring the socio-technical nexus of trust-and-naming at a more integrated, simplified, decentralized, less random and more holistic level (RFCs 2693 and 4255 aside). There are still too many orthogonal failure modes, discontinuities and security skeuomorphisms; the experience of naming things, and trusting the names you exchange, at a global scale, still retains far too much of the sensation of pulling teeth. We wind up on IRC with old friends pasting SHA-256 fingerprints of things back and forth and saying "this one? no? maybe this one?" far too often.

My theory is that PKI doesn't work because it is based on a flawed understanding of what identity actually means.

The fraction of the population that really understands what it means to assign cryptographic trust to a key is statistically indistinguishable from "no one". Maybe the reason that the web of trust we've been promised since the 90s hasn't appeared yet is because the model itself is broken.

Bazaar on the slow track -- Monotone gets too little attention

Posted Sep 18, 2012 19:43 UTC (Tue) by hummassa (subscriber, #307)

> The fraction of the population that really understands what it means to assign cryptographic trust to a key is statistically indistinguishable from "no one". Maybe the reason that the web of trust we've been promised since the 90s hasn't appeared yet is because the model itself is broken.

Ok, but... what is the alternative?

Bazaar on the slow track -- Monotone gets too little attention

Posted Sep 18, 2012 20:05 UTC (Tue) by jackb (guest, #41909)

Now that people are carrying mobile, internet-connected computers around with them basically all the time, key signing can be automated.

The question of "does the person standing in front of me control a particular private key" can be answered by having each person's smartphone sign a challenge and exchange keys via QR codes (Bluetooth, NFC, etc.). This step should require very little human interaction.
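As a rough illustration, the exchange might flow like this. This is a hypothetical sketch: a real implementation would use a public-key signature scheme such as Ed25519 with the payloads carried in QR codes, whereas this toy version uses HMAC with a per-device secret purely to keep the message flow self-contained.

```python
import base64
import hashlib
import hmac
import os

# Toy model of the in-person challenge-response described above. HMAC with a
# per-device secret stands in for real signing; a real scheme would use
# public-key signatures so the verifier never needs the prover's secret.

def new_device():
    """Each phone holds a long-lived secret; a 'key ID' is derived from it."""
    secret = os.urandom(32)
    key_id = hashlib.sha256(secret).hexdigest()[:16]
    return {"secret": secret, "key_id": key_id}

def make_challenge():
    # The verifier generates a fresh nonce and displays it as a QR code.
    return os.urandom(16)

def respond(device, challenge):
    # The prover "signs" the nonce and displays the response as a QR code.
    tag = hmac.new(device["secret"], challenge, hashlib.sha256).digest()
    return {"key_id": device["key_id"], "tag": base64.b64encode(tag).decode()}

def verify(device, challenge, response):
    # With real signatures only the public key would be needed here; the
    # HMAC stand-in has to cheat and reuse the device record.
    expected = hmac.new(device["secret"], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.b64decode(response["tag"]))

alice = new_device()
challenge = make_challenge()
response = respond(alice, challenge)
assert verify(alice, challenge, response)          # fresh response accepted
assert not verify(alice, make_challenge(), response)  # replay against a new nonce fails
```

The fresh nonce is what makes the exchange automatable without replay risk: a response only proves control of the key for the challenge just issued.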

Answering that question, however, does not establish an identity as we humans understand it. Identity between social creatures is a set of shared experiences: the way you "know" your friends is through your memories of interacting with them.

Key signing should be done in person and mostly handled by an automated process. Identity formation is done by having the users verify facts about other people based on their shared experiences.

If properly implemented, the end result would look a lot like a social network that just happens to produce a cryptographic web of trust as a side effect.

Bazaar on the slow track -- Monotone gets too little attention

Posted Sep 18, 2012 20:23 UTC (Tue) by graydon (guest, #5009)

I agree. My hunch (currently exploring in code) is that a more useful model involves defining trust in reference to cross-validation between multiple private small-group communication-histories. Put another way: identity should adhere to evidence concerning communication-capability (and the active verification thereof), not evidence of decrypting long-lived keys. Keys should always be ephemeral. They'll be broken, lost or stolen anyways; best to treat them as such.

(Keep in mind how much online-verification comes out in the details of evaluating trust in our key-oriented PKI system anyways. And how often "denying a centralized / findable verification service" features in attack scenarios. Surprise surprise.)

So, I also expect this will require -- or at least greatly benefit from -- a degree of "going around" current network infrastructure. Or at least a willingness to run verification traffic over a comfortable mixture of channels, to resist whole-network-controlling MITMs (as the current incarnation of the internet seems to have become).

But lucky for our future, communication bandwidth grows faster than everything else, and most new devices have plenty of unusual radios.

Bazaar on the slow track -- Monotone gets too little attention

Posted Sep 18, 2012 20:25 UTC (Tue) by Cyberax (✭ supporter ✭, #52523)

PKI is a failure on all levels, starting from technical and going up to the social/management level.

For example, is there anybody here who can claim enough ASN.1 knowledge to parse encoded certificates and keys? I certainly can't; every time I need to generate a CSR or a key, I go to Google and search for the command line required to make OpenSSL spit out the magic binhex block.

Then there's the lack of delegation: it's not possible to create a master cert for "mydomain.com" which I can then use to sign "host1.mydomain.com" and "host2.mydomain.com".

And so on. I'd gladly help a project to replace all this morass with clean JSON-based certificates with clear human-readable encoding.
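To make the wish concrete, here is a hypothetical sketch of what JSON certificates with delegation could look like. Every field name and the chain-verification rule are invented for illustration, and HMAC over canonical JSON stands in for a real public-key signature, so this is a toy verifier rather than a proposal:

```python
import fnmatch
import hashlib
import hmac
import json
import os

# Toy JSON certificates with scoped delegation: a master cert for
# "mydomain.com" may sign host certs inside its delegated scope.

def sign(secret, payload):
    # HMAC over canonical JSON is a stand-in for a real signature; a real
    # scheme would sign with the parent's private key and embed public keys.
    blob = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, blob, hashlib.sha256).hexdigest()

def make_cert(secret, subject, scope):
    payload = {"subject": subject, "scope": scope}
    return {"payload": payload, "sig": sign(secret, payload)}

def child_secret(secret, subject):
    # Stand-in for per-cert key material, derived so signer and verifier agree.
    return hashlib.sha256(secret + subject.encode()).digest()

def verify_chain(root_secret, chain, hostname):
    """Walk parent -> child, checking each signature and that every child's
    subject stays inside its parent's delegated scope."""
    secret, scope = root_secret, "*"
    for cert in chain:
        if cert["sig"] != sign(secret, cert["payload"]):
            return False  # bad signature
        if not fnmatch.fnmatch(cert["payload"]["subject"], scope):
            return False  # subject escapes the delegated scope
        scope = cert["payload"]["scope"]
        secret = child_secret(secret, cert["payload"]["subject"])
    return fnmatch.fnmatch(hostname, scope)

root = os.urandom(32)
master = make_cert(root, "mydomain.com", "*.mydomain.com")
host = make_cert(child_secret(root, "mydomain.com"),
                 "host1.mydomain.com", "host1.mydomain.com")
assert verify_chain(root, [master, host], "host1.mydomain.com")
assert not verify_chain(root, [master, host], "host1.evil.com")
```

The point of the sketch is the scope check: each certificate can only delegate names beneath its own, which is exactly the "master cert signs host certs" workflow the X.509 tooling makes so awkward.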

Bazaar on the slow track -- Monotone gets too little attention

Posted Sep 18, 2012 21:16 UTC (Tue) by jackb (guest, #41909)

I think there are two components necessary to build a web of trust that real people will actually use. The first is the automated in-person key signing I described in an earlier post. The second is an online database of facts about a particular identity.

The database would consist of one table that associates arbitrary text strings with public key IDs, and another table containing cryptographically-signed affirmations or refutations of the entries in the first table.

An example of an arbitrary text string could be a legal name, an email address, "inventor of the Linux kernel", "CEO of Acme, Inc.", etc.

Everybody is free to claim anything they want, and everyone else is free to confirm or refute it. A suitable algorithm would be used to sort out these statements based on the user's location in the web of trust to estimate the veracity of any particular statement.
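A minimal sketch of those two tables and one possible scoring rule. Everything here is hypothetical: real entries would be cryptographically signed, the trust weights are a stand-in for "the user's location in the web of trust", and the weighted average is just one plausible choice of "suitable algorithm".

```python
# Toy version of the two-table design: claims map text strings to key IDs,
# and statements affirm (+1) or refute (-1) individual claims.

claims = []      # (claim_id, key_id, text)
statements = []  # (claim_id, endorser_key, +1 affirm / -1 refute)

def add_claim(claim_id, key_id, text):
    claims.append((claim_id, key_id, text))

def attest(claim_id, endorser_key, affirm):
    statements.append((claim_id, endorser_key, 1 if affirm else -1))

def veracity(claim_id, trust):
    """Trust-weighted average of affirmations minus refutations, in [-1, 1].

    `trust` maps endorser keys to weights in [0, 1], standing in for how
    close each endorser sits to the observer in the web of trust.
    """
    votes = [(trust.get(k, 0.0), v) for c, k, v in statements if c == claim_id]
    total = sum(w for w, _ in votes)
    return sum(w * v for w, v in votes) / total if total else 0.0

add_claim("c1", "key:linus", "inventor of the Linux kernel")
attest("c1", "key:alice", affirm=True)
attest("c1", "key:mallory", affirm=False)

# Weights as seen from one observer's position in the web of trust.
trust = {"key:alice": 0.9, "key:mallory": 0.1}
print(veracity("c1", trust))  # → 0.8
```

Note that veracity is relative to the observer: a user who trusts mallory more than alice would score the same claim negatively, which matches the "location in the web of trust" idea above.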

The value of the web of trust depends on getting people to actually use it, so the tools for managing it would need to be enjoyable to work with instead of painful. That's one reason I think the user interface should resemble a social network: the empirical evidence suggests that people like using Facebook more than they like using GPG or OpenSSL. The other reason is that social networks better model how people actually interact in real life, so a web of trust that operates that way is more intuitive.


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds