Bazaar on the slow track -- Monotone gets too little attention
Posted Sep 13, 2012 7:54 UTC (Thu) by graydon (guest, #5009)
In reply to: Bazaar on the slow track -- Monotone gets too little attention by martin.langhoff
Parent article: Bazaar on the slow track
That said, monotone was unusably slow _when compared to git_, and as project histories and development parallelism have grown, that delta has become an easy and correct criterion for picking git for production in most cases. Git also picked a more sensible branch-naming model (local, per-repo, no PKI; less ambitious but easier and more powerful), embraced history-rewriting early and aggressively, had the benefit of hindsight in most algorithms, declined to bother tracking object identity (which turns out to cost more performance than it's worth), figured out submodules, etc. etc. Git won this space hands down. There's no point competing with it anymore, imo.
Posted Sep 17, 2012 15:16 UTC (Mon) by zooko (guest, #2589)
It's one of those "for want of a nail the horseshoe was lost" kinds of moments in history -- if monotone had been fast enough for Linus to use at that time then presumably he never would have invented git.
And while *most* of the good stuff that the world has learned from git is stuff that git learned from monotone, I do feel a bit of relief that we have git's current branch naming scheme. Git's approach is basically not to try to solve it at all, and to make it Someone Else's Problem. That sucks: it leads to ad-hoc reliance on DNS/PKI, and it probably contributes to centralization (e.g. GitHub), but at least there is an obvious spot where something better could be plugged in to replace it. If we had monotone's deeper integration into DNS/PKI (http://www.monotone.ca/docs/Branches.html), it might be harder for people to understand what the problem is and how to change it.
Posted Sep 18, 2012 15:25 UTC (Tue) by graydon (guest, #5009)
All that's a distraction though, at this stage. Git won; but there's more to do. I agree with you that the residual/next/larger issue is PKI and naming. Or rather, getting _rid_ of PKI-as-we-have-tried-it and deploying something pragmatic, decentralized and scalable in its place for managing names-and-trust. The current system of expressing trust through X.509 PKI is a joke in poor taste, and git (rightly) rejects most of that in favour of three weaker, more functional models: the "DNS and soon-to-be-PKI DNSSEC+DANE" model of global-name disambiguation, the "manual ssh key-exchange with sticky key fingerprints" model of endpoint transport security, and the (imo strictly _worse_) "GPG web of trust" model for long-lived audit trails. These three systems serve as modest backstops to one another, but I still feel there's productive work to do exploring the socio-technical nexus of trust-and-naming at a more integrated, simplified, decentralized, less random and more holistic level (RFCs 2693 and 4255 aside).

There are still too many orthogonal failure modes, discontinuities and security skeuomorphisms; the experience of naming things, and trusting the names you exchange, at a global scale, still retains far too much of the sensation of pulling teeth. We wind up on IRC with old friends pasting SHA-256 fingerprints of things back and forth and saying "this one? no? maybe this one?" far too often.
Posted Sep 18, 2012 18:59 UTC (Tue) by jackb (guest, #41909)
My theory is that PKI doesn't work because it is based on a flawed understanding of what identity actually means. The fraction of the population that really understands what it means to assign cryptographic trust to a key is statistically indistinguishable from "no one". Maybe the reason the web of trust we've been promised since the 90s hasn't appeared yet is that the model itself is broken.
Posted Sep 18, 2012 19:43 UTC (Tue) by hummassa (subscriber, #307)
Ok, but... what is the alternative?
Posted Sep 18, 2012 20:05 UTC (Tue) by jackb (guest, #41909)
The question of "does the person standing in front of me control a particular private key?" can be answered by having each person's smartphone sign a challenge and exchange keys via QR codes (Bluetooth, NFC, etc.). This step should require very little human interaction.
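A minimal sketch of that challenge-signing step, assuming Ed25519 keys and the Python "cryptography" package (the QR/NFC key exchange itself is out of scope, and the names here are invented):

    # One side proves control of a private key by signing a fresh challenge.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    challenge = os.urandom(32)                  # verifier picks a random nonce

    prover_key = Ed25519PrivateKey.generate()   # key the prover claims to control
    signature = prover_key.sign(challenge)      # prover signs the challenge

    public_key = prover_key.public_key()        # received out of band, e.g. via QR code
    try:
        public_key.verify(signature, challenge)
        print("prover controls the private key behind this public key")
    except InvalidSignature:
        print("verification failed")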
That question, however, does not establish an identity as we humans understand it. Identity between social creatures is a set of shared experiences. The way that you "know" your friends is because of your memories of interacting with them.
Key signing should be done in person and mostly handled by an automated process. Identity formation is done by having the users verify facts about other people based on their shared experiences.
If properly implemented, the end result would look a lot like a social network that just happens to produce a cryptographic web of trust as a side effect.
Posted Sep 18, 2012 20:23 UTC (Tue) by graydon (guest, #5009)
(Keep in mind how much online verification comes out in the details of evaluating trust in our key-oriented PKI system anyway. And how often "denying a centralized / findable verification service" features in attack scenarios. Surprise, surprise.)
So, I also expect this will require -- or at least greatly benefit from -- a degree of "going around" current network infrastructure. Or at least a willingness to run verification traffic over a comfortable mixture of channels, to resist whole-network-controlling MITMs (as the current incarnation of the internet seems to have become).
But lucky for our future, communication bandwidth grows faster than everything else, and most new devices have plenty of unusual radios.
Posted Sep 18, 2012 20:25 UTC (Tue) by Cyberax (✭ supporter ✭, #52523)
For example, is there anybody here who can claim enough ASN.1 knowledge to parse encoded certificates and keys? I certainly can't; every time I need to generate a CSR or a key, I go to Google and search for the command line that makes OpenSSL spit out the magic base64 block.
Then there's the problem of the lack of delegation: it's not possible to create a master cert for "mydomain.com" that I can then use to sign certs for "host1.mydomain.com" and "host2.mydomain.com".
And so on. I'd gladly help a project to replace all this morass with clean, JSON-based certificates in a clear, human-readable encoding.
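To make that concrete, a purely hypothetical JSON "certificate" might look something like the sketch below (Python, assuming Ed25519 signatures from the "cryptography" package; all field names and values are made up):

    import base64, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    issuer_key = Ed25519PrivateKey.generate()   # stands in for the "mydomain.com" master key

    claims = {
        "issuer": "mydomain.com",
        "subject": "host1.mydomain.com",
        "public_key": base64.b64encode(b"subject key bytes go here").decode(),
        "not_before": "2012-09-18T00:00:00Z",
        "not_after": "2013-09-18T00:00:00Z",
    }

    # Canonical serialization (sorted keys, no stray whitespace) so everyone signs the same bytes.
    payload = json.dumps(claims, sort_keys=True, separators=(",", ":")).encode()
    certificate = {
        "claims": claims,
        "signature": base64.b64encode(issuer_key.sign(payload)).decode(),
    }
    print(json.dumps(certificate, indent=2))

Such a cert is readable with nothing more exotic than a JSON parser, and the delegation case above is just the domain's master key signing a claims block for a subdomain.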
Posted Sep 18, 2012 21:16 UTC (Tue) by jackb (guest, #41909)
The database would consist of one table that associates arbitrary text strings with public key IDs, and another table containing cryptographically-signed affirmations or refutations of the entries in the first table.
An example of an arbitrary text string could be a legal name, an email address, "inventor of the Linux kernel", "CEO of Acme, Inc.", etc.
Everybody is free to claim anything they want, and everyone else is free to confirm or refute it. A suitable algorithm would be used to sort out these statements based on the user's location in the web of trust to estimate the veracity of any particular statement.
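A minimal sketch of that two-table layout (SQLite via Python's sqlite3 module; the schema, key IDs and statements are all invented for illustration):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE assertions (
        id        INTEGER PRIMARY KEY,
        key_id    TEXT NOT NULL,   -- public key the statement is about
        statement TEXT NOT NULL    -- arbitrary text: a name, a role, an email address
    );
    CREATE TABLE attestations (
        assertion_id  INTEGER NOT NULL REFERENCES assertions(id),
        signer_key_id TEXT NOT NULL,   -- who is vouching or disputing
        verdict       TEXT NOT NULL,   -- 'confirm' or 'refute'
        signature     BLOB NOT NULL    -- signer's signature over the assertion
    );
    """)

    db.execute("INSERT INTO assertions (key_id, statement) VALUES (?, ?)",
               ("ABCD1234", "inventor of the Linux kernel"))
    db.execute("INSERT INTO attestations VALUES (1, 'EF567890', 'confirm', x'00')")

The trust-sorting algorithm mentioned above would then walk the attestations outward from the user's own key to weigh any given statement.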
The value of the web of trust depends on getting people to actually use it, so the tools for managing it would need to be enjoyable to work with instead of painful. That's one reason I think the user interface should be similar to a social network: the empirical evidence suggests that people like using Facebook more than they like using GPG or OpenSSL. The other reason is that social networks better model how people actually interact in real life, so making the web of trust operate that way is more intuitive.