My God are you alarmist
Posted Feb 27, 2017 9:42 UTC (Mon)
by linuxrocks123 (subscriber, #34648)
In reply to: Proof by tialaramex
Parent article: Linus on Git and SHA-1
Guess what? rsync uses Adler-32 and MD5, and it works fine, and the world is still here. No fire.
My personal hope? SCMs will use this opportunity to change their code not to rely on cryptographically secure hashes. They have no fundamental need for them, and assuming
(hash(A) = hash(B)) -> (A = B)
is obviously logically unsound, sometimes unsafe, and always the worst of kludges. Subversion deserves the mess it's in now, and I say that even as I now have to make sure never to commit those two files to my giant Subversion repository that controls my home directory thanks to their f-up. I hope they will fix their design rather than kick the can down the road by replacing SHA-1 with something new. You don't need a cryptographically secure hash, guys: don't use one, and don't assume the unsound.
Posted Feb 27, 2017 15:02 UTC (Mon)
by tialaramex (subscriber, #21167)
[Link] (14 responses)
SCMs have chosen crypto hashes because they want content addressability, and a crypto hash lets you achieve that efficiently. Git in particular doesn't bother doing very much else except content addressability; this is part of how it's able to be distributed so easily. Unlike other schemes, content addressing means two parties are sure to agree on what to name the same thing.
Insisting that hash(A) = hash(B) ⇏ A = B is all very well logically, but as a practical matter all it does is oblige you to send a LOT of bits everywhere to do anything, because the only useful content-addressable reference for A other than hash(A) is A itself. And what do you get for this enormous increase in bandwidth, CPU power and disk space? Er, well, in theory there's a minuscule chance it'll catch an error case some day, for someone, somewhere. How minuscule? Let's find out:
For example, take a 256-bit crypto hash: if you use this new SCM to create a billion-billion-billion (10^27) new documents per second (good luck with that), and compare them all against all the other documents ever created this way (remember, you need to store them all somewhere), then by the year 9999 CE you'd probably have run into one collision which the comparison would detect but the hash would not. Although maybe not; chance can be a funny thing.
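(As a back-of-the-envelope check of those figures, here is a small Python sketch using the standard birthday approximation, assuming an ideal 256-bit hash; the numbers are purely illustrative.)

    # Birthday-bound sanity check for the figures above, assuming an
    # ideal 256-bit hash and P(collision) ~= n^2 / 2^(k+1) for n items.
    rate = 10**27                                # documents per second
    seconds = (9999 - 2017) * 365 * 24 * 3600    # roughly until 9999 CE
    n = rate * seconds                           # total documents: ~2.5e38
    k = 256                                      # hash width in bits
    p = n * n / 2.0**(k + 1)
    print(f"documents: {n:.3e}, collision probability: {p:.2f}")  # ~0.27

So the chance of even one collision by 9999 CE comes out at roughly one in four, which is why "probably, although maybe not" is about right.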
All worth it, right? Unfortunately, all the extra bandwidth, CPU power and disk space incur their own risk of errors, and that risk is far larger. I mean, still tiny, but because the risk from using a good crypto hash was so minute, these tiny risks dwarf it. You've wasted all those resources AND made things worse.
So that's why nobody is doing that.
Posted Feb 27, 2017 22:28 UTC (Mon)
by linuxrocks123 (subscriber, #34648)
[Link] (13 responses)
Posted Feb 27, 2017 23:29 UTC (Mon)
by tialaramex (subscriber, #21167)
[Link] (12 responses)
Of _course_ content addressability isn't obligatory for an SCM; SCCS doesn't have it. But it turns out people actually want their SCM to be much better than SCCS, and lots of the features you may not care about rely on this choice. Git is so reliant on this choice that Linus says it's better to think of the version control features as just a thin layer of icing: the main thing he built is a content-addressable filesystem.
Likewise, doubtless any number of amazing schemes could be dreamed up for content addressability, and I have no doubt that somewhere a Computer Science course has decided to spend lots of time walking students through one or more of the fancier examples, just as one of my CS undergrad courses spent a bunch of time trying to brutalize the C pre-processor into a way of managing HTML files so each student could make their own web page (this probably dates me). Everything is being taught somewhere. But that didn't make the C pre-processor suitable for the task, and fancier schemes don't fix the problem you're complaining of.
The Pigeonhole Principle, the same thing that forbids infinite compression algorithms and other such schemes, requires that you accept some non-zero risk of collisions if you're going to throw bits away. It doesn't matter how you do it; the risk is the same. That's why we pick crypto hashes: they have the best properties we could hope for under the circumstances.
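(For concreteness, here is a tiny Python illustration of the pigeonhole argument; the 8-bit truncation is deliberately extreme, but the same counting applies at any hash width.)

    # Pigeonhole in miniature: squeeze 257 distinct inputs through an
    # 8-bit (256-bucket) hash; at least two must share a bucket,
    # no matter which hash function is chosen.
    import hashlib

    buckets = {}
    for i in range(257):
        doc = f"document-{i}".encode()
        h = hashlib.sha256(doc).digest()[0]   # keep only the first 8 bits
        buckets.setdefault(h, []).append(doc)

    crowded = [h for h, docs in buckets.items() if len(docs) > 1]
    print(f"{len(crowded)} buckets hold more than one input")  # always >= 1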
Posted Feb 28, 2017 10:03 UTC (Tue)
by linuxrocks123 (subscriber, #34648)
[Link] (11 responses)
I'm not sure how you misread what I wrote to come to that conclusion.
> Git is so reliant on this choice that Linus says it's better to think of the version control features as just a thin layer of icing: the main thing he built is a content-addressable filesystem.
So Git will continue to rely on that unsound kludge, then. That's a shame, but I don't personally use it for anything serious, so I guess I don't care.
> trying to brutalize the C pre-processor into a way of managing HTML files so each student could make their own web page
That sounds beyond horrible.
> The Pigeonhole Principle, the same thing that forbids infinite compression algorithms and other such schemes, requires that you accept some non-zero risk of collisions if you're going to throw bits away. It doesn't matter how you do it; the risk is the same
So don't do that. Uniquely identify things some other way, like with a hash plus its location on a linked list of things that hash to the same value. Or don't use hashes at all and use a trie, that tree thing where each letter in the string moves you down a level of the tree. I'm not going to write out a full design for some hypothetical SCM that doesn't rely on hashes not having collisions.
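(To make the first suggestion concrete, here is a minimal Python sketch of a "weak hash plus position on the collision chain" store; the class and the deliberately weak 16-bit hash are hypothetical, not taken from any real SCM.)

    # Sketch of 'hash plus location on the collision chain': identifiers
    # are (weak hash, index within that hash's chain) pairs, and equality
    # is established by full comparison, never by trusting the hash.
    import zlib

    class ChainStore:
        def __init__(self):
            self.chains = {}                      # weak hash -> list of objects

        def put(self, obj: bytes):
            h = zlib.crc32(obj) & 0xFFFF          # deliberately weak 16-bit hash
            chain = self.chains.setdefault(h, [])
            if obj not in chain:                  # byte-for-byte comparison
                chain.append(obj)
            return (h, chain.index(obj))          # exact, but only locally meaningful

        def get(self, ident):
            h, pos = ident
            return self.chains[h][pos]

The catch, as the reply below points out, is that these identifiers are only meaningful relative to one particular store's chains.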
> we pick crypto hashes: they have the best properties we could hope for under the circumstances.
Yeah... if you're going to try to use hash(A) = hash(B) -> A = B, crypto hashes are about the only way to make that sort of work. If you're _NOT_ going to make that assumption, and you shouldn't, you can use much cheaper hashes and save some CPU time.
The only reason people get away with abusing crypto hashes for this at all is that SCMs, and the other places this hack is used, aren't in-memory data structures, so the hashes don't need to be computed frequently.
But, hey, if the Git people think it's easier to break the universe every five years to switch to a new hashing algorithm, be my guest. It's not my universe. I just hope the SVN people have more sense and fix the problem correctly. The design of Subversion doesn't rely on the unsound assumption, except for one optional deduplication feature (representation sharing). And fixing that would be as easy as comparing the actual files before doing the deduplication. Easy.
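(What "compare the actual files first" might look like, as a minimal Python sketch; the store layout and names are hypothetical, not Subversion's actual representation-sharing code.)

    # Deduplicate only after a byte-for-byte comparison: a hash collision
    # then costs one redundant copy instead of corrupting the repository.
    import hashlib

    def store(blob: bytes, index: dict) -> bytes:
        key = hashlib.sha1(blob).digest()
        for existing in index.setdefault(key, []):
            if existing == blob:      # contents proven identical
                return existing       # safe to share the representation
        index[key].append(blob)       # collision or new blob: keep both
        return blob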
Posted Feb 28, 2017 13:48 UTC (Tue)
by madscientist (subscriber, #16861)
[Link]
Which is not to say that I think Git shouldn't ensure that a collision won't corrupt the repository... but anything beyond that is useless work.
> if the Git people think it's easier to break the universe every five years to switch to a new hashing algorithm
I know you're being hyperbolic, but just for posterity: Git is currently 12 years old, and I wouldn't be surprised if it's another 2 years or more before a new hashing algorithm is in common use. If we get to a point where cryptographic hashes are being broken within 5 years, we'll have much bigger problems than Git collisions. Also, the introduction of the new hash won't "break the universe": all the plans for introducing it include backward compatibility. It's clearly a non-starter to require everyone to rewrite their entire history, and it's not necessary.
Posted Feb 28, 2017 14:24 UTC (Tue)
by tialaramex (subscriber, #21167)
[Link] (9 responses)
There are three things you could choose here; let's look at each in turn, because I think you are gradually learning a little bit.
1. Everybody makes their own linked lists. Whenever we look at any object, we need to examine it in order to compare it with the things in our list with the same hash and figure out whether it was already on the list. We can't share our "unique" identifiers for things, because we each have our own linked list, with different things in it and in a different order, so the "unique" identifiers are only locally unique. When we discuss things with other people, the only way to refer to them is by sending the entire thing to be discussed; even if we're pretty sure they have it, we can't identify it to them any other way, because they have different linked lists.
You can build a pretty good little local revision control system this way. It's a bit resource-hungry, slow, and disk-bandwidth-intensive, but it works well. Attempts to build a distributed revision control system this way are painful, although centralised revision control is practical for people with plenty of network bandwidth, so they can upload and download everything the remote system needs to find or add in its linked lists.
2. A Central Authority makes the linked lists. The unique identifiers are global, but we can't find out the identifier for anything unless we either send it to the Central Authority for them to identify, or we have a (potentially partial) copy of the Central Authority's linked list to compare against. We can discuss things with other people because we share these globally unique references, but we're restricted to only talking about things the Central Authority has seen. Somebody has to pay for the Central Authority, which also functions as a global library of all human knowledge, albeit one that doesn't have a useful index.
This time, using the system at all (even locally) is clumsy and slow unless you have a high-bandwidth connection to the Central Authority or (for read-only use) a complete and up-to-date copy of its linked lists. The good news is that distributed usage is no harder than local use; the bad news is that's not saying much.
3. We use an _imaginary_ totally ordered list instead of a real linked list, so that we can work out where anything would be in this totally ordered list and write that location down without keeping all the items from the list. For a crypto hash we can't even attempt this, but with a weak or trivial hash this isn't too hard to arrange. The hash plus the location number uniquely identify anything. This seems like an amazing breakthrough, until we remember the Pigeonhole Principle. Where do all the bits go? Ah, they're in the location number. Instead of N=9703384921593020482 I now have hash(N) = 446 plus location(N) = 27829639719764153, I write those down, and er... wait, why did we invent this complicated scheme that has no benefit whatsoever?
This system is pointless and nobody would build it.
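(A concrete version of the arithmetic in option 3, using mod-1000 as the trivial hash, so the digits differ from the figures above; the point is that the (hash, location) pair is just the number, re-encoded.)

    # Option 3 in miniature: with hash(n) = n % 1000, the 'location' in
    # the imaginary sorted list of all integers sharing that hash is
    # n // 1000, and the (hash, location) pair re-encodes n exactly.
    n = 9703384921593020482
    h, loc = n % 1000, n // 1000
    print(h, loc)                 # 482 and 9703384921593020
    assert loc * 1000 + h == n    # no bits saved: the ID *is* the number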
Posted Feb 28, 2017 15:56 UTC (Tue)
by linuxrocks123 (subscriber, #34648)
[Link] (8 responses)
Posted Feb 28, 2017 18:06 UTC (Tue)
by nix (subscriber, #2304)
[Link] (6 responses)
Fundamentally, any DVCS requires the ability to determine whether two objects are the same, and an indication of their histories. Either you do it by transmitting the whole object and history (-> very expensive) or you do it by assigning an identifier (-> impractical on any DVCS without central coordination) or you do it by some scheme that collapses large objects into much smaller identifiers algorithmically (i.e. a hash).
I see no alternatives to these three, and only the last yields anything remotely usable as a DVCS. Your assertion that this is not true is not convincing, not to me, at least.
Posted Mar 1, 2017 3:41 UTC (Wed)
by linuxrocks123 (subscriber, #34648)
[Link] (5 responses)
Fine, here's a non-exhaustive list of possible ways to uniquely ID a commit, excluding the three from earlier:
- Email address, plus an ID number generated by the client with that email address that is always 1 + the previous ID number generated by the coder's machine.
- Hostname of an arbitrary server running a unique-ID-generation network service, plus the ID number said server generated for your changeset.
- Public key of the committing programmer, plus an ID number (always 1 + the previous local ID) generated by the coder's machine.
There's three (the third is sketched below). I'm not interested in debating stupid details like "what if the coder has two machines" (then use a different public key on each machine or something. Whatever.). Minor problems have minor solutions. You get the idea here. HAND.
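(A rough Python sketch of that third scheme; the file name and ID format are made up for illustration. Note what it trades away: the ID says nothing about the commit's content, so content equality still has to be established some other way.)

    # Commit IDs from a public-key fingerprint plus a strictly increasing
    # local counter: unique by construction, but content-blind.
    import os

    COUNTER_FILE = os.path.expanduser("~/.toy_scm_counter")  # hypothetical path

    def next_commit_id(pubkey_fingerprint: str) -> str:
        try:
            with open(COUNTER_FILE) as f:
                counter = int(f.read())
        except FileNotFoundError:
            counter = 0
        counter += 1
        with open(COUNTER_FILE, "w") as f:
            f.write(str(counter))
        return f"{pubkey_fingerprint}:{counter}"

    print(next_commit_id("ab12cd34ef56"))   # e.g. 'ab12cd34ef56:1'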
Posted Mar 1, 2017 13:30 UTC (Wed)
by itvirta (guest, #49997)
[Link]
Posted Mar 1, 2017 14:07 UTC (Wed)
by gmatht (guest, #58961)
[Link] (1 responses)
In any case (3) has the same theoretical problem as hashes. How do you *know* your public key is unique? My understanding is that keys are more vulnerable to collisions because key generation also relies on being able to generate truly random numbers: http://jblevins.org/log/ssh-vulnkey (obviously they have the advantage that you wouldn't have to check the central server on each commit).
Posted Mar 1, 2017 14:56 UTC (Wed)
by excors (subscriber, #95769)
[Link]
But there's still a non-zero chance that it gives the wrong answer. If you insist on absolute mathematical certainty, to the extent that you won't trust any hash function as a unique identifier, then you shouldn't trust public key cryptography either.
Posted Mar 1, 2017 15:32 UTC (Wed)
by ianmcc (subscriber, #88379)
[Link] (1 responses)
Posted Mar 1, 2017 16:51 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted Mar 1, 2017 1:19 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
In other words, it's impossible to create a DVCS without referring to the content of the nodes.