Updating the Git protocol for SHA-256
The Git source-code management system has for years been moving toward abandoning the Secure Hash Algorithm 1 (SHA-1) in favor of the more secure SHA-256 algorithm. Recently, the project moved a step closer to that goal with contributors implementing new Git protocol capabilities to enable the transition.
Why move from SHA-1
Fundamentally, Git repositories are built on hash values, presently generated using the SHA-1 algorithm. A simplified explanation of the importance of hash values to Git follows; readers interested in the details may also want to see our previous coverage.
SHA-1 hash values are strings that uniquely represent the contents of an object (for example, a source file); no two different files should ever produce the same string. In Git, every object has a hash-value representation of its contents. The directory structure of these objects is stored in a tree object: an organized hierarchy of hashes, each one pointing to a specific version of a specific object within the repository. Tree objects, as mentioned, are themselves hashed when stored in the repository. When a commit is made to the repository, the basic steps are:
- Files that changed are assigned new hash values
- A tree object containing the hashes of all the files in their current state is created, then hashed
- A commit object referencing the tree object's hash is created and hashed
In short, Git uses SHA-1 hashes everywhere to ensure the integrity of the repository's contents, effectively creating a chain of hash values representing the state of the repository over time, similar to blockchain technology.
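To make the hashing concrete, a blob's hash can be reproduced outside of Git; this minimal Python sketch uses Git's actual object encoding (a "blob <size>" header, a NUL byte, then the contents), and its output matches what `git hash-object` prints for the same data:

```python
import hashlib

def git_object_hash(obj_type: str, data: bytes) -> str:
    # Git hashes "<type> <size>\0<contents>", not the raw file contents.
    header = f"{obj_type} {len(data)}".encode() + b"\0"
    return hashlib.sha1(header + data).hexdigest()

# Same value that `git hash-object` reports for a file containing
# "hello world\n".
print(git_object_hash("blob", b"hello world\n"))
# -> 3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```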
The problem with SHA-1, or any hashing algorithm, is that its usefulness erodes once collisions become practical. A collision, in this case, means two pieces of data that produce the same hash value. If an attacker is able to replace the contents of an object in such a way that it still produces the same hash value, the idea of trusting the hash value to uniquely define the contents of a Git object breaks down. Worse, if one were to find a way to intelligently produce those collisions, say to inject malicious code, the security implications would be devastating: it would allow a file in the chain to be replaced unnoticed. Since practical collision attacks on SHA-1 have already been demonstrated, it is important to move away from it; that transition is one step closer with recent developments.
State of the SHA-256 transition
The primary force behind the move from SHA-1 to SHA-256 is contributor brian m. carlson, who has been working for years to make the transition happen. It has not been an easy task: the original Git implementation hard-coded SHA-1 as the only supported algorithm, and countless existing repositories need to be converted from SHA-1 to SHA-256. Moreover, while the transition takes place, Git needs to maintain interoperability between the two hash algorithms within the context of a single repository, since users may still be using older Git clients.
The problems surrounding that transition are complicated. Different versions of Git clients and servers may or may not have SHA-256 support, and all repositories need to be able to work under both algorithms for some time to come. This means Git will need to keep track of objects in two different ways and work correctly regardless of the hashing algorithm. For example, users often abbreviate hash values when referencing commits: 412e40d041 instead of 412e40d041e861506bb3ac11a3a91e3, so even the fact that SHA-256 and SHA-1 hash values have different lengths is only marginally helpful.
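To see why, consider how an abbreviation is resolved; this simplified sketch (not Git's actual lookup code) treats an abbreviated name as a prefix match over all object names, whichever algorithm produced them:

```python
def resolve_abbrev(prefix: str, object_names: list[str]) -> str:
    # An abbreviation is usable only if it matches exactly one object,
    # so every object name must be considered, whatever its length.
    matches = [name for name in object_names if name.startswith(prefix)]
    if len(matches) != 1:
        raise ValueError(f"ambiguous or unknown object name: {prefix}")
    return matches[0]
```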
In the latest round of patches, carlson proposes changes to the communication-protocol logic to deal with the transition. This work was apparently not part of the original transition plan, but it became necessary in order to move forward, as carlson notes:
It was originally planned that we would not upgrade the protocol and would use SHA-1 for all protocol functionality until some point in the future. However, doing that requires a huge amount of additional work (probably incorporating several hundred more patches which are not yet written) and it's not possible to get the test suite to even come close to passing without a way to fetch and push repositories. I therefore decided that implementing an object-format extension was the best way forward.
The patch set extends the pack protocol used by Git clients and servers to keep track of the hashing algorithm; this is implemented via the new object-format capability. In the patch to the protocol documentation, carlson describes the object-format capability as a way for Git to indicate support for particular hashing algorithms:
This capability, which takes a hash algorithm as an argument, indicates that the server supports the given hash algorithms [...] When provided by the client, this indicates that it intends to use the given hash algorithm to communicate.
If the client supports SHA-256 hashes, this change to the protocol enables that to be specified directly. If the capability is omitted, Git assumes that hash values are presented as SHA-1.
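As a rough illustration of what this looks like on the wire, here is how a client request might be framed in the pack protocol's pkt-line format (four hex digits of total length, prefix included); the object name and the exact placement of the capability are illustrative, not taken from the patches:

```python
def pkt_line(payload: bytes) -> bytes:
    # pkt-line framing: four hex digits giving the total length
    # (including the four-digit prefix itself), then the payload.
    return f"{len(payload) + 4:04x}".encode() + payload

# Hypothetical first "want" line from a SHA-256-capable client;
# capabilities are sent along with the first want line.
oid = "8f0e" + "ab" * 30  # placeholder 64-hex-digit SHA-256 object name
print(pkt_line(b"want " + oid.encode() + b" object-format=sha256\n"))
```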
This provides a clear path forward for transports that carry the capabilities of the Git pack protocol (git://, SSH, and "smart" HTTP). It does not, however, cover the "dumb" HTTP transport (http://) or bundles, since those provide no capabilities. For these cases, the implementation attempts to guess the hash algorithm in use by looking at the hash length. Carlson notes that this works, but it could become a problem if SHA-256 were someday replaced with a different algorithm that also produces 256-bit output. To this, however, carlson says that he expects any hashing algorithm that might supersede SHA-256 to produce longer output:
The other two cases are the dumb HTTP protocol and bundles, both of which have no object-format extension (because they provide no capabilities) and are therefore distinguished solely by their hash length. We will have problems if in the future we need to use another 256-bit algorithm, but I plan to be improvident and hope that we'll move to longer algorithms in the future to cover ourselves for post-quantum security.
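A sketch of that length-based fallback, assuming hex-encoded object names (40 hex digits for SHA-1's 160 bits, 64 for SHA-256's 256 bits):

```python
def guess_hash_algorithm(hex_oid: str) -> str:
    # With no capability to consult, only the size of the name
    # distinguishes the algorithms -- hence the worry about a future
    # 256-bit algorithm being indistinguishable from SHA-256 here.
    if len(hex_oid) == 40:
        return "sha1"
    if len(hex_oid) == 64:
        return "sha256"
    raise ValueError("unrecognized object-name length")
```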
Carlson acknowledges that his solution to the technical challenges facing the move to SHA-256 isn't ideal. When cloning a repository, for example, the hashing algorithm used by the parent repository isn't known up front. Carlson's work gets around this with a two-step process:
Clone support is necessarily a little tricky because we are initializing a repository and then fetching refs, at which point we learn what hash algorithm the remote side supports. We work around this by calling the code that updates the hash algorithm and repository version a second time to rewrite that data once we know what version we're using. This is the most robust way I could approach this problem, but it is still a little ugly.
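In outline, the flow looks something like the following sketch; all of the names here are hypothetical stand-ins for Git's internals, not its real functions:

```python
from dataclasses import dataclass

@dataclass
class Repo:
    object_format: str  # "sha1" or "sha256"

def discover_object_format(url: str) -> str:
    # Stand-in for fetching refs and reading the remote side's
    # object-format capability.
    return "sha256"

def clone(url: str) -> Repo:
    # Step 1: initialize before the remote's algorithm is known.
    repo = Repo(object_format="sha1")
    # Step 2: after learning the remote's format, rewrite the hash
    # algorithm and repository version, as the quote describes.
    remote_format = discover_object_format(url)
    if remote_format != repo.object_format:
        repo.object_format = remote_format
    return repo
```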
What comes next
With this milestone reached, the end is in sight for a fully working implementation of SHA-256-powered repositories. That will be a major step in the evolution of Git, and arguably place it on solid footing for the future. In fact, carlson laid out what he expects the remaining patches to consist of:
Additional future series include one last series of test fixes (28 patches) plus six final patches in the series that enables SHA-256 support.
In closing, it is worth noting that one of the reasons this transition has been so hard is that the original Git implementation was not designed to allow swapping out the hashing algorithm. Much of the work put into the SHA-256 implementation has been walking back that initial design decision. Once complete, these changes not only provide an alternative to SHA-1, but also make Git fundamentally indifferent to the hashing algorithm used. This should make Git more adaptable should the need to replace SHA-256 with something stronger ever arise.
Posted Jun 19, 2020 16:38 UTC (Fri)
by sytoka (guest, #38525)
[Link] (25 responses)
Posted Jun 19, 2020 17:09 UTC (Fri)
by pj (subscriber, #4506)
[Link] (19 responses)
Posted Jun 20, 2020 0:34 UTC (Sat)
by ms-tg (subscriber, #89231)
[Link] (18 responses)
Posted Jun 20, 2020 14:07 UTC (Sat)
by Ericson2314 (guest, #139248)
[Link] (10 responses)
Git really should start merkleizing blob hashes / chunking blobs. Not only does it help with data exchange, but it also means faster hashing when a blob changes: O(log n) rather than O(n). This transition is the best time to fix things like this; it's a pity they don't seem to be under discussion.
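For readers unfamiliar with the term, here is a toy sketch of Merkle-style hashing of a chunked blob; the chunk size and tree shape are arbitrary choices for illustration, not anything proposed in this thread. When one chunk changes, only the hashes on its path to the root need recomputing, which is where the logarithmic cost comes from:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks: list[bytes]) -> bytes:
    # Hash each chunk, then repeatedly hash adjacent pairs (a lone
    # final node is hashed by itself) until one root value remains.
    level = [h(c) for c in chunks]
    while len(level) > 1:
        level = [h(b"".join(level[i:i + 2]))
                 for i in range(0, len(level), 2)]
    return level[0]

CHUNK = 8192  # illustrative fixed chunk size
blob = bytes(100_000)
chunks = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
print(merkle_root(chunks).hex())
```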
Posted Jun 20, 2020 23:14 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link] (9 responses)
- machine generated data (of some kind) that changes rarely
- non-text artifacts that change rarely
- best left to git-lfs, git-annex, or some other off-loading tool
I think experiments to test the actual benefits in organic Git repositories would be interesting, but I'd rather see the hash transition happen correctly and smoothly; it sounds complicated enough as it is. And this transition should be laying down version numbers in the formats that need them, so that another transition could leverage them to ease its upgrade path too.
Posted Jun 21, 2020 13:46 UTC (Sun)
by pabs (subscriber, #43278)
[Link] (8 responses)
Posted Jun 22, 2020 2:07 UTC (Mon)
by cyphar (subscriber, #110703)
[Link]
Posted Jun 27, 2020 5:14 UTC (Sat)
by ras (subscriber, #33059)
[Link] (6 responses)
But you say it has friends?
Posted Jun 27, 2020 5:46 UTC (Sat)
by pabs (subscriber, #43278)
[Link] (5 responses)
https://borgbackup.github.io/borgbackup/
There is also bup, much more closely related to git:
Posted Jun 27, 2020 7:19 UTC (Sat)
by johill (subscriber, #25196)
[Link] (4 responses)
Borg's encryption design seems to have one issue: as far as I can tell, the "content-based chunker" has a very small key (they claim 32 bits, but say it goes linearly through the algorithm, so not all of those bits eventually matter), which would seem to allow fingerprinting attacks ("you have this chain of chunk sizes, so you must have this file"). Borg has also been debating S3 storage for years without any movement.
Ultimately I landed on bup (which I had used previously), and have been working on adding both (asymmetric) encryption support and AWS/S3 storage to it; in the latter case you can effectively make your repo append-only (to the machine that's making the backup), i.e. AWS permissions ensure that it cannot actually delete the data. It could delete some metadata tables etc., but that's mostly recoverable (though I haven't written the recovery tools yet), apart from the ref names (which are stored only in DynamoDB for consistency reasons; S3 has almost no consistency guarantees).
It's probably not ready for mainline yet (and we're busy finishing the python 3 port in mainline), but I've actually used it recently to begin storing some of my backups (currently ~850GiB) in S3 Deep Archive.
Configuration references:
https://github.com/jmberg/bup/blob/master/Documentation/b...
https://github.com/jmberg/bup/blob/master/Documentation/b...
Some design documentation is in the code:
https://github.com/jmberg/bup/blob/master/lib/bup/repo/en...
If you use it, there are two other things in my tree that you'd probably want:
1) with a lot of data, the content-based splitting on 13 bits results in far too much metadata (and storage isn't that expensive anymore), so you'd want to increase that. Currently in master that's not configurable, but I changed that: https://github.com/jmberg/bup/blob/master/Documentation/b...
2) if you have lots of large directories (e.g. maildir) then minor changes to those currently consume a significant amount of storage space, since the entire folder (the list of files) is saved again. I have "treesplit" in my code that allows splitting up those trees (again, content-based) to avoid that issue; for my largest maildir of ~400k files it brings the amount of new data saved when a new email is written there down from close to 10 MB (after compression) to <<50kB. Looks like I didn't document that yet, but I should add it here: https://github.com/jmberg/bup/blob/master/Documentation/b.... The commit describes it a bit for now: https://github.com/jmberg/bup/commit/44006daca4786abe31e3...
And yes, I'm working with upstream on this.
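For the curious, content-based splitting of the kind described above can be sketched with a rolling checksum: a chunk boundary falls wherever the low bits of the checksum are all ones, so boundaries depend on content rather than offsets. This toy uses a plain windowed sum, far weaker than bup's actual rollsum, and the 13-bit default mirrors the split setting mentioned above:

```python
def split_points(data: bytes, bits: int = 13, window: int = 64) -> list[int]:
    # Boundary whenever the low `bits` bits of the rolling checksum
    # are all ones; on uniform input that gives ~2**bits-byte chunks.
    mask = (1 << bits) - 1
    rolling = 0
    points = []
    for i, byte in enumerate(data):
        rolling += byte
        if i >= window:
            rolling -= data[i - window]  # slide the window forward
        if rolling & mask == mask:
            points.append(i + 1)
    return points
```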
Posted Jun 27, 2020 7:31 UTC (Sat)
by pabs (subscriber, #43278)
[Link] (1 responses)
Posted Jun 27, 2020 7:42 UTC (Sat)
by johill (subscriber, #25196)
[Link]
However, it's not nearly as bad in git? You're not storing hundreds of thousands of files in a folder in git, presumably? :-) Not sure how much interest there would be in git on that.
Posted Jun 27, 2020 7:48 UTC (Sat)
by johill (subscriber, #25196)
[Link]
Posted Jul 8, 2020 19:22 UTC (Wed)
by nix (subscriber, #2304)
[Link]
By this point, as a mere observer, I would say you *are* one of upstream. You're one of the two people doing most of the bup commits and have been for over a year now. :)
Posted Jun 20, 2020 15:36 UTC (Sat)
by hmh (subscriber, #3838)
[Link] (6 responses)
At that point it becomes app-specific, and other than the obvious protocol best practice that you should explicitly encode the protocol version (in this case, which hash and which hash parameters, if not implied), there is little to be gained.
Prefixing (hidden by base# or explicitly) the hash type in git has already been covered by other replies and posts, and yes, imho it really should be done if at all possible.
Posted Jun 20, 2020 17:11 UTC (Sat)
by cyphar (subscriber, #110703)
[Link] (5 responses)
Now there isn't an IANA-like procedure (everything is done via PRs on GitHub), but that's just a difference in administrative structure.
Posted Jun 20, 2020 18:42 UTC (Sat)
by hmh (subscriber, #3838)
[Link] (1 responses)
This link you sent is much better; the other one lacks essential information...
I am quite sure git would severely restrict the allowed hashes, but at least the design of multihash seems sane and safely extensible, including when one makes the short-sighted error of enshrining short prefixes of the hash anywhere that is not a throwaway command-line call... a bad practice that is very common among git users.
Posted Jun 20, 2020 23:17 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link]
"Best practice" for short usage in more permanent places includes the date (or tag description) and summary of the commit in question (which both greatly ease conflict resolution when it occurs and gives some idea of what's going on without having to copy/paste the has yourself).
Posted Jun 22, 2020 15:37 UTC (Mon)
by tialaramex (subscriber, #21167)
[Link]
RFC 8126 lists 10 such procedures for general use in new namespaces.
So what Multihash are doing here sounds like a typical new IANA namespace which has an Experimental/Private Use region (self-assigned) and then Specification Required for the rest of the namespace. You must document what you're doing, maybe with a Standards Organisation, maybe you write a white paper, maybe you even just spin up a web site with a technical rant, but you need to document it and then you get reviewed and maybe get in.
Apparently Multihash is writing up some sort of formal document to maybe go to the IETF, but given that they started in 2016 and it's not that hard, they may not ever get it polished up and standardized anywhere; that's not a problem.
Posted Jun 24, 2020 4:03 UTC (Wed)
by nevyn (guest, #33129)
[Link] (1 responses)
Another similar point is the table itself: hashes are added ad hoc, when someone uses them and wants to use multihash... again, fine if the project is very new and gaining traction, but much less good if the project is established and you go see that none of https://github.com/dgryski/dgohash are there. I understand it's volunteer-based contributions, but if you want people to actually use your standard, it's going to be much easier if they can use it without having to self-register well-known, decade-old types.
Then there's the format itself. I understand that hashes are variable length, but showing abbreviated hashes is very well known at this point. A new git repo shows 7 characters for the --abbrev hash, ansible with over 50k commits only shows 10 (and even then GitHub only shows 7), and they want to add "1220" to the front of that? And they really want you to show it to the user all the time? Even if abbreviated hashes weren't a thing, most users are going to think it's a bit weird if literally all the hashes they see start with the same 4 hex characters (at a minimum -- using blake2b will eat 6, I think). I also doubt many developers would want to store the hashes natively, because it doesn't take many instances before storing the exact same byte sequence with each piece of actual data becomes more than trivial waste.
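For reference, the "1220" prefix falls out of the multihash encoding directly: a varint code for the algorithm (0x12 for sha2-256) followed by the digest length (0x20, i.e. 32 bytes). A minimal sketch, assuming both values fit in single-byte varints as they do here:

```python
import hashlib

SHA2_256 = 0x12  # multihash code for sha2-256

def multihash_sha256_hex(data: bytes) -> str:
    digest = hashlib.sha256(data).digest()
    # <varint code><varint digest length><digest>; both values fit in
    # one byte here, giving the constant "1220" hex prefix.
    return bytes([SHA2_256, len(digest)]).hex() + digest.hex()

print(multihash_sha256_hex(b"hello"))  # starts with "1220..."
```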
Posted Jun 25, 2020 17:02 UTC (Thu)
by pj (subscriber, #4506)
[Link]
Posted Jun 19, 2020 17:27 UTC (Fri)
by david.a.wheeler (subscriber, #72896)
[Link] (2 responses)
git reset HASH_VALUE
But having a standard prefix is reasonable. I had proposed rotating the first character of the hash value by 16, so that 0 becomes g, 1 becomes h, and so on. Then you can determine from the first character what encoding is used. You can extend that further with additional rotations, or by encoding more characters.
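A tiny sketch of that rotation (the helper names are illustrative): a hex first digit 0-f maps 16 places up the base-36 alphabet, to g-v:

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def mark_as_sha256(hex_hash: str) -> str:
    # Rotate only the first character by 16: 0 -> g, 1 -> h, ... f -> v.
    return ALPHABET[ALPHABET.index(hex_hash[0]) + 16] + hex_hash[1:]

def is_marked_sha256(value: str) -> bool:
    return value[0] in ALPHABET[16:32]

print(mark_as_sha256("412e40d041"))  # -> "k12e40d041"
```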
Posted Jun 20, 2020 10:51 UTC (Sat)
by cpitrat (subscriber, #116459)
[Link] (1 responses)
Posted Jun 20, 2020 14:22 UTC (Sat)
by gavinbeatty (guest, #139659)
[Link]
Posted Jun 20, 2020 20:36 UTC (Sat)
by josh (subscriber, #17465)
[Link]
[0-9a-f]+ would be SHA-1.
T[0-9a-f]+ would be SHA-256.
Pick a new capital letter [G-Z] for each new hash.
Posted Jun 25, 2020 4:41 UTC (Thu)
by draco (subscriber, #1792)
[Link]
But instead it looks like they'll stick with hex and disambiguate via ^{sha1} and ^{sha256} suffixes.
Posted Jun 19, 2020 16:49 UTC (Fri)
by michaelkjohnson (subscriber, #41438)
[Link] (1 responses)
But 485865fd0 is not a prefix of 412e40d041e861506bb3ac11a3a91e3; that example would be clearer if you used the prefix.
Posted Jun 19, 2020 17:11 UTC (Fri)
by coogle (guest, #138507)
[Link]
Thank you - not sure how that happened. Updated the article to use the proper shorthand value in the example.
Posted Jun 20, 2020 13:03 UTC (Sat)
by cesarb (subscriber, #6266)
[Link]
The tendency seems to be towards newer hashing algorithms being 256-bit. From the SHA-2 family, we have SHA-512/256 which is basically SHA-512 truncated to 256 bits; from the BLAKE family, the latest member BLAKE3 has 256-bit output (though it has an extensible mode with unlimited output length).
Posted Jun 20, 2020 16:20 UTC (Sat)
by Tomasu (guest, #39889)
[Link] (2 responses)
Yes, I realize that might cause issues for larger projects that have a bunch of external automated scripts/bots running that may not be maintained. It's not the project's responsibility to support unmaintained processes.
Posted Jun 20, 2020 23:09 UTC (Sat)
by mathstuf (subscriber, #69389)
[Link]
Posted Jun 21, 2020 11:34 UTC (Sun)
by mb (subscriber, #50428)
[Link]
It makes life easier for the developers, but hard for all users.
One should always try to avoid the need for help from users when upgrading/extending things. There are so many examples where this just made the process take forever, because so many users don't want to put the effort in. (e.g. Python 2->3).
Posted Jun 21, 2020 2:58 UTC (Sun)
by Kamilion (subscriber, #42576)
[Link] (5 responses)
Shouldn't we be taking lessons learned from wireguard into account? If we move to SHA256 we're simply kicking the can down the road a bit further, but not solving the problem. Software selection for adoption in high profile projects like this tends to drive hardware acceleration, and I'd much rather see HW vendors armtwisted into shooting for SHA512/ED25519/ChaCha20 accelerators than the current breed of ZLIB and AES-256 accelerators.
Posted Jun 21, 2020 5:32 UTC (Sun)
by flussence (guest, #85566)
[Link] (4 responses)
Posted Jun 21, 2020 7:08 UTC (Sun)
by Otus (subscriber, #67685)
[Link] (1 responses)
IMO choosing something more secure makes little sense when SHA-256 remains unbroken. Maybe SHA-3, but that's still sort of new and less tested.
Posted Jun 22, 2020 1:57 UTC (Mon)
by NYKevin (subscriber, #129325)
[Link]
This is the essential problem. There will always be shiny new hash functions that may or may not actually be secure. There will always be new threats against old functions. It is impossible to know, right now, what hash function you will need to be using in ten years' time. If you are not designing your system to regularly switch hash functions, you are not designing for security.
That's why they are making this extensible. They have the humility to realize that we don't know what we're going to need tomorrow.
Posted Jun 23, 2020 14:53 UTC (Tue)
by Hattifnattar (subscriber, #93737)
[Link] (1 responses)
No, it is not known to be more secure.
Unfortunately, with the current state of the art, this is impossible to know. Sure, it has a bigger key space, but 256 bits already makes a random collision astronomically unlikely.
The real problem is vulnerabilities.
And any vulnerability found in SHA-256 is pretty much guaranteed to be present in SHA-512, and vice versa.
Posted Jun 26, 2020 15:45 UTC (Fri)
by plugwash (subscriber, #29694)
[Link]
I would also expect hash functions with a larger internal state to be more secure even if their output size is the same. Even if the difficulty of finding a collision is similar, the collision is less useful if you can't just tack on an arbitrary suffix.
Posted Jun 22, 2020 12:16 UTC (Mon)
by jezuch (subscriber, #52988)
[Link] (9 responses)
Ouch.
Also, is it really "less desirable"? AFAICT all the hosting providers are only allowing cloning via HTTPS... At least that I know of.
Posted Jun 22, 2020 14:03 UTC (Mon)
by cesarb (subscriber, #6266)
[Link] (8 responses)
There are actually two different http/https transports in git, the older "dumb" transport (put the files somewhere visible to the http daemon, make it export that directory through http, done), and the newer "smart" transport (which is more similar to a CGI script). So if I'm not miscounting, we have a total of six different transports in git: the "git" transport, the "dumb" http transport, the "smart" http transport, the ssh transport, the rsync transport, and the "local" transport (pointing directly to a local filesystem).
Posted Jun 22, 2020 15:44 UTC (Mon)
by NYKevin (subscriber, #129325)
[Link] (7 responses)
- Why isn't local just a special case of rsync?
- Why do we need both dumb and smart HTTP(S)? Should the client even care what the server looks like internally?
- The inclusion of both git and ssh in the list is questionable (you can tunnel anything over ssh, right?) but it's probably too late to fix now.
IIRC Mercurial has a grand total of three: HTTP(S), SSH, and local.
Posted Jun 22, 2020 16:02 UTC (Mon)
by mirabilos (subscriber, #84359)
[Link]
The git protocol is only used when there’s an actual server process involved, which isn’t always possible.
Posted Jun 22, 2020 18:08 UTC (Mon)
by nix (subscriber, #2304)
[Link] (5 responses)
Dumb HTTP doesn't require a Git-aware server -- it only needs an HTTP server that can serve files. It's much slower and transfers a lot more than the smart protocol, but if you need it, you really need it. Like git bundles, it's useful for getting stuff to/from networkologically constrained environments.
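For illustration, a client needs nothing Git-specific to walk a dumb repository; a minimal sketch (the URL is hypothetical, and a real client must also fall back to the pack files listed in objects/info/packs when an object is not stored loose):

```python
import urllib.request

BASE = "https://example.com/repo.git"  # hypothetical exported repository

def fetch(path: str) -> bytes:
    with urllib.request.urlopen(f"{BASE}/{path}") as resp:
        return resp.read()

# info/refs is a static file (maintained by update-server-info) with
# "<hash>\t<refname>" lines.
refs = {}
for line in fetch("info/refs").decode().splitlines():
    sha, name = line.split("\t")
    refs[name] = sha

# Loose objects live at objects/<first two hex digits>/<rest>; the
# result is a zlib-compressed object to parse locally.
head = refs["refs/heads/master"]
obj = fetch(f"objects/{head[:2]}/{head[2:]}")
```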
Posted Jun 23, 2020 2:10 UTC (Tue)
by pabs (subscriber, #43278)
[Link] (4 responses)
https://askubuntu.com/questions/583141/passwordless-and-k...
https://singpolyma.net/2009/11/anonymous-sftp-on-ubuntu/
PS: branchable.com allows anonymous git:// pushes to wikis.
http://ikiwiki.info/tips/untrusted_git_push/
https://ikiwiki-hosting.branchable.com/todo/anonymous_git...
Posted Jun 23, 2020 7:20 UTC (Tue)
by niner (subscriber, #26151)
[Link] (2 responses)
Posted Jun 23, 2020 12:19 UTC (Tue)
by dezgeg (subscriber, #92243)
[Link]
Posted Jun 25, 2020 9:09 UTC (Thu)
by grawity (subscriber, #80596)
[Link]
Well, if the password is actually empty, at least OpenSSH will outright let you skip password-based authentication – no password prompts to be shown. I have seen actual Git and Hg servers which use this (if I remember correctly, the OpenSolaris Hg repository used to be served exactly this way). Sure, you could argue that you still need a known username, but that can be simply included in the git+ssh:// URL (like people already do with git@github.com).
(Still, even if you had to press Enter at a blank password prompt, that's how CVS pserver used to work and everyone accepted it as "anonymous access" all the same.)
Posted Jul 8, 2020 19:28 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted Jun 22, 2020 18:43 UTC (Mon)
by xnox (guest, #63320)
[Link] (2 responses)
BLAKE3 is faster on both 32-bit and 64-bit arches, over big and small inputs. And for big stuff, it supports streaming validation and incremental hash updates, such that one can verify large pack files as one receives them.
I wonder if it is too late to consider BLAKE3.
Posted Jun 25, 2020 4:23 UTC (Thu)
by draco (subscriber, #1792)
[Link] (1 responses)
Here's the criteria they used to choose SHA-256, from git.git/Documentation/technical/hash-function-transition.txt:
1. A 256-bit hash (long enough to match common security practice; not excessively long to hurt performance and disk usage).
2. High quality implementations should be widely available (e.g., in OpenSSL and Apple CommonCrypto).
3. The hash function's properties should match Git's needs (e.g. Git requires collision and 2nd preimage resistance and does not require length extension resistance).
4. As a tiebreaker, the hash should be fast to compute (fortunately many contenders are faster than SHA-1).
Looking at the git history of the file, their candidates included: SHA-256, SHA-512/256, SHA-256x16, K12, and BLAKE2bp-256.
From the commit message in which they down-selected to SHA-256:
"From a security perspective, it seems that SHA-256, BLAKE2, SHA3-256, K12, and so on are all believed to have similar security properties. All are good options from a security point of view.
SHA-256 has a number of advantages:
* It has been around for a while, is widely used, and is supported by just about every single crypto library (OpenSSL, mbedTLS, CryptoNG, SecureTransport, etc).
* When you compare against SHA1DC, most vectorized SHA-256 implementations are indeed faster, even without acceleration.
* If we're doing signatures with OpenPGP (or even, I suppose, CMS), we're going to be using SHA-2, so it doesn't make sense to have our security depend on two separate algorithms when either one of them alone could break the security when we could just depend on one.
So SHA-256 it is."
Perhaps this goes without saying, but since this is the kind of thing that can get very bikesheddy, performance numbers and strong arguments specifically refuting their reasons will probably do better than opinions.
Posted Jun 25, 2020 6:52 UTC (Thu)
by newren (subscriber, #5160)
[Link]
I don't think that's quite a fair characterization; as far as I understand, there's been quite a bit of sha-256 specific work -- the choice of sha-256 was made two years ago (and not earlier) because that was the point at which brian needed a decision to be made to proceed further on the transition plan. When someone tried to propose a different hash six months ago, this is part of what brian had to say:
"Because we decided some time ago, I've sent in a bunch of patches to our
Absent a compelling security reason to abandon SHA-256, such as a
> Perhaps this goes without saying, but since this is the kind of thing that can get very bikesheddy, performance numbers and strong arguments specifically refuting their reasons will probably do better than opinions.
Yes, absolutely. And it would take someone as capable as brian volunteering to do all the work brian has been doing for the last few years, or some magic to convince brian to throw away part of his work and happily redo it for a new hash. I personally know almost nothing about all these hashes and have not been involved in the hash-transition plan, but if blake3 is impressive enough to you that you still want to try to change brian's (and possibly others') minds, I can at least point you to the thread where the sha256 decision was initially made. It may help you craft your arguments relative to performance and other characteristics. See it over here: https://lore.kernel.org/git/20180609224913.GC38834@genre....
Posted Jun 25, 2020 8:05 UTC (Thu)
by jnareb (subscriber, #46500)
[Link]
Actually, the Git protocol (with capabilities) is used with three transport methods: bare TCP (git://), which is unauthenticated and nowadays rarely used; SSH; and "smart" HTTP(S) (http:// and https://). If you use GitHub, Bitbucket, GitLab, or any other hosting site, you are using the encapsulated Git protocol whether you use SSH or https:// URLs.
It is only *"dumb" HTTP* that has problems; it relies on a WebDAV-capable web server and `git update-server-info` being run on each repository update (usually from hooks). The "smart" HTTP protocol relies on the `git http-backend` CGI script or an equivalent.
So in my opinion it should be s/such as communicating over HTTP/such as communicating over "dumb" HTTP/
Posted Jun 25, 2020 16:36 UTC (Thu)
by kmweber (guest, #114635)
[Link]
Posted Jun 26, 2020 11:31 UTC (Fri)
by smitty_one_each (subscriber, #28989)
[Link]
Which is probably harder than I realize to accomplish.