
The guts of git

The guts of git

Posted Apr 13, 2005 9:27 UTC (Wed) by ekj (guest, #1524)
In reply to: The guts of git by bronson
Parent article: The guts of git

But in the context of source code it's obviously true. Not necessarily so for all other contexts.

Source code is *very* small and *very* compressible relative to how much work it takes to produce. If you invest a million dollars in developing some software over a year, *all* revisions of *all* files can still be stored, completely uncompressed, for a storage cost in the pennies range.

Nobody seriously working on kernel development has a problem storing 10 or even 100 GB in order to do so efficiently. And hard disks cost less than a dollar per GB.
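A back-of-the-envelope sketch in Python, using a purely hypothetical figure for the history size, makes the arithmetic concrete:

    # Hypothetical: a year of development whose full history
    # (every revision of every file, uncompressed) fits in ~100 MB.
    history_mb = 100
    price_per_gb = 1.00          # 2005-era hard disk price, dollars per GB
    cost = history_mb / 1024.0 * price_per_gb
    print("$%.2f" % cost)        # ~$0.10: pennies next to a $1M budget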



The guts of git

Posted Apr 14, 2005 21:02 UTC (Thu) by joey (subscriber, #328)

Hmm, I've done some calculations before on checking out all revisions of all the data I keep in my Subversion repositories. IIRC, checking out every version of every file in my ~3 GB of repositories would need closer to 1 terabyte of data than 100 GB. Not very practical for laptop use. :-)
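A rough sketch of why the numbers multiply like that (the average revision count below is an assumption for illustration, not a figure from the source): a full checkout materializes each file once per revision, so nothing on disk is shared.

    # Naive model: checking out every revision of every file writes
    # each file once per revision, with no deduplication on disk.
    repo_gb = 3                  # size of the repositories themselves
    revisions = 350              # assumed average revisions per file
    print(repo_gb * revisions)   # 1050 GB, i.e. about a terabyte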

The guts of git

Posted Apr 15, 2005 7:49 UTC (Fri) by njhurst (guest, #6022)

Have you considered for loops?

The guts of git

Posted Apr 15, 2005 21:45 UTC (Fri) by proski (subscriber, #104)

You cannot just multiply the number of revisions by the number of files unless you change every file in every revision. Files that don't change between revisions are not stored as separate copies, because their SHA1 checksum is the same. In fact, if you revert a file to its original contents, the repository reuses the old object.
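A minimal Python sketch of that content-addressed scheme (the object store is just a dict here; real git also hashes a "blob <size>" header in front of the content, omitted for brevity):

    import hashlib

    def store_blob(objects, content):
        # Identical content always hashes to the same SHA1, so an
        # unchanged or reverted file adds nothing new to the store.
        sha1 = hashlib.sha1(content).hexdigest()
        objects.setdefault(sha1, content)
        return sha1

    objects = {}
    v1 = store_blob(objects, b"hello\n")    # new object
    v2 = store_blob(objects, b"hello!\n")   # edited file: second object
    v3 = store_blob(objects, b"hello\n")    # reverted: reuses v1's object
    assert v1 == v3 and len(objects) == 2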

