1. Unmergeable files (binaries) and file locking. This is big. A DVCS can't handle the output of our CAD program.
2. Forcing people to keep local copies of the entire repo falls apart above a certain repo size.
Subversion considered obsolete
Posted Apr 5, 2010 17:21 UTC (Mon) by iabervon (subscriber, #722)
Also, a DVCS could, in theory, know where to get all the big uninteresting files instead of actually storing them on the client. That is, for files that aren't useful to compare aside from identity, it would be perfectly reasonable for the DVCS to store on the client "hash xyz is available at (location)", and only actually get the content when needed. For that matter, a DVCS could store the content of large binary files in bittorrent (or a site-internal equivalent) and beat a centralized distribution point.
So far, we haven't seen any DVCSes that do either of these things, but there's no reason they couldn't, aside from the fact that there aren't developers who want to work on those particular problems. That is, version control programs are written in environments with merging and with files that compress well against each other and against other versions of the same file. This means that "eating your own dogfood" isn't sufficient to motivate developers of version control systems to fix these problems, while centralized systems, by and large, tend to just happen to work for these cases by default or with very little design effort.
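The pointer-file idea described above is essentially what later tools (git-annex, Git LFS) ended up implementing. A hypothetical Python sketch, with the stub format and the `REMOTE` store invented for illustration:

```python
import hashlib

# Hypothetical remote content store: maps content hash -> bytes.
# In a real system this would be an HTTP server, a bittorrent
# swarm, or a site-internal blob store.
REMOTE = {}

STUB_PREFIX = b"pointer:"

def checkin(content: bytes) -> bytes:
    """Store content remotely; return a tiny stub to keep in the clone."""
    digest = hashlib.sha1(content).hexdigest()
    REMOTE[digest] = content
    return STUB_PREFIX + digest.encode()

def materialize(stub: bytes) -> bytes:
    """Fetch the real content only when it is actually needed."""
    assert stub.startswith(STUB_PREFIX)
    digest = stub[len(STUB_PREFIX):].decode()
    return REMOTE[digest]  # in reality: fetch by hash from (location)

big_file = b"CAD data" * 1000
stub = checkin(big_file)
assert len(stub) < 64            # the clone carries ~48 bytes, not 8000
assert materialize(stub) == big_file
```

The clone only ever stores "hash xyz is available at (location)"; the uninteresting binary content stays on the server until someone actually opens the file.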
Posted Apr 5, 2010 17:46 UTC (Mon) by Msemack (guest, #65001)
Posted Apr 5, 2010 18:01 UTC (Mon) by iabervon (subscriber, #722)
Posted Apr 6, 2010 19:07 UTC (Tue) by vonbrand (subscriber, #4458)
Why all this locking nonsense? You have complete control over your clone of the central repo (as a bonus, nobody sees any dumb experiments you try). Use something like gitolite to provide finer-grained access to a shared repo.
And have a real person as a gatekeeper for changes. Call them QA or something.
Posted Apr 6, 2010 20:01 UTC (Tue) by iabervon (subscriber, #722)
It's only relevant to projects where merge conflicts can't be resolved, so that when a conflict does occur, someone's work has to be thrown away and redone from scratch. In that situation you want the person whose work would be wasted to do something else, or take the afternoon off, rather than waste their time. But these are important cases in a number of industries that aren't software development.
Posted Apr 7, 2010 1:10 UTC (Wed) by vonbrand (subscriber, #4458)
The git way to handle this is to have everybody work in their own area (no "I step on your toes" possible), and merge the finished products when the developer says the work is done. As everybody can also freely take versions from anybody else, this doesn't restrict work based on not-yet-accepted changes in any way (sanctioning a change as official is an administrative decision, as it should be, not one forced by the tool).
Posted Apr 7, 2010 2:13 UTC (Wed) by iabervon (subscriber, #722)
Posted Apr 7, 2010 20:21 UTC (Wed) by vonbrand (subscriber, #4458)
OK, but this is a problem that no VCS can solve (because there is no reasonable way to merge separate modifications). Locking doesn't help either; in any case, this requires administrative (workflow) coordination between people.
Posted Apr 7, 2010 20:32 UTC (Wed) by foom (subscriber, #14868)
The advisory locking in SVN *is the implementation of* the administrative (workflow) coordination.
Posted Apr 6, 2010 19:03 UTC (Tue) by vonbrand (subscriber, #4458)
Locking is not needed in git; all changes are atomic by design. Sure, if several people mess around committing stuff at random into the same repo, chaos could ensue (but "lock for a commit" won't help there anyway). A solution is to have a real person as a gatekeeper for changes. Or use something like gitolite to provide finer-grained access to a shared repo.
"Large files can't be handled"? I don't see where that comes from. Sure, early git versions handled everything by keeping (compressed) copies of each file's contents, but that is long gone now.
Posted Apr 6, 2010 23:16 UTC (Tue) by dlang (✭ supporter ✭, #313)
git's pack file format uses 32-bit offsets, which limits the largest pack size to ~4GB (I believe it uses unsigned 32-bit values; if they're signed, the limit is 2GB). I think an offset always points to the beginning of a file, so a file larger than 4GB could exist in a pack, but it would be the only thing in the pack.
Size limit in git objects?
Posted Apr 7, 2010 1:02 UTC (Wed) by vonbrand (subscriber, #4458)
Wrong. From Documentation/technical/pack-format.txt for current git (version v126.96.36.199-361-g8b5fe8c):
Observation: length of each object is encoded in a variable
length format and is not constrained to 32-bit or anything.
Posted Apr 7, 2010 3:23 UTC (Wed) by dlang (✭ supporter ✭, #313)
so you can have up to 4G in a pack file, plus however much the last object runs off the end of it.
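The "variable length format" quoted from the documentation is the per-object header inside a pack: the first byte carries the object type and the low four bits of the size, and each continuation byte adds seven more bits, so object sizes themselves are not capped at 32 bits. A minimal Python decoder of that header (the sample bytes below are constructed by hand, not taken from a real pack):

```python
def decode_pack_object_header(data: bytes):
    """Decode a pack object header; returns (type, size, bytes consumed)."""
    b = data[0]
    obj_type = (b >> 4) & 0x07      # bits 6..4: object type
    size = b & 0x0F                 # bits 3..0: low 4 bits of size
    shift = 4
    i = 1
    while b & 0x80:                 # bit 7 set: more size bytes follow
        b = data[i]
        size |= (b & 0x7F) << shift
        shift += 7
        i += 1
    return obj_type, size, i

# 0xB8 = continuation bit | type 3 (blob) | low nibble 8;
# 0x3E = 62, so size = (62 << 4) + 8 = 1000.
assert decode_pack_object_header(b"\xb8\x3e") == (3, 1000, 2)
```

So the limitation under discussion is in the *offsets* used to locate objects (the pack index), not in the encoded object sizes.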
Posted Apr 7, 2010 22:56 UTC (Wed) by cmccabe (guest, #60281)
You can mmap(2) files that are bigger than your memory size.
Of course, if you're on 32-bit, there are some size limitations because of the limited size of virtual memory.
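A quick Python illustration of the mmap point: the sparse file below is only 64 MB to keep the example modest, but the same mapping works for files larger than RAM on a 64-bit system, since pages are faulted in only when touched.

```python
import mmap
import os
import tempfile

# Create a sparse 64 MB file: only the final byte is actually written,
# so almost no disk or memory is consumed up front.
SIZE = 64 * 1024 * 1024
fd, path = tempfile.mkstemp()
os.ftruncate(fd, SIZE)
os.pwrite(fd, b"\x01", SIZE - 1)

# Map the whole file read-only; the kernel pages data in on demand.
with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as m:
    first_byte = m[0]          # a hole in a sparse file reads as zero
    last_byte = m[SIZE - 1]    # only this page gets faulted in

os.close(fd)
os.unlink(path)
assert first_byte == 0 and last_byte == 1
```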
Posted Apr 7, 2010 20:21 UTC (Wed) by tialaramex (subscriber, #21167)
What happens in many proprietary systems, and in subversion if you choose, is that you need a lock to be "authorised" to edit a file, not to commit the change. The procedure looks like:
1. The lockable files start off read-only
2. You _tell the VCS_ that you want to work on file X
2.1 The VCS contacts a central server, asking for a lock on file X
2.2 If that's granted file X is set read-write
2.3 Optionally your program for working on file X is started automatically
3. You do some work, maybe over the course of hours or even days
4. You save and check in the new file X, optionally releasing the lock
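The lock step in the procedure above amounts to a small central service. A hypothetical in-memory sketch (the class and method names are invented; a real system would persist locks and authenticate users):

```python
class LockServer:
    """Central advisory-lock table mapping path -> holder. Illustrative only."""

    def __init__(self):
        self.locks = {}

    def acquire(self, path: str, user: str) -> bool:
        holder = self.locks.get(path)
        if holder is None or holder == user:
            self.locks[path] = user   # granted: file becomes read-write locally
            return True
        return False                  # refused: someone else holds the lock

    def release(self, path: str, user: str) -> bool:
        if self.locks.get(path) == user:
            del self.locks[path]      # typically done at check-in (step 4)
            return True
        return False

server = LockServer()
assert server.acquire("model.cad", "alice")    # step 2.1-2.2: lock granted
assert not server.acquire("model.cad", "bob")  # bob must wait his turn
assert server.release("model.cad", "alice")    # step 4: check-in releases
assert server.acquire("model.cad", "bob")      # now bob may edit
```

In Subversion itself this corresponds to `svn lock` / `svn unlock` together with the `svn:needs-lock` property, which keeps working-copy files read-only until a lock is granted.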
The VCS can't strictly _enforce_ the locking rule; of course you could copy file X, or set it read-write manually, start working on it, and only try to take the lock a day later. But when you then complain to your boss that someone changed file X after you'd started work on it, he won't have much sympathy.
The locking model has lots of problems, but some people have convincing arguments for why it's appropriate to their problem. The only options for Git are (1) not appealing to those users, (2) persuading them all that they're wrong, or (3) offering some weird hybrid mode where there can be a central locking server for a subset of files. If you accept that (1) could be the right option, then the continued existence of Subversion is justified for those users already.
Posted Apr 8, 2010 20:56 UTC (Thu) by vonbrand (subscriber, #4458)
In centralized systems (especially those that don't handle merges decently) this is really the only way to work, true. (RCS works this way, for example). But with decentralized systems with reasonable branching and merging this isn't required. And locking a file doesn't really make sense in a decentralized system (there is no single "file" to lock!), so it is left out of the tool. If the workflow requires some sort of "don't touch some file(s) for a while" synchronization, it has to be handled outside.
I believe this requirement's importance is way overblown. How many projects do you work on where it is really a requirement (and not an artifact of shortcomings in how the tool integrates changes)?
Posted Apr 8, 2010 21:56 UTC (Thu) by foom (subscriber, #14868)
Being a distributed VCS doesn't make this workflow management tool any less necessary. And it's convenient to have it integrated with the VCS so that "status" shows the status, and "commit" checks and releases the locks, and so on.
You might want to read this so you can know what you're talking about:
I wish I didn't have to keep defending subversion: git is really nice. But come on people, just be honest about what it can't do, and stop claiming those things are unnecessary!
Posted Apr 9, 2010 16:07 UTC (Fri) by vonbrand (subscriber, #4458)
As I said, the workflow might require it. But in a decentralized environment there simply can't be any "common rallying point" for all developers handled by the DVCS, so the tool itself can't help you here.
Not by a design flaw, but by fundamental reasons: The "locking" idea only makes sense if there is one master copy shared by all.
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds