Bazaar on the slow track
Stefan Monnier kicked off the discussion by noting that the number of commits to Bazaar has been dropping and that bugs are not getting fixed. The departure of lead developer Martin Pool from Canonical (and from the Bazaar project) has certainly not helped the situation. So, Stefan said:
Some participants questioned whether there was a problem, noting that commits continue to flow into the Bazaar repository. But Matthew Fuller ran the numbers and came to a fairly clear conclusion:
This slowdown has happened despite the fact that a new major release (2.6) was expected in August. When that release does happen, the list of new features seems unlikely to knock the socks off many users. Bazaar, as a project, may not be dead, but it shows signs of going into a sort of maintenance mode.
What is going on?
Once one accepts that development on Bazaar is slowing, it is natural to wonder why that is and what can be done about it. One possibility is that the distributed version control problem has been solved and that there is little need for more development. After all, significant projects like bash and make have not shown blistering rates of development; there simply is no need at this point. In the distributed version control system area, though, it would be hard to say that there are no challenges remaining. Projects like Git and Mercurial continue to evolve at a high rate. So, in a general sense, it would be hard to say that Bazaar is slowing down because there's nothing more to do.
That doesn't mean that Canonical, which has sponsored most of the work on Bazaar, sees more that needs to be done. Indeed, according to John Arbash Meinel (Martin Pool's replacement as Bazaar lead developer), Canonical is happy with the current state of affairs:
He added that Bazaar wasn't in danger of disappearing anytime soon: "It is still being actively maintained, though a little less actively than last year."
That statement was seen by some as an oblique way of saying that Bazaar is now in maintenance mode — a prospect that was not seen as particularly reassuring by Bazaar users.
Of course, Bazaar is free software, licensed under the GPL, so anybody is free to step up and carry it forward. Thus far, though, that has not happened. Once again, it is worthwhile to think about why that might be. Possibly Bazaar users got comfortable with Canonical carrying the load of Bazaar development and have not, yet, felt the need to help out. Over time, some of these users might decide that it is time to pick up some of that load going forward. Or they might just switch to another system with a more active community.
One possibility, raised by Ben Finney, is that Canonical's much-maligned contributor agreement is a part of the problem. This reasoning says that, since Canonical reserves the right to release contributions to Bazaar under proprietary licenses, many potential contributors have voted with their feet and gone elsewhere. It's far from clear that the contributor agreement is really part of the problem, though. If there were really a community of developers who would contribute if only the terms were more fair, an agreement-less Bazaar fork would almost certainly have emerged by now. The fact that nobody has even attempted such a fork suggests that Canonical's agreement is not really holding things back.
Stephen Turnbull had an interesting alternative explanation for what is going on. Bazaar, he says, is a tool aimed at users who want their version control system to "just work" without them having to think about it much. Git, he says, is a different matter:
Some participants read this suggestion as a sort of insult against Bazaar users, implying that they lacked the ability or the drive to improve the tool. But that is not what Stephen was saying; his point is that, by appealing to users who don't want to have to think about their version control system, Bazaar has created a user community that is relatively unlikely to want to put their time into making the system better.
There is an alternative that nobody has mentioned in this discussion: perhaps Bazaar has simply lost out to competing projects which have managed to advance further and faster. For sheer functionality, Git is hard to compete with. For those who are put off by the complexity of Git, Mercurial offers a gentler alternative without compromising on features. Perhaps most potential users just do not see anything in Bazaar that is sufficiently shiny to attract them away from the other tools.
If that is the case, it is hard to imagine what can be done to improve the situation from the point of view of Bazaar users, especially given that Canonical has lost interest in adding features to Bazaar. Perhaps, as Ben suggested, another corporate sponsor could be found to take up the Bazaar banner. Failing that, Bazaar seems likely to stay in maintenance mode indefinitely; it will remain a capable tool, but will find itself increasingly left behind by the other free alternatives.
Posted Sep 11, 2012 21:31 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (12 responses)
Git is conceptually simple (it's just a list of hash-linked diffs between revisions). Mercurial is more complicated (it tracks tree changes) but by now it's not really that different from git in functionality.
Bzr stands out among them. And for reasons that are not really clear.
Posted Sep 11, 2012 21:42 UTC (Tue)
by juliank (guest, #45896)
[Link] (5 responses)
Bazaar just adopted common terminology: checkout works the same way as in svn, branch (aka get/clone, but those are deprecated) works like clone in git.
Posted Sep 11, 2012 22:03 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
For example: http://bazaar.launchpad.net/~bzr-pqm/bzr/bzr.dev/view/hea...
You see that this file has revision 6558. This revision number is repository-local, as there's no way to create a distributed numbering algorithm without synchronization points (mathematically, bzr revisions form a totally ordered set). This fact underlies the whole bzr design - it's ridiculously hard to work in a truly distributed manner with bzr. There's even that scary threat of renumbering, where numbers in the trunk _change_.
In comparison, hg and git are truly distributed - they're using hashes to identify commits: http://selenic.com/repo/hg/rev/8fea378242e3 This design makes sure that there's no single global ordering of commits, but there is always a clearly-defined local ordering (i.e. git/hg commits form a partially ordered set).
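To make the contrast concrete, here is a rough illustration (the repository paths, the revision number and the hash below are all made up):

# git: content-addressed, clone-independent identifiers
cd ~/src/project
git rev-parse HEAD            # e.g. 8fea378242e3... - the same hash in every clone
git log --graph --oneline     # the commits form a DAG, i.e. a partial order

# bzr: "revno 6558" only means "the 6558th mainline revision of *this* branch";
# the same revision can carry a different number in another branch
cd ~/src/project-bzr
bzr revno
bzr log -r 6558 --show-ids    # the underlying revision-id is the stable, global name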
Posted Sep 11, 2012 22:23 UTC (Tue)
by james_w (guest, #51167)
[Link] (1 responses)
Posted Sep 11, 2012 22:32 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link]
For instance, try to google your ID - it's not present in any publicly-crawled repository viewers.
Posted Sep 11, 2012 23:10 UTC (Tue)
by marcH (subscriber, #57642)
[Link] (1 responses)
I much prefer a simple, clear and sound data model over a familiar and supposedly "user-friendly" interface.
On the other hand I've met a significant number of people who want to know as little as possible about version control *in general* (I know this is wrong but what can you do?). They just want to run the same very small subset of commands again and again to publish their work, with absolutely no interest in what happens behind the scenes or in any other actual version control feature. git's complex and inconsistent command line makes their life extremely difficult. They would probably much prefer Bazaar. As noted in the article, this type of luser would also be extremely unlikely to contribute to any VC tool in any way.
Great quote: "A common response I get to complaints about Git’s command line complexity is that “you don’t need to use all those commands, you can use it like Subversion if that’s what you really want”. Rubbish. That’s like telling an old granny that the freeway isn’t scary, she can drive at 20kph in the left lane if she wants."
Sometimes I wish I weren't that comfortable with git because that makes me too lazy now to try and learn Mercurial...
Posted Sep 12, 2012 2:33 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link]
I tried using it for a few hours and my git experience made me lose some changes[1]. I'd call it data loss, but the devs consider it just a bug. Sure, they're both a DVCS, but I think starting from step one is probably easier when learning a new one than trying to make analogies based on my experience with darcs (XMonad repos and some other Haskell stuff), hg (tried to make a patch for mutt and udiskie), and one attempt to do something with bzr (don't even remember what it was, but zsh's vcs_info plugin for it is/was molasses (as in 5s for a prompt to appear with just the branch name and "dirty" status)). Every time, I get the feeling "I'd rather use git", but that's probably just familiarity talking.
Posted Sep 11, 2012 23:32 UTC (Tue)
by nix (subscriber, #2304)
[Link]
(Sure, the *pack implementation* happens to be delta-compressed, but one of git's very nice features, shared with bzr as it happens, is that this is not visible to the user at all: the conceptual model and the storage mechanism are completely decoupled. Recently-added (loose) objects, note, are gzipped but not delta-compressed at all, but the user need not care.)
Posted Sep 13, 2012 7:51 UTC (Thu)
by mbp (subscriber, #2737)
[Link] (4 responses)
In many projects, after a patch/feature/fix is merged to trunk, the history of just how that patch was written becomes relatively unimportant: to start with, people looking at history just want to see "jdoe fixed bug 123". One approach is to literally throw that history away and just submit a plain patch, as is often done with git. I wanted to try something different that would keep all the history, but also have a view of which path through the dag was the main history. (You can also do the prior one in bzr of course.)
The other major difference with bzr is that revisions are hashed for integrity, but primarily identified by assigned ids. This avoids transitions when the representation changes and allows directly talking about revisions in foreign systems. But, hash or not, they still have globally unique ids.
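For readers who have not seen them, the assigned identifiers look roughly like this (a made-up example, not from a real branch):

bzr log --show-ids -r -1
#   revno: 6558
#   revision-id: jdoe@example.com-20120913054212-0a1b2c3d4e5f6789
# The revision-id is the globally unique, stable name; the revno is only a
# position on this branch's mainline.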
Posted Sep 13, 2012 19:42 UTC (Thu)
by dlang (guest, #313)
[Link] (3 responses)
note that git doesn't force you to throw away the history.
If you pull from the mainline, create your patch, and send a pull request, your history will show up in the main repository.
you can even edit your patch history prior to sending the pull request. This is commonly done by people doing major changes as it lets them clean things up and make each patch 'correct' and self contained rather than showing the reality where one patch may introduce a bug that's fixed several patches later.
the only question is defining "which path was the main history" because git really doesn't define a "main history".
Posted Sep 19, 2012 14:22 UTC (Wed)
by pboddie (guest, #50784)
[Link] (2 responses)
I think that the intention may have been to describe the apparently common practice, particularly amongst git-using projects, of aggressively rebasing everything and asking people to collapse their contributions into a single patch so that the project history is kept "clean".
Posted Sep 19, 2012 15:31 UTC (Wed)
by dlang (guest, #313)
[Link] (1 responses)
Yes, this can be abused to combine a huge amount of work into one monster patch.
But it can be used sanely to re-order and combine patches from a line of development into a clean series of logical patches.
When you are developing something, if you check it in frequently as you go along, you are going to have cases where you introduce a bug at one point and don't find and fix it for several commits. You are also going to have things that you do at some point that you find were not the best way to do something and that you change back later (but want to keep other work you have done in the meantime).
You now have the option of either pushing this history, including all your false starts, bugs, etc.
Or you can clean the history up, combining the bugfixes with the earlier patches that introduced the bug, eliminating the false starts, etc., and push the result.
The first approach has the advantage that everything is visible, but it has the disadvantage that there are a lot of commits in the history where things just don't work.
If the project in question encourages the use of bisect to track down problems, having lots of commits where things just don't work makes it really hard for users trying to help the developers track down the bugs.
As a result, many projects encourage the developers to take the second approach.
Now, many developers misunderstand this to mean that they are encouraged to rebase their entire development effort into one monster patch relative to the latest head, but that's actually a bad thing to do.
And in any case, the history is still available to the developer, they are just choosing not to share that history with the outside world.
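For example, the cleanup described above is usually done with an interactive rebase before publishing (the commit subjects here are invented):

git rebase -i origin/master
# In the editor that opens, reorder the lines and mark the later bugfix as a
# "fixup" (or "squash") so it is folded into the commit that introduced the bug:
#   pick  a1b2c3d  add frobnicator
#   fixup f00ba42  fix crash introduced by the frobnicator commit
#   pick  0c0ffee  document frobnicator
# The original, messy commits remain reachable locally via the reflog.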
Posted Sep 19, 2012 19:52 UTC (Wed)
by smurf (subscriber, #17840)
[Link]
A "clean" history (meaning "to the best of my knowledge, every change transforms program X1 into a strictly better program X2") means that you can take advantage of one of git's main features when you do find a regression.
Bisecting.
If you do break something, "git bisect" requires about ten compile-test-run cycles to find the culprit among a thousand changes, or twenty cycles if you have a million changes. (OK, more like 13 and 30, because history isn't linear, but you get the idea.) If you try to keep track of that manually you'd go bonkers.
Of course this isn't restricted to git. bzr and hg also implemented the command. The idea was too good not to. ;-)
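In git, the workflow looks something like this (the test command is whatever builds and exercises your project):

git bisect start
git bisect bad HEAD               # the current tip is known to be broken
git bisect good v1.0              # the last known-good point
# git checks out a midpoint; test it and report "good" or "bad", repeating
# roughly log2(N) times, or let git drive the whole loop itself:
git bisect run sh -c 'make && ./run-tests'
git bisect reset                  # return to where you started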
Posted Sep 11, 2012 21:50 UTC (Tue)
by tialaramex (subscriber, #21167)
[Link]
Now, if there weren't any other similar software, that needn't matter. A lot more Word documents get created every day than git commits, but that's no reason for programmers to consider choosing Word over git. But the reality is that Bazaar isn't so different from other DVCSs, and while the difference between CIA's statistics for CVS or Subversion and say, Mercurial, might be accounted for by some small variation in usage style, the huge gap between Bazaar and almost any of the others can't be explained that way.
If there are ten (or as seems more likely from those stats, several hundred) times more git users than Bazaar users then on average Bazaar users must be a LOT more proactive in supporting Bazaar to have the same impact. That's just not likely to happen.
Posted Sep 11, 2012 22:48 UTC (Tue)
by rahulsundaram (subscriber, #21946)
[Link] (3 responses)
I am not sure about that. It doesn't seem like people are willing to fork over the issue of agreements unless there are other issues at play as well. Likely because there is no community of developers that even forms outside of the single dominant originator vendor in such cases. The forks that have emerged under similar circumstances have usually been formed by ex employees of that single vendor.
Posted Sep 12, 2012 20:21 UTC (Wed)
by ceplm (subscriber, #41334)
[Link] (2 responses)
If MySQL users finally get p*ssed off enough to be willing to fork it, they (most likely) won't do it themselves. They will just go to MariaDB (or hopefully PostgreSQL, but that's another issue).
When bzr users find themselves missing some functionality in bzr, will they spend all that effort to create a fork of bzr which they will then have to maintain, or rather sit down and read http://git-scm.com/book ... it is really not that scary (anymore?).
Posted Sep 13, 2012 9:55 UTC (Thu)
by hingo (guest, #14792)
[Link] (1 responses)
Even better, there are 2 more active MySQL forks too, and historically there have been several.
Posted Sep 13, 2012 11:52 UTC (Thu)
by ceplm (subscriber, #41334)
[Link]
Posted Sep 11, 2012 22:55 UTC (Tue)
by amk (subscriber, #19)
[Link] (8 responses)
Posted Sep 12, 2012 2:08 UTC (Wed)
by SEJeff (guest, #51588)
[Link] (1 responses)
That made it an absolute nonstarter for some things I was working on several years ago
Posted Sep 12, 2012 8:12 UTC (Wed)
by hrw (subscriber, #44826)
[Link]
But personally I prefer git.
Posted Sep 18, 2012 20:36 UTC (Tue)
by fw (subscriber, #26023)
[Link] (5 responses)
Posted Sep 19, 2012 11:03 UTC (Wed)
by hummassa (guest, #307)
[Link] (3 responses)
Care to elaborate that? You just stated that git was 3x slower than bzr, so I'm confused.
Posted Sep 19, 2012 11:17 UTC (Wed)
by hummassa (guest, #307)
[Link]
Posted Sep 21, 2012 19:37 UTC (Fri)
by fw (subscriber, #26023)
[Link] (1 responses)
Posted Oct 3, 2012 10:07 UTC (Wed)
by fw (subscriber, #26023)
[Link]
bzr: 41s, 27.7 MiB
So bzr and git are roughly in the same ballpark, at least for this test (pulling changes into a local repository which hasn't seen any local changes).
Posted Sep 19, 2012 21:52 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Getting a diff of the changes on trunk -- entirely in core -- took four seconds (not too shabby but still terribly slow compared to git, where in-core diffs often run at rates in excess of 30,000 lines per second), and revealed
311 files changed, 3025 insertions(+), 2658 deletions(-)
A wc of the output says
12952 55987 447812
so "bzr pull" had to receive roughly thirty times as much data as was actually changed, and that's despite the fact that patches contain lots of redundant context info that hasn't actually changed in any way at all.
I'm sorry, but no matter which way you slice it, bzr is still hilariously inefficient. It may not be *unusably* inefficient anymore, but that's damning with very faint praise.
Posted Sep 11, 2012 23:55 UTC (Tue)
by martin.langhoff (subscriber, #61417)
[Link] (21 responses)
When the git wave hit, it looked as if Bazaar took a while to figure out the internal data structures of git, and eventually learned a lot from them. They are not identical, of course, but the TLA data structures were a horridly bad fit for the job.
There was a long period of confusion -- circa 2006. All the while git's usability was, um, bad, but its storage was rock-solid. Bazaar's storage was not considered very reliable, even by Martin Pool. This was the time when X.org, Mozilla and other high-profile / large repo projects were looking to migrate.
Once git's cli UI started getting a bit more polished, around git 1.5, the "slightly better UI" justification for many of the other VCSs started to dry up. VCSs are specialized tools, so when you invest in learning one (and you have to), a slightly steeper learning curve is very often worth it. So all git needed to do was to get close in usability to the others -- and it did. At that point, git's sheer flexibility and power closes the deal.
One VCS I do miss is darcs. Its internal data structures were flawed, but the UI was oh so elegant, especially for those of us suffering with TLA. Not sure if it was original, but I do think it set the UI standard for Mercurial and Bazaar (and git, once it got to the "make it usable" stage).
I am actually surprised that the Bazaar people haven't traded their core engine for git. If you want to support features git won't give you (explicit renames, for example :-) ) you can attach extra bits of metadata to trees and commit objects.
You can use that trick to prototype DVCS feature ideas pretty quickly, and you could implement a very good "usable-first" DVCS. There are many git "wrappers" that aim to preserve git purity (that is, they are compatible with standard git usage). That puts a lot of limitations on what you can do. If you break that taboo, git can be an outstanding storage engine for a very fancy DVCS.
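One way to attach such out-of-band metadata today, without rewriting any objects, is git's notes mechanism; this is only a sketch of the idea, not necessarily what a bzr-on-git engine would actually do:

# record an explicit rename as metadata hanging off the commit
git mv old_name.c new_name.c
git commit -m "rename old_name.c to new_name.c"
git notes --ref=renames add -m "rename: old_name.c -> new_name.c" HEAD
git notes --ref=renames show HEAD      # read it back later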
{ I am the author of importers to git and git-cvsserver, as well as patches to git and the long-obsolete cg. I helped several high-profile projects evaluate git and import their repos (with mixed results). And I am always looking for time to try to fix some git usability pet peeves. }
Posted Sep 12, 2012 3:48 UTC (Wed)
by pabs (subscriber, #43278)
[Link] (1 responses)
Posted Sep 13, 2012 1:19 UTC (Thu)
by martin.langhoff (subscriber, #61417)
[Link]
Posted Sep 12, 2012 5:06 UTC (Wed)
by abentley (guest, #22064)
[Link] (3 responses)
I don't think Martin considered Bazaar-NG's storage unreliable. Bazaar-NG was self-hosting in March after 3 months of development, and Martin wouldn't have done that if he didn't trust it. His original web site warned "This is pre-release unstable code. Keep backups of any important information.", but I think this was just an overabundance of caution.
One of the reasons Bazaar(-NG) didn't switch to git's core was because git didn't provide a library. And even if it had, it would have been in C, not Python. But we also wanted something that worked with our data model. We felt we could do at least as well as git in storing data, and I've never had reason to doubt the 2a format's efficiency.
Posted Sep 13, 2012 1:51 UTC (Thu)
by martin.langhoff (subscriber, #61417)
[Link] (2 responses)
On the reliability of the storage, and Martin Pool's regard for it... I have an anecdote :-)
I was sitting at Martin Pool's presentation in linux.conf.au 2006 (Dunedin, NZ). From the back of the room, in the QA part of the session, someone asked: "so, is it ready for real work? You see, I have this large codebase that's been developed for 25+ years. After several VCS migrations, it's in CVS with a messy repo due to migrations. We are a widely distributed team, and we are hurting. Should I be migrating to bzr now?"
Martin looked rather uncomfortable with the question, and muttered something like "not really, not yet". He had already been less than reassuring when I had asked whether Bazaar storage was delta-centric (darcs-like) or snapshot centric (git-like).
The "is it ready for real workdd?" question had come from Jim Gettys, who I did not know personally at the time. After the talk I asked him whether he had been talking about X.org and whether he could give me access to those messy X.org CVS repos. I would try importing them into git, and we could see if he liked the outcome.
It was the start of a long hard road -- it led to many improvements to git- cvsimport, yet the migration was done with parsecvs (written by Keith Packard).
I was at linux.conf.au to run a workshop on git; Linus joined us, so it stretched from 2 to 4hs. We had a much smaller room assigned than Bazaar, but you could feel we were rocking and rolling :-) I believe Matt Mackall was there too, talking about Mercurial, but I missed it.
This happened long ago -- and this is how I remember it. Quotes are as best as I can recall.
In my view, 2006/2007 was the time when the overall trends in the DVCS space got established; X.org migrated to git, Mozilla ran high-profile bakeoffs between DVCSs, etc. And at that time Bazaar was unfortunately on unsure footing (bad timing!). As a result, Git and Mercurial generally stole the show...
Posted Sep 13, 2012 7:34 UTC (Thu)
by mbp (subscriber, #2737)
[Link] (1 responses)
bzr has always had snapshot storage and never been darcs-like.
I reject, and resent, the implication that I publicly advocated something I privately didn't think was reliable.
Posted Sep 13, 2012 13:50 UTC (Thu)
by martin.langhoff (subscriber, #61417)
[Link]
My impression after your talk back then was that perhaps Bazaar-NG was performing or planning internal storage changes (or something like that) and that at that particular time those were awkward questions. Not that you did not trust or promote Bazaar, but that you were stating "not right now".
Posted Sep 12, 2012 16:38 UTC (Wed)
by walex (guest, #69836)
[Link] (13 responses)
I like your discussion, and in particular the emphasis on storage structure as well as functionality. One of the big issues with SVN, for example, is the enormous number of small state files it creates in the working copy, and the rather inefficient repository storage too, once they deprecated the DB files.
I think that the most recent Bazaar storage structures are not too bad, but that the Mercurial one is pretty terrible, the Git one is so-so, and by far the best is that used by Monotone, which is a single SQLite file per repository. That makes tree searches, backups, and in general all whole-repository and filetree-oriented operations a lot faster and easier. Also, Git and Monotone are implemented fairly well in compiled languages, and can be much faster than the Python-implemented Bazaar and Mercurial, even if a bit more care in the latter two has improved the situation.
Interestingly Monotone, which greatly inspired the design of Git, is also functionally rather complete, and works well, and I think that it is for most projects the most appropriate VCS, followed by Git itself, then Bazaar and not far behind Mercurial. It is a pity that TLA and DARCS are more often mentioned than Monotone, which has a pretty deliberate, careful design and implementation, even though it is one of the Gang Of Four major modern DVCSes.
Posted Sep 12, 2012 17:20 UTC (Wed)
by BlueLightning (subscriber, #38978)
[Link]
Posted Sep 12, 2012 23:15 UTC (Wed)
by akupries (subscriber, #4268)
[Link]
Richard Hipp, SQLite's author, wrote an SCM using a single SQLite file per repository as well. It is called Fossil. It now manages the SQLite repository.
Posted Sep 13, 2012 1:58 UTC (Thu)
by martin.langhoff (subscriber, #61417)
[Link] (10 responses)
Monotone using SQLite is a boon for programmers. SQL is easier to wrestle with than complex on-disk and in-memory data structures, especially if you are changing the layout. But git's design learned from many sources (including Monotone) and had a pretty set data structure from the beginning.
With that clearly-defined data structure, Linus and other kernel hackers cranked out very efficient code. IIRC, Monotone used to take hours to import _one_ snapshot of a kernel, whereas git could do it in <10s.
See the very very early emails in the git list, by Linus, on his design research and early tests with monotone.
Posted Sep 13, 2012 7:54 UTC (Thu)
by graydon (guest, #5009)
[Link] (9 responses)
That said, monotone was unusably slow _when compared to git_, and as project histories and development parallelism have grown, that delta has become an easy and correct criterion for picking git for production in most cases. Git also picked a more sensible branch-naming model (local, per-repo, no PKI; less ambitious but easier and more powerful), embraced history-rewriting early and aggressively, had the benefit of hindsight in most algorithms, declined to bother tracking object identity (turns out to cost more performance than it's worth), figured out submodules, etc. etc. Git won this space hands down. There's no point competing with it anymore, imo.
Posted Sep 15, 2012 20:54 UTC (Sat)
by cmccabe (guest, #60281)
[Link]
Posted Sep 17, 2012 15:16 UTC (Mon)
by zooko (guest, #2589)
[Link] (7 responses)
It's one of those "for want of a nail the horseshoe was lost" kinds of moments in history -- if monotone had been fast enough for Linus to use at that time then presumably he never would have invented git.
And while *most* of the good stuff that the world has learned from git is stuff that git learned from monotone, I do feel a bit of relief that we have git's current branch naming scheme. Git's approach is basically to not try to solve it, and make it Someone Else's Problem. That sucks, it leads to ad-hoc reliance on DNS/PKI, and it probably contributes to centralization e.g. github, but at least there is an obvious spot where something better could be plugged in to replace it. If we had monotone's deeper integration into DNS/PKI (http://www.monotone.ca/docs/Branches.html), it might be harder for people to understand what the problem is and how to change it.
Posted Sep 18, 2012 15:25 UTC (Tue)
by graydon (guest, #5009)
[Link] (6 responses)
All that's a distraction though, at this stage. Git won; but there's more to do. I agree with you that the residual/next/larger issue is PKI and naming. Or rather, getting _rid_ of PKI-as-we-have-tried-it and deploying something pragmatic, decentralized and scalable in its place for managing names-and-trust.
The current system of expressing trust through x.509 PKI is a joke in poor taste, and git (rightly) rejects most of that in favour of the three weaker, more-functional models: the "DNS and soon-to-be-PKI DNSSEC+DANE" model of global-name disambiguation, the "manual ssh key-exchange with sticky key fingerprints" model of endpoint transport security, and the (imo strictly _worse_) "GPG web of trust" model for long-lived audit trails. The three of these systems serve as modest backstops to one another, but I still feel there's productive work to do exploring the socio-technical nexus of trust-and-naming at a more integrated, simplified, decentralized, less random and more holistic level (RFCs 2693 and 4255 aside).
There are still too many orthogonal failure modes, discontinuities and security skeuomorphisms; the experience of naming things, and trusting the names you exchange, at a global scale, still retains far too much of the sensation of pulling teeth. We wind up on IRC with old friends pasting SHA-256 fingerprints of things back and forth and saying "this one? no? maybe this one?" far too often.
Posted Sep 18, 2012 18:59 UTC (Tue)
by jackb (guest, #41909)
[Link] (5 responses)
My theory is that PKI doesn't work because it is based on a flawed understanding of what identity actually means. The fraction of the population that really understands what it means to assign cryptographic trust to a key is statistically indistinguishable from "no one". Maybe the reason that the web of trust we've been promised since the 90s hasn't appeared yet is because the model itself is broken.
Posted Sep 18, 2012 19:43 UTC (Tue)
by hummassa (guest, #307)
[Link] (1 responses)
Ok, but... what is the alternative?
Posted Sep 18, 2012 20:05 UTC (Tue)
by jackb (guest, #41909)
[Link]
The question of "does the person standing in front of me control a particular private key" can be answered by having each person's smartphone sign a challenge and exchange keys via QR codes (bluetooth, NFC, etc). This step should require very little human interaction.
That question, however, does not establish an identity as we humans understand it. Identity between social creatures is a set of shared experiences. The way that you "know" your friends is because of your memories of interacting with them.
Key signing should be done in person and mostly handled by an automated process. Identity formation is done by having the users verify facts about other people based on their shared experiences.
If properly implemented the end result would look a lot like a social network that just happens to produce a cryptographic web of trust as a side effect.
Posted Sep 18, 2012 20:23 UTC (Tue)
by graydon (guest, #5009)
[Link]
(Keep in mind how much online-verification comes out in the details of evaluating trust in our key-oriented PKI system anyways. And how often "denying a centralized / findable verification service" features in attack scenarios. Surprise surprise.)
So, I also expect this will require -- or at least greatly benefit from -- a degree of "going around" current network infrastructure. Or at least a willingness to run verification traffic over a comfortable mixture of channels, to resist whole-network-controlling MITMs (as the current incarnation of the internet seems to have become).
But lucky for our future, communication bandwidth grows faster than everything else, and most new devices have plenty of unusual radios.
Posted Sep 18, 2012 20:25 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
For example, is there anybody here who can claim enough ASN.1 knowledge to parse encoded certificates and keys? I certainly can't; every time I need to generate a CSR or a key, I go to Google and search for the required command line to make OpenSSL spit out the magic binhex block.
Then there's a problem with lack of delegation. It's not possible to create a master cert for "mydomain.com" which I then can use to sign "host1.mydomain.com" and "host2.mydomain.com".
And so on. I'd gladly help a project to replace all this morass with clean JSON-based certificates with clear human-readable encoding.
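For what it's worth, the incantation in question usually ends up looking something like this (all names are placeholders):

openssl req -new -newkey rsa:2048 -nodes \
    -keyout host1.mydomain.com.key \
    -out host1.mydomain.com.csr \
    -subj "/CN=host1.mydomain.com"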
Posted Sep 18, 2012 21:16 UTC (Tue)
by jackb (guest, #41909)
[Link]
The database would consist of one table that associates arbitrary text strings with public key IDs, and another table containing cryptographically-signed affirmations or refutations of the entries in the first table.
An example of an arbitrary text string could be a legal name, an email address, "inventor of the Linux kernel", "CEO of Acme, Inc.", etc.
Everybody is free to claim anything they want, and everyone else is free to confirm or refute it. A suitable algorithm would be used to sort out these statements based on the user's location in the web of trust to estimate the veracity of any particular statement.
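A minimal sketch of those two tables, with invented names and columns, might look like this:

sqlite3 wot.db <<'EOF'
CREATE TABLE claims (
    claim_id   INTEGER PRIMARY KEY,
    key_id     TEXT NOT NULL,     -- public key fingerprint making the claim
    statement  TEXT NOT NULL      -- e.g. a legal name or "CEO of Acme, Inc."
);
CREATE TABLE attestations (
    claim_id   INTEGER NOT NULL REFERENCES claims(claim_id),
    signer_key TEXT NOT NULL,     -- key of whoever confirms or refutes the claim
    verdict    INTEGER NOT NULL,  -- +1 affirms, -1 refutes
    signature  BLOB NOT NULL      -- signature over (claim_id, verdict)
);
EOF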
The value of the web of trust depends on getting people to actually use it, so the tools for managing it would need to be enjoyable to work with instead of painful. That's one reason I think the user interface should be made similar to a social network: the empirical evidence suggests that people like using Facebook more than they like using GPG or OpenSSL. The other reason is that social networks better model how people actually interact in real life, so making the web of trust operate that way is more intuitive.
Posted Sep 17, 2012 11:52 UTC (Mon)
by douglasbagnall (subscriber, #62736)
[Link]
The other day I heard a Canonical employee advocating bzr as a git front-end. The argument was that nobody suffers if you use Bazaar locally and Git remotely, so bzr people should just do that and stop fussing. As you suggest, they may have been glossing over incompatibilities in the models, or perhaps they haven't hit them in practice.
Posted Sep 12, 2012 8:11 UTC (Wed)
by afayolle (guest, #45179)
[Link] (1 responses)
I'm forced to use bzr because the projects I work on are collectively hosted on launchpad, and I have 'alias b=bzr' in my .bashrc to alleviate this.
Posted Sep 12, 2012 18:06 UTC (Wed)
by apoelstra (subscriber, #75205)
[Link]
I never noticed this until your comment, but it is interesting to note that typing 'bzr' on dvorak is almost exactly the same movement as on qwerty -- except you use your right hand instead of your left!
(The specific keys, to a qwerty user, are 'n/o'.)
Posted Sep 12, 2012 11:46 UTC (Wed)
by danpb (subscriber, #4831)
[Link] (12 responses)
I think this is really the crux of the matter. There was a window of a few years when a whole range of new dVCS systems were competing to replace the traditional choice of CVS (or SVN) as the de facto standard for Open Source projects. I initially rather liked Mercurial for its user friendliness and used it for a number of projects. It got to the point where even though I preferred Mercurial, so many other projects I needed to interact with were using Git that I had no choice but to learn Git.
In other words, Git has reached that critical mass where everyone in the open source world needs to learn how to use it eventually. Once you've learnt GIT, its breadth of features means there is no compelling reason to carry on using things like Mercurial, Bazaar, Subversion, or any of the others.
I see projects like OpenStack, hosted on Launchpad, casting off Bazaar and switching to using GIT (hosted on GitHub). Similarly, projects hosted by Apache are using GIT for their primary dev work, even though they all "officially" use Subversion for their master tree. Interestingly, I'm finding that a very large proportion of people I interview for job positions now have GIT experience from company-internal work, even if they haven't worked on Open Source projects before.
Bazaar or the other dVCS tools aren't going away, but I don't see them catching up with GIT at this point. The Debian package stats reinforce this belief
http://qa.debian.org/popcon-graph.php?packages=subversion...
Posted Sep 12, 2012 12:06 UTC (Wed)
by andresfreund (subscriber, #69562)
[Link]
http://qa.debian.org/popcon-graph.php?packages=subversion...
Note that the "git" package in Debian used to be the GNU Interactive Tools package, named gnuit these days.
Posted Sep 12, 2012 15:03 UTC (Wed)
by robert_s (subscriber, #42402)
[Link] (10 responses)
I too am a Mercurial user who's been "forced" to learn git, but the compelling reason that causes me to still manage _my_ projects in mercurial is the fact that it's _much_ harder to shoot yourself in the foot in mercurial.
Even once you've got past git's steep and puzzling learning curve, working with branches in git feels a bit like doing a difficult juggling act with HEADs, and if you accidentally drop one of them (or even just accidentally specify the targets the wrong way round in a git rebase) you can spend an entire afternoon looking for the remains in git fsck.
And then of course there are the nice things like commit phases that mercurial has been adding lately.
Posted Sep 12, 2012 15:54 UTC (Wed)
by mgedmin (subscriber, #34497)
[Link] (1 responses)
Posted Sep 13, 2012 0:06 UTC (Thu)
by nix (subscriber, #2304)
[Link]
I concur that it is nearly impossible to lose work in git: making a new branch before doing something you're scared about suffices in basically all cases, and if you forget that there is the reflog. I've done massive history rewrites and routinely do partial history rewrites and tree partitions and have never lost a byte. (Having said that, I'm sure something horrible will happen now and I'll lose the last year's work or something. Not even git is immune to Murphy's Law.)
Posted Sep 12, 2012 19:20 UTC (Wed)
by price (guest, #59790)
[Link] (3 responses)
You want to use the reflog. Take two minutes and read about it right now:
You will be much, much happier as a result the next time you drop something. Once you know about the reflog, it's virtually impossible to lose anything in a Git repo that you ever put into an actual commit, no matter what you do subsequently.
(The main exception is if you only realize you wanted it weeks after discarding it. By default Git does eventually garbage-collect commits that no refs point to.)
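A typical rescue after a botched rebase looks like this (the hashes and branch names are of course invented):

git reflog
#   3f2a9d1 HEAD@{0}: rebase finished: returning to refs/heads/topic
#   8c41b77 HEAD@{5}: commit: the work you thought you had lost
git branch rescued 8c41b77        # give the dangling commit a ref again
git log rescued                   # everything is still there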
Posted Sep 12, 2012 21:03 UTC (Wed)
by robert_s (subscriber, #42402)
[Link]
Posted Sep 12, 2012 23:43 UTC (Wed)
by cesarb (subscriber, #6266)
[Link] (1 responses)
AFAIK, the reflog is deleted when the branch is deleted. So if it happened on a branch you later deleted, you just lost the reference and it is git fsck time.
I would prefer if it left the reflogs for deleted branches around, and garbage-collected them later.
Posted Sep 12, 2012 23:55 UTC (Wed)
by price (guest, #59790)
[Link]
I do agree that it would be better if Git kept the reflogs around. A case where this does matter is if you want to keep an archival record of changes to the repository, e.g. in a organization's central repository that everybody pushes to. It's simple to configure Git never to expire old reflog entries (see gc.reflogexpire and gc.reflogexpireunreachable in git-config(1)), but AFAIK there's no way to configure it to keep reflogs of deleted refs.
Posted Sep 16, 2012 12:44 UTC (Sun)
by kleptog (subscriber, #1183)
[Link] (1 responses)
The first thing I noticed is Mercurial's rigid adherence to "committed is unchangeable". For me a commit is more a checkpoint, but it's not necessarily something finished. Usually I develop something as a series of patches, commit various bug fixes and use rebase to fold the bugfixes into the appropriate patch.
I was relieved to find the MQ extension, which gives you much of the functionality but with a very obtuse UI. The phases you point to seem to be a further step in the right direction. Though I feel they're painting themselves into a corner, since now a review tool like Gerrit becomes impossible: you push to the review tool, which would make your patch immutable, while the whole point of the review is to be able to fix the patch!
Other rough edges: the "pager" extension is not standard; there is no justification for "hg log" on the terminal filling your scrollback buffer with the entire history of your project. The "color" extension could also be better advertised.
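For reference, turning those on takes a couple of lines in ~/.hgrc (this enables the extensions mentioned here and above; mq is the patch-queue extension):

[extensions]
pager =
color =
mq =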
My feeling is that git is a tool for people who deal with large numbers of branches and patches daily, and Mercurial is for people who push a few patches around occasionally.
Posted Sep 16, 2012 13:04 UTC (Sun)
by dlang (guest, #313)
[Link]
http://xentac.net/2012/01/19/the-real-difference-between-...
Posted Sep 17, 2012 18:31 UTC (Mon)
by luto (guest, #39314)
[Link]
[1] Mercurial also seems to screw up more often. When it finishes embracing the three-way merge, it'll work better. The Mercurial tools certainly are prettier, though.
[2] Mercurial is worse at doing bizarre merges than git. At some point I'll dig up the example that broke Mercurial and file a bug. At least this failure mode doesn't eat my data. (Basically, the git "recursive" algorithm, while imperfect, is better than Mercurial's approach of choosing an arbitrary base from the set of bases with no possibility of overriding the choice.)
Posted Sep 26, 2012 9:34 UTC (Wed)
by makomk (guest, #51493)
[Link]
Posted Sep 12, 2012 12:59 UTC (Wed)
by cmorgan (guest, #71980)
[Link] (6 responses)
Sometimes projects just run out of steam because something else has caught wind. Having worked on projects that have had their moment and then their sunset it does take some time to adjust and move on.
With the rise of Git and related things like GitHub maybe it makes sense for Canonical to cut their losses and migrate to what appears to be a solution that more developers are happy with. They could always try to see if their "must have" features (whatever it is that keeps them using Bazaar other than familiarity and the cost to migrate away from it) could be things that could be integrated into Git.
Posted Sep 13, 2012 10:11 UTC (Thu)
by hingo (guest, #14792)
[Link] (5 responses)
The cli is really the strength of bzr. You typically can get your workflow done with 3 commands: bzr branch, bzr commit, bzr push. Ok, so you need init and pull and merge too, but that's it. Even though checkout type of workflow is supported, you shouldn't really use it.
What I really like is the fact that all branches are laid out in their own directories. This is yet another incarnation of the unixy "everything is a file" approach. I can reuse my knowledge of basic unix commands that I don't need to learn a specific bzr command for: Change to working in another branch: that's "cd". See what branches are available: "ls". Delete a branch: "rm". Also there's no separate clone vs branch, everything is just a branch.
If someone implemented a git client with these semantics and workflows, it would be an immediate reason to stop using bzr for me at least. This should be perfectly doable while staying 100% with the internal repo format of git.
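Concretely, the workflow being described looks roughly like this (the paths and the lp: project name are made up):

bzr init-repo ~/work/project      # shared repository so branches share storage
cd ~/work/project
bzr branch lp:project trunk       # every branch is just a directory
bzr branch trunk feature-x
cd feature-x                      # switch branches with plain cd
ls ..                             # list branches with plain ls
rm -rf ../old-experiment          # delete a branch with plain rm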
Posted Sep 13, 2012 11:21 UTC (Thu)
by juliank (guest, #45896)
[Link] (3 responses)
> The cli is really the strength of git. You typically can get
But I still wonder why you never need to look at status or diff.
Posted Sep 13, 2012 12:34 UTC (Thu)
by hingo (guest, #14792)
[Link] (2 responses)
Ok, maybe sometime I have actually used diff and status :-) But if you never make mistakes then you don't need to ;-)
Posted Sep 13, 2012 12:43 UTC (Thu)
by juliank (guest, #45896)
[Link] (1 responses)
git clone -b branch-i-want-to-look-at git://example.com/example.git
Posted Sep 13, 2012 12:48 UTC (Thu)
by hingo (guest, #14792)
[Link]
It's like the difference between Mac and Windows, if you will. One has 1 mouse button, the other has 2, and the first one is considered more elegant because of that :-)
Posted Sep 13, 2012 15:52 UTC (Thu)
by cdmiller (guest, #2813)
[Link]
Posted Sep 12, 2012 13:24 UTC (Wed)
by philipstorry (subscriber, #45926)
[Link] (2 responses)
I'm a systems administrator, and when setting up some Ubuntu Server boxes I chose to version control the /etc path using etckeeper. Funnily enough, Bazaar was Ubuntu's default choice for the VCS that etckeeper used. (etckeeper is apparently agnostic about such things.)
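For what it's worth, the backend choice is a single setting in etckeeper's configuration file (typical path shown; the default varies by distribution):

# /etc/etckeeper/etckeeper.conf
VCS="bzr"        # or "git", "hg", "darcs"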
From using Bazaar there, it's kind of snowballed. The one bit of development I do (which isn't much) now uses Bazaar, but only behind the scenes - not as a DVCS. I just picked it because I was familiar with it.
I also use it to version control my writing (mostly short stories, some novels). That can be fairly handy when you delete an entire passage out of frustration, only to find two days later that the problem was elsewhere!
It's quick, simple, and with Bazaar Explorer it's pretty simple even for a novice to use. It may not be the developer's first choice, but should version control really only be used by developers?
Apple have added versioning into their file handling APIs. If Canonical did the same, then I suspect they'd pick Bazaar. At this point I'm just thinking off the top of my head, mind you. But then, none of the other DVCSes are sponsored by a company that's trying to improve the computing experience.
Posted Sep 13, 2012 16:26 UTC (Thu)
by joey (guest, #328)
[Link] (1 responses)
-- From its README.
Posted Sep 14, 2012 12:20 UTC (Fri)
by philipstorry (subscriber, #45926)
[Link]
I'd thought it was somewhat agnostic about which VCS it used. I doubt I'll switch now - a quick google shows it is possible to convert, but it's more hassle than I can be bothered with.
It's working fine with bzr, and it's Friday. So to change it breaks the two golden rules:
Thanks for pointing that out though - it's good to know I may need to do it in the future, if bzr support ever wanes for the etckeeper project...
Posted Sep 12, 2012 14:01 UTC (Wed)
by corbet (editor, #1)
[Link] (3 responses)
"I wrote Mercurial to be Free with a capital 'F' as a reaction to the object lesson of Bitkeeper. So entrusting my work to an organization that had plans to embrace and extend it was just not going to happen."
— Matt Mackall
Posted Sep 12, 2012 14:55 UTC (Wed)
by pboddie (guest, #50784)
[Link] (2 responses)
It's one thing to have various technology decisions influencing any project that Canonical is involved in, even though this will deter outsiders by itself; it's another to see Canonical wanting to exercise control over the products of such development as well.
You can certainly lean on the community to do the hard work if they benefit as much from it as you do, but as soon as you ask them to benefit less (and here I ignore the excuses about Canonical's unique stewardship role and corresponding privilege - on whom does the hard work of quality assurance and other tedious stewardship matters fall, exactly, if not the community in many cases?) then they will exercise their rights as volunteers and indulge some other projects instead.
Posted Sep 17, 2012 20:18 UTC (Mon)
by jspaleta (subscriber, #50639)
[Link] (1 responses)
Bzr is primarily used as a tool to interact with launchpad.net. It's never really garnered significant adoption as a standalone tool. Yes, some people do use it outside of launchpad; I'm not saying there is zero interest in bzr sans launchpad integration, just that I don't see critical mass in the bzr community if launchpad integration were lost.
Because it is so intimately tied to a Canonical-controlled service (launchpad), there is less overall incentive for an outside party to try to create and lead a competing fork, for fear of creating an incompatibility with the launchpad.net service requirements. It's forkable, the license on the code assures that is a possible future, but I doubt anyone is going to ever seriously do it.
It's a deep tangle.
I've yet to hear of someone attempting to take the launchpad.net sources and spinning up an alternative site. If someone did that, forked the launchpad.net codebase and spun up an implementation outside of Canonical's control, then a fork of bzr would be an obvious part of that much more involved effort. In for a penny, in for a pound.
The irony of course being that the deep integration with launchpad is probably the only reason keeping bzr alive as a project at all at this point. If Canonical threw away launchpad entirely and started from scratch today... I'd wager they'd pick up git as part of the scaffolding. bzr is just a long lived piece of technical debt at this point for the launchpad team inside Canonical.
-jef
Posted Sep 17, 2012 23:25 UTC (Mon)
by smurf (subscriber, #17840)
[Link]
Posted Sep 12, 2012 14:54 UTC (Wed)
by zooko (guest, #2589)
[Link] (1 responses)
I use bzr to interact with projects that use it. It was easy enough to install and to learn the basics (getting a copy, updating to the latest version, submitting an occasional patch). It's efficient enough. It has not yet stunned me with an incomprehensible and intimidating error message. I'm basically satisfied with it.
Likewise I'm pretty satisfied with mercurial, which I also use only for occasional interaction with projects that use it.
Now git and darcs, I've used -- or tried to use -- extensively in many of my own projects and for my employers, and with both git and darcs I have a love/hate relationship. It's complicated.
Then there are the projects that still use svn or cvs. I find it mildly annoying to try to interact with those projects using those old tools.
Posted Sep 12, 2012 21:03 UTC (Wed)
by marcH (subscriber, #57642)
[Link]
Posted Sep 12, 2012 15:39 UTC (Wed)
by shieldsd (guest, #20198)
[Link] (32 responses)
Git has achieved the status of Linux itself. Just as Linux is now, and will remain, the dominant Unix kernel, git will be the dominant distributed version control system for the foreseeable future.
Though Mercurial has a following, I can't see how it can keep up with git.
Canonical's branching off on its own will just move it farther and farther from the mainstream. It's hard to see what they gain by going this route.
Posted Sep 12, 2012 16:45 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Sep 12, 2012 18:25 UTC (Wed)
by smurf (subscriber, #17840)
[Link] (30 responses)
IMHO this is not a sustainable long-term strategy.
Posted Sep 12, 2012 18:27 UTC (Wed)
by dlang (guest, #313)
[Link] (28 responses)
as much as the systemd people don't want to let it be known, not everyone has accepted it.
Posted Sep 12, 2012 20:41 UTC (Wed)
by cmccabe (guest, #60281)
[Link] (27 responses)
So yes, I would say that Canonical is very much going off on their own with this project. The fact that they require copyright assignment probably hasn't helped matters.
In general I feel like Canonical has been creating a lot of questionable forks: bzr versus git, upstart versus systemd, Unity versus GNOME. They've always claimed to be UI experts, but they don't seem to be focusing their effort where it counts.
Posted Sep 12, 2012 20:55 UTC (Wed)
by dlang (guest, #313)
[Link] (24 responses)
Really it's only the Fedora-based distros that have switched, along with a few others, but for every other one that has switched, a non-Debian-derived distro can be pointed to that hasn't switched.
RHEL may or may not switch in the future. The systemd people will say that it will switch, but we'll see how the datacenter admins that RHEL is built for respond (they don't need many of the desktop-based features of systemd, and are both more comfortable with what they have and more likely to have odd init stuff that will be affected).
As for Unity vs GNOME, as GNOME3 was released, the distros have splintered like a glass thrown onto concrete. Yes Unity is one of the fragments, but it's far from the only one.
Posted Sep 12, 2012 21:11 UTC (Wed)
by marcH (subscriber, #57642)
[Link]
Plus: they don't care about systemd optimizations and dynamic features. And they'd rather hack and trace shell scripts than use gdb and cc.
Posted Sep 12, 2012 21:13 UTC (Wed)
by Teho (guest, #86286)
[Link]
Posted Sep 12, 2012 21:19 UTC (Wed)
by rahulsundaram (subscriber, #21946)
[Link] (11 responses)
You are clearly trying to understate it. Among the popular ones, the distros that have switched:
* Fedora
"RHEL may or may not switch in the future."
RHEL 7 is switching to systemd as well
http://rhsummit.files.wordpress.com/2012/03/burke_rhel_ro...
Debian hasn't switched and is evaluating systemd along with OpenRC. Ubuntu has no plans to switch at this point.
Posted Sep 12, 2012 21:39 UTC (Wed)
by dlang (guest, #313)
[Link] (10 responses)
your info about RHEL conflicts with other posters here.
Yes, several popular desktop distros have switched to systemd, that's far from the "everyone has switched, except for those lone wolves at Ubuntu who refuse to go along with everyone else" mantra that is being pushed.
Posted Sep 12, 2012 21:55 UTC (Wed)
by Jonno (subscriber, #49613)
[Link] (9 responses)
> that's far from the "everyone has switched, except for those lone wolves at Ubuntu who refuse to go along with everyone else" mantra
Posted Sep 12, 2012 22:17 UTC (Wed)
by dlang (guest, #313)
[Link] (8 responses)
and if Debian remains with upstart (even if they replace the sysvinit option with OpenRC), then Ubuntu sticking with upstart is just staying with the upstream option.
Posted Sep 12, 2012 22:52 UTC (Wed)
by rahulsundaram (subscriber, #21946)
[Link] (7 responses)
I have given you a public source from the company's roadmap slides presented in the company conference and your answer is this embarrassing hand waving?
" do you really think that if Debian abandons upstart for OpenRC that Ubuntu will not follow along?"
This is a poorly phrased question. Debian is not using Upstart now by default. So there is no real question of them abandoning it and yes, Ubuntu might very well decide not to follow if Debian decides to switch to OpenRC or Systemd considering how much they have invested in Upstart and that is quite understandable. Ubuntu has done considerably different things from Debian in many ways including the installer, Unity etc and there is no reason to automatically assume they will follow Debian in this case.
Posted Sep 13, 2012 0:06 UTC (Thu)
by dlang (guest, #313)
[Link] (6 responses)
actually, if I do an upgrade of a Debian system, it prompts me to convert to upstart from a sysv init. If this isn't using upstart by default, what is it?
Posted Sep 13, 2012 2:52 UTC (Thu)
by guillemj (subscriber, #49706)
[Link] (3 responses)
That's right.
> actually, if I do an upgrade of a Debian system, it prompts me to convert to upstart from a sysv init. If this isn't using upstart by default, what is it?
The upstart package in Debian is not Essential, it's not on the base system either (Priority extra), and there's nothing except for live-config-upstart depending on it. So if it's being pulled in on an upgrade that's most probably some third party package doing that, either that or it got selected for upgrade at some point?
Posted Sep 13, 2012 2:59 UTC (Thu)
by dlang (guest, #313)
[Link] (2 responses)
Posted Sep 13, 2012 3:00 UTC (Thu)
by clint (subscriber, #7076)
[Link] (1 responses)
Posted Sep 13, 2012 20:21 UTC (Thu)
by Tester (guest, #40675)
[Link]
Posted Sep 13, 2012 2:59 UTC (Thu)
by clint (subscriber, #7076)
[Link]
Posted Sep 13, 2012 18:23 UTC (Thu)
by smurf (subscriber, #17840)
[Link]
Anyway, there are upstart and systemd packages in Debian.
Debian is probably going to do its usual thing and support both systemd and sysv-rc and/or openrc and probably upstart long-term -- if for no other reason than the fact that systemd contains too many Linux-specific bits and pieces; Debian wants to be able to run on top of FreeBSD kernels.
Now let's drop this side discussion and go back to VCS bashing please. ;-)
Posted Sep 13, 2012 21:15 UTC (Thu)
by zooko (guest, #2589)
[Link] (9 responses)
Posted Sep 14, 2012 14:30 UTC (Fri)
by smurf (subscriber, #17840)
[Link] (8 responses)
But you need to look at the actual use cases.
Take git. Linus developed the thing for the kernel. git supported *large* source repositories quite well, right from the start. All the others were "OK, it works for ten files and ten revisions, I'm done with the basics. 10000 files and 1000 revisions? Oops, need to take our lunch break now, hopefully it'll be done when I get back." So git was the first DVCS that actually worked for the "impatient kernel developer" use case.
Or take systemd. Init's job, as Lennart has shown, isn't done after starting jobs: reliably discovering when a job has *stopped*, and hopefully not interrupting the service it provides while restarting it, is a worthwhile goal too.
I am not the only person out there who has written a whole bunch of software (some of which took a significant heap of my time+effort+money), which was "good enough" -- but then somebody else took a look at it, said "cool, but I can do better", did better -- and shared their code with me. So why should I not toss my code into the Great Bitbucket in the Sky, and use theirs (and then improve *that* instead of playing catch-up)?
I'm not going to let my ego get in the way of getting things done. Life's too short for that.
Posted Sep 14, 2012 14:51 UTC (Fri)
by paulj (subscriber, #341)
[Link] (6 responses)
Though, there's no pressing reason why that job must be done by init…
Posted Sep 14, 2012 16:23 UTC (Fri)
by apoelstra (subscriber, #75205)
[Link] (5 responses)
Well, init is the job's parent, so it's uniquely positioned to notice when a job crashes -- and since init started the job, it's also uniquely qualified to /re/start it.
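That relationship is easy to see in miniature. Below is a minimal, illustrative Python sketch of a supervising parent that restarts a crashed child; it is invented for this discussion and is not code from upstart or systemd, which do far more on top of the same basic waitpid loop.

#!/usr/bin/env python3
# Minimal supervision sketch (illustrative only; not upstart or systemd code).
# The parent forks the service, so the kernel tells it -- via waitpid -- the
# moment the child exits, and it already knows exactly how to start it again.
import os
import sys
import time

def run_supervised(argv, max_restarts=5):
    """Run argv as a child process, restarting it whenever it exits non-zero."""
    restarts = 0
    while restarts <= max_restarts:
        pid = os.fork()
        if pid == 0:                      # child: become the service
            os.execvp(argv[0], argv)
        _, status = os.waitpid(pid, 0)    # parent: sleep until *this* child dies
        if os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0:
            return                        # clean exit: nothing to restart
        restarts += 1
        print("child died (status %#x), restart #%d" % (status, restarts),
              file=sys.stderr)
        time.sleep(1)                     # crude back-off

if __name__ == "__main__":
    # A toy "service" that crashes after two seconds, to exercise the loop.
    run_supervised(["sh", "-c", "sleep 2; exit 1"])

No pid files and no polling are needed -- which is exactly the advantage of being the parent.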
Posted Sep 14, 2012 17:51 UTC (Fri) by paulj (subscriber, #341):
Or you can have a kitchen-sink system, where you put all this into init, and it has to support every possible need any kind of service will ever have.
Posted Sep 17, 2012 9:11 UTC (Mon) by pboddie (guest, #50784):
I'm pretty sure Mercurial was developed for working with the kernel sources.
Posted Sep 26, 2012 14:38 UTC (Wed) by smurf (subscriber, #17840):
Before: "Owch, the job failed to start. Now which syslog file did it log its error message to? Oops, it was stderr, so hack the initscript; lather,rinse,repeat".
Before: "This job needs to auto-restart when it dies for whatever reason. Write a hacky wrapper which utterly fails to *not* restart the thing when it *should* die."
And so on.
Posted Dec 9, 2012 20:16 UTC (Sun) by jelmer (guest, #40812):
Git and Mercurial were both started after the BitKeeper fiasco sometime around March 2005. bzr was already well underway in February of that year; http://sourcefrog.net/projects/bazaar-ng/doc/news.html
Posted Sep 18, 2012 11:31 UTC (Tue) by faassen (guest, #1676):
Canonical tried pretty hard to get projects in my little community (Zope) to join Launchpad, and also tried pretty hard to get the Zope developers to convert their repository from SVN to Bazaar. So is this "never really meant as products in their own right" a nice bit of historical revisionism, or was this the plan all along and nobody told us?
Posted Sep 18, 2012 15:19 UTC (Tue) by pboddie (guest, #50784):
Unfortunately, with Unity being the other prominent example, they don't have a great record of getting people on board or outperforming the other projects in those fields. And so the gamble of striking out on their own hasn't paid off as much as might have been expected.
Posted Sep 19, 2012 16:49 UTC (Wed) by bronson (subscriber, #4806):
Still, overall, that's impressive work per capita by the bzr developers.
Posted Sep 22, 2012 13:02 UTC (Sat) by robinst (guest, #61173):
It's pretty funny that the Ohloh comparison shows Mercurial to be "Mostly written in Perl"! The same is also shown on the language details for Mercurial. Turns out that Mercurial uses something similar to shell scripts with a .t extension for test cases, which the language detector categorizes as Perl. Luckily Ohcount, the library which does the counting, is on GitHub, so I submitted a pull request to change that.
You see that this file has revision 6558. This number is repository-local, as there is no way to create a distributed numbering algorithm without synchronization points (mathematically, bzr revisions form a totally ordered set). This fact underlies all of bzr's design: it's ridiculously hard to work in a truly distributed manner with bzr. There's even that scary threat of renumbering, where numbers in the trunk _change_.
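To make the synchronization-point argument concrete, here is a toy Python sketch (not bzr's or git's actual code; the "rev6" parent and both changes are invented for the example) of two offline clones each allocating the next revision number versus each deriving an identifier from content:

# Toy model: sequential revision numbers vs. content-derived identifiers.
import hashlib

def content_id(parents, text):
    """git-style identifier, derived purely from the revision's contents."""
    return hashlib.sha1(repr((sorted(parents), text)).encode()).hexdigest()[:8]

# Two developers clone the same six-revision history and commit while offline.
alice_revno = bob_revno = 7                       # both hand out "revision 7"
alice_id = content_id(["rev6"], "alice's change")
bob_id   = content_id(["rev6"], "bob's change")

print(alice_revno == bob_revno)   # True: the sequential numbers collide
print(alice_id == bob_id)         # False: the content-derived IDs do not
# Merging the two branches forces one side's sequential numbers to be
# reassigned against the new trunk (the "renumbering" above), while the
# content-derived identifiers remain valid in every clone.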
http://steveko.wordpress.com/2012/02/24/10-things-i-hate-...
> it's just a list of hash-linked diffs between revisions
ITYM 'it's just a parent-linked tree of filesystem tree snapshots in a content-addressable store'.
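For anyone who has not looked under git's hood, here is a minimal sketch of that model; the in-memory dict store, JSON serialization, and example files are invented stand-ins for git's real object database, but the shape is the same: content-addressed objects, whole-tree snapshots, and parent pointers.

# Sketch of a parent-linked chain of tree snapshots in a content-addressable store.
import hashlib
import json

store = {}                                    # hash -> object

def put(obj):
    """Store an object under the hash of its serialized content."""
    data = json.dumps(obj, sort_keys=True).encode()
    key = hashlib.sha1(data).hexdigest()
    store[key] = obj
    return key

def commit(tree, parent=None, message=""):
    """Record a snapshot of the whole tree, plus a pointer to the parent commit."""
    return put({"tree": put(tree), "parent": parent, "message": message})

c1 = commit({"README": "v1"}, message="initial import")
c2 = commit({"README": "v2", "main.c": "int main(){}"}, parent=c1,
            message="add main.c")

print(store[c2]["parent"] == c1)              # True: history is parent pointers
# Diffs are not stored anywhere; they are computed on demand by comparing
# any two snapshots.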
note that git doesn't force you to throw away the history.
I don't know how well they do in finding reasonable bisection points in a complex revision graph; git's algorithm is very good these days.
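The rough idea is simple to sketch. The Python fragment below is a deliberate simplification, not git's actual weighting code: among the commits reachable from the known-bad revision but not from the known-good one, pick the commit whose suspect ancestors come closest to splitting the suspect set in half, so either test outcome eliminates as many commits as possible.

def ancestors(graph, node):
    """All commits reachable from node, including node (graph maps node -> parents)."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

def pick_bisect_point(graph, good, bad):
    suspects = ancestors(graph, bad) - ancestors(graph, good)
    half = len(suspects) / 2
    return min(suspects,
               key=lambda c: abs(len(ancestors(graph, c) & suspects) - half))

# A small history with a merge: a -> b -> {c, d} -> e.
graph = {"a": [], "b": ["a"], "c": ["b"], "d": ["b"], "e": ["c", "d"]}
print(pick_bisect_point(graph, good="a", bad="e"))   # "c" or "d": either halves the suspects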
some hard (though not necessarily relevant) numbers
git: 35s, 24.9 MiB
storage structures [...] by far the best is that used by Monotone, which is a single SQLite file per repository.
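To illustrate why a one-file repository is attractive, here is a small Python/SQLite sketch; the schema and the two sample revisions are invented for the example and are not Monotone's actual layout. The point is that the whole history lives in a single file, with atomicity and integrity coming from the database rather than from a directory of loose files.

import hashlib
import sqlite3

db = sqlite3.connect("repo.db")              # the entire repository is this one file
db.execute("""CREATE TABLE IF NOT EXISTS revisions (
                  id     TEXT PRIMARY KEY,   -- content hash of the revision
                  parent TEXT,
                  data   BLOB NOT NULL)""")

def add_revision(data, parent=None):
    """Store a revision under its content hash, in one atomic transaction."""
    rev_id = hashlib.sha1(data).hexdigest()
    with db:
        db.execute("INSERT OR IGNORE INTO revisions VALUES (?, ?, ?)",
                   (rev_id, parent, data))
    return rev_id

r1 = add_revision(b"initial tree")
r2 = add_revision(b"second tree", parent=r1)
print(db.execute("SELECT count(*) FROM revisions").fetchone()[0])   # 2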
All that's a distraction though, at this stage. Git won; but there's more to do. I agree with you that the residual/next/larger issue is PKI and naming. Or rather, getting _rid_ of PKI-as-we-have-tried-it and deploying something pragmatic, decentralized and scalable in its place for managing names-and-trust.
The current system of expressing trust through x.509 PKI is a joke in poor taste, and git (rightly) rejects most of that in favour of the three weaker, more-functional models: the "DNS and soon-to-be-PKI DNSSEC+DANE" model of global-name disambiguation, the "manual ssh key-exchange with sticky-key-fingerprints" model of endpoint transport security, and the (imo strictly _worse_) "GPG web of trust" model for long-lived audit-trails. The three of these systems serve as modest backstops to one another, but I still feel there's productive work to do exploring the socio-technical nexus of trust-and-naming at a more integrated, simplified, decentralized and less random, more holistic level (RFCs 2693 and 4255 aside).
There are still too many orthogonal failure modes, discontinuities and security skeuomorphisms; the experience of naming things, and trusting the names you exchange, at a global scale, still retains far too much of the sensation of pulling teeth. We wind up on IRC with old friends pasting SHA-256 fingerprints of things back and forth and saying "this one? no? maybe this one?" far too often.
> managed to advance further and faster. For sheer functionality, Git is
> hard to compete with. For those who are put off by the complexity of Git,
> Mercurial offers a gentler alternative without compromising on features.
> Perhaps most potential users just do not see anything in Bazaar that is
> sufficiently shiny to attract them away from the other tools.
> an entire afternoon looking for the remains in git fsck.
http://gitfu.wordpress.com/2008/04/06/git-reflog-no-commi...
http://www.kernel.org/pub/software/scm/git/docs/git-reflo...
> your workflow done with 3 commands: git clone, git commit,
> git push. Ok, so you need init and pull and merge too, but that's it.
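As a self-contained illustration of that minimal cycle, the sketch below drives git from Python against a throwaway local bare repository; the paths, the user identity, and notes.txt are all invented for the example.

import subprocess
import tempfile
from pathlib import Path

def git(*args, cwd):
    """Run a git command in cwd, raising if it fails."""
    subprocess.run(["git", *args], cwd=str(cwd), check=True)

work = Path(tempfile.mkdtemp())
origin, clone = work / "origin.git", work / "clone"

git("init", "--bare", str(origin), cwd=work)        # stand-in for the server
git("clone", str(origin), str(clone), cwd=work)     # 1. clone
git("config", "user.email", "dev@example.com", cwd=clone)
git("config", "user.name", "Example Dev", cwd=clone)
(clone / "notes.txt").write_text("hello\n")
git("add", "notes.txt", cwd=clone)
git("commit", "-m", "add notes", cwd=clone)         # 2. commit
git("push", "origin", "HEAD:master", cwd=clone)     # 3. push back to the server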
its default VCS is not git -- if they have please complain to them,
as they're making things unnecessarily difficult for you, and causing
unnecessary divergence of etckeeper installations.
You should only be using etckeeper with a VCS other than git if you're
in love with the other VCS.
* It Ain't Broken, So Don't Fix It
* Don't Fiddle[1] With It On Friday
[1] Alternative verbs have seen use at this position.
Pretty amusing to think that I forgot totally about this moment in history while writing the article:
A few years ago the core Mercurial and Bzr developers met in London for
a weekend to compare notes and came to a tentative agreement that
merging the two projects would be a good idea. This idea was very
quickly torpedoed by Mark Shuttleworth's insistence that whatever
project resulted would have to have copyright held by Canonical. The
stated reason was allowing proprietary feature extensions as part of
their Launchpad strategy.
* openSUSE
* Mageia
* Arch
Yes, but he is the only one backing it up with sources...
Correct, but that is not what is being claimed here. What is claimed is that everyone but Ubuntu is either staying with sysvinit (or sysvinit + OpenRC) *or* moving to systemd; nobody else is staying with, or moving to, upstart.
I don't know about upstart, but systemd works really well there.
Compared to upstart, sysv-init is good enough for me -- so why bother switching to upstart? Compared to systemd, sysv-init no longer is good enough. Conclusion: all my systems now boot with systemd. It's not the first init replacement out there, but it's the first one worth switching to if you've done it the "/etc/init.d/foo start" way for the last 20 years (which, surprise, continues to work just fine with Debian's systemd).
I would like to nominate your comment for "Quote of the week" for distributions, if it is not too late. It is sarcastic, it has the LOL factor, and it is true as life.
After: "systemctl status NAME.service"
After: "This job needs to auto-restart? Add a single well-documented line to the .service file."
Ohloh provides a useful comparison between Bazaar, Mercurial, and Git based on code metrics. It shows that Bazaar has had the fewest developers, both in total and over the past 12 months. Surprisingly, though, the activity produced by its 35 contributors last year (2,589 commits) is much higher than that of Mercurial's 115 contributors, and rather closer to what Git's 200 contributors did over the same period.
The lines-of-code trend is kind of logarithmic, but so is Git's.
From these statistics, there appears to be a drop in number of developers and commits, but given the absolute numbers above, it's probably nothing dramatic.
Interesting! It looks like Bazaar is seeing a lot of churn?
               bzr       hg        git
Lines added    658,329   107,171   830,470
Lines removed  332,498    39,803   268,636
Also, the 30 day numbers are currently telling a different story than the 12 month numbers.