From: Zachary Amsden <zamsden-AT-redhat.com>
To: Ingo Molnar <mingo-AT-elte.hu>
Subject: Re: [RFC] Unify KVM kernel-space and user-space code into a single project
Date: Thu, 18 Mar 2010 11:02:12 -1000
Cc: Avi Kivity <avi-AT-redhat.com>,
	Anthony Liguori <anthony-AT-codemonkey.ws>,
	"Zhang, Yanmin" <yanmin_zhang-AT-linux.intel.com>,
	Peter Zijlstra <a.p.zijlstra-AT-chello.nl>,
	Sheng Yang <sheng-AT-linux.intel.com>,
	Marcelo Tosatti <mtosatti-AT-redhat.com>,
	Joerg Roedel <joro-AT-8bytes.org>,
	Jes Sorensen <Jes.Sorensen-AT-redhat.com>,
	Gleb Natapov <gleb-AT-redhat.com>, ziteng.huang-AT-intel.com,
	Arnaldo Carvalho de Melo <acme-AT-redhat.com>,
	Frédéric Weisbecker <fweisbec-AT-gmail.com>
On 03/18/2010 12:50 AM, Ingo Molnar wrote:
> * Avi Kivity <firstname.lastname@example.org> wrote:
>>> The moment any change (be it as trivial as fixing a GUI detail or as
>>> complex as a new feature) involves two or more packages, development speed
>>> slows down to a crawl - while the complexity of the change might be very
>> Why is that?
> It's very simple: because the contribution latencies and overhead compound,
> almost inevitably.
> If you ever tried to implement a combo GCC+glibc+kernel feature you'll know.
> Even with the best-run projects in existence it takes forever and is very
> painful - and here i talk about first hand experience over many years.

Ingo, what you miss is that this is not a bad thing. The fact of the matter
is, it's not just painful, it downright sucks.

This is actually a Good Thing (tm). It means you have to get your feature
and its interfaces well defined and able to version forwards and backwards
independently of each other. That introduces some complexity, time, and
testing, but in the end it's what you want: you don't introduce a hard
requirement on the feature, but take advantage of it if it is there. It may
take everyone else a couple of years to upgrade their compilers, tools,
libraries and kernel, and by that time any bugs introduced by interacting
with this feature will have been ironed out and their patterns well known.

If you haven't well defined and carefully thought out the feature ahead of
time, you end up creating a giant mess, and possibly the need for nasty
backwards-compatibility hacks (case in point: COMPAT_VDSO). But in the end,
you would have made those same mistakes in your internal tree anyway, and
then you (or, more likely, some hapless maintainer of the project you
forked) would have to add the features, fixes, and workarounds back to the
original project(s). And since you developed in an insulated, sheltered
environment, those fixes and workarounds would be neither robust nor
independently versionable from each other.

The result is that you've kept your codebase version-neutral, forked in
outside code, enhanced it, and left the hard work of backporting those
changes and keeping them version-safe to the original package maintainers
you forked from. What you've created is no longer a single project; it is
called a distro, and you're being short-sighted and anti-social to think you
can garner more support than all of those individual packages you forked put
together. This is why most developers work upstream and let the goodness
propagate down from the top, like molten sugar over each granular package on
a flan, where it is collected in the rich custard channel sitting on a
distribution plate below, before the big hungry mouth of the consumer
devours it and incorporates it into their infrastructure.

Or at least, something like that, up until the last sentence. In short, if
project A has Y active developers, you had better have Z >> Y active
developers to throw at project B when you fork project A into project B.