Tightening the merge window rules

Posted Sep 12, 2008 10:28 UTC (Fri) by mingo (subscriber, #31122)
In reply to: Tightening the merge window rules by NAR
Parent article: Tightening the merge window rules

The thing is, every new kernel project should start small and should aim for constant stability.

Dropping a large amount of code on upstream with a large number of open problems means the project has been done wrong from the get-go.

If a project starts small in the upstream kernel, it is not a problem at all to have a constant flow of updates - as long as they are stabilized and are merged in the merge window only. That's how the kernel evolves, gradually.

A project that is in a constant state of breakage makes little sense.



Project flow

Posted Sep 13, 2008 22:53 UTC (Sat) by man_ls (guest, #15091)

Of course this is good engineering practice, but you will appreciate that it is not how software projects are usually managed. The usual process starts with a blueprint, then goes through analysis and development and finally testing (at which point it is a huge mess of code which doesn't work at all). It takes months to get things working again.

It has taken decades for a few people to value constant stability, and even so most of the world isn't there yet. So it is not strange that it should take a couple of years to get used to such a process.

Project flow

Posted Sep 14, 2008 13:10 UTC (Sun) by mingo (subscriber, #31122)

> It has taken decades for a few people to value constant stability, and even
> so most of the world isn't there yet. So it is not strange that it should
> take a couple of years to get used to such a process.

Yes, and even for the kernel it has taken almost a decade to reach that state. (Btw., the technological trigger was Git - it enabled the new, distributed, "evolving" workflow.)

So shouting at folks for not getting it right would be rather hypocritical, and in practice upstream is rather flexible about it all.

The comment I replied to claimed that there was a problem with the kernel's development process. I disagree with that, and I think it's natural to expect that if some code wants to reach upstream ASAP, it should try to follow and adapt to its development flow.

I.e., new projects should 'become upstream' well before they touch upstream (by adopting similar principles) - that way there will be a lot less friction after the merge point as well.

Project flow

Posted Sep 15, 2008 14:43 UTC (Mon) by etienne_lorrain@yahoo.fr (guest, #38022)

> The usual process starts with a blueprint, then goes through to
> analysis and development and finally testing

For my small project, I just coded until I knew exactly what I wanted to do. Then I rewrote most of the stuff nearly from scratch, keeping only the lower-layer functions and reorganising the whole code.

The problem is that you have something working before the "rewrite", but nobody else would understand it - and you cannot submit a patch before the complete reorganisation, a patch which would be huge, moving stuff around, renaming, refactoring...

After the rewrite, people complain that they were not involved in the design, but you just know they would have complained even more before the reorganisation.

That is just life (of code)...

Etienne.

