
Shuttleworth: Not convinced by rolling releases

Shuttleworth: Not convinced by rolling releases

Posted Mar 8, 2013 13:11 UTC (Fri) by lmb (subscriber, #39048)
Parent article: Shuttleworth: Not convinced by rolling releases

I'm convinced that true rolling releases - continuous delivery on the side of the distribution - are exactly the right approach for the future. Packages should be merged when they are ready and pass all tests. (Depending on the component, that could mean different things, and might even require merging a group of packages together as one feature block when dependencies exist.)
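As a minimal sketch of that "merge when ready" gate - in Python, with made-up package names and a hypothetical test_results structure, not any distribution's actual tooling:

```python
# Hypothetical sketch of a "merge when ready" gate for a rolling
# distribution: a package (or a dependent group of packages) is
# promoted only when every package in the group passes all tests.

def ready_to_merge(group, test_results):
    """group: package names forming one feature block;
    test_results: dict mapping package -> list of (test_name, passed)."""
    for pkg in group:
        results = test_results.get(pkg)
        if not results:          # untested packages never merge
            return False
        if not all(passed for _, passed in results):
            return False
    return True

# A feature block merges atomically: one failing dependency
# holds back the whole group.
results = {
    "libfoo": [("unit", True), ("abi", True)],
    "foo-tools": [("unit", True), ("integration", False)],
}
print(ready_to_merge(["libfoo"], results))               # True
print(ready_to_merge(["libfoo", "foo-tools"], results))  # False
```

The point of the group parameter is the parenthetical above: when packages depend on each other, they pass or fail the gate together.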

Not "just" for the desktop, but also and in particular for enterprise.

Note that "rolling release" does not mean "shortened support cycles for features or less backwards compatibility." That'd be continuous delivery done wrong.

I really wish a distribution would dare to try it.



Shuttleworth: Not convinced by rolling releases

Posted Mar 8, 2013 16:26 UTC (Fri) by ebiederm (subscriber, #35028) [Link]

Supply me with a complete automated proof that the package meets all of its requirements and I might be convinced that automated testing alone is sufficient. Until then, there need to be human beings in the loop who look at things differently than the developer does and give feedback.

Fundamentally, technical solutions alone are inadequate.

Shuttleworth: Not convinced by rolling releases

Posted Mar 8, 2013 16:37 UTC (Fri) by lmb (subscriber, #39048) [Link]

You will note that I did not include the word "automated" in my comment. That was quite deliberate.

A large number of issues *can* be caught automatically. And I believe that, overall, we're doing far too little automated testing. Look at most projects and that is hard to deny; there are huge wins to be had here.

Clearly, not all issues can be caught that way. Neither can everyone run all tests. That requires a distributed approach to testing: running automated tests where they can run (e.g., where the hardware or the license key needed to test compatibility is available), and perhaps a canary release to a beta tester group whose feedback then (automatically ;-) clears the update for wider distribution. This could (and should) include not just running the code but also source-level patch inspection.
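A canary gate like the one described could be sketched as follows; min_reports and max_failure_rate are illustrative assumptions, not anyone's actual policy:

```python
# Hypothetical canary gate: an update goes to a small beta group
# first, and is promoted to wide distribution only once enough
# beta feedback has come in and the failure rate stays low.

def promote_update(feedback, min_reports=20, max_failure_rate=0.02):
    """feedback: list of booleans, True = beta tester reported success."""
    if len(feedback) < min_reports:
        return False                      # not enough canary data yet
    failures = feedback.count(False)
    return failures / len(feedback) <= max_failure_rate

print(promote_update([True] * 50))                  # True
print(promote_update([True] * 40 + [False] * 10))   # False: 20% failures
print(promote_update([True] * 5))                   # False: too few reports
```

Requiring a minimum number of reports matters: with only a handful of testers, a clean run says little, so the gate stays closed until the sample is large enough.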

And if the workload is running inside a VM, clone it (and its data), apply the patch, see if the workload still runs, and feed the result back.

To anticipate your point: not all analysis of these test results can be automated either. And yes, it should gather not just functional (yes/no) data but also performance, memory usage, etc. Why stop at the simple? ;-)
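Gathering richer data than pass/fail could look like this sketch, where each test run records runtime and memory alongside the functional result; the field names and the 10% slack threshold are illustrative assumptions:

```python
# Sketch: a test run yields a record with functional and resource
# metrics, and a gate can reject regressions on any axis, not just
# outright failures.

from dataclasses import dataclass

@dataclass
class TestRun:
    passed: bool
    runtime_s: float
    peak_mem_mb: float

def regressed(old, new, slack=1.10):
    """Fail the gate on a functional failure or a >10% regression."""
    if not new.passed:
        return True
    return (new.runtime_s > old.runtime_s * slack
            or new.peak_mem_mb > old.peak_mem_mb * slack)

old = TestRun(True, 12.0, 256.0)
print(regressed(old, TestRun(True, 12.5, 260.0)))  # False: within slack
print(regressed(old, TestRun(True, 14.0, 256.0)))  # True: runtime up ~17%
```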

In short: no, centralized automated tests aren't enough. Even though it'd be amazing if we had more of that already. That does not invalidate continuous delivery.


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds