A large number of issues *can* be caught automatically, and overall I believe we're doing far too little automated testing. Look at almost any project and that is hard to deny; there are huge wins to be had here.
Clearly, not all issues can be caught that way, and no single party can run every test. That calls for a distributed approach to testing: running automated tests wherever they can run (e.g., wherever the hardware, or the license key needed to test compatibility, is available), plus perhaps a canary release to a beta-tester group whose feedback then (automatically ;-) clears the update for wider distribution. This could (and should) include not just running the code but also source-level inspection of the patch itself.
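The canary gating described above could be sketched roughly like this. Everything here is hypothetical: the function names, the report format, and the 95% threshold are illustrative assumptions, not anything from an actual update system.

```python
# Hypothetical sketch of canary-gated rollout logic. The threshold,
# report format, and names are all assumptions for illustration.

def gate_update(beta_reports, threshold=0.95):
    """Clear the update for wide distribution only if enough
    beta testers report success."""
    if not beta_reports:
        return False  # no feedback yet: hold the update back
    ok = sum(1 for r in beta_reports if r["success"])
    return ok / len(beta_reports) >= threshold

# Example: 19 of 20 beta testers report success.
reports = [{"host": f"beta-{i}", "success": i != 7} for i in range(20)]
print(gate_update(reports))  # True at the default 95% threshold
```

A real system would of course also weigh *which* configurations reported failures, not just the raw pass rate.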
And if the workload is running inside a VM, clone it (and its data), apply the patch, see whether the workload still runs, and feed the result back.
To anticipate your point: not all analysis of these test results can be automated either. And yes, it should gather not just functional (pass/fail) data but also performance, memory usage, and so on. Why stop at the simple things? ;-)
In short: no, centralized automated tests aren't enough, even though it would be amazing if we had more of them already. That does not invalidate continuous delivery.
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds