Rather than comparing screenshots, perhaps a more formal test harness could be built into the installer.
For the installer, each screen or function would paint a particular pixel or emit some other signal that can be easily detected from the "outside". To make sure these signatures are correct, a registry could be set up and a script run against the source, checking that each signature matches the corresponding section of code.
To spell it out: there is a specification that says what should happen at each stage of the process, and a signal is generated at each "audit point" that is detectable from outside the system. One automated tool verifies that the spec's requirements are represented in the code, and another checks the signals from a runtime session.
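To make the idea concrete, here is a minimal sketch of how such an audit-point scheme might look. Everything here is hypothetical: the stage names, the registry, and the functions are invented for illustration, not taken from any actual installer code.

```python
# Hypothetical audit-point scheme: each installer stage emits a
# signature at its audit point, and an external verifier checks the
# runtime log against a registry derived from the specification.

# Registry mapping each specified stage to its expected signature
# (stage names and signature strings are made up for this sketch).
AUDIT_REGISTRY = {
    "partitioning": "SIG-0x01",
    "package_selection": "SIG-0x02",
    "bootloader": "SIG-0x03",
}

def emit_signal(stage, log):
    """Installer side: record this stage's signature at its audit point."""
    log.append((stage, AUDIT_REGISTRY[stage]))

def verify_session(log):
    """External side: confirm every logged signal matches the registry
    and that the stages appear in the order the spec requires."""
    expected = list(AUDIT_REGISTRY.items())
    return log == expected

# Simulated runtime session: each stage fires its audit point in order.
session_log = []
for stage in AUDIT_REGISTRY:
    emit_signal(stage, session_log)

print(verify_session(session_log))  # True when the session matches the spec
```

A second script (not shown) would play the role of the source checker, grepping the installer code to confirm that every registry entry has a matching `emit_signal` call.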
I don't wish to denigrate anyone's hard work, which is clearly for the benefit of the SuSE community, but surely a bit of co-ordination wouldn't hurt here.