November 10, 2010
This article was contributed by Koen Vervloesem
This year's openSUSE
conference had some interesting sessions about testing topics. One of
those described a framework to automate testing of the distribution's
installation. That way testers don't have to do the repetitive installation
steps themselves. Another session described Testopia, which is a test case
management extension for Bugzilla. OpenSUSE is using Testopia to guide
users who want to help test the distribution. And last but not least, a
speaker from Mozilla QA talked about how to attract new testers. The common thread in all these sessions is that testing should be made as easy as possible, to attract new testers and keep the current testers motivated.
Automated testing
Testing is an important task for distributions, because a Linux
distribution is a complex amalgam of many interacting components,
but it would be pretty tiresome and boring for testers to test the openSUSE Factory snapshots
daily. Bernhard Wiedemann, a member of the openSUSE
Testing Core Team, presented the logical solution to this problem:
automate as much as possible. Computers don't get tired and they don't stop
testing out of boredom, even with dozens of identical tests.
But why is automation so important for testing? To answer this question,
Bernhard emphasized that the three chief virtues of a programmer according
to Larry Wall (laziness, impatience, and hubris) also hold for
testers. What we don't want is poor testing, which leads to poor quality of
the distribution, which leads to frustrated testers, which leads to even
poorer testing. This is a vicious circle. What we want instead is good
testing and good processes, which leads to high quality for the
distribution and to happy testers who make the testing and hence the
distribution even better. Testers, as much as programmers, want to automate
things because they want to reduce their overall effort.
So what are possible targets for automated testing? You could consider
automating the testing of a distribution's installation, testing distribution upgrades, application testing, regression testing, localization testing, benchmarking, and so on. But whatever you test, there will always be some limitations. As the Dutch computer scientist and Turing Award winner Edsger W. Dijkstra once famously said: "Testing can only prove the presence of bugs, not their absence."
Bernhard came up with a way to automate distribution installation
testing using KVM. He now has a cron job that downloads a new ISO for
openSUSE Factory daily and runs his Perl script autoinst for the
test. This script starts openSUSE from the ISO file in a virtual machine
with a monitor interface that accepts commands like sendkey
ctrl-alt-delete to send a key to the machine or screendump
foobar.ppm to create a screenshot. The script compares the screenshots
to known images, which is done by computing MD5 hashes of the pixel
data.
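As a rough illustration of how a test driver can talk to such a monitor interface, the following Python sketch (autoinst itself is written in Perl) connects to a QEMU monitor socket, presses a key in the guest, and requests a screen dump; the socket path, key names, and file names are examples, not what autoinst actually uses.

    import socket
    import time

    # Assumed socket path; the VM would have been started with something like:
    #   qemu-kvm -cdrom factory.iso -monitor unix:/tmp/qemu-monitor,server,nowait
    MONITOR_SOCKET = "/tmp/qemu-monitor"

    def monitor_command(cmd):
        """Send one command to the QEMU monitor and return its raw reply."""
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(MONITOR_SOCKET)
        s.recv(4096)                        # discard the "(qemu)" banner
        s.sendall((cmd + "\n").encode())
        time.sleep(0.5)                     # crude: give the monitor time to answer
        reply = s.recv(65536)
        s.close()
        return reply.decode(errors="replace")

    # Press Enter in the guest, then take a screenshot of its display.
    monitor_command("sendkey ret")
    monitor_command("screendump /tmp/step-01.ppm")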
When the screen shot of a specific step of the running installer matches
the known screen shot of the same step in a working installer, the script
marks the test of this step as passed. If they don't match (e.g. because of
an error message), the test is marked as failed. The keys that the script
sends to the virtual machine can also depend on what is shown on the
screen: the script then compares the screen shot to various possible screen
shots of the working installer, each of them representing a possible execution
path.
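The comparison step is also straightforward to sketch. The snippet below, again in Python rather than the original Perl, hashes only the pixel data of a PPM screen dump and looks the result up in a table of known screens; the hash values and key sequences are placeholders made up for the example.

    import hashlib

    def pixel_md5(ppm_path):
        """MD5 of a binary PPM's pixel data, skipping the text header."""
        with open(ppm_path, "rb") as f:
            raw = f.read()
        # A QEMU screendump is a binary PPM: "P6", "<width> <height>" and
        # "<maxval>", each on its own line, followed by the raw pixel bytes.
        pixels = raw.split(b"\n", 3)[3]
        return hashlib.md5(pixels).hexdigest()

    # Placeholder table mapping known-good screens to the keys to send next;
    # a real run would use hashes recorded from a working installation.
    KNOWN_SCREENS = {
        "hash-of-boot-menu":      ["ret"],    # boot menu: accept the default
        "hash-of-license-screen": ["alt-n"],  # license agreement: continue
    }

    current = pixel_md5("/tmp/step-01.ppm")
    if current in KNOWN_SCREENS:
        print("step passed, next keys:", KNOWN_SCREENS[current])
    else:
        print("step failed: screen does not match any known image")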
By using the screen shots, a script can
test whether an installation of an openSUSE snapshot worked correctly and
whether Firefox or OpenOffice.org can be started on the freshly installed
operating system without segfaulting. At the end of the test, all images are encoded into a video, which can be consulted by a human tester in circumstances where a task couldn't be marked automatically as passed or failed. Some examples of installation videos can be found on Bernhard's blog.
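Assembling the screen dumps into such a video can be done with any encoder; one possible way, assuming frames numbered as in the sketches above, is a single ffmpeg call (the frame pattern and output name are assumptions, not autoinst's actual encoding step).

    import subprocess

    # Assumed frame pattern and output name, not autoinst's actual settings.
    subprocess.run([
        "ffmpeg", "-y",
        "-framerate", "2",              # two screen dumps per second of video
        "-i", "/tmp/step-%02d.ppm",     # the numbered screenshots from the run
        "/tmp/installation-test.webm",
    ], check=True)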
It's also nice to see that Bernhard is following the motto of this year's openSUSE conference, "collaboration across borders": while parts of his testing framework are openSUSE-specific, it is written in a modular way and can be used to test any operating system that runs on Qemu and KVM. More information can be found on the OS-autoinst web site.
Test plans with Testopia
Holger Sickenberg, the QA Engineering Manager in charge of openSUSE
testing, talked about another way to improve openSUSE's reliability: make
test plans available to testers and users with Testopia, a test case
management extension for Bugzilla. In the past, openSUSE's Bugzilla bug
tracking system only made Testopia available to the openSUSE Testing Core
Team, but since last summer it has been open to all contributors. Testopia is
available on Novell's Bugzilla,
where logged-in users can click on "Product Dashboard" and choose a product to see the available test plans, test cases, and test runs. In his talk, Holger gave an overview about how to create your own test plan and how to file a bug report containing all information from a failed test plan.
A test plan is a simple description for the Testopia system and is
actually just a container for test cases. Each test plan targets a
combination of a specific product and version, a specific component, and a specific type of activity. For example, there is a test plan for installing openSUSE 11.3. A test plan can also have more information attached, e.g. a test document.
A test case, then, is a detailed description of what should be done by the tester. It lists the preparation that is needed before executing the test, a step-by-step list of what should be done, a description of the expected result, and information about how to get the system back into a clean state. Other information can also be attached, such as a configuration file or a test script for an automated test system. Holger emphasized that the description of the expected result is really important: "If you don't mention the expected result exactly, your test case can go wrong because the tester erroneously thinks his result is correct."
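Seen as a data structure, a test case thus bundles a handful of fields. The following Python sketch merely illustrates those fields as described above; it is not Testopia's actual schema or API.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        summary: str
        setup: str               # preparation needed before executing the test
        actions: list            # step-by-step list of what should be done
        expected_result: str     # must be stated exactly, per Holger's advice
        breakdown: str           # how to get the system back into a clean state
        attachments: list = field(default_factory=list)  # e.g. config files, test scripts

    example = TestCase(
        summary="Install openSUSE 11.3 with the default partitioning",
        setup="Boot the installation DVD in a VM with an empty disk",
        actions=["Accept the license", "Keep the proposed partitioning",
                 "Create a user and finish the installation"],
        expected_result="The system reboots into a working desktop",
        breakdown="Roll the VM back to the empty-disk snapshot",
    )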
And then there's a test run, which is a container for test cases for a specific product version and consists of one or more test plans. It also contains test results and a test history. At the end of executing a test run, the user can easily create a bug report if a test case fails by switching to the Bugs tab. The information from the test case is automatically put into the description and summary of the bug report, and when the report is submitted it also appears in the web page of the test run, including its status (e.g. fixed or not).
The benefits of test plans are obvious: users who want to help a
project by testing have a detailed description of what and how to test, and
the integration with Bugzilla makes reporting bugs as easy as possible. It
also lets developers easily see what has been tested and get the results of the tests. These results can also be tracked during the development cycle or compared between different releases. Holger invited everyone with a project in openSUSE to get in touch with the openSUSE Testing Core Team to get a test plan created. The team can be found on the opensuse-testing mailing list and on the #opensuse-testing IRC channel on Freenode.
Mozilla QA
Carsten Book, QA Investigations Engineer at the Mozilla Corporation,
gave a talk about how to get involved in the Mozilla Project and he focused
on Mozilla QA, which has its home on the QMO web site. This QA portal has a
lot of documentation,
e.g. for getting started
with QA. And there are links to various Mozilla QA tools such as Bugzilla, Crash Reporter,
the Litmus system that has test
cases written by Mozilla QA for manual software testing, and some tools to
automate software
testing. For example, Mozilla's test system automatically checks
whether performance has degraded after every check-in of a new feature, to
try to ensure that Firefox won't get any slower.
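The idea behind such a check is easy to illustrate: compare a fresh benchmark result against a stored baseline and flag the check-in if it is slower by more than an allowed margin. The numbers and threshold below are invented for the example and say nothing about Mozilla's actual test infrastructure.

    # Baseline and tolerance are invented numbers for illustration only.
    BASELINE_MS = 850.0      # page-load time recorded for the last good build
    TOLERANCE = 0.05         # allow 5% noise before calling it a regression

    def is_regression(new_ms, baseline_ms=BASELINE_MS, tolerance=TOLERANCE):
        """True if the new measurement is slower than the baseline plus tolerance."""
        return new_ms > baseline_ms * (1.0 + tolerance)

    print(is_regression(870.0))   # False: within the noise margin
    print(is_regression(910.0))   # True: a slowdown worth flagging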
People who want to help test can of course run a nightly build and file bug reports. There are also Mozilla test days that teach how to get development builds, how to file bugs, and how to work with developers on producing a fix. Contributors with some technical expertise can join one of the quality teams, each focusing on a specific area: Automation, Desktop Firefox, Browser Technologies, WebQA, and Services. Each of the teams has a short but instructive web page with information about what they do and how you can contact them.
An important point that Carsten made was that it should also be easy for
interested people to immediately get an overview of different areas where
they can contribute without having to read dozens of wiki pages. Mozilla
even has a special Get
involved page where you just enter your email address and an area of
interest, with an optional message. After submitting the form, you will get an email putting you in touch with the right person.
Low entry barrier
These three projects are all about lowering the barriers for new testers
— to be able to attract as many testers as possible and to make the
life of existing testers easier — by automating boring and repetitive
tasks. In this way you can keep testers motivated. Wiedemann's autoinst
project seems especially interesting: at the moment it has just the basic features, but it has a lot of potential, e.g. if the feature for comparing screen shots is refined. From a technical point of view, this is an exciting testing project that hopefully finds its way into other distributions.