Storage testing
Posted May 29, 2019 3:52 UTC (Wed) by roc (subscriber, #30627)
In reply to: Storage testing by tytso
Parent article: Storage testing
Sure, but the software and services infrastructure for writing tests, running tests, processing test results, and reporting those results could be shared with lots of other kinds of tests.
> And networking tests often require a pair of machines with different types of networks between the two.
Ditto. (And presumably networking tests for everything above OSI layer 2 can be virtualized to run on a single machine, even a single kernel.)
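
(A minimal sketch of that point, illustrative only and not from the thread itself; it assumes root privileges and the iproute2 tools. Two network namespaces under a single kernel behave like two hosts with a cable between them, which is what makes single-machine testing above layer 2 plausible.)

import subprocess

def sh(cmd):
    """Run a shell command, raising if it fails (requires root)."""
    subprocess.run(cmd, shell=True, check=True)

# Create a namespace and a veth pair; move one end into the namespace.
sh("ip netns add testns")
sh("ip link add veth0 type veth peer name veth1")
sh("ip link set veth1 netns testns")

# Assign addresses on both ends and bring the links up.
sh("ip addr add 10.0.0.1/24 dev veth0 && ip link set veth0 up")
sh("ip netns exec testns sh -c "
   "'ip addr add 10.0.0.2/24 dev veth1 && ip link set veth1 up'")

# A TCP/IP-level "two machine" test, entirely inside one kernel.
sh("ping -c 1 10.0.0.2")

# Clean up.
sh("ip netns del testns")
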
> Good luck trying to unify it all.
Unifying things after they're up and running is hard. Sharing stuff that already exists instead of creating new infrastructure is easier. Given that the kernel's upstream testing is totally inadequate currently, there's an opportunity here :-).
> Finally, note that there are different types of testing infrastructure. There is the test suite itself, and how you run the test suite in a turn key environment.
Yes, I can see that you want drivers for spawning test kernels on different clouds. They can exist in a world where other testing infrastructure is shared.
Surely you want a world where someone can run all the different kernel test suites (that don't require special hardware), against some chosen kernel version, on the cloud of their choice. That would demand a shared "spawn test kernel" interface that the different suites all use, wouldn't it?
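
For concreteness, here is a purely hypothetical sketch of what such a shared "spawn test kernel" interface might look like; every name in it is invented for illustration and exists in no real project:

from abc import ABC, abstractmethod

class KernelRunner(ABC):
    """Hypothetical shared interface: boot a kernel somewhere, run tests on it."""

    @abstractmethod
    def boot(self, kernel_image: str) -> str:
        """Boot the given kernel image; return an opaque handle (e.g. an ssh target)."""

    @abstractmethod
    def run(self, handle: str, command: str) -> int:
        """Run one test command against the booted kernel; return its exit status."""

    @abstractmethod
    def destroy(self, handle: str) -> None:
        """Tear down the VM or cloud instance behind the handle."""

# Each provider -- local qemu, GCE, AWS, and so on -- would implement
# KernelRunner, while test suites (xfstests, blktests, kselftests) would
# code only against the interface.
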
Posted May 29, 2019 23:03 UTC (Wed) by tytso (subscriber, #9993) [Link]
I assume you're talking about kselftests, the self-test infrastructure that is included as part of the kernel sources? It has a very different purpose from other test suites. One of its goals is that the total test time of all of the tests should be 20 (twenty) minutes. That's not a lot of time, even if a single file system were to hog all of it.
Before I send a pull request to Linus, I run about 20 VM-hours' worth of regression tests for ext4. It's sharded across multiple VMs which get launched in parallel, but that kind of testing is simply not going to be accepted into kselftests. Which is fine; it has a very different goal, which is to serve as a quick "smoke test" for the kernel. You'd have to ask the kselftests maintainer whether they were interested in taking it in a broader direction, and adding some of the support that would be needed to allow tests to be sharded across multiple VMs.

One of the things that xfstests has, but which kselftests does not, is the option of writing the test results in an XML format, using the JUnit format:
<testcase classname="xfstests.global" name="generic/402" time="1">
  <skipped message="no kernel support for y2038 sysfs switch"/>
</testcase>
This allows me to reuse some JUnit Python libraries to coalesce multiple XML report files and generate statistics like this:
ext4/4k: 464 tests, 43 skipped, 4307 seconds
ext4/1k: 473 tests, 1 failures, 55 skipped, 4820 seconds
    Failures: generic/383
ext4/ext3: 525 tests, 1 failures, 108 skipped, 6619 seconds
    Failures: ext4/023
ext4/encrypt: 533 tests, 125 skipped, 2612 seconds
ext4/nojournal: 522 tests, 2 failures, 104 skipped, 3814 seconds
    Failures: ext4/301 generic/530
ext4/ext3conv: 463 tests, 1 failures, 43 skipped, 4045 seconds
    Failures: generic/347
ext4/adv: 469 tests, 3 failures, 50 skipped, 4055 seconds
    Failures: ext4/032 generic/399 generic/477
ext4/dioread_nolock: 463 tests, 43 skipped, 4234 seconds
ext4/data_journal: 510 tests, 4 failures, 92 skipped, 4688 seconds
    Failures: generic/051 generic/371 generic/475 generic/537
ext4/bigalloc: 445 tests, 50 skipped, 4824 seconds
ext4/bigalloc_1k: 458 tests, 1 failures, 64 skipped, 3753 seconds
    Failures: generic/383
Totals: 4548 tests, 777 skipped, 13 failures, 0 errors, 47529s
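
(A rough sketch, for illustration only, of how several JUnit XML reports might be coalesced into statistics like the above using nothing beyond Python's standard library; this is not the author's actual tooling, and the summary format is only approximate.)

import sys
import xml.etree.ElementTree as ET

def summarize(paths):
    # Walk every <testcase> in every report, tallying the same fields
    # that appear in the summary lines above.
    total = skipped = failures = 0
    seconds = 0.0
    failed = []
    for path in paths:
        root = ET.parse(path).getroot()
        for case in root.iter("testcase"):
            total += 1
            seconds += float(case.get("time", "0"))
            if case.find("skipped") is not None:
                skipped += 1
            elif case.find("failure") is not None:
                failures += 1
                failed.append(case.get("name"))
    print(f"{total} tests, {failures} failures, "
          f"{skipped} skipped, {seconds:.0f} seconds")
    if failed:
        print("    Failures:", " ".join(sorted(failed)))

if __name__ == "__main__":
    summarize(sys.argv[1:])
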
This is an example of something that one test infrastructure has and other testing harnesses don't. So while it would be "nice" to have one test framework to rule them all, one that can work on multiple different cloud hosting services, there are lots of things that are "nice". I'd like to have enough money to fly around in a private jet so I didn't have to deal with the TSA; and then I'd like to be rich enough to buy carbon offsets so I wouldn't feel guilty about flying all over the place in a private jet. Unfortunately, I don't have the resources to do that any time in the foreseeable future. :-)
The question is who is going to fund that effort, and does it really make sense to ask developers to stop writing tests until this magical unicorn test harness exists? And then we have to ask which test infrastructure we would use as the base, and whether its maintainers are interested in adding all of the hair needed to support all of these features that we might "want" to have.