
A kernel unit-testing framework

Posted Mar 2, 2019 21:02 UTC (Sat) by dw (guest, #12017)
In reply to: A kernel unit-testing framework by jorgegv
Parent article: A kernel unit-testing framework

Policies like this do little to improve code quality; instead they turn into a needless box-ticking exercise where people write bullshit tests just to get a commit in. We've doubtless all observed this in userspace projects, commercial or otherwise.

Tests that are written half-heartedly contribute more to brittleness and inflexibility than anything else. If someone is motivated to write a good test, they will do it by default. If they're forced to write a test, the chance is very low that the test will do much more than validate what the programmer already knew about his code.



A kernel unit-testing framework

Posted Mar 2, 2019 21:04 UTC (Sat) by dw (guest, #12017) [Link] (1 responses)

(oh, how times are changing. I am forced to rebuke myself for use of 'his' in the parent comment!)

A kernel unit-testing framework

Posted Mar 9, 2019 1:03 UTC (Sat) by nix (subscriber, #2304) [Link]

All this says is that women write tests without being forced to. (In my experience, this is true -- but that's probably because in order to survive against all the headwinds as a female free software developer you need to be really, really good, and writing good tests is strongly correlated with that. Writing good tests is *hard*.)

A kernel unit-testing framework

Posted Mar 2, 2019 21:41 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

I have worked on many projects that had a strict 80% coverage requirement for new code. There were a fair number of bullshit tests, but:
1) Even bullshit tests periodically helped.
2) The advantages of non-bullshit tests far outweighed the occasional discomfort of having to write BS tests to tick the right box.

Nothing is perfect, but with tests it's pretty much impossible to have too many of them.

A kernel unit-testing framework

Posted Mar 2, 2019 22:25 UTC (Sat) by dw (guest, #12017) [Link] (4 responses)

Tests aren't free -- while they increase the accuracy of future change, they also increase its cost. An ornately tested app is one of the most painful to refactor, because more than half the cost of refactoring is usually paid in fixing up tests. If they're bullshit tests, that is very much wasted effort.

A kernel unit-testing framework

Posted Mar 2, 2019 22:28 UTC (Sat) by dw (guest, #12017) [Link]

(In the interests of honesty, it's worth noting that I also appreciate the value of near-100% coverage, but still prefer selective testing. Our difference of opinion here is merely a proxy for that religious war.)

A kernel unit-testing framework

Posted Mar 2, 2019 23:26 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

I would argue the total opposite. Tests make refactoring much easier. Sure, you have to go and fix them, but they also provide you with a lot of, well, testing.

It's often way too easy to overlook some implicit invariant during a refactoring.

And for trivial refactorings (like renaming stuff) modern IDEs automate these changes.
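
For example (a minimal sketch in the style of the KUnit framework from the article; sorted_list_insert() is a hypothetical API, not a real kernel interface), a test like this pins the implicit invariant "insertion keeps the list sorted" -- something a rename can't break, but a reworked insertion path silently can:

    #include <kunit/test.h>

    /* Hypothetical API under test; for illustration only. */
    extern void sorted_list_insert(int *list, int *len, int value);

    /* Pins the implicit invariant: insertion keeps the list sorted. */
    static void insert_keeps_order(struct kunit *test)
    {
            int list[4] = { 1, 5, 9 };
            int len = 3;

            sorted_list_insert(list, &len, 4);

            KUNIT_EXPECT_EQ(test, len, 4);
            KUNIT_EXPECT_EQ(test, list[1], 4);
            KUNIT_EXPECT_EQ(test, list[2], 5);
    }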

A kernel unit-testing framework

Posted Mar 3, 2019 5:03 UTC (Sun) by roc (subscriber, #30627) [Link]

That's exactly right. Tests let you refactor with confidence.

A kernel unit-testing framework

Posted Mar 3, 2019 5:13 UTC (Sun) by roc (subscriber, #30627) [Link]

You can adjust the level at which you write tests (unit tests vs system tests and the spectrum in between) to trade off refactoring costs with other variables.

A kernel unit-testing framework

Posted Mar 3, 2019 5:11 UTC (Sun) by roc (subscriber, #30627) [Link] (1 responses)

> If they're forced to write a test, the chance is very low that the test will do much more than validate what the programmer already knew about his code

The point of requiring tests with a patch is not to catch bugs in the patch at the time of submission. Those tests are to catch people breaking that code in the future. Thus, even "does this feature work at all" tests are useful in the long run.

For a very long time Firefox has had a policy of requiring tests with patches, or else an explanation of why a test is impractical. I was involved before and after the policy was introduced and that policy has been extremely valuable.
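
For the framework under discussion, even the most minimal "does this feature work at all" test costs only a few lines. A sketch (the my_fifo_* API is hypothetical; the KUnit macros and suite boilerplate are as described in the article):

    #include <kunit/test.h>

    /* Hypothetical driver API, for illustration only. */
    extern int my_fifo_push(int value);
    extern int my_fifo_pop(void);

    /* A bare "does it work at all" test: push one value, pop it back. */
    static void my_fifo_smoke_test(struct kunit *test)
    {
            KUNIT_EXPECT_EQ(test, my_fifo_push(42), 0);
            KUNIT_EXPECT_EQ(test, my_fifo_pop(), 42);
    }

    static struct kunit_case my_fifo_test_cases[] = {
            KUNIT_CASE(my_fifo_smoke_test),
            {}
    };

    static struct kunit_suite my_fifo_test_suite = {
            .name = "my-fifo",
            .test_cases = my_fifo_test_cases,
    };
    kunit_test_suite(my_fifo_test_suite);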

A kernel unit-testing framework

Posted Mar 4, 2019 17:26 UTC (Mon) by hkario (subscriber, #94864) [Link]

> The point of requiring tests with a patch is not to catch bugs in the patch at the time of submission.

They also show the expected use of the code (they are a form of documentation), and together with CI they show that the code actually meets those expectations (e.g. that both sides of a corner case are handled correctly).

The ability to refactor code with confidence is definitely worth the occasional BS test.
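
To make the corner-case point concrete, a sketch (len_ok() and MY_BUF_MAX are hypothetical; the KUnit macros are real) that documents a boundary from both sides:

    #include <kunit/test.h>
    #include <linux/types.h>

    /* Hypothetical validator: lengths 0..MY_BUF_MAX inclusive are valid. */
    #define MY_BUF_MAX 4096
    extern bool len_ok(size_t len);

    /* Tests the boundary from both sides, not just the happy path. */
    static void len_ok_boundary_test(struct kunit *test)
    {
            KUNIT_EXPECT_TRUE(test, len_ok(MY_BUF_MAX));
            KUNIT_EXPECT_FALSE(test, len_ok(MY_BUF_MAX + 1));
    }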

A kernel unit-testing framework

Posted Mar 3, 2019 14:36 UTC (Sun) by k3ninho (subscriber, #50375) [Link] (2 responses)

Your comment reads like you think that one bad experience invalidates the whole practice. Unit testing, as an approach to rigour in software engineering, is not easy but has enormous payoffs. As roc points out, preventing regressions is a great reason to have a suite of unit tests that a developer can use, on her own machine, before committing, to make sure that there aren't any unintended consequences to the code changes she's making. This has a second benefit in a quick cycle time: you can fix bugs and regressions while the intent of your code is still in your head, and find another way to solve the problem at hand. (We get a third benefit from an extensive unit test suite for free: other developers have said what they intend their functional code to deal with, recorded as examples of positive and negative behaviour in their test suites.)

Read the articles [1], [2], [3] and join the community of practitioners.

Let's not bike-shed different flavours of automated testing, and let's not re-invent something well covered by history. Consider Chicago-style "red-green-refactor": write failing test cases, fill in merely enough code to pass the tests, and then reshape your code structure into something you and other people can maintain. Or consider London-style "design the interactions between components in your system": test their interfaces with contracts, and use your own interface contracts with 'test doubles' of external systems (i.e. a mock evolved into a wrapper around external libraries, with a standardised interface [4]) to keep your unit tests from escaping their unitary boundaries. (A sketch of such a test double in kernel C follows the links below.)

1: https://github.com/testdouble/contributing-tests/wiki/Lon...
2: https://github.com/testdouble/contributing-tests/wiki/Det...
3: https://github.com/testdouble/contributing-tests/wiki/Don...
4: http://codemanship.co.uk/parlezuml/blog/?postid=987
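
In kernel C, the natural shape for such a test double is an ops table: the code under test calls through a struct of function pointers, and the test supplies a fake. A minimal sketch (all names hypothetical; KUnit macros as in the article):

    #include <kunit/test.h>

    /* Hypothetical interface contract, expressed as an ops table. */
    struct block_ops {
            int (*read_byte)(void *ctx, unsigned int addr);
    };

    /* The code under test depends only on the contract... */
    static int checksum(const struct block_ops *ops, void *ctx, int n)
    {
            int i, sum = 0;

            for (i = 0; i < n; i++)
                    sum += ops->read_byte(ctx, i);
            return sum;
    }

    /* ...so the test can supply a canned fake instead of real hardware. */
    static int fake_read_byte(void *ctx, unsigned int addr)
    {
            return 1;
    }

    static const struct block_ops fake_ops = {
            .read_byte = fake_read_byte,
    };

    static void checksum_uses_contract(struct kunit *test)
    {
            KUNIT_EXPECT_EQ(test, checksum(&fake_ops, NULL, 8), 8);
    }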

>Tests that are written half-heartedly for the most part contribute to brittleness and inflexibility more than anything else. If someone is motivated to write a good test, they will do it by default. If they're forced to write a test, the chance is very low that the test will do much more than validate what the programmer already knew about his code[.]

Let's reconsider your first sentence: were we to add some training in test-case design to our edict mandating test cases, what might happen? Brittle and inflexible tests need reliability designed into them, or they don't warrant the trust we grant our automated test suites. A significant part of the test-case lifecycle is culling the redundant ones -- something which happens naturally when unit-level test cases are the 'record of intent' from the author of the functional or production code. That is to say, irrelevant functionality will have failing unit tests, and nobody would advocate writing code just so that those irrelevant tests pass.

Second sentence: People don't know what good code, good design or good tests look like without training and connection to a community of experts. We can work together to grow a pool of knowledge about this.

Third sentence: We can use this record of assumptions about the code as examples of intent. On top of that, we can train people to consider equivalence classes of test inputs -- inputs which cause the software to do the same work and arrive at the same output -- and to write tests for these. We can also train people to consider and test what happens with bad input data, which usually results in systems built defensively to reject bad input and to recover from unintended behaviour.
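
A sketch of what that training aims at (parse_mode() and the MODE_* constants are hypothetical; the KUnit macros are as elsewhere in this thread): one representative input per equivalence class, plus an explicit bad-input case:

    #include <kunit/test.h>
    #include <linux/errno.h>

    /* Hypothetical parser: returns a mode constant or -EINVAL. */
    #define MODE_RO 0
    #define MODE_RW 1
    extern int parse_mode(const char *s);

    static void parse_mode_classes(struct kunit *test)
    {
            /* One representative per equivalence class... */
            KUNIT_EXPECT_EQ(test, parse_mode("ro"), MODE_RO);
            KUNIT_EXPECT_EQ(test, parse_mode("rw"), MODE_RW);
            /* ...and the bad-input class is tested, not assumed away. */
            KUNIT_EXPECT_EQ(test, parse_mode("bogus"), -EINVAL);
    }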

It sounds like you had a bad experience, which sucks, I'm sure. There are ways to pick yourself up and to practise better unit testing.

K3n.

A kernel unit-testing framework

Posted Mar 3, 2019 20:06 UTC (Sun) by roc (subscriber, #30627) [Link] (1 responses)

FWIW I don't think *unit* tests are necessarily the main kind of tests people should be writing.

For example Firefox has very few true "unit tests", i.e. tests that test the functionality of one code module in isolation. Firefox tests are almost entirely "system tests", tests written in HTML, CSS or JS that test specific Web APIs, each test touching a lot of Firefox modules. There are good reasons to test this way. Those interfaces are public and therefore quite stable (especially after a feature has shipped), so tests need refactoring less often than tests that depend on internal interfaces. Also tests at this level can often be written to work on multiple different browsers, which is extremely useful. And testing modules in isolation often means mocking the interfaces of other modules which can be extremely expensive to build and maintain.

So while I think a policy of requiring tests with patches is important, I don't think they need to be *unit* tests.

A kernel unit-testing framework

Posted Mar 7, 2019 10:22 UTC (Thu) by k3ninho (subscriber, #50375) [Link]

> FWIW I don't think *unit* tests are necessarily the main kind of tests people should be writing.
Yeah, I railed on unit tests, even though I also believe that there's a need, as with security-in-depth, for testing-in-depth of different layers of the system in ways that are appropriate and convenient.

The traditional sales pitch for a unit-test-heavy approach is lightweight, quick-to-evaluate, method-level unit tests as the foundation of the Testing Pyramid: build trust in your functions or methods, then build trust in your internal interfaces as you integrate components, then build trust in your external interfaces (user, API), and finally trust that you've installed it correctly with a few fail-fast smoke tests.

I'm reconsidering whether that's right -- given that the London School of test-driven design builds tests that reflect your design, whereas the Chicago School's approach evaluates every input to the system (or class of inputs that gets equivalent results) via its component pieces. Somewhere, an automated test framework comes to reflect the full system in a second-system kind of way -- which is clearly not an acknowledged part of in-depth automated testing.

K3n.

A kernel unit-testing framework

Posted Mar 3, 2019 16:45 UTC (Sun) by rgmoore (✭ supporter ✭, #75) [Link]

> Policies like this do little to improve code quality, and instead turn into a needless box-ticking exercise where people will write bullshit tests just to get a commit in

This seems like a cultural problem rather than a technical one. Demanding that tests be written but not caring about their quality is indeed a box-ticking exercise rather than a serious attempt at improving code quality. But that doesn't say it's wrong to demand tests; it just says that it's wrong to demand tests without treating those tests as seriously as the code they're testing. Tests need to be seen as an essential part of the coding process rather than an afterthought. Tests that are written half-heartedly don't cause the code they're testing to be brittle and inflexible; they are a symptom of the kind of coding practice that produces brittle and inflexible code.

A kernel unit-testing framework

Posted Mar 5, 2019 11:17 UTC (Tue) by jezuch (subscriber, #52988) [Link]

If your project has a robust code-review process, then bullshit tests will have a much harder time getting in. I require tests to be present for new code when I'm doing code review, mostly because it shows that the developer thought about the API: about which inputs are valid, which are invalid, and how the code should react to them (also: failure modes and error handling). I review the tests just as hard as the actual code (though it's often tempting to skip this part or make it less thorough).

And as others said, even bullshit tests have value sometimes.

