LWN: Comments on "Linux 5.12's very bad, double ungood day" https://lwn.net/Articles/848431/ This is a special feed containing comments posted to the individual LWN article titled "Linux 5.12's very bad, double ungood day". en-us Thu, 23 Oct 2025 21:45:05 +0000 Thu, 23 Oct 2025 21:45:05 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Linux 5.12's very bad, double ungood day https://lwn.net/Articles/855107/ https://lwn.net/Articles/855107/ pizza <div class="FormattedComment"> <font class="QuotedText">&gt; Not that it will stop folks complaining when &quot;5.32-alpha0-rc4-pre3&quot; fails to boot on their production system, obviously because it should have been tested first, and we need a pre-pre-pre-pre-pre release snapshot to start testing against.</font><br> <p> I saw this scroll by when I upgraded this system to Fedora 34:<br> <p> $ rpm -q icedtea-web<br> icedtea-web-2.0.0-pre.0.3.alpha16.patched1.fc34.3.x86_64<br> <p> <p> </div> Sun, 02 May 2021 02:58:13 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/849943/ https://lwn.net/Articles/849943/ pabs <div class="FormattedComment"> While GNU ed is actively developed, it seems unlikely it will ever support LSP :)<br> </div> Sun, 21 Mar 2021 05:34:33 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/849939/ https://lwn.net/Articles/849939/ Hi-Angel <div class="FormattedComment"> For C these days part of such refactoring could be done with LSP-servers, like clangd for example. Thankfully, all actively developed editors and IDEs support LSP servers, whether natively or through a plugin.<br> </div> Sat, 20 Mar 2021 23:50:23 +0000 Automated tests https://lwn.net/Articles/849137/ https://lwn.net/Articles/849137/ thumperward <div class="FormattedComment"> For an integration test that would specifically have caught this issue, sure. But given the assumption that swap_page_sector() could be called on a swap file, a unit test that called swap_page_sector() on a swap file with a given input and verify that the file contained the right bytes in the right order afterwards is something that could well have existed and caught said bug before the refectoring inadvertently exposed it.<br> </div> Thu, 11 Mar 2021 19:56:10 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/849110/ https://lwn.net/Articles/849110/ mathstuf <div class="FormattedComment"> Another instance I saw today was that a version of the patch that went in was never on the list (a whitespace fix). While such a tiny change is probably OK in practice, I vastly prefer having a bot attached to the service that stashes away every version of the patchset for posterity.<br> </div> Thu, 11 Mar 2021 16:04:54 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/849052/ https://lwn.net/Articles/849052/ pbonzini <div class="FormattedComment"> You weren&#x27;t supposed to do that in production though. Also, answering &quot;sorry I can&#x27;t&quot; is perfectly valid. 
:)<br> </div> Thu, 11 Mar 2021 08:43:24 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/849028/ https://lwn.net/Articles/849028/ Wol <div class="FormattedComment"> <font class="QuotedText">&gt; I don’t think I’ve built a kernel in 10 years, or maybe that one time 7-8 years ago.</font><br> <p> You clearly don&#x27;t run gentoo :-)<br> <p> Cheers,<br> Wol<br> </div> Wed, 10 Mar 2021 23:20:50 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/849026/ https://lwn.net/Articles/849026/ roc <div class="FormattedComment"> In the past, when reporting a bug in a release kernel, people have asked me to install some random kernel revision to see if the bug is still present with it. If we should never do that, people should stop asking for it.<br> </div> Wed, 10 Mar 2021 22:22:13 +0000 Hibernation https://lwn.net/Articles/849023/ https://lwn.net/Articles/849023/ corbet That work, and the use case behind it, were discussed in <a href="https://lwn.net/Articles/821158/">this OSPM article</a> from last May. Wed, 10 Mar 2021 22:18:09 +0000 Automated tests https://lwn.net/Articles/849019/ https://lwn.net/Articles/849019/ sjj <div class="FormattedComment"> Interesting, I never thought about hibernation in AWS. I haven’t thought about hibernation in years, since it was the unreliable thing you had to do on laptops of the day.<br> <p> Curious what the use case for it in AWS is. <br> </div> Wed, 10 Mar 2021 22:12:01 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/849018/ https://lwn.net/Articles/849018/ sjj <div class="FormattedComment"> How many people run non-distro kernels these days, especially in production? If you do that with an rc kernel, you certainly deserve whatever pieces are left of your data.<br> <p> I don’t think I’ve built a kernel in 10 years, or maybe that one time 7-8 years ago.<br> </div> Wed, 10 Mar 2021 21:58:19 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848917/ https://lwn.net/Articles/848917/ error27 <div class="FormattedComment"> Yeah. If you make your patch with Coccinelle then we normally encourage you to post the script (unless it&#x27;s one of the ones that ship with the kernel).<br> </div> Wed, 10 Mar 2021 08:10:24 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848915/ https://lwn.net/Articles/848915/ epa <div class="FormattedComment"> Another approach is when you use an automated tool to make the change in the first place. I mostly develop in C# and I use the proprietary tool Resharper to rename variables, inline methods, convert for-loops to foreach-loops, and a few more exotic transformations like &quot;extract method&quot; (take a chunk of code and move it to its own method, passing in and out the variables it uses). In this case the commit message could include full details of the transformation you applied, in machine-readable format. Then to verify the patch series you run the same transformation and check the output code is the same. (That would be an additional verification step; it checks you just ran the &quot;convert for to foreach&quot; automated refactoring and didn&#x27;t accidentally introduce other changes, but it doesn&#x27;t check that the refactoring tool itself is correct. So some kind of bytecode check would also be useful.)<br> <p> Such a machine-readable record of the refactoring change would also be handy when rebasing. 
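<p> In the kernel context, the verification step described above could be as simple as re-running a recorded Coccinelle semantic patch (the tool error27 mentions elsewhere in this thread) and checking that the tree comes out identical; the .cocci file name and directory below are hypothetical:<br>
<p> # Rebuild the parent tree, re-apply the recorded transformation,<br>
# and require the result to match the refactoring commit exactly.<br>
git checkout HEAD~1 -- .<br>
spatch --sp-file rename-foo-to-bar.cocci --in-place --dir drivers/block/<br>
git diff --exit-code HEAD<br>
<p>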
Instead of hitting a merge conflict on the &#x27;renamed variable&#x27; commit, you could reapply that change using the refactoring tool. As long as both the old commit and the new one are pure refactorings (logical bytecode doesn&#x27;t change), this can be done automatically as part of the rebase process, leaving the human programmer to concentrate on the conflicts that aren&#x27;t trivial.<br> </div> Wed, 10 Mar 2021 07:23:40 +0000 Automated tests https://lwn.net/Articles/848914/ https://lwn.net/Articles/848914/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; You&#x27;d need to set up a machine with a swap file, </font><br> <p> As this is an apparently common configuration, you&#x27;d expect anyone modifying swap code to grant that configuration some reasonable amount of test time.<br> <p> <font class="QuotedText">&gt; drive it into memory pressure with a lot of dirty anonymous pages,</font><br> <p> Also called &quot;swapping&quot;?<br> <p> <font class="QuotedText">&gt; then somehow verify that none of the swap traffic went astray.</font><br> <p> Unless you&#x27;re re-installing your entire system every few tests, there&#x27;s a good chance you will soon notice something somewhere has gone terribly wrong even when not verifying every byte on the disk. This is apparently how the bug was found and relatively quickly by people not even testing swap but other things. The perfect that does not get done is the enemy of the good that does and this is especially true with chronically underdeveloped validation.<br> <p> </div> Wed, 10 Mar 2021 06:42:43 +0000 Automated tests https://lwn.net/Articles/848908/ https://lwn.net/Articles/848908/ Cyberax <div class="FormattedComment"> <font class="QuotedText">&gt; One could certainly write an automated test to catch this, but it would not be easy. </font><br> I was one of people responsible for getting EC2 instances to hibernate. We used files for hibernation and actually found that the kernel had been broken for YEARS with file hibernation (it required a reboot for the hibernation target setting to take effect).<br> <p> We also had a test for this very issue. It created an EC2 instance with a small disk (~2Gb) and limited RAM (512Mb). The test program created a swap file and then filled the disk to capacity with pseudo-random numbers (by creating a file and writing to it). It then allocated enough pseudo-random data to swap out at least some of it. <br> <p> Then hibernate, thaw, and checksum the disk and the data in RAM to check for corruption.<br> <p> The tests ran in about 2 minutes.<br> </div> Wed, 10 Mar 2021 02:52:00 +0000 Automated tests https://lwn.net/Articles/848907/ https://lwn.net/Articles/848907/ roc <div class="FormattedComment"> You could make it run pretty fast by having the test generate a virtual machine image that is just big enough, i.e. a minimal amount of memory and a minimal-sized block device. Lots of tests could potentially benefit from this.<br> <p> You&#x27;d have to write block device verification code to check the free space and the contents of all files, but that code could be useful for detecting all kinds of bugs.<br> <p> One thing about automated testing is that once you bite the bullet and start creating infrastructure for things that look hard to test, you make it easier to test all kinds of things and people are much more willing to write tests for all kinds of things as part of their normal development. 
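<p> A minimal sketch of what such infrastructure can start as, assuming the test boots a freshly built kernel under QEMU/KVM with a deliberately small memory and disk footprint (the image name, the initramfs, and the sizes are illustrative):<br>
<p> qemu-img create -f qcow2 swaptest.img 2G<br>
# The initramfs contains the actual test and reports its verdict on the serial console.<br>
qemu-system-x86_64 -enable-kvm -m 512M -nographic \<br>
    -kernel arch/x86/boot/bzImage \<br>
    -initrd test-initramfs.cpio.gz \<br>
    -drive file=swaptest.img,if=virtio,format=qcow2 \<br>
    -append console=ttyS0 \<br>
    -no-reboot<br>
<p>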
So the sooner you create such infrastructure, the better.<br> </div> Wed, 10 Mar 2021 02:49:39 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848906/ https://lwn.net/Articles/848906/ roc <div class="FormattedComment"> Many projects run tests on every PR before merging. It&#x27;s difficult to get this to scale, but not impossible.<br> </div> Wed, 10 Mar 2021 02:43:35 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848904/ https://lwn.net/Articles/848904/ roc <div class="FormattedComment"> Projects that are serious about testing run all their automated tests as often as they can. The sooner you detect a regression the easier it is to bisect, debug, and fix, with less impact on other developers and users.<br> <p> In practice, large projects often try to maximise bang-for-the-buck by dividing tests into tiers, e.g. tier 1 tests run on every push, tier 2 every day, maybe a tier 3 that runs less often. Many projects use heuristics or machine learning to choose which tests to run in each run of tier 1.<br> <p> Yes, I understand that it&#x27;s difficult to thoroughly test weird hardware and configuration combinations. Ideally organizations that produce hardware with Linux support would contribute testing on that hardware. But even if we ignore all those bugs, there are still lots of core kernel bugs not being caught by kernel CI.<br> </div> Wed, 10 Mar 2021 02:34:53 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848897/ https://lwn.net/Articles/848897/ NYKevin <div class="FormattedComment"> This is (hypothetically) a test. It doesn&#x27;t need to be a reasonable FS in the first place. Tell the swapfile code that the file begins at such-and-such offset and occupies such-and-such size, and fill the rest of the image with 0xDEADBEEF or whatever sentinel value you like. I&#x27;m sure they could make an extremely stupid filesystem that works that way internally (and just returns EROFS or whatever if the user tries to create files or do other things it doesn&#x27;t like), so you don&#x27;t even need to properly mock out any of the FS code.<br> </div> Wed, 10 Mar 2021 01:47:39 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848889/ https://lwn.net/Articles/848889/ dbnichol <div class="FormattedComment"> Why not test linus master HEAD prior to tagging? It&#x27;s not like he pulls all the requests and tags the release in one giant push. Testing a tagged release makes sense as that&#x27;s what people are going to use, but a CI system could test HEAD constantly. I think that&#x27;s what basically every other project does - test what&#x27;s on the branch before you tag it.<br> </div> Tue, 09 Mar 2021 22:44:33 +0000 Automated tests https://lwn.net/Articles/848886/ https://lwn.net/Articles/848886/ corbet One could certainly write an automated test to catch this, but it would not be easy. You'd need to set up a machine with a swap file, drive it into memory pressure with a lot of dirty anonymous pages, then somehow verify that none of the swap traffic went astray. That means comparing the entire block device (including free space) with what you expect it to be, or mapping out the swap file, picking the swap traffic out of a blktrace stream, and ensuring that each page goes to the right place. <p> Certainly doable, but this would not be a fast-running test. 
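<p> For what it&#x27;s worth, a throwaway-VM version of that check could start out quite small. The sketch below follows the &quot;compare what ended up on disk&quot; approach described above; the spare disk at /dev/vdb, the sizes, and the use of stress-ng to generate memory pressure are assumptions rather than pieces of any existing test suite.
<p> #!/bin/sh -e<br>
DISK=/dev/vdb<br>
MNT=/mnt/swaptest<br>
mkfs.ext4 -F $DISK<br>
mkdir -p $MNT<br>
mount $DISK $MNT<br>
# Create and enable the swap file first, so its block mapping is fixed.<br>
fallocate -l 512M $MNT/swapfile<br>
chmod 600 $MNT/swapfile<br>
mkswap $MNT/swapfile<br>
swapon $MNT/swapfile<br>
# Fill the rest of the filesystem with known data, so that swap I/O<br>
# landing outside the swap file has something detectable to corrupt.<br>
dd if=/dev/urandom of=$MNT/canary bs=1M || true<br>
sha256sum $MNT/canary &gt; /tmp/canary.sha256<br>
# Oversubscribe RAM with dirty anonymous pages to force swapping.<br>
stress-ng --vm 4 --vm-bytes 75% --timeout 120s<br>
swapoff $MNT/swapfile<br>
# A stray write shows up as a canary mismatch or an fsck complaint.<br>
sha256sum -c /tmp/canary.sha256<br>
umount $MNT<br>
fsck.ext4 -f -n $DISK<br>
<p> Checking a canary file and then letting fsck look over the metadata is cruder than picking the swap traffic out of a blktrace stream, but it is cheap to script and it fails loudly when swap I/O lands anywhere other than the swap file.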
Tue, 09 Mar 2021 21:33:49 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848884/ https://lwn.net/Articles/848884/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; I was a bit disappointed that the (otherwise excellent as usual) article doesn&#x27;t discuss the possibility of automated tests catching buggy changes like this before they are merged anywhere. The kernel is behind current best practices there.</font><br> <p> I think the gap in the article hides a much more worrying gap. There&#x27;s a long and interesting discussion in the middle of the comments about test setups and workflows with not a single link to .... test code. Is there any?<br> </div> Tue, 09 Mar 2021 21:11:28 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848882/ https://lwn.net/Articles/848882/ error27 <div class="FormattedComment"> I use my rename_rev.pl script to review renamed the variable patches.<br> <p> <a href="https://github.com/error27/rename_rev">https://github.com/error27/rename_rev</a><br> <p> It also has a -r NULL mode to review changes like &quot;if (foo != NULL) {&quot; --&gt; &quot;if (foo) {&quot; because sometimes people get those transitions inverted. A bunch of little tricks like that.<br> <p> I sometimes think like you do about ways to do something similar at the bytecode level or with static analysis but I can&#x27;t think how that would work...<br> </div> Tue, 09 Mar 2021 21:01:49 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848876/ https://lwn.net/Articles/848876/ marcH <div class="FormattedComment"> <font class="QuotedText">&gt; Unfortunately patchwork is disconnected from the target repository. That&#x27;s why Gerrit is better, for example.</font><br> <p> patchwork tries to find code submissions not addressed to it directly. All other code review solutions require everything to be sent to them directly, they act as gateways. That&#x27;s the very simple reason why they have all the information and all work better.<br> <p> The icing on the cake is of course recipient-controlled email notifications instead of sender-control notifications (a.k.a... &quot;spam&quot;). In other words people routinely subscribe and unsubscribe to/from individual code reviews; not possible with a crude mailing list.<br> <p> None of this is rocket science, it&#x27;s all very easy to see and understand except when obsessing about the &quot;email versus web&quot; user interface questions; these debates are not &quot;wrong&quot; but they hide the much more important backend and database design issues.<br> <p> You could totally have a gateway type code review tool entirely driven by email. In fact requesting all submissions, review comments and approvals or rejections to be sent (by email) to patchwork directly and putting patchwork in better control of the git repo would get at least half-way there.<br> <p> </div> Tue, 09 Mar 2021 20:29:33 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848875/ https://lwn.net/Articles/848875/ mathstuf <div class="FormattedComment"> Crazy idea: require kernel.testing.alpha=5.20.0-alpha1 on the cmdline to boot such an alpha kernel. 
Reject such an option in non-alpha kernels (including development kernels; the tagged release would require this code and not otherwise).<br> <p> But this kind of one-off code is annoying to test itself and someone will script adding it to their boot command lines anyways.<br> </div> Tue, 09 Mar 2021 19:55:11 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848871/ https://lwn.net/Articles/848871/ NYKevin <div class="FormattedComment"> I tend to imagine you can get 90% of the way there with gcc -S -O0 -finline-functions -findirect-inlining and some combination of the max-inline tuning parameters (whose documentation I&#x27;m finding a little hard to follow). But that won&#x27;t cross translation units because it doesn&#x27;t link. You&#x27;d need some kind of LTO optionally followed by a disassembly step to get the other 10%.<br> </div> Tue, 09 Mar 2021 19:09:43 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848848/ https://lwn.net/Articles/848848/ Wol <div class="FormattedComment"> And how many people will ignore that (or be unaware) and load alphaN on their production server anyway?<br> <p> Horse to water and all that ...<br> <p> Cheers,<br> Wol<br> </div> Tue, 09 Mar 2021 17:47:19 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848845/ https://lwn.net/Articles/848845/ daenzer <div class="FormattedComment"> Yeah, user-mode-linux should have sufficed in this case.<br> <p> And there are CI pipelines on <a href="https://gitlab.freedesktop.org/">https://gitlab.freedesktop.org/</a> running VMs via KVM, which should allow booting kernels built as part of the pipeline as well. (Another possibility is having dedicated test machines which can be powered on/off remotely; Mesa is using a number of those for testing GPU drivers)<br> </div> Tue, 09 Mar 2021 17:32:36 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848844/ https://lwn.net/Articles/848844/ mathstuf <div class="FormattedComment"> Here&#x27;s an idea:<br> <p> - initialize a swapfile<br> - do things with it<br> - copy it to a new, fresh FS<br> - restore from it<br> <p> That at least gets the &quot;everything was written to the swapfile&quot; (rather than willy-nilly across the hosting FS). Doesn&#x27;t guarantee that *extra* writes didn&#x27;t go out. Some form of writing a given bitpattern to the entire FS and then ensuring that post-FS init and swap file creation, nothing else changes (modulo mtime/inode updates) might suffice? I have no idea if such a &quot;simple&quot; FS exists though. Or create a single file which takes up the remaining FS space with a given bitpattern reads properly after using the swapfile. Wrecking any data or metadata would presumably be detectable then, no?<br> </div> Tue, 09 Mar 2021 16:43:07 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848841/ https://lwn.net/Articles/848841/ andy_shev <div class="FormattedComment"> Unfortunately patchwork is disconnected from the target repository. 
That&#x27;s why Gerrit is better, for example.<br> </div> Tue, 09 Mar 2021 16:12:33 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848840/ https://lwn.net/Articles/848840/ pabs <div class="FormattedComment"> You could run user-mode-linux, or possibly kvm if the platform has nested VMs.<br> </div> Tue, 09 Mar 2021 16:12:01 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848795/ https://lwn.net/Articles/848795/ pizza <div class="FormattedComment"> <font class="QuotedText">&gt; That&#x27;s why we linux-next, which is used for lots of automated integration tests</font><br> <font class="QuotedText">&gt; Three weeks passed between the buggy commit entering linux-next and upstream.</font><br> <p> So the &quot;problem&quot; here isn&#x27;t that nothing was being tested, it&#x27;s just that none of the tests run during this interval window caught this particular issue. It&#x27;s also not clear that there was even a test out there that could have caught this, except by pure happenstance.<br> <p> But that&#x27;s the reality of software work; a bug turns up, write a test to catch it (and hopefully others of the same class), add it to the test suite (which runs as often as your available resources allow) .... and repeat endlessly.<br> </div> Tue, 09 Mar 2021 15:35:01 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848796/ https://lwn.net/Articles/848796/ Cyberax <div class="FormattedComment"> Kinda. You&#x27;ll have alphaN that is released before rcN, and the distinction is that alphaN is meant only for automatic testing on throwaway hardware.<br> </div> Tue, 09 Mar 2021 15:26:52 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848794/ https://lwn.net/Articles/848794/ pizza <div class="FormattedComment"> ... okay, so rename &quot;rc1&quot; to &quot;alpha1&quot; , &quot;rc2&quot; to &quot;alpha2&quot; and so forth. Problem solved?<br> <p> Not that it will stop folks complaining when &quot;5.32-alpha0-rc4-pre3&quot; fails to boot on their production system, obviously because it should have been tested first, and we need a pre-pre-pre-pre-pre release snapshot to start testing against.<br> <p> <p> <p> <p> </div> Tue, 09 Mar 2021 15:25:15 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848793/ https://lwn.net/Articles/848793/ geert <div class="FormattedComment"> That&#x27;s why we linux-next, which is used for lots of automated integration tests<br> <p> $ git tag --contains 48d15436fde6<br> next-20210128<br> next-20210129<br> next-20210201<br> next-20210202<br> next-20210203<br> next-20210204<br> next-20210205<br> next-20210208<br> next-20210209<br> next-20210210<br> next-20210211<br> next-20210212<br> next-20210215<br> next-20210216<br> next-20210217<br> next-20210218<br> next-20210219<br> next-20210222<br> next-20210223<br> next-20210224<br> next-20210225<br> next-20210226<br> next-20210301<br> next-20210302<br> next-20210303<br> next-20210304<br> next-20210305<br> next-20210309<br> v5.12-rc1<br> v5.12-rc1-dontuse<br> v5.12-rc2<br> <p> Three weeks passed between the buggy commit entering linux-next and upstream.<br> </div> Tue, 09 Mar 2021 15:22:49 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848792/ https://lwn.net/Articles/848792/ Cyberax <div class="FormattedComment"> Linus can issue a pre-RC (alpha1?) 
to give time to run the tests, a day before the actual RC.<br> </div> Tue, 09 Mar 2021 15:12:28 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848791/ https://lwn.net/Articles/848791/ pbonzini <div class="FormattedComment"> There&#x27;s already CKI, KernelCI and more. However testing kernels is not that easy. For example for obvious reasons you cannot install a kernel from a bog standard gitlab pipeline.<br> </div> Tue, 09 Mar 2021 14:54:48 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848790/ https://lwn.net/Articles/848790/ daenzer <div class="FormattedComment"> Like other commenters, I was a bit disappointed that the (otherwise excellent as usual) article doesn&#x27;t discuss the possibility of automated tests catching buggy changes like this before they are merged anywhere. The kernel is behind current best practices there.<br> <p> E.g. it shouldn&#x27;t be hard to hook up tests to a GitLab CI pipeline which can catch this bug (and more), and only allow merging changes which pass the tests.<br> </div> Tue, 09 Mar 2021 14:29:23 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848788/ https://lwn.net/Articles/848788/ epa <div class="FormattedComment"> I guess if the test setup uses a checksummed filesystem then you verify the filesystem after the tests have completed and if it&#x27;s corrupted, that&#x27;s a failure. If the filesystem doesn&#x27;t have per-file checksums, you can at least do the usual fsck stuff to check metadata. (For testing it would be handy to have a filesystem mode that always zeroes out unused pages, so that a thorough fsck can later check that all unused pages are zero.)<br> </div> Tue, 09 Mar 2021 14:11:16 +0000 Linux 5.12's very bad, double ungood day https://lwn.net/Articles/848786/ https://lwn.net/Articles/848786/ pizza <div class="FormattedComment"> <font class="QuotedText">&gt; Well then, like I said above: tests need to be run before the RC releases.</font><br> <p> Okay, so... when exactly?<br> <p> There are 10K commits (give or take a couple thousand) that land in every -rc1. Indeed, until -rc1 lands, nobody can really be sure if a given pull request (or even a specific patch) will get accepted. This is why nearly all upstream tooling treats &quot;-rc1&quot; as the &quot;time to start looking for regressions&quot; inflection point [1], and they spend the next *two months* fixing whatever comes up. This has been the established process for over a decade now.<br> <p> So what if there was a (nasty) bug that takes down a test rig? That&#x27;s what the test rigs are for! The only thing unusual about this bug is that it leads to silent corruption, to the point where &quot;testing&quot; in of itself wasn&#x27;t enough; the test would have had to been robust enough to ensure nothing unexpected was written anywhere to the disk. That&#x27;s a deceptively hairy testing scenario, arguably going well beyond the tests folks developing filesystems run.<br> <p> Note I&#x27;m not making excuses here; it is a nasty bug and clearly the tests that its developers ran was insufficient. 
But it is ridiculous to expect &quot;release-quality&quot; regression testing to be completed at the start of the designated testing period.<br> <p> [1] Indeed, many regressions are due to combinations of unrelated changes in a given -rc1; each of those 10K patches is fine in and of itself, but (eg) patch #3313 could lead to data loss, but only in combination with a specific kernel option being enabled, and only when run on a system containing an old 3Ware RAID controller and a specific motherboard with a PCI-X bridge that can&#x27;t pass through MSI interrupts due to how it was physically wired up. [2] [3]<br> <p> [2] It&#x27;s sitting about four feet away from me as I type this. <br> <p> [3] Kernel bugzilla #43074<br> </div> Tue, 09 Mar 2021 13:27:06 +0000