LWN: Comments on "Bcachefs goes to "externally maintained"" https://lwn.net/Articles/1035736/ This is a special feed containing comments posted to the individual LWN article titled "Bcachefs goes to "externally maintained"". en-us Tue, 28 Oct 2025 10:36:10 +0000 Tue, 28 Oct 2025 10:36:10 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Debian https://lwn.net/Articles/1039662/ https://lwn.net/Articles/1039662/ taladar <div class="FormattedComment"> Because that way future versions of the same package will also be installed from the repository you originally installed it from, while the -t option is a one-time thing.<br> </div> Fri, 26 Sep 2025 07:52:02 +0000 Debian https://lwn.net/Articles/1039439/ https://lwn.net/Articles/1039439/ daenzer <div class="FormattedComment"> <span class="QuotedText">&gt; Can't you just change the priority with apt pins?</span><br> <p> You can, offhand I'm not sure why that would be preferable though.<br> </div> Thu, 25 Sep 2025 09:12:58 +0000 Debian https://lwn.net/Articles/1039436/ https://lwn.net/Articles/1039436/ taladar <div class="FormattedComment"> Can't you just change the priority with apt pins? Or is experimental treated differently from any other repository?<br> </div> Thu, 25 Sep 2025 08:56:18 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1039428/ https://lwn.net/Articles/1039428/ daenzer <div class="FormattedComment"> If upstream allows co-installation of multiple versions, it can be done with Debian packages as well. 
It's done all the time, normally by adding a version suffix to the package name.<br> <p> Obviously, this involves more maintenance effort compared to a single version, so there's a trade-off.<br> </div> Thu, 25 Sep 2025 07:36:21 +0000 Debian https://lwn.net/Articles/1039425/ https://lwn.net/Articles/1039425/ daenzer <div class="FormattedComment"> <span class="QuotedText">&gt; There's a separate experimental repository that's an extension to unstable rather than a complete distribution in itself, and which doesn't trigger any sort of migration. It's also weird in that even explicitly enabling it by adding a source won't allow you to install packages from it - you need an explicit "apt -t experimental" statement to pull from there.</span><br> <p> To be pedantic, if a package is available only in experimental, apt selects it for installation even without -t experimental (or the /experimental suffix). However, -t experimental is still needed if the package depends on a version only in experimental of a package also available in another suite.<br> </div> Thu, 25 Sep 2025 07:33:08 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1038088/ https://lwn.net/Articles/1038088/ paulj <div class="FormattedComment"> <span class="QuotedText">&gt; The real breakdown was in the private maintainer thread, when Linus had quite a bit to say about how he doesn't trust my judgement based on, as far as I can tell, not much more than the speed with which I work and get stuff out. That speed is a direct result of very good QA (including the best automated testing of any filesystem in the kernel), a modern and very hardened codebase, and the simple fact that I know my code like the back of my hand and am very good at what I do.</span><br> <p> Kent, do you realise the implicit message you are sending to other kernel people when you write things like this? 
You are somewhat implicitly saying that the kernel development process is generally much slower than your process, cause others do not have good code, don't have good testing, and they don't know the code well. <br> <p> I am sure that's not how you intend it, but this is the kind of message you send to others when you blow your own trumpet in such ways in comms to peers and to longer standing kernel people - whether you are explicit or subtle in it. You are signalling that you consider yourself superior in such descriptions and ALSO implicitly in how you argue for exceptions again and again, even when maintainers with the final say have told you you will not get an exception at this time, particularly if you then point at other exceptional cases that you think you are better than.<br> <p> Can you understand how this might rub others up the wrong way? Have you ever had to work with someone who regularly, through whatever implicit signals, makes it clear they think they are superior? Do you know how off-putting that can be to others?<br> <p> I beseech you, yet again, to take a long break from engaging in comment threads here on LWN, or on Phoronix, or Reddit, etc., and also take a break from engaging with other kernel devs, and just go and focus on your code and making it great for your users. Refrain from making comparisons to other developers or their code or engineering practices - in any way, however subtle. <br> <p> Do that, make bcachefs undisputably awesome, let your code do the talking, and things will eventually come good again.<br> <p> If you can't stay off comment threads, where you seem to - regularly or irregularly - drop misjudged clangers about how good you think you are, then the chances of things coming good aren't as good I fear.<br> </div> Mon, 15 Sep 2025 11:12:33 +0000 So what exactly *is* in the cards, then? 
https://lwn.net/Articles/1037578/ https://lwn.net/Articles/1037578/ deepfire <div class="FormattedComment"> One person speaks about technical details and impersonal principles of communication and organisation.<br> <p> The other goes as far as employing mind reading and generally positions themselves as a judge of character.<br> <p> Someone clearly needs to get off the high horse.<br> </div> Thu, 11 Sep 2025 01:06:44 +0000 Debian https://lwn.net/Articles/1037180/ https://lwn.net/Articles/1037180/ daniels <div class="FormattedComment"> <span class="QuotedText">&gt; the public conversations help with that</span><br> <p> How would you say that’s going?<br> </div> Mon, 08 Sep 2025 19:50:55 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1037014/ https://lwn.net/Articles/1037014/ marcH <div class="FormattedComment"> <span class="QuotedText">&gt; Try git log v6.16-rc1..v6.16 -- fs/xfs</span><br> <p> Please be specific; I just did and I found nothing shocking. The commits with "refactor" or "factor" in their name seemed very trivial, even I could make sense of them.<br> <p> <span class="QuotedText">&gt; There was ~0 risk of regression with the patch in question.</span><br> <p> I was speaking in general, not about any particular patch in question. I don't even know which patch you're referring to.<br> <p> <span class="QuotedText">&gt; No, you've got it backwards. The experimental label is for communication to users, it's not for driving development policy.</span><br> <p> I think you missed the point I was trying to make. I'm not sure you really tried.<br> <p> <span class="QuotedText">&gt; But one of the key things we balance in "fast vs. safe" is regression risk, and that does vary over the lifecycle of a project. </span><br> <p> Yet another wall of text full of things that make sense and that I tend to agree with, but I really can't relate much of it with the points I was trying to make. This is not communicating, just speaking. 
And I'm amazed you have time left to write code after digressing and repeating yourself so much in obscure corners like this one. Indeed, burnout must not be far away. Unless there's a lot of copy/paste?<br> <p> <span class="QuotedText">&gt; where we all learn from and teach each other,</span><br> <p> I have not read everything, very far from it, but I don't remember you "learning" much. Could you name one significant and non-technical thing that you've learned during all this drama and will try to do differently going forwards? Trying to be absurdly clear: an answer to such a question (if any) should not say _anything_ about others, only about yourself.<br> </div> Sun, 07 Sep 2025 03:04:54 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036990/ https://lwn.net/Articles/1036990/ koverstreet <div class="FormattedComment"> <span class="QuotedText">&gt; Surprising, can you please share some commit IDs?</span><br> <p> Try git log v6.16-rc1..v6.16 -- fs/xfs<br> <p> <span class="QuotedText">&gt; The most important points seem to be missing from that list: size and nature of the changes. For both risk and maintainer bandwidth reasons.</span><br> <p> <span class="QuotedText">&gt; If a "critical bug fix" has a non-negligible risk of regression, then either there's a clear divergence on the definition of a "critical bug fix", or the whole feature should be temporarily disabled (cause it has no bug fix simple enough for an RC phase). Or just filed and advertised, e.g. "don't use version X".</span><br> <p> There was ~0 risk of regression with the patch in question.<br> <p> bcachefs's journalling is drastically simpler than ext4's: we journal btree updates and nothing else - it's just a list of keys. For normal journal replay, we just sort all the keys in the journal and keep the newest when there's a duplicate. For journal_rewind, all we do is tweak the sort function if it's a non-alloc leaf node key. 
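<p> That "sort the keys, keep the newest duplicate" scheme can be sketched roughly as follows (illustrative Rust only, with invented types and names - not the actual bcachefs code; journal_rewind is modeled here as a hypothetical pre-filter dropping non-alloc keys newer than the rewind point):

```rust
use std::cmp::Reverse;

// Invented stand-in for a journalled btree update.
#[derive(Clone, Debug)]
struct JournalKey {
    btree_id: u32, // which btree the update targets
    pos: u64,      // key position within that btree
    seq: u64,      // journal sequence number of the update
    is_alloc: bool,
}

// Normal replay: sort each (btree, pos) group newest-first, then keep
// only the first entry of each group - i.e. the newest version wins.
fn replay(mut keys: Vec<JournalKey>) -> Vec<JournalKey> {
    keys.sort_by_key(|k| (k.btree_id, k.pos, Reverse(k.seq)));
    keys.dedup_by_key(|k| (k.btree_id, k.pos));
    keys
}

// Rewind: same replay, but non-alloc updates newer than the rewind
// point are dropped first; alloc info is left inconsistent, hence the
// forced fsck described in the comment.
fn replay_rewind(keys: Vec<JournalKey>, rewind_seq: u64) -> Vec<JournalKey> {
    replay(
        keys.into_iter()
            .filter(|k| k.is_alloc || k.seq <= rewind_seq)
            .collect(),
    )
}
```

<p> The point of the sketch is how small the behavioral difference is: rewind only changes which key survives the dedup, which is why it lives in the same well-tested replay path.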
(We can't rewind the interior node updates and we don't need to, which means alloc info will be inconsistent; that's fine, we just force a fsck).<br> <p> IOW: algorithmically this is very simple stuff, which means it's very testable, and it's in one of the codepaths best covered by automated tests - and it's all behind a new option, so it has zero effect on existing operation. This is about as low regression risk as it gets, and the new code has performed flawlessly every time we've used it.<br> <p> <span class="QuotedText">&gt; - Either a significant number of bcachefs people use Linus' mainline and trust it with their data. Then that branch is not really "experimental" any more (whatever the label says), and no large change should ever be submitted in the RC phase but only small, "critical bug fixes"</span><br> <p> No, you've got it backwards. The experimental label is for communication to users, it's not for driving development policy.<br> <p> We ALWAYS develop in the safest way we practically can, but we do have to balance that with shipping and getting it done. Getting too conservative about safety paralyzes the development process, and if we slow down to the point that we're not able to close out bugs users are hitting in a reasonable timeframe or ship features users need (an important consideration when e.g. we've got a lot of users waiting for erasure coding to land so they can get onto something more manageable, robust and better supported), then we're not doing it right.<br> <p> OTOH, there's generally no need to split hairs over this, because if you're doing things right, good techniques for ensuring reliability and avoiding regressions are just no brainers that let you both ship more reliable code and move faster: if you strike a good balance, most of the techniques you use are just plain win/win.<br> <p> E.g. good automated testing is a _massive_ productivity boost; you find bugs quicker (hours instead of weeks) while the code is in your head. 
Investing in that is a total no brainer. Switching from C to Rust is another obvious win/win (and god I wish bcachefs was already written in Rust).<br> <p> Work smarter, not harder.<br> <p> But one of the key things we balance in "fast vs. safe" is regression risk, and that does vary over the lifecycle of a project. Early on, you do need to move quicker: you have lots of bugs to close out, features that may require some rearchitecting, so accepting some risk of regression is totally fine and reasonable as long as those regressions are minor and infrequent compared to the rest of the bugs you're closing out (you want the total bugcount to be going down fast) and you're not creating problems for yourself down the road or your users: users will be fine with that as long as you're quickly closing out the actual issues they hit. I eyeball the ratio of regression fixes to other bugfixes (as well as time spent) to track this; suffice it to say regressions have not generally been a problem. (The two big ones that we were bit by in the 6.16 cycle were pretty exceptional and caused partly by factors outside of our control, and both were addressed on multiple levels - new hardening, new tests - to ensure bugs like that don't happen again).<br> <p> The other key thing you're missing is: it's a filesystem, and people test filesystems by using them and putting their data on them.<br> <p> It is _critical_ that we get lots of real world testing before lifting the experimental label, and that means people are going to be using it and trusting it like any other filesystem, and that means we have to be supporting it like any other filesystem. "No big changes" is far too simple a rule to ever work - experimental or not. Like I said earlier, you're always balancing regression risk vs. 
how much users need it, with the goal being ensuring that users have working machines.<br> <p> There's also the minor but important detail that lots of users are using bcachefs explicitly because they've been burned by another COW filesystem that will go unnamed (as in, losing entire filesystems multiple times), so they're using bcachefs because even at this still slightly rough and early state, the things they have to put up with are way better than losing more filesystems.<br> <p> That is, they're using bcachefs precisely because of things like this: when something breaks, I make sure it gets fixed and they get their data back. Ensuring users do not lose data is always the top priority. It's exactly the same as the kernel's rule about "do not break userspace". The kernel's only function is to run all the other applications that users actually want to run: if we're breaking them, we're failing at our core function. A filesystem that loses data is failing at its core function, and should be discarded for something better.<br> <p> <span class="QuotedText">&gt; That sounds like 20 years of filesystem experience and 0 year experience of not being the boss?</span><br> <p> Well, if everything comes down to authority and chains of command now then maybe kernel culture is too far gone for filesystem work to be done here. Because that's not good engineering: good engineering requires an inquisitive, open culture where we listen and defer to the experts in their field, where we all learn from and teach each other, and when there's a conflict or disagreement we hash it out and figure out what the right answer is based on our shared goals (i.e. delivering working code).<br> <p> <span class="QuotedText">&gt; Maintainers don't really have time to explain;</span><br> <p> That's a poor excuse for "I don't have time to be a good manager/engineer".<br> <p> In engineering, we always have to be able to justify our decisionmaking. 
I have to be able to explain what I'm doing and why to my users, or they won't trust my code. I have to be able to explain what I'm doing and why to the developers I work with on the bcachefs codebase, or they'll never learn how things work - plus, I do make mistakes, and if you can't explain your reasoning that's a very big clue that you might be mistaken.<br> </div> Sat, 06 Sep 2025 19:43:46 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036930/ https://lwn.net/Articles/1036930/ marcH <div class="FormattedComment"> <span class="QuotedText">&gt; I was actually legitimately surprised to see that it looks like I've been stricter with what I consider a critical bugfix than other subsystems. </span><br> <span class="QuotedText">&gt; ...</span><br> <span class="QuotedText">&gt; I even saw refactorings go in for XFS during rc6 or rc7 recently.</span><br> <p> Surprising, can you please share some commit IDs?<br> <p> <span class="QuotedText">&gt; It's normally based on just common sense and using good judgement, balancing how important a patch is to users vs. the risk of regression. </span><br> <p> The most important points seem to be missing from that list: size and nature of the changes. For both risk and maintainer bandwidth reasons. <br> <p> If a "critical bug fix" has a non-negligible risk of regression, then either there's a clear divergence on the definition of a "critical bug fix", or the whole feature should be temporarily disabled (cause it has no bug fix simple enough for an RC phase). Or just filed and advertised, e.g. "don't use version X".<br> <p> <span class="QuotedText">&gt; (While still in the experimental phase I do accept a slightly higher risk of (non serious!) 
regressions than I will post-experimental so that I can prioritize throughput of getting bugfixes out; that's why I was surprised.)</span><br> <p> I think I've been noticing a bit of dissonance on that "experimental" topic...<br> <p> - Either a significant number of bcachefs people use Linus' mainline and trust it with their data. Then that branch is not really "experimental" any more (whatever the label says), and no large change should ever be submitted in the RC phase but only small, "critical bug fixes"<br> - Or, it really is still "experimental", users should not trust that mainline branch, and then there is no emergency to fix problems in it! Because users shouldn't trust anyway. It's "experimental" after all.<br> <p> In BOTH cases, no large change should ever be submitted in the RC phase! I mean, in neither case is any time-consuming _process exception_ needed.<br> <p> <span class="QuotedText">&gt; I've been working in storage for going on 20 years at this point, and I've always been the one ultimately responsible for my code,...</span><br> <p> That sounds like 20 years of filesystem experience and 0 year experience of not being the boss?<br> <p> Learning is hard, unlearning is much harder. Unlearning complete control seems crazy hard.<br> <p> <span class="QuotedText">&gt; things break down when people start dictating and taking an "I know better, even though I'm not explaining my reasoning" attitude.</span><br> <p> Maintainers don't really have time to explain; the onus is on the submitter to make them understand and build trust. Whatever the perception is, using words such as "dictating" can only backfire. Looks like it does. Maybe the submitter does not communicate well and should try harder. Maybe the maintainer is not smart enough or does not have enough time. Then the submitter should fork (and maybe come back later). Maybe both sides have issues.<br> <p> </div> Fri, 05 Sep 2025 22:40:43 +0000 So what exactly *is* in the cards, then? 
https://lwn.net/Articles/1036920/ https://lwn.net/Articles/1036920/ koverstreet <div class="FormattedComment"> The kernel development process as it is normally applied would've been fine for bcachefs: like I mentioned elsewhere, I started perusing pull requests from other subsystems and I was actually legitimately surprised to see that it looks like I've been stricter with what I consider a critical bugfix than other subsystems. (While still in the experimental phase I do accept a slightly higher risk of (non serious!) regressions than I will post-experimental so that I can prioritize throughput of getting bugfixes out; that's why I was surprised.)<br> <p> Other subsystems will absolutely send features outside the merge window if there's a good reason for it; I even saw refactorings go in for XFS during rc6 or rc7 recently.<br> <p> It's normally based on just common sense and using good judgement, balancing how important a patch is to users vs. the risk of regression. That should take into account QA processes, history of regressions in that subsystem (which tells us how well those QA processes are working), how sensitive the code is, and how badly the patch is needed. And when there are concerns they're talked through; things break down when people start dictating and taking an "I know better, even though I'm not explaining my reasoning" attitude.<br> <p> The real breakdown was in the private maintainer thread, when Linus had quite a bit to say about how he doesn't trust my judgement based on, as far as I can tell, not much more than the speed with which I work and get stuff out. 
That speed is a direct result of very good QA (including the best automated testing of any filesystem in the kernel), a modern and very hardened codebase, and the simple fact that I know my code like the back of my hand and am very good at what I do.<br> <p> I've been working in storage for going on 20 years at this point, and I've always been the one ultimately responsible for my code, top to bottom, from high level design all the way down to responding to every last bug report and working with users to make sure that things are debugged and resolved thoroughly and people aren't left hanging. People are still running, and like and trust, code that manages their data that I wrote when I was 25, and there's a bunch of people who are getting their kernel from my git repository - and for a lot of people it's explicitly because they've lost data to our other in-kernel COW filesystem and needed something more reliable, and they have found that bcachefs delivers. I don't know anyone in the filesystem world with that kind of resume.<br> <p> <span class="QuotedText">&gt; This kind of implies that Linus will one day start accepting your bcachefs PRs again. Is it something that he confirmed to you? </span><br> <p> We both explicitly left the door open to that in the private maintainer thread, although on my end it will naturally be contingent upon having better processes and decisionmaking in place.<br> </div> Fri, 05 Sep 2025 19:16:39 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036911/ https://lwn.net/Articles/1036911/ mmechri <div class="FormattedComment"> @Kent: You’ve made it clear that you believe the kernel development rules/processes are inadequate for bcachefs. That’s your prerogative. But surely, given how long you’ve been around, you knew that long before submitting bcachefs for mainline. Given this, why did you submit it for mainline at all? Did you expect that bcachefs would be exempted from following those rules/processes? 
This isn’t a rhetorical question, I’m genuinely trying to understand your thought process.<br> <p> <span class="QuotedText">&gt; So now, it probably won't go back upstream until it's well and truly finished</span><br> <p> This kind of implies that Linus will one day start accepting your bcachefs PRs again. Is it something that he confirmed to you? <br> </div> Fri, 05 Sep 2025 16:46:06 +0000 In defense of Debian https://lwn.net/Articles/1036680/ https://lwn.net/Articles/1036680/ koverstreet <div class="FormattedComment"> I've yet to hear how swapping out Rust dependencies makes things better for the end user...<br> </div> Thu, 04 Sep 2025 12:40:50 +0000 In defense of Debian https://lwn.net/Articles/1036666/ https://lwn.net/Articles/1036666/ taladar <div class="FormattedComment"> What you are describing are not unit tests, they are tests of any kind that live in the same repository.<br> </div> Thu, 04 Sep 2025 10:00:36 +0000 In defense of Debian https://lwn.net/Articles/1036633/ https://lwn.net/Articles/1036633/ vasvir <div class="FormattedComment"> This is not exactly what I said.<br> <p> Debian has a vast repository where the unstable fork mostly works for most of the packages and only a few are caught in the update drama we are discussing.<br> <p> I said that I prefer it that way because in the end it leads to some consensus even if there are conflicts in some (rare) cases.<br> <p> Debian unstable was a very workable system for me for the last 20 years.<br> <p> As I said I prefer it that way and stand behind Debian decisions because they are both pragmatic and idealistic, driving to a better system as I see it as a user.<br> <p> Of course if Debian unstable was mostly unworkable because it was caught in a vicious update cycle for many critical packages with buggy or obsolete versions of known software then I would sing a different tune.<br> <p> But for me they have nailed it, striking the perfect balance.<br> <p> <p> </div> Wed, 03 Sep 2025 18:38:56 +0000 
In defense of Debian https://lwn.net/Articles/1036626/ https://lwn.net/Articles/1036626/ koverstreet <div class="FormattedComment"> Ok, maybe we were speaking past each other a bit. But the integration tests for bcachefs live in an entirely separate repository [1], as is typical for filesystems.<br> <p> [1]: <a href="https://evilpiepirate.org/git/ktest.git/">https://evilpiepirate.org/git/ktest.git/</a><br> </div> Wed, 03 Sep 2025 17:21:00 +0000 In defense of Debian https://lwn.net/Articles/1036617/ https://lwn.net/Articles/1036617/ zdzichu <div class="FormattedComment"> Yet unit tests are crucial. The Debian maintainer changes a Rust dependency, runs cargo test and sees NO tests failing. Because there are no bcachefs-tools tests present, the maintainer has no signal that the changed dependency possibly broke something.<br> <p> BTW, as a happy bcache user, I'm salty that you've hijacked the name for an unrelated filesystem. Could you rename the FS?<br> </div> Wed, 03 Sep 2025 15:05:36 +0000 In defense of Debian https://lwn.net/Articles/1036570/ https://lwn.net/Articles/1036570/ farnz Your first sentence says that Rust-style integration tests are "just impractical for a filesystem". The rest of your comment describes Rust-style integration tests. <p>Which is it? Is the Rust style, where integration tests are run in a harness with layers and layers of setup, practical for a filesystem, or not? Wed, 03 Sep 2025 13:14:34 +0000 In defense of Debian https://lwn.net/Articles/1036566/ https://lwn.net/Articles/1036566/ koverstreet <div class="FormattedComment"> That approach to testing is just impractical for a filesystem, where basically everything has to run in a harness with layers and layers of setup. <br> <p> That's why I favor functional testing over unit testing. 
Yes, unit test failures are easier to debug - but a good test harness that localizes test failures to the line of code of the test that failed, plus making sure your system has good error logging (when the system breaks, the system should tell you everything you need to know about what broke) helps a great deal.<br> </div> Wed, 03 Sep 2025 13:03:25 +0000 In defense of Debian https://lwn.net/Articles/1036524/ https://lwn.net/Articles/1036524/ koverstreet <div class="FormattedComment"> There are many ways to test things :)<br> <p> I've got automated testing that is primarily meant to test the kernel side of things, but because it tests single device mode, multi device mode, encryption, degraded mounts, device add, remove, evacuate, etc. - it covers all the critical functionality of bcachefs tools. <br> <p> We also have a bunch of people that run HEAD of my bcachefs kernel and tools branches and update frequently; when there's something big going on I ping people and ask them to test (and tell them what to watch for).<br> <p> I just favor functional tests over unit tests.<br> </div> Wed, 03 Sep 2025 12:58:17 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036514/ https://lwn.net/Articles/1036514/ paravoid <div class="FormattedComment"> No conversation about velocity ever took place. Disagreements about velocity, distro policies or whatever, were not the reasons the package was orphaned, not adopted to this day, and dropped from unstable. Kent's hostility and inability to work in the collaborative environment that open source is, is. I hope that's evident just by looking at this LWN page when he is still talking about "screwups" to this day, and his refusal to apologize for conduct that is clearly hostile and unacceptable in our communities.<br> <p> e2fsprogs' maintainer in Debian is, and has been for the past 20+ years, Theodore Ts'o, also ext4 upstream. 
"they had a package maintainer who was willing to slow down and do the required legwork [...] and take into account the upstream needs" is not untrue, but kind of weird way to put this, so I guess this is all just speculation on Kent's part without knowledge of the actual facts (despite the confidence with which it was claimed).<br> <p> Generally speaking, it's been hard to keep up with this thread and try to fact-check claims that are... a creative approach to the truth, to say the least. "Alternative" facts and shifting the focus of the conversation elsewhere (e.g. getting kicked off Debian in a thread about getting kicked off the kernel) are well-known ways to exhaust everyone else you've ever disagreed with, leaving you alone to present your own version of the truth. I'd encourage everyone to approach with caution before forming an opinion.<br> </div> Wed, 03 Sep 2025 10:35:04 +0000 In defense of Debian https://lwn.net/Articles/1036508/ https://lwn.net/Articles/1036508/ farnz While I can't find any Rust tests in <tt>bcachefs-tools</tt> either, I'd note that you wouldn't necessarily see any <tt>#[cfg(test)]</tt> in a tested Rust program - it's conventional to have that for unit tests (to avoid compiling tests you won't be running), but it's not required. <p>Unit tests must <a href="https://doc.rust-lang.org/stable/reference/attributes/testing.html#the-test-attribute">be annotated <tt>#[test]</tt></a>, and integration tests either live in a <a href="https://doc.rust-lang.org/cargo/reference/cargo-targets.html#integration-tests"><tt>tests</tt></a> directory, or are referenced by a <a href="https://doc.rust-lang.org/cargo/reference/manifest.html"><tt>[[tests]]</tt> section in <tt>Cargo.toml</tt></a>. There's also <a href="https://doc.rust-lang.org/rustdoc/write-documentation/documentation-tests.html">doctests</a>, which you'd have seen running <tt>cargo test</tt>. 
<p>That said, the other problem I see with <tt>bcachefs-tools</tt> from a testing perspective is that it's a big binary, rather than a thin shim that can be verified by inspection (because it's tiny), and a library that's testable via normal Rust means. You can do this with a single crate containing both a library and a binary, or with two crates, one containing the library and one containing the binary; both allow for a well-tested library with a tiny shim binary. <p>For reference, the typical binary shim looks similar to: <pre> <tt> <p> fn main() -&gt; Result&lt;(), library_crate::Error&gt; { <br> &#160;&#160;&#160;&#160;let args = library_crate::parse_args_os(std::env::args_os())?; <br> &#160;&#160;&#160;&#160;library_crate::do_the_thing(args) <br> } </tt> </pre> <p>You can see how it's clear from inspection that the binary shim is bug-free, as long as the compiler and the library crate are bug-free. Wed, 03 Sep 2025 10:04:06 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036511/ https://lwn.net/Articles/1036511/ paulj <div class="FormattedComment"> Kent... you can't fix the world.<br> <p> Debian is its own ecosystem. Each distro generally is. They have their ways of doing things. They may sometimes seem wrong to you, but they have their reasons - which could extend far beyond your code and your concerns, and also stretch far across time. There may be deeply buried community reasons for some things being seemingly less efficient than they should be to you.<br> <p> Let them do their thing.<br> <p> Unless you want to become a DD, and spend many years building trust with others, demonstrating you understand all the relevant processes and the trade-offs behind them, and demonstrating you know how to persuade other DDs to change a process. Just let them do their thing. 
Keep an eye out for what patches they apply to ship your code (if they package your code) - see what patches you can incorporate, or what deeper fixes you can make to your code to avoid some patch; help them if they ask for help and you can. But... let them do their thing, and don't go telling them you know their ecosystem better and how to (paraphrasing) "Fix their mess". Just don't do that.<br> <p> Let them do their thing.<br> <p> You go focus on bcachefs and your users, as you know you should, and just let the other stuff slide.<br> </div> Wed, 03 Sep 2025 09:43:18 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036506/ https://lwn.net/Articles/1036506/ farnz I would argue that the only critical filesystems (defined as those where you can expect Debian to make exceptions to normal policy, rather than the norm of can't expect an exception) are those recommended by <tt>debian-installer</tt>. At the moment, that's ext4 only, so only ext4's support programs are also critical. <p>In other words, <tt>xfsprogs</tt>, <tt>btrfs-tools</tt> and similar are not critical, because the users of those filesystems are doing something non-default, and should be thinking about what they're doing. <tt>e2fsprogs</tt> is critical, because someone who's following Debian's recommendations will be using it. Wed, 03 Sep 2025 09:10:09 +0000 Debian https://lwn.net/Articles/1036501/ https://lwn.net/Articles/1036501/ dsfch <div class="FormattedComment"> If any message, however worded, comes across as "maintained but there's no need for a new release [EVER!]" then there's a miscommunication on first principles. Software - and worse so, its uses/usecases - evolves. 
Even mathematically-proven-"perfect" software will find itself in a situation where it lacks features or is found to have unwanted - even if potentially not "undescribed" - side effects.<br> <p> If it's not clear to users from the very beginning what "no more releases" means, and that makes the difference between _retained_ and _maintained_, then someone has been communicating in a way-too-people-pleasing style.<br> </div> Wed, 03 Sep 2025 08:00:13 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036500/ https://lwn.net/Articles/1036500/ taladar <div class="FormattedComment"> You are looking at this the wrong way. If exceptions are needed for popular packages then something is wrong with the process that likely also affects less popular packages.<br> <p> This is quite similar to the way e.g. Microsoft, Google or Apple sometimes use internal, undocumented OS APIs to solve a problem their public APIs can't solve and then complain when other people start reverse engineering and using that too for lack of alternatives.<br> <p> If the official way does not work for everybody, not even for the popular packages that get lots of attention from everyone involved, something likely needs to change.<br> </div> Wed, 03 Sep 2025 07:51:13 +0000 In defense of Debian https://lwn.net/Articles/1036499/ https://lwn.net/Articles/1036499/ taladar <div class="FormattedComment"> So you are saying you are okay with everything being broken all the time because some dependency is in the process of being updated? Just to avoid shipping the old and the new version simultaneously while that process is ongoing?<br> <p> Who cares about motivation? It will still take time for people to actually do that work.<br> </div> Wed, 03 Sep 2025 07:44:42 +0000 In defense of Debian https://lwn.net/Articles/1036494/ https://lwn.net/Articles/1036494/ zdzichu <div class="FormattedComment"> <span class="QuotedText">&gt; Things are infinitely better in Rust, but you can still get e.g.
undocumented slight behavioral changes, and given all the things the mount helper has to do, even something as simple as a minor update to the CLI parsing library absolutely needs to be tested, and in various configurations: because if you break things, people's machines won't boot.</span><br> <p> That's a great theory, but I see no Rust tests in bcachefs-tools. There are no `#[cfg(test)]` anywhere. `cargo test` only runs stuff provided by some upstream packages. It's impossible to know if a version change breaks something or not, when there are no tests.<br> <p> Or maybe `bcachefs-tools` is special, one of a kind, and standardised Rust tests are irrelevant.<br> </div> Wed, 03 Sep 2025 07:18:22 +0000 In defense of Debian https://lwn.net/Articles/1036486/ https://lwn.net/Articles/1036486/ koverstreet <div class="FormattedComment"> Semver is not something you can remotely trust in the C world. Semver works for glibc and a few other big libraries where the developers are super educated about the pitfalls, but there are _zero_ checks built into the language and things can break in horribly unpredictable ways - remember, C is not a memory safe language.<br> <p> Things are infinitely better in Rust, but you can still get e.g. undocumented slight behavioral changes, and given all the things the mount helper has to do, even something as simple as a minor update to the CLI parsing library absolutely needs to be tested, and in various configurations: because if you break things, people's machines won't boot.<br> <p> We've had 2-3 util-linux changes (libblkid) break mounting for quite a few people in recent memory, generating quite a few bug reports again to, you guessed it... the bcachefs developers.
(And I wasn't impressed with the util-linux response; the change was a good idea in principle, but if it causes people's machines to fail to mount, revert it and rethink it).<br> <p> This sort of thing is why I get very, very nervous when I hear people saying "this is a simple change, we can do it ourselves and bypass the testing upstream does"<br> </div> Wed, 03 Sep 2025 01:21:21 +0000 In defense of Debian https://lwn.net/Articles/1036482/ https://lwn.net/Articles/1036482/ NYKevin <div class="FormattedComment"> <span class="QuotedText">&gt; This creates motivation in upstreams of app_1 to fix their app and in lib_a to consider app_1 case. This may create conflicts, friction, infighting, blame shifting and drama.</span><br> <p> Well, maybe.<br> <p> It will motivate the Debian maintainers of app_1 and/or lib_a to fix their versions of those packages. In some cases, the upstreams might be willing to work with Debian on these issues, especially if lib_a unintentionally broke semver. But in the general case, Debian is not in the business of contacting upstreams to complain about simple version mismatches that can easily be patched downstream. More than a few upstreams would at best reply with a curt WONTFIX, and at worst with a string of invective.<br> <p> This is entirely fair and reasonable (not the invective, but such is the reality of FOSS). From the upstream's perspective, we say that you need version 1.53, these Debian people are coming in and trying to use a backwards-incompatible 1.54 (or vice-versa), and we never promised that would work, so of course they get to keep both pieces.<br> </div> Wed, 03 Sep 2025 00:54:29 +0000 So what exactly *is* in the cards, then? 
https://lwn.net/Articles/1036466/ https://lwn.net/Articles/1036466/ sheepdestroyer <div class="FormattedComment"> I'm the type of user who had been eagerly waiting for years for bcachefs to go upstream and lose the experimental tag, before migrating everything.<br> I hope it stays in somehow.<br> </div> Tue, 02 Sep 2025 22:09:57 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036456/ https://lwn.net/Articles/1036456/ georgh-cat <div class="FormattedComment"> <span class="QuotedText">&gt;and the userbase seems to be more interested in erasure coding and the management stuff than performance</span><br> <p> Amen to that.<br> </div> Tue, 02 Sep 2025 20:35:05 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036449/ https://lwn.net/Articles/1036449/ koverstreet <div class="FormattedComment"> There's no need to bring in this popularity contest thinking; we're not talking about things that affect the rest of the system.<br> <p> We're just talking about perfectly avoidable screwups.<br> <p> And no, e2fsprogs got their exception because they had a package maintainer who was willing to slow down and do the required legwork on Debian policy and take into account the upstream needs, i.e. testability and reliable bugfixes.<br> <p> Like I said, I've had to tell distro people to slow down multiple times; "slow down if you think you can't do it right" is a perfectly reasonable position. Blindly charging ahead with things that are only important for stable when we're not ready and prioritizing distro rules over shipping working code got us into the Debian mess; with the kernel, all we needed was sane and consistent policy (i.e.
prioritize keeping things working for the end user), like the rest of the kernel has, and calm reasonable conversations about priorities instead of dictating over the minutiae.<br> <p> Maybe you don't think bcachefs is important, but the users running it certainly do; most of the users running it that I've talked to are doing so specifically because they needed something more reliable - so it's my responsibility to see that it continues to be, and that does mean dealing with all sorts of issues and screwups as they arise.<br> </div> Tue, 02 Sep 2025 19:25:47 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036441/ https://lwn.net/Articles/1036441/ pizza <div class="FormattedComment"> <span class="QuotedText">&gt; I realise you have difficulty hearing this: but bcachefs simply isn’t (yet) important. People are willing to make exceptions to process if it’s important enough. Bcachefs simply isn’t there. Ext4 and by extension E2fsprogs is.</span><br> <p> e2fsprogs has been around for over three decades, ie far longer than ext4 itself.<br> <p> If ext4 required a hypothetical 'e4fsprogs' instead, would you also be arguing that it shouldn't be considered "critical" when Debian started shipping kernels with ext4?<br> <p> FFS, if "filesystem recovery tools" aren't considered critical path, then WTF possibly could be?<br> <p> </div> Tue, 02 Sep 2025 17:47:42 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036433/ https://lwn.net/Articles/1036433/ MrWim <div class="FormattedComment"> I realise you have difficulty hearing this: but bcachefs simply isn’t (yet) important. People are willing to make exceptions to process if it’s important enough. Bcachefs simply isn’t there. Ext4 and by extension E2fsprogs is.<br> <p> That E2fsprogs got an exception is not surprising because ext4 *matters*. It matters more than process. Bcachefs-tools *is* just a random package as far as others are concerned.
<br> <p> I believe that your communication difficulty arises because you don’t understand that bcachefs is simply not a high priority to the people you’re communicating with. It’s just yet another package/patch to them.<br> <p> I sincerely hope you succeed in getting bcachefs to the point that it matters too.<br> </div> Tue, 02 Sep 2025 17:29:20 +0000 So what exactly *is* in the cards, then? https://lwn.net/Articles/1036423/ https://lwn.net/Articles/1036423/ koverstreet <div class="FormattedComment"> <span class="QuotedText">&gt; So the packaging in Debian was under development in unstable, and it was broken. Well duh, it's called unstable for a reason. Ideally at some point it would have worked so the moment bcachefs was actually stable, the Debian packages would have been ready to go. I guess we'll never know now.</span><br> <p> Nononononono :P<br> <p> I keep hammering on this, because it's important. This attitude might be ok for any random user package where it's a minor inconvenience if you break it, but not for the filesystem. It's not a minor inconvenience if the filesystem breaks; it's the one component that absolutely has to work.<br> <p> For the filesystem, the experimental label is a warning to users; it does _not_ mean that we're allowed to screw around and break things on purpose. You should consider the experimental label as "dry run mode", we haven't been able to test it as widely as we want so we know we're not finished fixing bugs, but we still do development as if it was a normal stable released filesystem like any other.<br> <p> Importantly, we want to see that not just the code is stable but all the processes for supporting that code are in place and working BEFORE lifting the experimental label.<br> </div> Tue, 02 Sep 2025 16:25:43 +0000 So what exactly *is* in the cards, then? 
https://lwn.net/Articles/1036417/ https://lwn.net/Articles/1036417/ kleptog <div class="FormattedComment"> <span class="QuotedText">&gt; And what's the problem with that? What stops Debian from including multiple versions of a dependency?</span><br> <p> Nothing technical; OpenSSL is probably the most famous example. There are many libraries/programs that exist in multiple versions to assist with migrations.<br> <p> What isn't sensible is to include an extra version of a dependency that is only used by a single package. That way lies madness. There is a trade-off to be made between adding extra versions and forcing everything to a single one. It requires human judgement and calm discussion. Not a single developer claiming their package is special.<br> <p> So the packaging in Debian was under development in unstable, and it was broken. Well duh, it's called unstable for a reason. Ideally at some point it would have worked so the moment bcachefs was actually stable, the Debian packages would have been ready to go. I guess we'll never know now.<br> </div> Tue, 02 Sep 2025 15:55:08 +0000 In defense of Debian https://lwn.net/Articles/1036302/ https://lwn.net/Articles/1036302/ vasvir <div class="FormattedComment"> I am running unstable so I guess I am using a rolling distribution. I prefer it to testing because in my (albeit several years old) experience when something breaks it is fixed way faster than in testing.<br> <p> So to answer your question:<br> <p> When the distribution decides to bump the lib_a version (we are talking semver here, so we assume there is some incompatibility between minor versions, let's say 1.53 and 1.54 of lib_a) that breaks app_1 and maybe unbreaks app_2.<br> <p> This creates motivation in upstreams of app_1 to fix their app and in lib_a to consider app_1's case.
This may create conflicts, friction, infighting, blame shifting and drama.<br> <p> Debian can host lib_a1 (1.53 xor 1.54) and lib_a2 (2.0.1) at the same time so there is no problem in the hypothetical perfect world where semver works as it should.<br> <p> As I said, I prefer it that way. I can totally understand why you might prefer it another way, as I have been on the other side of the argument too, as an application developer who just wants to ship.<br> <p> </div> Tue, 02 Sep 2025 09:22:01 +0000 Debian https://lwn.net/Articles/1036291/ https://lwn.net/Articles/1036291/ epa <div class="FormattedComment"> Yes, I got the wrong idea. Unstable is the “most unstable” Debian release.<br> </div> Tue, 02 Sep 2025 07:45:51 +0000
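<p>For reference, the "standardised Rust tests" zdzichu refers to earlier in the thread look like the following minimal sketch. The <tt>parse_version</tt> function and its behavior are illustrative assumptions, not code from <tt>bcachefs-tools</tt>; the point is only the layout: unit tests live in an inline <tt>#[cfg(test)]</tt> module next to the code and are discovered and run by <tt>cargo test</tt>.

```rust
/// Parse a "major.minor" version string such as "1.54".
/// Hypothetical example function; not part of bcachefs-tools.
fn parse_version(s: &str) -> Option<(u32, u32)> {
    // split_once returns None when there is no '.', so garbage
    // input falls through to None via the ? operator.
    let (major, minor) = s.split_once('.')?;
    Some((major.parse().ok()?, minor.parse().ok()?))
}

fn main() {
    // Exercise the function the same way a unit test would.
    assert_eq!(parse_version("1.54"), Some((1, 54)));
    assert_eq!(parse_version("garbage"), None);
}

// The #[cfg(test)] module is compiled only for `cargo test`;
// this is the construct zdzichu notes is absent from the
// bcachefs-tools Rust code.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_major_minor() {
        assert_eq!(parse_version("1.53"), Some((1, 53)));
        assert!(parse_version("").is_none());
    }
}
```

<p>With tests structured this way, a dependency bump (say, the CLI parsing library moving from 1.53 to 1.54) can at least be smoke-tested with a single <tt>cargo test</tt> run before the package ships.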